'The gulf between human reasoning and computer logic is still enormous'
We need to talk about... Artificial intelligence.
If, like me, the peak of your IT ability begins and ends with Windows 98 and even a basic spreadsheet formula is cause for despair, you've probably allowed the stupefyingly rapid pace of technological advancement to wash over you and are the equivalent of Y2K technological flotsam and jetsam.
Yes, you may now use Face ID to unlock your phone, ask your in-car satnav to plot your journey rather than leaf through the pages of an Ordnance Survey map, and get peeved trying to prove to Google you aren't a robot by selecting pixelated boxes of traffic lights.
But the latter may be an eerie harbinger of what lies ahead. Because Artificial Intelligence is on the rampage, and there is a serious arms race taking place around the world which will unleash a largely unregulated, untested and potentially limitless technology upon us all.
Sci-fi films and robot movies almost always explore the idea that automatons may one day escape the grip of human control, evolve free will and turn against us. But in fact, the reality may be quite the opposite.
The risk of AI is that however clever it gets, and however far machine learning outpaces the human brain in raw processing speed (by some estimates, around ten million times faster), the gulf between human reasoning and computer logic is still enormous.
A computer can win a game of chess once the rules and possible moves are programmed in, calculating a vast number of outcomes in split seconds, but the nuances of rational thought can be rather lost on our pre-programmed friend, leading to anything from petty frustrations to catastrophic disasters.
Ask artificial intelligence to build a machine to get from point A to point B in the fastest time possible, and rather than develop a flying craft to speed through the skies at record pace, it could simply decide to forge a gigantic tower and topple it across the gap, joining the two points at the speed of gravity yet causing quite a lot of mess in the process.
AI will do exactly as you say, without limitation, unless you, the human, can foresee all of the potential interpretations and ensure the unwanted ones are coded out. This is potentially the real risk of AI, especially in a military setting, where the response times of machine learning far outstrip considered reason, which in warfare is essential to allow for de-escalation and avert disaster.
AI is now being applied everywhere in ways most mortal beings struggle to comprehend. It is an unprecedented and potent technology fast being harnessed by a tiny few without regulation or restriction.
It's being applied to healthcare, where artificial intelligence can crunch an individual's biometric data to accurately predict future health issues and precisely target bespoke treatment, a truly incredible advancement.
It has even been used to develop the first AI-driven synthetic antibiotic, perfectly rendered to attack unwanted microbes. Yet the same capability could also allow a hostile actor to create both an artificial disease and its antidote, and unleash global disaster.
Covid is surely a deafening klaxon warning how easily things can go terribly wrong. When driverless cars glitch, usually down to a factor not anticipated during the technology's development, the outcome can be fatal.
In one such case, an AI vehicle programmed for use on motorways ploughed straight into the side of a lorry on an urban street: it was only able to recognise an HGV from behind, and mistook the side view of the truck for a bridge it could pass under.
The idea that AI could therefore rise up and take over the world is evidently rather fanciful. At least for now. Yet we are already seeing how algorithms are having a hugely harmful impact on societies when interwoven with human emotional intelligence and complex social structures.
Accusations today abound that Facebook prioritises and hyper-concentrates extreme and polarising content that is messing with people’s minds.
As the world tools up for a major arms race to develop this incredible, God-like technology at an astonishing speed, how do we regulate something that can wield so much power and outstrip mortal competencies?
Who are the AI puppeteers, do they know what they are messing with, and are their intentions pure? Far from the adorable domestic robot of The Jetsons, AI has the potential to unburden humanity from laborious calculations and decisions bound by human limitation.
But is enthusiastically gifting our agency to a machine really a good thing? Or are we headed for a dystopian disaster?
Today, we really need to talk about AI.