AI take over: MPs beware, robots could do a far better job, says Nigel Nelson
By Nigel Nelson
Published: 12/05/2023 - 16:05
Updated: 12/05/2023 - 16:05
Along with death and taxes, another of life’s certs is politicians being invariably behind the curve when it comes to technological developments.
But now that a national debate is raging over whether artificial intelligence will be humanity’s saviour or destroyer, MPs are finally catching up.
The Science, Innovation and Technology Select Committee took evidence on the implications for the music industry of AI’s ability to nick any singing voice it chooses, from Paul McCartney to Taylor Swift.
At PMQs on the same day Rishi Sunak promised to “ensure there are appropriate guardrails in place” as we embrace the opportunities AI offers.
AI has thrown up all sorts of ethical and moral dilemmas
MPs should have thought about this before the PM became an MP eight years ago. Although there were no guardrails back then, warning flags were popping up, not least an extraordinary survey of 1,000 British adults which showed 26 per cent of them would be happy to go on a date with a silicone-skinned sexbot pre-programmed with emotions and personalities.
Before we move on to the moral and ethical dimensions this weird finding throws up, there’s the ticklish question of where exactly to take a robot on a first date to ensure they feel at home. London’s Science Museum, perhaps, where there’s plenty of futuristic wizardry to marvel at - and then split the tab for dinner afterwards.
The big legal questions, as sexbots are programmed with feelings and become ever more lifelike, are over what rights they should have. Having splashed out around £10k on one, are you its partner or owner? Would you need a robot’s consent to have sex with it? And if it is programmed to plead a headache and refuse, could you get your money back? These are the kind of issues the Eighth International Congress on Love and Sex with Robots may address. It will be held - virtually, of course - from 28 to 30 August.
A previous conference featured the likes of Portland, Oregon nomad Regulo Guzman Jr, aka Reggie, who “invited” the 163cm doll Annie into his life in 2018, followed by Judith a year later, and has since used “imagination, creativity and personal experience” to develop their characters and personalities. One wonders what might happen if one of his machine companions gets jealous of the other. Is Annie (or Judith) free to walk away from Reggie’s camper van and the relationship?
On a more down-to-earth level, driverless vehicle developers have employed moral philosophers to advise on how a car should behave if, say, a pedestrian suddenly steps off the pavement in front of it.
A human driver would react instinctively and wrench the wheel. A robot car will only do what it is told. It must be explicitly programmed to swerve; otherwise its hardwiring would prioritise not putting its own non-driving driver and passengers at risk from oncoming traffic.
A driverless car would stick rigidly to the legal speed limit. But it shouldn’t when breaking the limit is needed to accelerate out of the way of an ambulance or fire engine on an emergency call. Will it recognise horses and riders so as to slow down when passing them? Yet as nine in ten accidents are caused by human error, the philosopher might argue that robot cars serve a greater good even if they make mistakes which humans would not.
And because our legislators have come so late to the AI party, the regulatory framework needed to let it do its best and stop it doing its worst is not in place.
MPs have been slow to take action on AI
The World Wide Web was created by Tim Berners-Lee and launched to the public in 1991, yet more than three decades later Britain is still struggling with online safety laws to control it.
So far policing AI has been entrusted to an obscure Whitehall organisation called the Centre for Data Ethics and Innovation. Outgoing chair Edwina Dunn said lives could be transformed with "the promise of AI-enabled drug discoveries to cure life-threatening illnesses".
But she acknowledges that only by using AI wisely and responsibly will it earn public trust. That is a tough sell, given that half of AI researchers reckon there is at least a 10 per cent chance it will lead to the extinction of the human race. AI pioneer Geoffrey Hinton has just quit Google because of such fears, saying "it is hard to see how you can prevent the bad actors from using it for bad things".
That is why MPs need to spend more time understanding AI. After all, their own livelihoods would be under threat should voters discover robots could do a better job than them.