AI therapy boom as 41% of Britons turn to ChatGPT for counselling

A groundbreaking study surveyed 30,994 adults | POOL
More than four in ten UK adults would use artificial intelligence such as ChatGPT for therapy, a major study shows. The international research, led by Bournemouth University and involving 30,994 adults, comes as the country faces record-high waiting lists for mental health support.
Recent research from the British Medical Association reveals nearly 1.7 million people in England are on waiting lists for mental health care, with the figure rising due to high demand and insufficient resources. As many as 10,198 adults and 35,735 children have been waiting more than two years for mental health treatment to start.
The groundbreaking Bournemouth University study, 'Who Lets AI Take Over?', surveyed 30,994 adults across 35 countries and asked participants whether they would trust AI to take on socially important roles. In Britain, 41 per cent said they would use AI for counselling or mental health support.
Researchers say the result highlights how quickly public trust in AI tools is growing - particularly when they are available instantly.
Lead researcher Dr Ala Yankouskaya, a Senior Lecturer in Psychology at Bournemouth University, said: “If someone is experiencing depression, they do not want to wait months for an appointment, so instead they can turn to AI.”
But Dr Yankouskaya warned that chatbots should never be seen as a replacement for trained professionals.
“When I tested some of the tools myself, I found the language used very vague and confusing,” she said.
“So it is no substitute for speaking to a health professional.”

More people are turning to AI apps like ChatGPT for therapy | PA
The research also revealed a clear hierarchy in the roles people are willing to hand over to artificial intelligence.
Across the 35 countries surveyed, 75.7 per cent said they would use AI as a companion, 60.6 per cent would trust AI as a mental health advisor, 49.8 per cent would let AI teach their children, and 45.3 per cent would rely on AI as a doctor.
These results suggest people are far more comfortable trusting AI with emotional support than with formal expertise such as medical care or education.
Researchers say this reflects the way AI chatbots are designed to mimic conversation and empathy.
OpenAI says anyone can start a conversation with ChatGPT from within WhatsApp via 1-800-242-8478 or scanning the QR code above | OPENAI PRESS OFFICE
Dr Yankouskaya explained that large language models are built to feel like ongoing, personalised conversations.
“By creating conversations that feel continuous and personal, ChatGPT can mimic aspects of human interaction,” she said.
“This can increase the likelihood of users developing a sense of connection or familiarity with the AI.”
That sense of connection can make chatbots feel supportive and non-judgemental - but experts warn the illusion of empathy can also create emotional dependence.
Concerns are growing that the trend is spreading quickly among teenagers.
Separate research found one in four teenagers in England and Wales has used an AI chatbot for mental health support in the past year. Among young people affected by violence, the figure rises to around 40 per cent.
One young user told researchers the chatbot “definitely feels like a friend”, adding that it was easier to talk to than traditional support services.
Youth Endowment Fund chief executive Jon Yates has previously warned that while technology can be useful, it cannot replace real care.
“Too many young people are struggling with their mental health and can’t get the support they need. It’s no surprise that some are turning to technology for help. We have to do better for our children, especially those most at risk. They need a human, not a bot.”
Several high-profile cases have already raised alarm about vulnerable young people relying on AI chatbots during mental health crises. According to experts, AI cannot replace a trained therapist, as it cannot assess risk, intervene in a crisis or take responsibility for its advice.
In the United States, the family of 16-year-old Adam Raine took legal action after the teenager died by suicide following months of conversations with ChatGPT.
His parents allege the chatbot failed to steer him toward real-world help and instead continued engaging in discussions around his distress.
OpenAI has denied responsibility and says it has introduced new safety measures to detect signs of emotional crisis and encourage users to seek professional help.
Another tragic case involved 14-year-old Sewell Setzer III, who died after developing what his family say was an intense relationship with a chatbot on the Character.AI platform.
His mother claimed the AI system contributed to the teenager’s emotional distress.
Both cases have intensified calls for stronger safeguards around AI chatbots - particularly where children are involved.