Charities 'damaging public trust' relying on artificial intelligence to create emotional campaign images



By Lucy Johnston


Published: 05/03/2026 - 08:30

AI-generated pictures in fundraising efforts could be sparking suspicion and shifting attention away from important causes

Charities could be damaging public trust by relying on artificial intelligence to create emotional campaign images, new research has warned.

A report from the University of East Anglia (UEA) suggests the growing use of AI-generated pictures in fundraising campaigns could be sparking suspicion and shifting attention away from the causes charities are trying to highlight. Researchers said an AI-generated “high-tech shortcut” to empathy may be undermining the emotional connection that drives people to donate.


The study comes as charities and humanitarian organisations face tightening budgets and growing pressure to produce compelling campaign material quickly and affordably. Many organisations are experimenting with AI image generators because they offer speed, lower costs and creative flexibility.

But the report suggests the technology risks breaking the bond of trust between charities and the public. Co-author David Girling, from UEA’s School of Global Development, said: “Charities exist because people care about other people. The moment when audiences start questioning whether what they are seeing is real, the emotional connection that drives support is put at risk.”



He added: “The debate about the ethics of AI is increasingly polarised. AI is not inherently wrong, but if it begins to overshadow the human story at the heart of charitable work, organisations could lose far more in trust than they gain in efficiency.”

The report, entitled Artificial Authenticity, analysed 171 AI-generated images and more than 400 public comments linked to charity campaigns. The campaigns came from 17 organisations, including Amnesty International, Plan International, the World Health Organization and the World Wildlife Fund.

Researchers say the findings show once AI images appear, discussions online often move away from the humanitarian issue and towards debates about AI itself. Of the comments analysed, 141 focused on AI ethics and authenticity concerns, while 122 criticised the technical execution or visual quality of the images.

Only 80 comments – less than 20 per cent – engaged with the humanitarian issue the campaign was about. The study suggests when audiences start questioning whether an image is real, attention shifts from the cause to the technology behind it.

Charities could be damaging public trust by relying on artificial intelligence | GETTY


Researchers say this can undermine the effectiveness of campaigns designed to generate empathy and support. The analysis also examined what kind of AI images charities are producing.

Nearly 70 per cent of the images studied were designed to closely resemble real photographs. Poverty was the most common theme, appearing in 51 of the 171 images analysed, often featuring children.

Environmental issues appeared in 35 images, while human rights themes featured in 32. Researchers say the choice of themes reflects the kinds of humanitarian crises charities typically highlight in their campaigns.

The report also found transparency about the use of AI did not prevent criticism. Around 85 per cent of the images were clearly labelled as AI-generated, yet the disclosure did not protect charities from backlash.

A report has been published by the University of East Anglia | GETTY

In campaigns where AI images were not clearly disclosed, commenters often adopted what the researchers described as an “investigative tone”, focusing on trying to determine whether the images were real. Instead of discussing the charity’s work, conversations centred on whether the visuals were fake.

The study also highlighted cases where audiences accused charities of hypocrisy. For example, environmental organisations such as the World Wildlife Fund in Denmark were criticised for using energy-intensive AI tools to promote sustainability.

Researchers said the contradiction was quickly picked up by climate-conscious audiences, with some commenters describing the approach as “ecocidal”. Despite the criticism, the report notes there are reasons why charities are exploring the technology.

AI-generated images can sometimes help protect vulnerable people, such as survivors of conflict, illness or abuse, from being photographed or filmed in difficult circumstances. Organisations working with these groups say synthetic imagery can allow them to tell a story without retraumatising individuals or exposing their identities.

However, the study found donors often struggle to accept these “mock” visuals. Many people still want to see what researchers describe as an “authentic witness” – real images of real people affected by crises.

This creates a tension between protecting individuals’ dignity and meeting public expectations for authenticity. The researchers say the public reaction to AI-generated images is complex.

In some cases audiences welcomed the technology because it could help protect vulnerable individuals from exploitation. In others, viewers criticised it as a distraction from the real problems charities are trying to solve.

Negative reactions were particularly strong when AI was used in emotionally sensitive campaigns, such as those involving serious illness, famine or poverty. Co-author Deborah Adesina, a consultant at the School of Global Development, believes people must think carefully about how they use the technology.

She said: “Ultimately, the future of charity storytelling will not hinge on technological capability alone. It will depend on whether organisations can maintain legitimacy, transparency and moral coherence in an environment where audiences are increasingly media literate and increasingly sceptical.”

Ms Adesina added that organisations choosing to use generative AI need training and ethical guidance.

She said: “For communications teams who opt to include generative AI in their workflow, proper training in ethical prompt engineering will be crucial to avoid reputational harm and unintended bias.”

The report also sets out recommendations for charities navigating the rapidly evolving digital landscape. These include advice that charities should involve local communities directly in the creation of campaign imagery.
