Google suspends 'woke' Gemini AI image creator after generating 1940s Asian Nazis and Black Vikings

Examples of the diverse versions of history presented by Google's Gemini AI image creation tool. Google has addressed criticism of the system and promised a fix soon

GOOGLE PRESS OFFICE | X

By Aaron Brown


Published: 22/02/2024, 18:57

Updated: 01/03/2024, 12:36

Gemini was mocked for inaccurately adding diversity to some historical pictures, including depictions of America's founding fathers

  • Google Gemini is a suite of AI tools, including a new image generator
  • The AI dreams up a unique picture based on your written prompt
  • The AI inserted women and people of colour into historically inaccurate contexts
  • It seems the AI system was over-correcting for racial bias
  • Google has promised a fix and paused some historical prompts in Gemini

Google has pressed pause on its latest innovation ― an AI-powered image creation tool capable of dreaming up never-before-seen pictures based on a brief written prompt. The tool was released earlier this month under the banner of Google Gemini, which includes a slew of different Artificial Intelligence (AI) features.

But the Californian company was left red-faced after people used Google Gemini to create images of historical figures and found a string of inaccuracies. The problems seem to stem from an effort to swerve the long-standing racial bias issues that have plagued previous AI systems.


However, Google has over-corrected with its Gemini tool, which generates images of women and people of colour in historical contexts where they are inaccurate. For example, a prompt to show America’s founding fathers added a woman of colour signing the Constitution of the United States.

Other examples shared widely across social media showed people of colour as Vikings, Nazi soldiers from the 1940s, and the Pope — despite the written prompts not asking for these tweaks.

Google has admitted that its new Gemini system has issues and has promised a fix soon.

“We’re working to improve these kinds of depictions immediately,” the company, which is worth $1.8 trillion, posted to Elon Musk’s X social network. “Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it.

“But it’s missing the mark here.”

For now, Google Gemini will refuse to respond to some of the historical prompts that kickstarted the controversy. It’s unclear when the feature will be back online.


Critics quickly lambasted the AI image creation tool as “woke” when the historical inaccuracies first surfaced, while others praised Google for trying to avoid repeating previous incidents involving artificial intelligence, racial bias and diversity.

"It's embarrassingly hard to get Google Gemini to acknowledge that white people exist," computer scientist Debarghya Das posted on X, formerly Twitter, in response to the Google Gemini launch.

Image generators, like Google Gemini and OpenAI’s rival Dall-E system, are trained on vast databases of pictures and written captions. Over time, the system builds associations and can work out the best fit for any given prompt. However, this can also amplify stereotypes in the data.
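
To make that mechanism concrete, here is a minimal Python sketch, using made-up captions and a hypothetical generate() helper rather than Gemini’s or Dall-E’s actual pipeline. It shows how always picking the most common association in skewed training data turns a 75 per cent skew in the captions into 100 per cent of the outputs.

```python
import random
from collections import Counter

# Toy illustration only (not how Gemini actually works): a "generator"
# that has learned prompt -> attribute associations from captioned images.
training_captions = (
    ["photo of a chief executive, man"] * 75
    + ["photo of a chief executive, woman"] * 25
)

# Count how often each attribute appears alongside the prompt concept.
counts = Counter(caption.rsplit(", ", 1)[1] for caption in training_captions)

def generate(prompt: str, mode: str = "argmax") -> str:
    """Return the attribute this toy model attaches to the prompt."""
    if mode == "argmax":
        # Always choose the most frequent association: the bias is amplified.
        attribute = counts.most_common(1)[0][0]
    else:
        # Sample in proportion to the data: the bias is reproduced, not amplified.
        attribute = random.choices(list(counts), weights=list(counts.values()))[0]
    return f"{prompt} ({attribute})"

outputs = [generate("chief executive") for _ in range(10)]
print(Counter(outputs))  # every output is "man", despite 25% "woman" in the data
```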

OpenAI was accused of spreading harmful stereotypes when its image generation tool, known as Dall-E, responded to prompts for a “chief executive” with results dominated by pictures of white men.

It’s one of a long list of examples in recent years involving technology and bias, including facial recognition software struggling to recognise ― or mislabelling ― black faces, and voice recognition services failing to understand different accents.

Research from the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University published last year found that AI models, including OpenAI’s ChatGPT and Meta’s LLaMA, can often have political biases baked into them during development.

According to the researchers, products from OpenAI tend to be left-leaning, while those from Mark Zuckerberg’s Meta were closest to a conservative position.

Rob Leathern, an ex-Google employee who worked on several products related to privacy and security until last year, shared his thoughts on X. He posted: “It absolutely should not assume that certain generic queries are a particular gender or race, (eg software engineer) and I have been glad to see that change.”

“But when it explicitly adds [a gender or race] for more specific queries it comes across as inaccurate. And it may actually hurt the positive intention for the former case when folks get upset,” he added in a follow-up tweet.
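
One way to read that distinction is as a rule that rewrites only under-specified prompts and leaves historically specific ones alone. The Python sketch below is purely illustrative, with a made-up rewrite_prompt() helper and keyword list; Google has not published how Gemini actually handles prompts.

```python
# Hypothetical sketch, not Google's implementation: add a diversity hint
# only when a prompt does not pin down a specific time, place or group.
SPECIFIC_MARKERS = ("1940s", "viking", "founding fathers", "pope")

def rewrite_prompt(prompt: str) -> str:
    """Append a diversity hint only when the prompt is generic."""
    lowered = prompt.lower()
    if any(marker in lowered for marker in SPECIFIC_MARKERS):
        return prompt  # historically specific: leave the prompt untouched
    return prompt + ", depicting people of a range of genders and ethnicities"

print(rewrite_prompt("a software engineer at a desk"))   # gets the hint
print(rewrite_prompt("German soldiers in the 1940s"))    # left unchanged
```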

The incident comes as debate around the safety and influence of AI continues, with industry experts and safety groups warning that AI-generated disinformation campaigns will likely be deployed to disrupt elections throughout 2024, as well as to sow division between people online.

LATEST DEVELOPMENTS

Jack Krawczyk, senior director for Gemini experiences at Google, said in a post on X: “We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately. As part of our AI principles, we design our image generation capabilities to reflect our global user base, and we take representation and bias seriously.

“We will continue to do this for open ended prompts (images of a person walking a dog are universal!). Historical contexts have more nuance to them and we will further tune to accommodate that.”

He added that it was part of the “alignment process” of rolling out AI technology, and thanked users for their feedback.

Additional reporting by Martyn Landi, PA Technology Correspondent
