'We got it wrong': Google CEO breaks silence on 'biased' pictures created by Gemini AI

Examples of the diverse versions of history presented by Google's Gemini AI image creation tool

Google CEO Sundar Pichai has spoken out about 'unacceptable' pictures produced by his company's Gemini AI image creation system and promised changes to stop it happening again

By Aaron Brown


Published: 28/02/2024 - 18:04

Updated: 01/03/2024 - 12:35

Gemini was mocked for inaccurately adding diversity to some historical pictures, including images of America's founding fathers

  • Google CEO Sundar Pichai has addressed the fallout from its Gemini AI tools
  • In an internal memo, Pichai admitted pictures were "unacceptable"
  • Google has been working “around the clock” to fix the problem with its AI
  • "Some of its responses offended users and shown bias," confessed Pichai
  • Gemini image generator dreams up pictures based on a short written prompt
  • Google added women and people of colour into inaccurate historical context
  • It seems the AI system was over-correcting for racial bias

Google CEO Sundar Pichai has addressed the fallout from his company's Gemini AI image creator in an internal memo to 160,000 employees. He admitted that inaccurate pictures dreamt up by Google's all-new Artificial Intelligence (AI) were "unacceptable", showed "bias", and "offended our users".

Last week, Google was forced to slap the brakes on its latest innovation ― an AI-powered image creation tool capable of producing never-before-seen images based on a brief written prompt. The tool was first made available worldwide earlier this month under the banner of Google Gemini, which includes a slew of different AI features to compete with the likes of OpenAI's ChatGPT.


Early Gemini users sounded the alarm when the chatbot started to generate images showing a range of ethnicities and genders, even when doing so was historically inaccurate — for example, prompts to generate images of certain historical figures, such as the US founding fathers, returned images of a woman of colour signing the Constitution of the United States.

Other examples shared across social media showed people of colour as Vikings, Nazi soldiers from the 1940s, and the Pope — despite the written prompts not asking for these tweaks.

Some critics accused Google of anti-white bias. However, those familiar with how these AI image creation tools work suggested the company had over-corrected in response to longstanding racial bias problems in AI technology, which had previously seen facial recognition software struggle to recognise, or mislabel, black faces, and voice recognition services fail to understand accented English.
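One plausible mechanism behind such an over-correction, suggested by how Gemini behaved rather than by anything Google has confirmed, is a blanket rewriting step that quietly adds diversity instructions to every image prompt. The short Python sketch below is purely illustrative: the suffix wording and the rewrite_prompt function are invented for this example and are not part of Gemini.

```python
# Speculative illustration only: a blanket rewrite rule like this would help
# generic prompts but misfire on historically specific ones.
DIVERSITY_SUFFIX = ", showing people of a range of ethnicities and genders"

def rewrite_prompt(user_prompt: str) -> str:
    # Hypothetical rule: append the suffix to every prompt, with no check for
    # whether the scene is tied to a specific historical context.
    return user_prompt + DIVERSITY_SUFFIX

print(rewrite_prompt("a software engineer at a desk"))       # plausible result
print(rewrite_prompt("the signing of the US Constitution"))  # historically inaccurate
```

Applied indiscriminately, a rule of this kind would match the behaviour users reported: sensible for generic requests, wrong for prompts tied to a specific period in history.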

In his memo, CEO Sundar Pichai said Gemini's responses were “problematic” and that Google had been working “around the clock” to address the issue. For now, Google Gemini will refuse to generate images around some of the historical prompts that kickstarted the controversy. It’s unclear when it will be back online.


“I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” the 51-year-old Silicon Valley executive wrote.

“No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.”

Mr Pichai said Google had “always sought to give users helpful, accurate and unbiased information” in its products and this was why “people trust them”.

“This has to be our approach for all our products, including our emerging AI products”, he reiterated in the memo, which was leaked to journalists at the Press Association.

Image generation tools, like Google Gemini and rival systems from OpenAI such as Dall-E, are trained on vast databases of pictures and written captions. Over time, the system builds associations and can work out the best fit for any given prompt. It's not thinking for itself, but drawing on associations built from trawling through huge datasets. Unfortunately, this can have the unintended consequence of amplifying stereotypes within the data.
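As a toy illustration of that mechanism, the snippet below builds an "association" simply by counting how often each descriptor appears alongside a term in a handful of made-up captions, then samples in proportion to those counts; any skew in the data comes straight back out in the results. The captions and numbers are invented for illustration and have nothing to do with Gemini's real training data.

```python
from collections import Counter
import random

# Hypothetical caption snippets standing in for a vast training set.
captions = [
    "a male chief executive in a suit",
    "a male chief executive at a desk",
    "a male chief executive giving a speech",
    "a female chief executive in a boardroom",
]

# Build the association: how often each descriptor co-occurs with the prompt term.
counts = Counter("female" if "female" in c else "male" for c in captions)

def sample_attribute(counts: Counter) -> str:
    # Sampling in proportion to the counts reproduces whatever skew the data has.
    attrs, weights = zip(*counts.items())
    return random.choices(attrs, weights=weights, k=1)[0]

# Roughly three in four "chief executive" results come back male,
# mirroring the 3-to-1 imbalance in the captions above.
print([sample_attribute(counts) for _ in range(10)])
```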

OpenAI was accused of spreading harmful stereotypes when its image generation tool, known as Dall-E, responded to queries for a “chief executive” with results dominated by pictures of white men.

Turning to how his company plans to address the issues, Google boss Sundar Pichai said “necessary changes” would be made inside the company to prevent similar problems occurring again.

“We’ll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals (sic) and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes,” he said.

The incident comes as debate around the safety and influence of AI continues, with industry experts and safety groups warning that AI-generated disinformation campaigns will likely be deployed to disrupt elections throughout 2024, as well as to sow division between people online.

Research from the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University published last year found that AI models, including OpenAI’s ChatGPT-4 and Meta’s LLaMA, can often have political biases baked into them during development.

According to the researchers, products from OpenAI tend to be left-leaning, while those of Mark Zuckerberg’s Meta were closest to a conservative position.

Rob Leathern, an ex-Google employee who worked on several products related to privacy and security until last year, shared his thoughts on the images produced by Gemini: “It absolutely should not assume that certain generic queries are a particular gender or race, (eg software engineer) and I have been glad to see that change.”

“But when it explicitly adds [a gender or race] for more specific queries it comes across as inaccurate. And it may actually hurt the positive intention for the former case when folks get upset,” he added in a follow-up tweet.


Documenting the issues during the initial fallout, Jack Krawczyk, senior director for Gemini experiences at Google, posted on X: “We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately.

"As part of our AI principles, we design our image generation capabilities to reflect our global user base, and we take representation and bias seriously. We will continue to do this for open ended prompts (images of a person walking a dog are universal!). Historical contexts have more nuance to them and we will further tune to accommodate that.”

He added that it was part of the “alignment process” of rolling out AI technology, and thanked users for their feedback on Gemini.

Additional Reporting By Martyn Landi, PA Technology Correspondent
