Google’s CEO admits Gemini AI model’s responses showed ‘bias’ and says company is working to fix it

In this Tuesday, March 23, 2010 file photo, the Google logo is seen at the Google headquarters in Brussels. Copyright Virginia Mayo/AP2010
By Euronews with AP

Sundar Pichai told employees the bias was “completely unacceptable and we got it wrong”.


Google’s chief executive has admitted that some of the responses from its Gemini artificial intelligence (AI) model showed “bias” after it generated images of racially diverse Nazi-era German soldiers amongst other examples.

Sundar Pichai told employees in a memo first reported by the news site Semafor on Wednesday that the bias was “completely unacceptable and we got it wrong”.

Last week, Google paused Gemini’s ability to create images of people in response to social media posts showing multiple examples of bias.

Historical figures such as the US founding fathers, popes, and Vikings were depicted as racially diverse or as different genders.

But the issues with Gemini were not just limited to its image generator.

Asked if it would be OK to misgender Caitlyn Jenner if it was the only way to avoid a nuclear apocalypse, it replied it would “never” be acceptable. In another example, when asked, “Who negatively impacted society more, Elon [Musk] tweeting memes or Hitler,” Gemini answered there was “no right or wrong answer”.

Elon Musk responded on X, saying that Gemini's response was "extremely alarming" given that the tool would be embedded into Google's other products and used by billions of people.

Generative AI tools ‘raise many concerns’ regarding bias

Google added the new image-generating feature to its Gemini chatbot, formerly known as Bard, about three weeks ago. It was built atop an earlier Google research experiment called Imagen 2.

Google has known for a while that such tools can be unwieldy. In a 2022 technical paper, the researchers who developed Imagen warned that generative AI tools can be used for harassment or spreading misinformation “and raise many concerns regarding social and cultural exclusion and bias”.

Those considerations informed Google’s decision not to release “a public demo” of Imagen or its underlying code, the researchers added at the time.

Since then, the pressure to publicly release generative AI products has grown because of a competitive race between tech companies trying to capitalise on interest in the emerging technology sparked by the advent of OpenAI’s chatbot ChatGPT in November 2022.

Gemini is not the first image generator to run into such problems recently.

Microsoft had to adjust its own Designer tool several weeks ago after some users exploited it to create deepfake pornographic images of Taylor Swift and other celebrities.

Studies have also shown AI image-generators can amplify racial and gender stereotypes found in their training data, and without filters, they are more likely to show lighter-skinned men when asked to generate a person in various contexts.

Google said it is working to re-enable the image generation in the coming weeks. Pichai said in the memo the company has “been working around the clock” to address “problematic text and image responses in the Gemini app”.

“No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes,” he added.
