December 24, 2024

Google Pulls Gemini’s AI Image Generator After ‘Woke’ Photos


Google’s Gemini AI is off to a rocky start. Though it’s the most powerful chatbot Google has released to the public, it has run straight into a hurdle that has tripped up many big tech companies: the culture war.

The company announced Thursday that it would pause Gemini’s image generation capabilities after the tool created historically inaccurate images, such as Black Nazi soldiers and Native American U.S. senators. Google acknowledged in a statement that Gemini was “offering inaccuracies in some historical image generation depictions” and said it would re-release the generator with an update in the future.

“We’re working to improve these kinds of depictions immediately,” the company said. “Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

Debarghya Das, a former Google Search engineer, wrote on X that it was “embarrassingly hard to get Google Gemini to acknowledge that white people exist.” He included screenshots in which prompts for American, Australian, British, and German women returned images depicting mostly people of color.

The right-wing account End Wokeness also shared examples from Gemini, including racially diverse images of America’s Founding Fathers, popes, and Vikings. The episode has become yet another arena in the culture war, with largely right-wing users painting the images as evidence that Google is “woke.”

“What’s happening with Google’s woke Gemini AI has been happening for years in Media and Hollywood and everybody who called it out was called racist and shunned from society,” Ashley St. Clair, a writer for The Babylon Bee, wrote on X. Her criticism was even amplified by Elon Musk, who agreed with the assessment.

The historically inaccurate images appear to stem largely from a training and labeling issue on Google’s part. Users found that Gemini would append terms like “diverse” and “various ethnicities” to prompts about people in order to produce ethnically varied sets of images, an attempt to counteract the bias problems that have long plagued image generators. The company appears to have overcorrected, however, injecting those modifiers even into prompts, like depictions of specific historical groups, where they had no business appearing.
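Google has not published how this pipeline works, but the reported behavior is consistent with a rewriting layer that appends demographic modifiers to a prompt before it reaches the image model. The Python sketch below illustrates that failure mode under that assumption; the modifier list, the keyword guard, and every function name here are hypothetical, not Google’s actual implementation.

```python
import random

# Hypothetical modifiers an over-eager safety layer might inject.
DIVERSITY_MODIFIERS = ["diverse", "of various ethnicities"]

# Terms that pin a prompt to a specific historical population,
# where demographic rewriting changes the factual content.
HISTORICALLY_SPECIFIC = ["nazi", "founding father", "pope", "viking", "senator"]

def augment_prompt(prompt: str) -> str:
    """Naive augmentation: unconditionally append a diversity modifier.

    This reproduces the overcorrection described above, since
    historically specific prompts get rewritten along with everything else.
    """
    return f"{prompt}, {random.choice(DIVERSITY_MODIFIERS)}"

def augment_prompt_guarded(prompt: str) -> str:
    """Sketch of the obvious fix: skip augmentation when the prompt
    already constrains who should appear in the image."""
    if any(term in prompt.lower() for term in HISTORICALLY_SPECIFIC):
        return prompt  # leave historically constrained prompts untouched
    return augment_prompt(prompt)

if __name__ == "__main__":
    print(augment_prompt("a Viking chieftain"))          # always rewritten
    print(augment_prompt_guarded("a Viking chieftain"))  # left as-is
    print(augment_prompt_guarded("a software engineer")) # still diversified
```

The guarded version is only a toy: a keyword list cannot capture every historically constrained prompt, which hints at why getting this balance right is harder than it looks.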

In other cases, users found that Gemini would generate images of Black people but refuse requests to generate images of white people specifically. This sparked criticism from right-wing commentators that Google was being racist against white people.

While many have hailed Gemini as an impressive chatbot overall, it clearly still has plenty of kinks to work out. Regardless, the episode is a clear example of how pernicious the problem of bias can be in generative AI, even when the company behind it has the best intentions.
