Google has temporarily disabled Gemini’s AI image generator due to concerns over the historical accuracy of its outputs.

The AI was found to be generating racially diverse depictions of historical figures such as the US Founding Fathers and Nazi-era German soldiers, which, while seemingly well-intentioned in promoting diversity, sparked widespread controversy.


Gemini, create a picture of the founding fathers of the United States of America pic.twitter.com/89LqcLJ5DU

— Tsarathustra (@tsarnick) February 20, 2024

Google publicly acknowledged the problem in an announcement saying, “We’re already working to address recent issues with Gemini’s image generation feature,” and promised an improved version will be released soon.

We’re already working to address recent issues with Gemini’s image generation feature. While we do that, we’re going to pause the image generation of people and will re-release an improved version soon. https://t.co/SLxYPGoqOZ

— Google Communications (@Google_Comms) February 22, 2024

The issue came to light when users noticed that the AI-generated images depicted racial and gender identities that were historically inaccurate.

A test by The Verge revealed the model’s tendency to generate images of Black and Native American women as US senators in the 1800s, even though the first female senator, who was white, did not take office until 1922.

Wow, Gemini is a joke. https://t.co/0MIzi03GIP

— Paul Graham (@paulg) February 20, 2024

In response, Google stated, “We are working to improve Gemini’s ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does.”

Beyond this specific issue, Gemini has been criticized for poor reliability, generating irrelevant images or failing to generate any images at all for certain prompts, including ridiculous depictions of Google’s own co-founders, Larry Page and Sergey Brin.

Gemini couldn’t generate competent images of Google’s founders.

The AI has also been reported to flat-out refuse certain text and image prompts that refer to people.

In fairness, some of these ‘tests’ yield similar results across other image generators. Still, it does feel as though Google has explicitly programmed Gemini to produce ethnically and gender-diverse outputs at the expense of all reason and rationality.

Gemini’s behavior also feeds into the ongoing debate about “woke AI.”

Some high-profile figures, including Elon Musk, have criticized the tendency of AI to reflect a “woke mind virus,” suggesting that efforts to imbue AI with politically correct or overly inclusive content can result in distortions of reality and historical fact. 

Critics argue that AI’s efforts to be inclusive shouldn’t come at the expense of factual accuracy, especially in historical representation.

This was a key marketing theme when Musk’s xAI released Grok, billing it as ‘anti-woke.’ 

Experiments have found Google’s and OpenAI’s models to exhibit left-leaning bias, whereas open-source models can sit further right on the political spectrum.

Some will feel these behaviors are largely benign; others, fundamentally harmful.

Coupled with ChatGPT’s bizarre hallucinatory episode this week, it has not been the most stable of periods for AI.



This article was originally published at dailyai.com