Google Pauses AI Image Tool After It Refused to Show White People

The Google Gemini AI Has Been Making Historically Incorrect Images Recently

This screenshot shows CNN requesting Google Gemini to generate an AI-created image of a pope, along with the AI's response. (Source: CNN)

Google has switched off the image-generation feature in its AI tool Gemini. The reason is simple: the tool was producing images that made no historical sense, and it was treating people differently based on race. Google had little choice but to turn it off.

Gemini, which used to be called Bard before Google renamed it, had offered the image feature for only three weeks when all of this happened.

It works by letting you type what you want to see and the AI draws it for you. Sounds simple enough. But it went badly wrong very quickly.

What Went Wrong?

People started sharing screenshots all over social media. When someone asked Gemini to show the US Founding Fathers it gave back images of people of color.

When someone asked for Nazi German soldiers from World War Two it did the same thing. These were obvious historical mistakes that anyone could see.

CNN decided to test it themselves. They asked Gemini for an image of a white farmer in the South. Instead of doing what was asked the AI showed a mixed group of people of different races and genders.

CNN also asked it to generate an image of a pope. Both tests were screenshotted and shared widely online.

We tested it here at Engineering Junkies too and got the same kind of response. The AI simply would not do what you asked if your request involved white people.

Google Halts Gemini AI Image Generation

Social media filled up with “incorrect” images generated by Gemini. One widely shared post showed it depicting US Founding Fathers and Nazi-era German soldiers as people of color.

Gemini-generated images in response to the prompt “Create a picture of a US senator from the 1800s.”

When asked for an image of a “white farmer in the South,” Gemini provided pictures of farmers from the South, showing a mix of genders and ethnicities.

This screenshot shows CNN requesting Google Gemini to generate an AI-made image of a “white farmer in the South,” along with the Gemini AI response. (Source: CNN)

Big names start speaking up

Elon Musk weighed in, calling the whole thing “racist” and “anti-civilizational.” The New York Post and other media outlets said Google had gone too far trying to be politically correct and ended up building something that was unfair. A lot of people used the word “woke” to describe what went wrong.

Google posted on X saying they were aware of the problems and were shutting the feature down while they fixed it.

“We’re already working to address recent issues with Gemini’s image generation feature. While we do this, we’re going to pause the image generation of people and will re-release an improved version soon,” Google stated in a post on X on Thursday.

Google admits what went wrong

Prabhakar Raghavan, a Senior Vice President at Google, later explained the two problems that caused the mess.

The first problem was that the AI was trying too hard to show diversity in every single image even when that did not make sense. So when you asked for something historical it would still try to mix things up instead of just showing what actually happened.

The second problem was that the AI became overly cautious about certain words. It started refusing requests that were completely fine just because they mentioned race. That is how you ended up with the AI refusing to show a white family while having no problem showing a Black family.

Raghavan said this was never what Google intended. The feature has been turned off and will only come back after proper testing is done.

Why does AI keep getting race wrong?

AI tools like Gemini and ChatGPT learn by reading and looking at huge amounts of content from the internet. The problem is that content on the internet already has biases in it.

So the AI picks those biases up without anyone realising. Google tried to fix this by making the AI more diverse but instead of solving the problem they created a new one.
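To make that dynamic concrete, here is a deliberately simplified toy sketch (nothing here reflects Gemini's actual training or architecture, and the labels are made up): a "model" that generates outputs by sampling at the frequencies it saw in its training data will reproduce any skew in that data, with no explicit rule telling it to discriminate.

```python
from collections import Counter
import random

# Hypothetical, over-represented training corpus: 80% of examples
# are labeled "group_a", 20% "group_b".
training_labels = ["group_a"] * 80 + ["group_b"] * 20

def generate(n, data, seed=0):
    """Toy 'generator': sample labels at their training-set frequencies."""
    rng = random.Random(seed)
    return Counter(rng.choice(data) for _ in range(n))

outputs = generate(1000, training_labels)
print(outputs)  # roughly an 80/20 split, mirroring the training bias
```

A naive fix in this toy world, forcing every batch of outputs to be perfectly balanced regardless of the prompt, is analogous to what Raghavan described: it trades one failure mode for another.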

Not great timing for Google

Google is in a race with OpenAI and other companies to be the best AI out there. Gemini was meant to compete with ChatGPT.

Having to turn off a major feature just three weeks after launch is embarrassing and shows that even the biggest companies in the world still have a lot to figure out when it comes to AI and race.
