Google has temporarily stopped its AI tool Gemini, previously known as "Bard," from creating images of people following criticism on social media. The tool was criticized for generating historically inaccurate images, predominantly portraying people of color in place of white people.
Just three weeks ago, Gemini unveiled a new image generation feature powered by an AI model called Imagen 2, letting users create images of people from text prompts.
However, it soon became evident that the tool was producing unintended and sometimes offensive images.
The embarrassing misstep highlights the ongoing challenge AI tools face in handling race. Google's attempt to correct for bias appears to have overshot, leaving the chatbot reluctant to produce images of white people at all.
On X, one user alleged that Google shows bias against white individuals. When the user asked Gemini to create an image of a Black family, the tool complied. Yet when they requested a picture of a white family, Gemini said it could not generate images that specify race or ethnicity.
When Engineering Junkies asked the tool to generate a comparable image, we received a similar message.
Social media filled with examples of historically "incorrect" images generated by Gemini. One widely shared post showed the tool depicting the US Founding Fathers and Nazi-era German soldiers as people of color.
When asked for an image of a "white farmer in the South," Gemini instead returned pictures of farmers in the South spanning a mix of genders and ethnicities.
Gemini, like AI tools such as ChatGPT, learns from vast amounts of online data. Experts have long warned that such tools can inadvertently mirror the racial and gender biases present in the data they are trained on.
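As a rough intuition for why this happens, consider the minimal sketch below. It is a toy illustration, not Gemini's actual pipeline: a "model" that simply samples from its training data will reproduce whatever imbalance that data contains, and a heavy-handed correction can just as easily overshoot in the other direction.

```python
# Toy illustration (not Gemini's actual pipeline): a model that samples
# from its training data reproduces whatever imbalance the data contains.
import random
from collections import Counter

# Hypothetical, deliberately skewed "training set" of image labels.
training_labels = ["group_a"] * 900 + ["group_b"] * 100

def naive_generate(n):
    """Generate n outputs by sampling the training distribution."""
    return [random.choice(training_labels) for _ in range(n)]

print(Counter(naive_generate(1000)))
# Roughly {'group_a': ~900, 'group_b': ~100}: the 9:1 skew in the data
# shows up directly in the outputs. An over-aggressive correction for
# this skew can overshoot in the other direction instead.
```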
"We're already working to address recent issues with Gemini's image generation feature. While we do this, we're going to pause the image generation of people and will re-release an improved version soon," Google said in a post on X on Thursday.
The debate over Gemini’s image generation feature escalated when notable figures like Elon Musk labeled the tool’s mistakes as “racist and anti-civilizational.” The New York Post and other critics accused Google of being excessively “Woke.”
This incident is another setback for Google as it races against OpenAI and other players in the fast-moving field of generative AI.
Prabhakar Raghavan, Google’s Senior Vice President, acknowledged the error and thanked users for their feedback. In a statement on Friday, he outlined two main issues behind the inaccurate results.
First, Gemini's tuning toward showing a diverse range of people failed to account for prompts where such a range is clearly inappropriate. Second, the model grew more cautious than intended over time, refusing to answer certain prompts entirely, even innocuous ones.
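The second failure mode is easy to picture with a toy example. The sketch below uses hypothetical sensitivity scores and a hypothetical threshold, not Google's actual filtering logic, to show how a safety filter tuned too aggressively ends up refusing benign prompts alongside genuinely problematic ones.

```python
# Toy sketch of an over-tuned safety filter (hypothetical values;
# not Google's actual moderation logic).

# Imagined "sensitivity scores" a content filter might assign.
prompt_scores = {
    "image of a white family": 0.35,    # harmless, but mentions ethnicity
    "image of a Black family": 0.35,    # equally harmless
    "violent propaganda poster": 0.95,  # should be blocked
}

THRESHOLD = 0.3  # set too low: even benign prompts get refused

for prompt, score in prompt_scores.items():
    verdict = "REFUSE" if score > THRESHOLD else "GENERATE"
    print(f"{verdict}: {prompt} (score {score})")
# With THRESHOLD = 0.3, all three prompts are refused; raising it to
# around 0.6 would allow the benign prompts while still blocking the last.
```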
Raghavan stressed that Google never intended Gemini to refuse to create images of any particular group, or to generate historically inaccurate ones. To address the issues, Google has temporarily disabled the image generation of people and pledged to improve the feature significantly, through rigorous testing for accuracy and appropriateness, before reactivating it.
Gemini, as a creativity and productivity tool, requires ongoing refinement. Raghavan cautioned that, like any large AI model, Gemini might occasionally produce embarrassing, inaccurate, or offensive results. Google aims to swiftly address such issues as they arise.
In summary, Google's handling of the Gemini image generation issue highlights the ongoing challenges of building and deploying AI: models need constant refinement, biases must be actively managed, and the ethical concerns raised by these advancing technologies must be addressed.