Google has announced it is temporarily disabling the image generation of people within its Gemini AI tool in some countries. This decision follows criticism surrounding the generation of historically inaccurate images that appeared to subvert conventional gender and racial stereotypes.
In an official statement, Google acknowledged the issue: “We’re already working to address recent issues with Gemini’s image generation feature. While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.”
This action comes less than 24 hours after Google apologized for inaccuracies generated by the AI model. Users requesting images of historical figures, such as Nazi-era German soldiers, encountered results that included non-white individuals. These results sparked online conspiracy theories alleging a deliberate bias against depicting white people.

In light of these issues, Google has disabled Gemini’s ability to generate images of people within the European Union, the UK, and Switzerland. Users requesting such images will receive the following message:
“We are working to improve Gemini’s ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does.”
Google launched its image generation capabilities within Gemini (formerly Bard) earlier this month to compete with similar offerings from OpenAI and Microsoft’s Copilot. Like its competitors, Gemini’s image generation tool creates images based on text prompts.