Google's Gemini Image Generator Still Faces Bias Issues: What's Holding Back Fixes?

In February, Google paused its AI chatbot Gemini's ability to generate images of people after users flagged historical inaccuracies. Asked to depict “a Roman legion,” for instance, Gemini showed an anachronistically racially diverse group of soldiers, while it rendered “Zulu warriors” as stereotypically Black.

Google CEO Sundar Pichai apologized, and Demis Hassabis, co-founder of Google’s AI research division DeepMind, asserted that a fix would arrive “in very short order,” ideally within the next few weeks. Yet May has arrived, and the promised fix remains elusive.

At its recent I/O developer conference, Google showcased a range of Gemini features, including custom chatbots, a vacation itinerary planner, and integrations with Google Calendar, Keep, and YouTube Music. But the ability to generate images of people remains disabled in the Gemini apps on both web and mobile, a Google spokesperson confirmed.

So, what’s causing the delay? The problem is likely more complex than Hassabis let on. The datasets used to train image generators like Gemini’s typically contain far more images of white people than of anyone else, and the images of people of other races they do contain often reinforce negative stereotypes. To counteract that skew, Google appears to have hardcoded clumsy adjustments under the hood, and it is now struggling to find a sensible middle path that avoids repeating those mistakes.
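To see why the blunt approach backfires, here is a minimal Python sketch of the kind of context-free prompt rewriting widely believed to lie behind Gemini's anachronistic outputs. The keyword list, modifiers, and function are hypothetical illustrations, not Google's actual code.

```python
import random

# Illustrative only: a context-free prompt rewrite of the sort widely
# believed to cause Gemini's anachronistic images. The keyword list and
# demographic modifiers are hypothetical, not Google's actual code.
DIVERSITY_MODIFIERS = ["South Asian", "Black", "East Asian", "Hispanic", "white"]
PEOPLE_KEYWORDS = ("person", "people", "soldier", "warrior", "legion", "king")

def naive_rewrite(prompt: str) -> str:
    """Append a random demographic modifier whenever a prompt mentions people.

    Because the rewrite never consults historical or cultural context,
    a prompt like 'a Roman legion' can yield demographically implausible
    images, while removing the rewrite simply reverts to the dataset's skew.
    """
    if any(keyword in prompt.lower() for keyword in PEOPLE_KEYWORDS):
        return f"{prompt}, {random.choice(DIVERSITY_MODIFIERS)}"
    return prompt

print(naive_rewrite("a Roman legion marching at dawn"))
# e.g. 'a Roman legion marching at dawn, East Asian'
```

The deeper fix, rebalancing or re-curating the training data so no such rewrite is needed, is far slower and more expensive, which may help explain why a promised few weeks has stretched into months.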

Will Google resolve the issue? It’s uncertain. Either way, the drawn-out saga is a reminder that fixing AI bias is hard, especially when the bias is baked into the training data itself.

