A Year After AI 'Code Red,' Google Faces Backlash Over Gemini: Was It Inevitable? | The AI Beat

All weekend, my social media feed was flooded with screenshots, memes, and headlines that either mocked or criticized Google’s so-called ‘woke’ Gemini AI model.

This uproar followed Google’s admission that it had “missed the mark” by producing historically inaccurate and misleading images. On X (formerly Twitter), users shared screenshots of Gemini’s output, including one that absurdly claimed, “it is not possible to definitively say who negatively impacted society more, Elon tweeting memes or Hitler.” Venture capitalist Marc Andreessen reveled in reposting erroneous and offensive output, alleging the model was “deliberately programmed with the list of people and ideas its creators hate.”

The contrast is stark between the excitement that greeted Gemini's launch in December, when it was positioned as a competitor to GPT-4, and the backlash now. Last year, the New York Times revealed that Google had declared a “code red” following ChatGPT's launch in November 2022, which set off a generative AI boom that threatened to overshadow the tech giant.

Despite having developed much of the technology underlying ChatGPT, Google was cautious about protecting its brand, while startups like OpenAI seemed more willing to accept risk in exchange for rapid growth. In response to ChatGPT’s success, Google CEO Sundar Pichai reportedly became deeply involved in redefining the company’s AI strategy.

The tension between the need to innovate rapidly and the need to maintain user satisfaction likely contributed to the backlash against Gemini. Google had long hesitated to release its more advanced large language models (LLMs) precisely to avoid the kind of fallout it now faces: criticism for inappropriate AI outputs.

This is not Google’s first LLM controversy. In June 2022, engineer Blake Lemoine claimed that LaMDA, Google’s chatbot technology, was sentient. Lemoine, who worked on Google’s Responsible AI team, had been testing LaMDA for signs of discriminatory language; he ultimately leaked transcripts of his conversations with the model, including an exchange in which he asked the AI about its preferred pronouns.

At that time, both Google and DeepMind, then a sibling company under Alphabet, were cautious in the LLM domain. DeepMind planned to release its Sparrow chatbot in a private beta but emphasized that Sparrow was “not immune to making mistakes.”

Startups like OpenAI and Anthropic don’t face the same pressures as Google. As a previous New York Times piece noted, Google has struggled to deploy advanced AI responsibly, and internal discussions acknowledged that smaller companies operate with fewer constraints. Google recognized that it had to enter the fray or risk being left behind.

Now, Google is fully committed to generative AI, yet this commitment brings its own set of challenges. Unlike OpenAI, which doesn’t contend with stockholder demands or vast user bases reliant on its traditional products, Google has a complex web of obligations.

All LLM companies grapple with issues like hallucinations; just last week, ChatGPT produced incoherent responses, and OpenAI issued a statement acknowledging that it was addressing the issue. While those outputs didn’t spark the same controversy as Gemini’s, expectations for Google remain higher: as the established player in the AI field, it faces scrutiny that smaller firms do not.

Achieving a balance that satisfies diverse social, cultural, and political values is virtually impossible. With hallucinations as an inherent part of current AI technology, Google finds itself in a challenging position: navigating the demands of a critical audience while trying to innovate responsibly.
