Google AI's Latest Blunder: Recommending Stone Consumption for Nutritional Benefits

In mid-May, at its annual developer conference, Google unveiled a major shift in search: the integration of its latest AI model into results, a move aimed at competing with Microsoft and OpenAI. The rollout of the new feature, known as "AI Overview," quickly backfired, serving users ridiculous suggestions such as using glue to keep cheese on pizza and eating stones for their nutritional value. These nonsensical responses not only embarrassed Google but also ignited widespread controversy.

A Google spokesperson explained that the company is using these "isolated" incidents to improve its system. This isn't the first criticism faced by Google's AI; last year, the launch of its chatbot Bard encountered backlash due to a factual error in its demo, significantly affecting the company's market value. Recently, issues arose with Gemini AI, which struggled to accurately depict individuals of specific ethnicities and misrepresented historical figures.

Commentators have noted that "AI Overview" finds itself in a precarious situation. Previously, when the AI generated incorrect information, responsibility could be shifted to the websites it referenced. Now, Google must take ownership of false or misleading content produced by its AI. Critics have accused "AI Overview" of large-scale plagiarism, as it often extracts and modifies content from various sources without proper attribution.

Upon its debut, "AI Overview" was mocked for its bizarre suggestions. For instance, when users asked how to keep cheese from sliding off pizza, the AI suggested adding glue, complete with misguided instructions. Glue might make cheese stick in a narrow mechanical sense, but it is inedible; the answer is a textbook case of the AI "hallucinating," that is, producing confident but nonsensical information. Other notable errors included recommending a daily intake of stones for vitamins and endorsing the mixing of bleach with vinegar, which produces harmful chlorine gas. Most disturbingly, when users expressed distress, the AI suggested jumping off the Golden Gate Bridge, and it dangerously claimed it was safe to leave a dog in a hot car.

Reports indicate that the "AI Overview" feature combines content generated by Google's Gemini model with real-time web snippets. While it can provide citations, it struggles to assess the accuracy of source material. The glue suggestion reportedly originated from a humorous remark on Reddit over a decade ago. Given these absurd outputs, some experts argue that Google should consider disabling "AI Overview" until a thorough assessment can verify its reliability.

Google maintains that most results generated by "AI Overview" are of high quality and link to reliable sources. The company claims that many of the erroneous examples circulating online are outliers or manipulated, and says it is moving quickly to remove problematic responses and to make broader improvements based on the isolated cases it identifies.

Sundar Pichai, Google’s CEO, acknowledged the "hallucinations" from "AI Overview," linking them to inherent flaws in large language models, the backbone of the feature. Pichai admitted the lack of a straightforward solution, a common challenge for many AI products. Despite these setbacks, he emphasized that they do not negate the tool's potential usefulness, claiming improvements in factual accuracy, though he recognized that the issue is not fully resolved.
Beyond the accuracy problem, experts point out that Google's model is redirecting user traffic away from the websites that provide its source content, diminishing their visibility in search results.

Gary Marcus, an AI expert and professor emeritus at NYU, raised concerns that many AI companies are merely "selling dreams," aiming to convince users that accuracy can improve drastically. He underlined that while achieving an 80% accuracy rate is relatively easy, bridging that final 20% presents significant challenges.

When Google launched "AI Overview," it anticipated that the feature would reach over a billion users by year's end as it expanded globally. As Professor Marcus highlighted, however, closing that last 20% of accuracy will be crucial to the service's long-term success. Since the introduction of OpenAI's ChatGPT in late 2022, Google has faced mounting pressure to build AI into its search technology, even as it grapples with the complexity of managing large language models, which learn from vast amounts of data rather than through traditional, explicitly programmed rules.
