A Reminder About AI-Powered Chatbots
AI-powered chatbots like ChatGPT can confidently present fabricated information, much like a GPS mistakenly directing you to drive through a lake. The reminder comes courtesy of a recent Nieman Lab experiment that tested whether ChatGPT could provide accurate links to articles from news organizations that hold lucrative contracts with OpenAI.
The findings were concerning. Andrew Deck of Nieman Lab asked ChatGPT to generate links to exclusive articles from ten major publishers. Instead of delivering correct URLs, ChatGPT produced entirely fictional links that led to 404 error pages. The AI industry often calls this behavior “hallucinating,” a term that aptly describes a system confidently misrepresenting information. The results mirrored an earlier test involving Business Insider, which yielded similarly incorrect URLs.
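The link-checking half of such a test is straightforward to reproduce. Here is a minimal sketch, assuming Python with the third-party requests library; the URL list is a hypothetical stand-in for whatever links a chatbot returns, and the script simply reports which of them resolve and which come back broken.

```python
import requests

# Hypothetical URLs of the kind a chatbot might return; replace with real output.
chatbot_urls = [
    "https://www.example-news-site.com/2024/05/exclusive-story",
    "https://www.example-news-site.com/2024/06/another-scoop",
]

for url in chatbot_urls:
    try:
        # HEAD keeps the check lightweight; some servers only answer GET.
        response = requests.head(url, allow_redirects=True, timeout=10)
        status = response.status_code
    except requests.RequestException as exc:
        print(f"{url} -> request failed ({exc})")
        continue
    label = "OK" if status < 400 else "BROKEN"
    print(f"{url} -> HTTP {status} [{label}]")
```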
An OpenAI representative acknowledged to Nieman Lab that the company is still developing a system designed to merge conversational AI capabilities with real-time news content, complete with proper attribution and accurate links to source materials. Details on when that system will ship, and how reliable it will be, remain unclear.
Despite these uncertainties, news organizations continue to feed substantial amounts of content into OpenAI's models in exchange for monetary compensation, often sacrificing their integrity in the process. Meanwhile, AI firms train their models on content from publishers that have signed no contract at all. Mustafa Suleyman, head of AI at Microsoft, went so far as to describe anything published on the open web as “freeware,” reinforcing a culture of exploitation. Microsoft’s market valuation stood at $3.36 trillion at the time of this writing.
The key takeaway is clear: if ChatGPT is fabricating URLs, it is just as likely fabricating facts. At its core, generative AI functions as an advanced autocomplete, predicting the next plausible word in a sequence without any true comprehension. When I tested leading chatbots on solving the New York Times Spelling Bee, for example, they struggled significantly. Relying on generative AI for factual information is risky at best.
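To make the “advanced autocomplete” point concrete, here is a toy sketch of next-word prediction in Python. It counts word pairs in a tiny made-up corpus and then greedily emits the most likely next word, over and over. Production models use neural networks over subword tokens at enormous scale, but the core loop is the same: predict the next token, append it, repeat. Nothing in that loop checks whether the output is true.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; a real model trains on trillions of tokens.
corpus = (
    "the cat sat on the mat . the cat saw the dog . "
    "the dog sat on the rug ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def autocomplete(seed, length=8):
    """Greedily emit the statistically most likely next word, repeatedly."""
    words = [seed]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no continuation ever seen in the training data
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("the"))
```

Run it and the output reads fluently enough, yet the program has no idea what a cat or a mat is. Fluency and factuality are separate properties, which is exactly why a system like this can produce a convincing-looking URL that leads nowhere.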