Why AI Is an Overconfident Misleader: Exploring Its Limits and Knowledge Gaps

Trusting AI: The Importance of Justification in AI Outputs

Every month, more than 500 million people turn to Gemini and ChatGPT for information on everything from how to cook pasta to complex homework topics. But if an AI tells you to cook your pasta in petrol, you probably shouldn't take its advice on birth control or algebra, either.

At the World Economic Forum this January, OpenAI CEO Sam Altman emphasized the need for transparency in AI outputs: “I can’t look in your brain to understand your thoughts, but I can ask you to explain your reasoning and determine if it sounds reasonable. I believe our AI systems will be able to do the same.”

Knowledge Requires Justification

Altman aims to instill confidence in large language models (LLMs) like ChatGPT by suggesting they can explain the reasoning behind their outputs. The suggestion matters because justification is what separates knowledge from a lucky guess: without valid justification, a belief cannot count as knowledge. And when do we consider our beliefs justified? Typically, when they are supported by credible evidence, sound arguments, or the testimony of trusted sources.

LLMs are intended to be reliable sources of information. However, without the ability to explain their reasoning, they give us no assurance that their claims meet our criteria for justification. For instance, if you tell me that today’s haze in Tennessee is caused by wildfires in Canada, I might take your word for it. But if you have previously claimed, in earnest, that snake fights are a routine part of dissertation defenses, I know you are not entirely reliable, and I will ask you to explain your reasoning about the haze.

The Limitations of AI Understanding

Today’s AI systems cannot earn our trust through reasoning, as they lack the capacity for it. Instead, LLMs are trained on extensive datasets to detect and predict language patterns. When a prompt is given, the tool generates a response based on these patterns, often mimicking knowledgeable human speech. However, this process does not validate the accuracy or justification of the content. As Hicks, Humphries, and Slater assert in “ChatGPT is Bullshit,” LLMs produce text that appears convincing but lacks an actual concern for truth.
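
To make the pattern-prediction point concrete, here is a deliberately toy sketch in Python. It is a hypothetical bigram model, not how any production LLM is actually built, but it illustrates the core issue: a system that only tracks which words tend to follow which can produce fluent continuations with no regard for whether they are true.

```python
# A toy, hypothetical bigram "language model": it learns which word tends to
# follow which, and nothing else. Real LLMs are vastly larger and more
# sophisticated, but the point stands: generation is driven by patterns,
# not by any check of whether the resulting claim is true.
import random
from collections import defaultdict

# Tiny "training data" containing both sensible and nonsensical advice.
corpus = (
    "cook pasta in boiling salted water "
    "cook pasta in olive oil and garlic "
    "cook pasta in petrol"  # the false claim is absorbed like any other pattern
).split()

# Record, for each word, the words observed to follow it.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def continue_text(prompt_word: str, length: int = 4) -> str:
    """Extend a one-word prompt by repeatedly sampling a plausible next word."""
    words = [prompt_word]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break  # no observed continuation; stop generating
        words.append(random.choice(options))
    return " ".join(words)

# May print "cook pasta in boiling salted" or "cook pasta in petrol":
# both look equally fluent to the model, which has no way to tell them apart.
print(continue_text("cook"))
```

Because the petrol sentence sits in the toy model's training text alongside the sensible ones, it is treated like any other pattern; fluency, not truth, is the only thing the model has learned.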

If AI-generated content is not the equivalent of human knowledge, what is it? It may seem unfair to dismiss all of it as “bullshit,” since many LLM responses are in fact true. But true outputs produced this way resemble what philosophers call Gettier cases, after philosopher Edmund Gettier: situations in which someone holds a true belief without the kind of justification that would make it knowledge.

AI Outputs as Illusions

To illustrate this, consider a scenario described by the 8th-century Indian Buddhist philosopher Dharmottara: imagine a group of travelers searching for water on a scorching day. They spot what looks like water in the distance, but it turns out to be a mirage. When they reach the spot, however, they find real water hidden under a rock. Did they genuinely know there was water there?

Most would agree the travelers did not know; they simply got lucky. The mirage that drew them to the spot had no connection to the water actually waiting under the rock, so their true belief was not backed by any genuine justification.

When we claim to know something we learned from an LLM, we put ourselves in a position much like that of Dharmottara’s travelers. If the LLM was trained well, its outputs will usually be true, just as the water was really there where the travelers expected it. The evidence that would justify those outputs likely exists somewhere in the training data, just as the water really lay under the rock. But that justification, like the hidden water, played no role in producing the output we actually received.

Altman’s reassurance is therefore misleading. Ask an LLM to justify its output and it will comply, but what it produces is a convincing yet superficial account: a “Gettier justification,” as Hicks and colleagues describe it, something that mimics justification without resting on any real foundation.

The Risk of Misleading Justifications

Today’s AI systems still misstate or “hallucinate” facts often enough that the illusion regularly breaks down. But as the illusion of justification becomes more convincing, one of two outcomes awaits us:

1. Informed Users: Those aware of AI’s inherent limitations will recognize that an LLM’s confident assertions are not backed by genuine justification and will treat them with appropriate skepticism.

2. Unaware Users: Those who don’t understand how AI outputs are produced will be misled, left in a position where distinguishing fact from fiction becomes increasingly difficult.

The Need for Justification in AI Outputs

While LLMs serve as powerful tools, they generate outputs that require scrutiny. Users, especially those without expertise, rely on AI for critical knowledge—teens seeking help with algebra or advice on safe sex. To ensure accountability and trust in AI outputs, we must understand the justification behind each claim.

Fortunately, seasoned cooks know that olive oil is a better choice than petrol for spaghetti. But how many potentially harmful recipes for reality have we already accepted from AI without ever asking for their justification?

Contributors:

Hunter Kallay, PhD Student in Philosophy, University of Tennessee

Kristina Gehrman, PhD, Associate Professor of Philosophy, University of Tennessee
