Can AI Chatbots Be Bound by a Legal Obligation to Tell the Truth? Exploring the Accountability of Artificial Intelligence in Communication

Can we expect artificial intelligence (AI) to tell the truth? Perhaps not entirely, but a team of ethicists argues that AI developers should bear a legal responsibility to reduce the risk of errors. Brent Mittelstadt, an ethicist at the University of Oxford, emphasizes, "We're trying to create an incentive structure that encourages companies to prioritize truthfulness and accuracy when developing large language model (LLM) systems."

Large language models such as ChatGPT generate human-like responses by drawing on vast amounts of text. Yet however convincing they often sound, these models can make serious factual errors, a failure known as "hallucination." Mittelstadt notes, "We have these impressively capable generative AI systems, but they get things wrong frequently, and as far as we understand how they work, there is no fundamental way to fix that." The problem is especially acute as LLMs are integrated into government decision-making, where it is essential that they acknowledge the limits of what they know.

To tackle the problem, the team proposes a specific standard: when asked factual questions, an AI should respond the way a careful, honest person would, in the spirit of "knowing what you know and knowing what you don't." Mittelstadt states, "The key is to take the necessary steps so that they act with caution. If there's uncertainty, they shouldn't fabricate answers just to appear credible. Instead, they should say, 'Hey, you know what? I don't know. Let me research this and get back to you.'"
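In engineering terms, this amounts to having a system abstain when its confidence in an answer is low. The sketch below is a minimal, illustrative example of that behavior; the confidence score, the 0.8 threshold, and the wording of the refusal are hypothetical placeholders, not any vendor's actual mechanism.

```python
# Minimal sketch of "answer only when confident, otherwise abstain".
# The confidence value is assumed to come from the underlying model.
from dataclasses import dataclass


@dataclass
class ModelOutput:
    answer: str
    confidence: float  # hypothetical score in [0, 1]


def respond(output: ModelOutput, threshold: float = 0.8) -> str:
    """Return the model's answer only if its confidence clears the threshold;
    otherwise abstain instead of producing a plausible-sounding guess."""
    if output.confidence >= threshold:
        return output.answer
    return "I don't know. Let me research this and get back to you."


# A low-confidence output becomes an explicit admission of ignorance.
print(respond(ModelOutput(answer="The treaty was signed in 1851.", confidence=0.42)))
```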

While the ambitious goal sounds commendable, Eerke Boiten, a cybersecurity professor at De Montfort University in the UK, questions whether it is technically feasible. Developers are already trying to make AI systems truthful, but so far this has proven so "labor-intensive" as to be unrealistic. Boiten remarks, "I don't understand how they expect laws to enforce this mandate; it's fundamentally impractical."

In light of this, Mittelstadt and his colleagues propose more direct steps to improve AI "honesty." Models should cite their information sources, something some already do to make their answers more verifiable. A technique known as retrieval-augmented generation, in which a model grounds its answers in documents retrieved from a trusted corpus, may further reduce the likelihood of hallucination. Mittelstadt also argues that the deployment of AI in high-risk areas, such as governmental decision-making, should be limited, or at least that the information sources available to these models should be restricted: "If we have a language model used solely in medicine, we should confine it to searching high-quality medical journals."
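As a rough illustration of what restricting sources and citing them might look like, here is a self-contained sketch of a retrieval-augmented pipeline. The toy corpus, the keyword-overlap scoring, and the prompt format are illustrative assumptions, not the method described by Mittelstadt's team or any particular product.

```python
# Toy retrieval-augmented generation (RAG) over a restricted, trusted corpus.
# Hypothetical documents stand in for high-quality medical journals.
TRUSTED_CORPUS = [
    {"source": "medical-journal-A", "text": "Aspirin can reduce the risk of blood clots."},
    {"source": "medical-journal-B", "text": "Ibuprofen is a non-steroidal anti-inflammatory drug."},
]


def retrieve(question: str, corpus: list, top_k: int = 1) -> list:
    """Rank trusted documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(question: str, documents: list) -> str:
    """Ground the answer in the retrieved passages and require source citations."""
    context = "\n".join(f"[{d['source']}] {d['text']}" for d in documents)
    return (
        "Answer using only the passages below and cite the source in brackets.\n"
        f"Passages:\n{context}\n\nQuestion: {question}\nAnswer:"
    )


docs = retrieve("What does aspirin do?", TRUSTED_CORPUS)
print(build_prompt("What does aspirin do?", docs))  # this prompt would then go to the LLM
```

In a real system the keyword overlap would be replaced by embedding-based search, but the principle is the same: the model is only allowed to answer from vetted sources and must point back to them.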

Mittelstadt also emphasizes the importance of a shift in perspective: "If we could move away from the idea that LLMs are good at answering factual questions, or that they will at least give you reliable answers, and instead view them as tools that help you find relevant information, that would be fantastic."

However, Catalina Goanta, an associate professor of law at Utrecht University, argues that the researchers focus too heavily on technology while overlooking the longer-standing problem of dishonesty in public discourse. "Pointing the finger solely at LLMs creates the illusion that humans are impeccable and never make such mistakes. Ask any judge you meet: they have all dealt with attorneys who were negligent, and vice versa. This isn't just a machine problem."
