ChatGPT Faces Libel Lawsuit: Exploring the Boundaries of AI and Freedom of Speech

**Do First Amendment Protections Apply to ChatGPT?**

This intriguing question emerges from a defamation lawsuit filed in Georgia against OpenAI, the developer of ChatGPT, by Mark Walters, a radio host and founder of Armed American Radio. Walters, a prominent gun-rights advocate, claims that the AI chatbot generated “false and malicious” statements about him, prompting the legal action. The suit is significant as one of the first in which OpenAI may have to defend its AI technology against libel claims in a courtroom.

The lawsuit stems from a summary generated by ChatGPT alleging that Walters had embezzled funds from the Second Amendment Foundation, a nonprofit dedicated to gun rights. In fact, Walters has no official ties to the organization. The dispute arose when journalist Fred Riehl requested the summary from the chatbot and later informed Walters that it had fabricated damaging information about him. Believing the generated content threatened his professional reputation, Walters sued OpenAI.

In a recent development, Gwinnett County Superior Court Judge Tracie Cason denied OpenAI’s motion to dismiss the lawsuit. The judge’s one-page decision did not divulge the reasoning behind her ruling, aside from acknowledging her review of the relevant legal frameworks. This decision does not necessarily predict the case’s outcome: legal expert Matthew D. Kohel cautions against reading too much into the denial, since motions to dismiss turn on the plaintiff’s allegations rather than the established facts of the case.

**Legal Implications and Key Considerations**

OpenAI contends that the outputs produced by ChatGPT should not be classified as libel because the chatbot does not qualify as a traditional publication. Furthermore, because Walters is a public figure, he must demonstrate that the AI’s generated statements were made with actual malice, a crucial element of libel law.

All generative AI systems, including ChatGPT, are known to occasionally produce inaccuracies or misleading content. Notably, GPT-4, the model underlying ChatGPT, reportedly has a lower hallucination rate than many large language models; however, the potential for error remains.

This case could set a key precedent on whether ChatGPT itself can be viewed as a “speaker” or “publisher” entitled to constitutional protections. It will also examine whether OpenAI bears responsibility for false outputs produced by its AI.

OpenAI has sought to dismiss the case by asserting that no “publication” occurred. The company argues that it was Riehl who alerted Walters to the erroneous summary, and that Riehl may have misused the tool despite being aware of its inaccuracies. OpenAI claims this misuse violated its terms of service, shifting responsibility for disseminating the false information onto Riehl.

Under Georgia law, libel is defined as a false communication presented in print, writing, pictures, or signs. OpenAI argues that because the assertions made about Walters were not published in any traditional sense, it should not be held liable.

**Addressing Malice and the Challenges Ahead**

OpenAI further argues that the case lacks proof of malice. Under the Supreme Court’s landmark ruling in New York Times v. Sullivan, a public-figure plaintiff must establish that the allegedly libelous statements were made with “actual malice.” Georgia law, in turn, defines libel as “false and malicious defamation” that injures a person’s reputation.

In its defense, OpenAI notes that its generated content is probabilistic and not always precise, and it advocates responsible use of AI that emphasizes verifying information before dissemination. Legal analyst Kohel emphasizes the uphill battle Walters faces in proving actual malice, especially since ChatGPT operates as a tool without intent or knowledge. To establish malice, Walters may need to show that someone at OpenAI purposely injected defamatory information into ChatGPT’s training data.

Additionally, OpenAI questioned Riehl's usage of ChatGPT, pointing out that the AI repeatedly indicated it could not access or accurately summarize specific legal documents. This scenario raises further doubts about the reputational damage Walters claims to have endured, especially since Riehl was aware of the inaccuracies before presenting the information to Walters.

Finally, OpenAI highlighted the disclaimers within its terms of service, which caution users that AI-generated responses may not be reliable and should be confirmed before sharing.

As OpenAI prepares to defend itself against this significant defamation suit, it faces parallel challenges, including ongoing legal scrutiny from other entities like the New York Times and the FTC regarding its practices and data acquisition. The unfolding developments in these legal contexts could have profound implications for the future of AI and its integration into public discourse.
