It’s a lesson drilled in from the first essay written in school: always check your sources. New York attorney Steven Schwartz learned it the hard way when he, his associate Peter LoDuca, and their law firm Levidow, Levidow & Oberman were fined $5,000 for relying on ChatGPT to find and verify legal precedents. As reported, Schwartz used the AI tool in a case against Colombian airline Avianca, in which a plaintiff claimed he was injured during a flight to New York City. ChatGPT generated six supposed precedents, including "Martinez v. Delta Airlines" and "Miller v. United Airlines," that contained inaccuracies or were fabricated outright.
Judge P. Kevin Castel explained in the ruling that while there is nothing improper about using a reliable artificial intelligence tool for assistance, attorneys remain responsible for the accuracy of their submissions. Failing to verify ChatGPT's claims breached that professional duty, particularly because the attorneys continued to stand by the citations even after the court raised concerns about whether the cases existed.
Such inaccuracies from AI tools are increasingly common. The National Eating Disorders Association's chatbot, for example, mistakenly offered dieting advice to individuals recovering from eating disorders. Similarly, ChatGPT falsely accused a law professor of sexual harassment, citing a Washington Post article that never existed. These examples underscore the importance of independently verifying AI output, no matter how advanced the technology appears.