OpenAI: NY Times 'Manipulated' ChatGPT to File a Lawsuit Against Us

OpenAI has made serious allegations against The New York Times, claiming that the publication hired an individual to “hack” its products in order to gather evidence for a lawsuit against the company. In a court filing, OpenAI contends that the newspaper intentionally exploited a vulnerability in ChatGPT to produce “highly anomalous results,” which it claims were designed to demonstrate how the chatbot generated excerpts from its articles without permission.

According to the motion, the Times allegedly employed “deceptive prompts” intended to manipulate ChatGPT into reproducing portions of its articles, which the newspaper then presented as evidence of copying. OpenAI asserts that the Times has not revealed the prompts or tools used to direct the chatbot, and argues that the individual needed “tens of thousands” of attempts to force the AI to produce verbatim text from its stories. The filing emphasizes, “Normal people do not use OpenAI’s products in this way.”

OpenAI’s filing responds to the lawsuit the Times brought in December, which claimed that ChatGPT had been trained on the newspaper’s articles without authorization and alleged that the chatbot was capable of generating “near-verbatim excerpts” from its content. OpenAI argues that the newspaper is attempting to “frame these undesirable phenomena as typical model behavior.” The filing states that the Times’ examples of purported “training data regurgitation” and “model hallucination” emerged only after what appeared to be extensive and deliberate attempts to manipulate OpenAI’s models.

OpenAI disputes the allegation that ChatGPT is a substitute for a subscription to The New York Times, explaining that in practical use the chatbot cannot be employed to access Times articles at will. Furthermore, OpenAI claims the newspaper did not disclose its findings prior to initiating the lawsuit, suggesting that it “kept these results to itself, apparently to set up this lawsuit.”

The company also contends that the articles cited by the Times were not near-verbatim copies as alleged, but merely similar, and that the Times strategically built its lawsuit around older articles, some published between three and twelve years ago, which have long been available online.

This legal battle raises significant questions about the use of publicly available content for training AI models, as well as the principles of fair use under copyright law. OpenAI has previously stated that it cannot develop advanced AI models without the ability to access copyrighted materials, asserting that it is essential for the progression of AI technology.

OpenAI CEO Sam Altman addressed this matter in January at the World Economic Forum in Davos, asserting that the company does not require data specifically from The Times for training its models. He stated, “Any one particular training source does not move the needle for us that much,” underscoring the broader implications of copyright and data access in the field of artificial intelligence.
