Anthropic Responds to Music Publishers in AI Copyright Lawsuit, Alleging 'Volitional Conduct'

Anthropic, a leading generative AI startup, has filed a court response challenging copyright infringement claims brought by a coalition of music publishers, including Concord, Universal, and ABKCO, over its chatbot Claude (since succeeded by Claude 2).

In the fall of 2023, these music publishers filed a federal lawsuit in Tennessee, alleging that Anthropic unlawfully scraped song lyrics from the internet to train its AI models and that Claude reproduces copyrighted lyrics in its responses.

Responding to a motion for a preliminary injunction—which would compel Anthropic to pull Claude from availability—Anthropic reiterated arguments common across other AI copyright disputes. Generative AI companies such as Anthropic and OpenAI assert that their use of publicly available data, including copyrighted material, falls under fair use, a legal doctrine that may soon be tested in the Supreme Court.

Minimal Impact of Lyrics in Training Data

Anthropic contends that its use of the plaintiffs' song lyrics is "transformative," adding new purpose and character to the original works. Citing research director Jared Kaplan, the filing emphasizes that their goal is to build a dataset that educates a neural network on human language.

The company further argues that song lyrics make up a "minuscule fraction" of its overall training data, and that licensing the vast quantity of text required to train models like Claude would be financially unfeasible. It contends that comprehensive licensing for trillions of data snippets across countless genres is impractical for any entity.

A distinctive element of Anthropic's filing is its invocation of "volitional conduct," a requirement for direct infringement liability. Anthropic argues that it is the plaintiffs, not the company, who engaged in the volitional conduct: by deliberately crafting prompts to elicit lyrics from Claude, the music publishers themselves controlled the generation of the allegedly infringing content, shifting responsibility away from Anthropic.

Disputing Irreparable Harm

In addition to challenging copyright liability, Anthropic argues that the plaintiffs have failed to demonstrate irreparable harm. It asserts that there is no evidence of decreased licensing revenues since Claude's launch, noting that the publishers themselves appear to believe monetary compensation could remedy any alleged harm—a position that undercuts their claims of irreparable damage.

Anthropic argues that the request for an injunction against its AI models is unjustified given the weak evidence of irreparable harm. They also state that any previously generated lyrical outputs were an unintended "bug," which has been resolved with new safeguards implemented to prevent future occurrences.

The company also calls the plaintiffs' request overly broad: the proposed injunction would cover not only the 500 works specifically cited in the case but millions of additional works the publishers claim to control.

Anthropic also contests the venue, stating that the lawsuit is improperly filed in Tennessee, as their operations are based in California, and none of the alleged activities occurred within Tennessee. Their terms of service stipulate that disputes would be litigated in California courts.

The Ongoing Copyright Battle

The copyright struggle in the evolving generative AI sector is intensifying. Numerous artists have joined lawsuits against image-generation tools such as Midjourney and OpenAI's DALL-E, alleging copyright infringement. Recently, The New York Times filed its own suit against OpenAI and Microsoft for using its content without permission to train AI models, seeking billions in damages and demanding the destruction of any AI models trained on its material.

In response to these tensions, a nonprofit group called "Fairly Trained" has emerged, advocating for a licensed data certification for AI training materials, with support from music publishers such as Concord and Universal.

Additionally, major companies, including Anthropic, Google, and OpenAI, have promised legal safeguards for enterprise users of AI-generated content. With creators remaining resolute in their legal battles, including Sarah Silverman's case against OpenAI, the courts will need to weigh technological advancement against statutory rights.

As regulatory scrutiny of data mining practices intensifies, the outcomes of ongoing lawsuits and Congressional hearings may shape the future of copyright protections in the context of generative AI. The next steps remain uncertain, but Anthropic's recent filing suggests that generative AI companies are rallying around a common set of fair use and harm-based defenses. So far, no copyright plaintiff has won a preliminary injunction in such an AI dispute, and Anthropic aims to keep it that way as this legal saga unfolds.