Generative AI and the Speech Debate: Legal Perspectives and Risks
Generative AI is increasingly sophisticated, producing content that is expressive, informative, and often persuasive.
Some U.S. legal experts now suggest, controversially, that outputs from large language models (LLMs) may be protected under the First Amendment. That protection could extend even to potentially harmful content, placing such outputs largely beyond government oversight.
The Call for Regulation
Peter Salib, an assistant professor of law at the University of Houston Law Center, argues that effective regulation is essential to avoid catastrophic outcomes from AI-generated content. His analysis is forthcoming in the Washington University Law Review.
“Protected speech is a sacrosanct constitutional category,” Salib explains. If the outputs of advanced models such as GPT-5 were deemed protected speech, he warns, our ability to regulate these systems would be severely constrained.
Arguments Supporting AI as Protected Speech
A year ago, legal journalist Benjamin Wittes claimed that we may have given “First Amendment rights” to machines like ChatGPT. He contended that these AI systems are “undeniably expressive” and produce speech-like content by generating text, images, and engaging in dialogues.
Wittes observed that while AI content may be derivative, the First Amendment protects expression, not originality. According to Salib, a growing number of scholars accept that AI outputs resemble speech closely enough to warrant protection.
Some argue that these outputs are the protected speech of the programmers who built the systems; others contend they belong to the corporations that own systems like ChatGPT. Salib disagrees: “AI outputs do not represent communications from any speaker with First Amendment rights; they are not human expressions.”
Emerging Risks of AI Outputs
AI is advancing rapidly, producing far more capable systems that are deployed in increasingly autonomous and unpredictable ways. Salib, who advises the Center for AI Safety, warns that generative AI could create severe risks, including the synthesis of deadly chemical weapons and attacks on critical infrastructure.
“There is strong empirical evidence that near-future generative AI systems will pose significant risks to human life,” he states, foreseeing potential scenarios like bioterrorism and automated political assassinations.
AI: Speech but Not Human Speech
Governments around the world are beginning to enact regulations aimed at ensuring AI operates safely and ethically. Such laws could require AI systems not to generate harmful content, which could be construed as censorship of speech.
Salib points out that if AI outputs are protected speech, any regulation of them would face strict constitutional scrutiny, permitting restrictions only where the speech poses an imminent threat.
He emphasizes the distinction between AI outputs and human speech. Traditional software typically reflects its designer’s intent, but generative AI is built to produce any kind of content, without a specific message its creators intend to convey.
Corporate Speech Rights and Regulation
Corporations, though not human, possess derivative speech rights, which Salib argues depend on the rights of the humans who create them. If LLM outputs are not protected speech in the first place, it would be illogical for them to gain protection merely because a corporation disseminates them.
To mitigate risks effectively, Salib advocates regulating AI outputs rather than restricting the systems’ generative processes, since current LLMs are too complex and unpredictable to be programmed in advance to avoid every harmful output.
Successful regulations should focus on the nature of AI outputs—determining whether a model’s content is safe enough for release. This approach would incentivize AI companies to prioritize safety in development.
Ultimately, “laws must be designed to prevent people from being deceived or harmed,” Salib emphasizes, indicating the critical need for careful legislative action in the face of advancing AI capabilities.