Experts Advocate for Legal ‘Safe Harbor’ to Allow Researchers, Journalists, and Artists to Evaluate AI Technologies

A recent paper by 23 AI researchers, academics, and creatives argues that establishing “safe harbor” legal and technical protections is crucial for enabling independent evaluation of AI products. Such protections, the authors contend, are necessary for researchers, journalists, and artists to conduct “good-faith” investigations of these technologies.

The paper highlights a significant barrier: terms of service from major AI providers—including OpenAI, Google, Anthropic, Inflection, Meta, and Midjourney—often legally restrict research on AI vulnerabilities. The authors urge these companies to support public interest AI research by offering indemnities against account suspensions and legal repercussions.

“While these terms are intended to deter malicious use, they also inadvertently hinder AI safety research,” the accompanying blog post notes. Companies may enforce these policies strictly, chilling impactful research in the process.

Co-authors Shayne Longpre of the MIT Media Lab and Sayash Kapoor of Princeton University emphasized the issue’s urgency in light of a recent New York Times lawsuit, in which OpenAI labeled the Times’ evaluation of ChatGPT as “hacking.” The Times’ lead lawyer countered that OpenAI’s interpretation mischaracterizes legitimate investigation as malicious activity.

Longpre referenced earlier advocacy by the Knight First Amendment Institute aimed at protecting journalists investigating social media. “We aimed to learn from that initiative when proposing a safe harbor for AI research,” he explained. “With AI, there’s a lack of clarity on usage and associated harms, making research access vital.”

The paper, titled “A Safe Harbor for AI Evaluation and Red Teaming,” documents account suspensions that occurred during public interest research at companies including OpenAI, Anthropic, Inflection, and Midjourney, with Midjourney standing out for the frequency of its bans. Co-author and artist Reid Southen was banned multiple times after investigating potential copyright infringement in Midjourney’s outputs, finding that the platform could produce infringing images even when users did not intend to infringe.

“Midjourney banned me three times at a personal cost of nearly $300,” Southen recounted. “The first suspension came within eight hours of my results posting, and they later revised their terms without notifying users, shifting blame for infringing content.”

Southen argues that independent evaluation is essential because the companies have demonstrated an unwillingness to assess themselves, harming copyright owners in the process.

Transparency is central to these discussions. Longpre stressed that independent researchers should be able to investigate the capabilities and flaws of AI products, provided they can demonstrate responsible use, and expressed a desire to collaborate with companies to improve transparency and enhance safety.

Kapoor acknowledged that companies have legitimate concerns about misuse of their services, but argued that policies should not treat malicious users and researchers conducting critical safety work identically. He noted ongoing conversations with the affected companies, stating, “Most have engaged with our proposal, and some have begun to adapt their terms—particularly OpenAI, which modified their language following our initial draft.”

Overall, the paper advocates for a balanced approach that promotes AI safety and supports research through clear, protective measures.
