OpenAI is actively pursuing licensing agreements with major media outlets, including CNN, Fox News, and Time, to gain access to copyrighted content. According to Bloomberg, these discussions are critical for the continued development of its AI models, particularly in light of the company's recent claim that training models without copyrighted content would be "impossible."
The tech company is seeking permission to integrate articles and news snippets from CNN, which is part of Warner Bros. Discovery, into its ChatGPT offerings. This approach mirrors OpenAI's previous collaboration with Axel Springer, which covers popular brands such as Politico and Business Insider. Similar negotiations are underway with Fox, which could extend licensing to video and image content for use in OpenAI's products.
Conversations have also been reported between OpenAI and notable media organizations, including News Corp, Gannett, and IAC, focusing on potential licensing agreements. Guardian News & Media, the parent company of The Guardian, has indicated that discussions may evolve into formal negotiations concerning the usage of their journalism to enhance OpenAI's offerings.
The urgency behind these interactions follows the New York Times' lawsuit against OpenAI, alleging copyright infringement. The suit claims that ChatGPT was trained on copyrighted articles, allowing it to reproduce nearly identical content. In response, OpenAI maintains that its practices fall under fair use and argues that the New York Times constructed prompts specifically designed to elicit near-exact reproductions of its articles.
Securing these licensing deals would enable OpenAI to train its AI models without the threat of legal repercussions. For instance, under its agreement with Axel Springer, articles are included in ChatGPT responses with appropriate attribution and links to the original sources. OpenAI has established a similar arrangement with the Associated Press.
In another significant development, OpenAI has revised its usage policies to eliminate explicit references to military applications. Previously, the company's guidelines listed "disallowed usages" that included military and warfare applications, among others. The new framework introduces four general principles, referred to as "universal policies," designed to apply across all OpenAI services, including ChatGPT:
1. Comply with applicable laws.
2. Do not use our services to harm yourself or others.
3. Avoid repurposing or distributing output that may cause harm.
4. Respect our safeguards.
The specific mention of military activities has been removed. The closest remaining reference falls under the principle of not harming oneself or others, which states that users must not use OpenAI services to develop weapons or to jeopardize the security of any service or system.
The policy update, issued on January 10, aims to enhance clarity, facilitate open discussion about responsible usage in sensitive areas, and provide guidance tailored to specific services. An OpenAI spokesperson confirmed that the new framework still prohibits the use of the company's tools for harmful purposes, including weapons development and communications surveillance. However, the spokesperson acknowledged the potential for beneficial national security applications that align with OpenAI's mission, citing its ongoing collaboration with DARPA to create cybersecurity tools that protect critical infrastructure and industry.