YouTube Updates Policies to Prepare for the Rising Wave of AI Content

YouTube has unveiled its strategy for managing AI-generated content on its platform, introducing new policies focused on responsible disclosure and tools for removing deepfakes. The company emphasizes that while it already has rules against manipulated media, the rise of AI technology necessitated additional guidelines. These measures aim to keep viewers from being misled by videos they don't realize have been “altered or synthetically created.”

A key change involves new disclosure requirements for YouTube creators. They must now indicate when they produce altered or synthetic content that appears realistic, including videos created using AI tools. For example, if a creator posts a video that falsely represents a real-world event or depicts an individual saying or doing something they did not, this disclosure must be made.

It's important to note that this requirement is strictly for content that "appears realistic," and does not apply universally to all AI-generated videos.

“We want viewers to have clarity when engaging with realistic content, especially regarding the use of AI tools or other synthetic modifications,” said YouTube spokesperson Jack Malon. “This is particularly crucial when discussing sensitive issues, such as elections or ongoing conflicts.”

YouTube is also venturing into AI technology itself. In September, the company announced an upcoming generative AI feature called Dream Screen, set to launch early next year. The feature will let users create AI-generated video or image backgrounds by entering a text prompt describing what they want. All generative AI features will be labeled as altered or synthetic when they launch.

Creators who fail to consistently disclose their AI usage risk content removal, suspension from the YouTube Partner Program, or other penalties. YouTube has committed to working with creators so they understand the requirements before they take effect. However, it warns that even labeled AI content may be removed if it portrays “realistic violence” intended to shock or disturb audiences, pointing to the potential for confusion around current events like the Israel-Hamas conflict.

Interestingly, YouTube's approach to enforcement follows a recent relaxation of its strike policy. In late August, the company introduced new options for creators to erase warnings before they escalate to strikes, which could otherwise lead to channel removals. This shift could embolden some creators to test YouTube's rules, knowing they can take risks without losing their channels entirely.

If YouTube adopts a lenient approach towards AI, allowing creators to “make mistakes” and continue posting, the potential for misinformation to spread increases. The specifics on what constitutes “consistent” violations of AI disclosure guidelines before penalties are enforced remain unclear.

Additional changes include a tool that allows users to request the removal of AI-generated or altered content that simulates identifiable individuals, known as deepfakes. However, not every flagged piece of content will be removed; parody and satire may be exempt. The company says that whether the person being impersonated is uniquely identifiable will factor into removal decisions, particularly in the case of public officials or well-known figures.

YouTube is also implementing a mechanism for music partners to request the removal of AI-generated music that mimics an artist's voice. This initiative is a preliminary step toward a more comprehensive system that will eventually compensate artists for AI-generated music. Notably, such content may still be allowed to remain online when it is the subject of news reporting, analysis, or critique.

YouTube is integrating AI into various aspects of its operations, enhancing the efforts of its 20,000 global content reviewers and identifying emerging threats. The company acknowledges that malicious actors may attempt to circumvent its policies, and it is committed to evolving its protections based on user feedback.

“We are just beginning to explore the potential of generative AI to foster innovation and creativity on YouTube. We’re enthusiastic about this technology's possibilities and understand that its impact will resonate throughout the creative industries for years,” stated Jennifer Flannery O’Connor and Emily Moxley, YouTube's VPs of Product Management. “Our focus is on finding a balance between these advancements and ensuring community safety during this critical juncture, and we will collaborate closely with creators, artists, and others within the creative sector to shape a future that benefits everyone.”
