Meta is not the only company facing challenges due to the surge in AI-generated content. YouTube has also implemented a significant policy change, quietly introduced in June, allowing individuals to request the removal of AI-generated or synthetic content that mimics their face or voice. This development is part of YouTube’s ongoing commitment to responsible AI practices, building on its initial framework announced last November.
Under the new policy, rather than reporting misleading content like deepfakes under misinformation rules, individuals can request the removal of AI-generated material that simulates their likeness as a violation of their privacy. YouTube's updated Help documentation specifies that these requests must come from the affected parties themselves, with some exceptions for minors, individuals without access to a computer, and the deceased.
However, submitting a removal request does not guarantee the content will be taken down. YouTube emphasizes that it will evaluate each complaint based on several key factors: whether the content is clearly labeled as synthetic or AI-generated, whether it uniquely identifies an individual, and whether it could be considered parody, satire, or otherwise valuable content in the public interest. Additionally, content showing public figures or other well-known individuals engaging in "sensitive behavior," such as criminal activity or endorsing products or political candidates, will be scrutinized more closely, particularly during election seasons, when AI-generated endorsements could sway public opinion.
Once a complaint is lodged, YouTube gives the content creator a 48-hour window to respond. If the creator removes the content within that period, the complaint is resolved; if not, YouTube begins its review process. Users should note that removal means taking the video down entirely, as well as scrubbing the individual's name and personal details from the title, description, and tags. Creators may blur faces instead, but simply making the video private won't satisfy a removal request, since the video could be reverted to public status at any time.
Though the policy shift wasn’t broadly publicized, YouTube previously introduced a Creator Studio tool in March to help creators disclose when content is generated using altered or synthetic media, including generative AI. More recently, the platform has tested a feature allowing users to add crowdsourced notes, providing context about whether videos are parodies or misleading.
YouTube is not opposed to AI; it has begun experimenting with generative AI itself, for features like comment summarization and interactive video recommendations. However, the company has made it clear that simply labeling content as AI-generated does not exempt it from removal; all content must still adhere to YouTube's Community Guidelines.
Regarding privacy complaints related to AI-generated material, YouTube assures creators that receiving such complaints won’t lead to automatic penalties. A company representative stated last month, "Keep in mind that privacy violations are distinct from Community Guidelines strikes; receiving a privacy complaint will not automatically result in a strike." In essence, YouTube’s Privacy Guidelines differ from its Community Guidelines, so content may be removed due to a privacy request even if it doesn’t breach the Community Guidelines. While no penalties like upload restrictions are applied for video removals in response to privacy complaints, YouTube warns that accounts with repeated offenses may face consequences.
Updated on 7/1/24, 4:17 p.m. ET with additional details regarding actions YouTube may take for privacy violations.