Meta to Tag AI-Generated Content Across Facebook, Instagram, and Threads

In a post today, Meta announced plans to identify and label AI-generated content on Facebook, Instagram, and Threads, though it cautioned that it is "not yet possible to identify all AI-generated content." The move follows the recent viral spread of pornographic AI deepfakes of singer Taylor Swift on X (formerly Twitter), which drew backlash from fans, lawmakers, and media worldwide. It comes at a critical moment, as Meta faces mounting pressure to manage AI-generated images and manipulated videos ahead of the 2024 US elections.

Nick Clegg, Meta’s president of global affairs, stated that "these are early days for the spread of AI-generated content." He noted that as AI usage grows, "society will engage in debates about how to identify both synthetic and non-synthetic content." Clegg emphasized that Meta will continue to monitor developments, collaborate with industry peers, and engage in discussions with governments and civil society.

Meta says its approach follows industry best practices, developed through its work with the Partnership on AI (PAI) to establish standards for identifying AI-generated content. The company plans to label user-uploaded images across its services when it detects industry-standard indicators of AI generation. Photorealistic images created with Meta AI have carried an "Imagined with AI" label since the image service launched.

Clegg said Meta's current methods represent the cutting edge of what is technically possible. "We're diligently working on classifiers to automatically detect AI-generated content, even in the absence of invisible markers," he added, noting that the company is also working to make invisible watermarks harder to remove or alter.

This announcement is part of Meta’s broader efforts to effectively identify and label AI-generated content, utilizing techniques like invisible watermarks. In July 2023, seven tech companies committed to concrete steps to enhance AI safety, including watermarking, while Google DeepMind launched a beta version of SynthID, which embeds imperceptible digital watermarks directly into images.

Nevertheless, experts caution that digital watermarking, whether visible or hidden, isn't foolproof. Soheil Feizi, a computer science professor at the University of Maryland, pointed out, "we don't have any reliable watermarking at this point — we broke all of them." Feizi and his team demonstrated how easily bad actors can strip watermarks from AI images or falsely add them to human-created ones, causing content to be labeled inaccurately.
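Feizi's team attacked production watermarking schemes, but even a toy scheme illustrates why robustness is hard. The sketch below (Python; purely illustrative, not SynthID's or any vendor's actual method) hides a bit pattern in the least-significant bits of an image's pixels: the mark survives an exact copy, yet a trivial re-quantization of the kind lossy re-encoding performs erases it.

```python
import numpy as np

def embed_lsb(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit string in the least-significant bits of pixel values."""
    flat = image.flatten()  # flatten() always returns a copy
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(image: np.ndarray, n: int) -> np.ndarray:
    """Read back the first n hidden bits."""
    return image.flatten()[:n] & 1

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
mark = rng.integers(0, 2, size=128, dtype=np.uint8)        # 128-bit watermark

marked = embed_lsb(img, mark)
print("intact copy recovers mark:",
      np.array_equal(extract_lsb(marked, 128), mark))  # True

# Zeroing the low bit of every pixel -- roughly what aggressive lossy
# re-encoding does -- leaves recovery no better than a coin flip.
attacked = marked & 0xFE
survival = np.mean(extract_lsb(attacked, 128) == mark)
print(f"bits matching after attack: {survival:.0%}")
```

Real schemes such as SynthID embed marks far more robustly than this, but the underlying tension is the same: the less perceptible a watermark is, the easier it tends to be to destroy.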

Margaret Mitchell, chief ethics scientist at Hugging Face, emphasized that while invisible watermarks are not a definitive solution for identifying AI-generated content, they serve as valuable tools for legitimate creators seeking a form of "nutrition label" for AI content. She highlighted the importance of understanding content provenance: "Knowing the lineage of where content came from and how it evolved is crucial for tracking consent, credit, and compensation."

Mitchell expressed enthusiasm for the potential of watermarking technologies, acknowledging that while parts of AI fall short, the technology as a whole holds promise. "It's essential to recognize the bigger picture amid recent discussions around AI capabilities," she concluded.
