Coalition of Tech Giants Unite to Combat AI Deepfakes in 2024 Elections
A coalition of 20 major tech companies has signed an agreement to address the threat of AI deepfakes during the 2024 elections across more than 40 countries. Among the companies committed to this initiative are OpenAI, Google, Meta, Amazon, Adobe, and X. However, the agreement's vague language and lack of binding enforcement raise concerns about its effectiveness.
The signatories of the "Tech Accord to Combat Deceptive Use of AI in 2024 Elections" include key players in AI model development and distribution, as well as social media platforms where deepfakes are most likely to emerge. The full list comprises Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic, and X.
The accord establishes commitments aimed at countering AI-generated content designed to mislead voters, including:
1. Developing technology to mitigate risks associated with deceptive AI election content, featuring open-source tools when applicable.
2. Assessing models to determine potential risks concerning deceptive AI election content.
3. Detecting the distribution of deceptive content on their platforms.
4. Addressing detected harmful content appropriately.
5. Promoting cross-industry resilience against deceptive AI election content.
6. Ensuring transparency regarding how companies manage such content.
7. Engaging with global civil society organizations and academics.
8. Enhancing public awareness and media literacy to foster resilience against misinformation.
This accord addresses AI-generated audio, video, and images that misrepresent political candidates or provide false voting information. The participating companies plan to collaborate on tools to identify and combat deepfake distribution while also launching educational campaigns to enhance transparency for users.
OpenAI has previously announced its commitment to combating election-related misinformation globally. The company will embed a digital watermark in images generated by its DALL-E 3 tool, clearly identifying them as AI-generated. OpenAI also intends to collaborate with journalists and researchers to refine its content verification systems.
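Provenance watermarks of this kind are typically carried as metadata embedded alongside the image; C2PA content credentials, for example, are stored in JUMBF boxes inside JPEG files. As a rough illustration only, and not OpenAI's actual verification method, the sketch below scans a file's raw bytes for the ASCII JUMBF box type and C2PA manifest label. Real verification requires parsing the manifest and cryptographically validating its signatures with a dedicated C2PA library.

```python
def looks_like_c2pa(data: bytes) -> bool:
    """Crude heuristic: report whether the raw bytes contain both the
    JUMBF box type ("jumb") and the C2PA manifest-store label ("c2pa")
    used by C2PA content credentials.

    This is NOT real verification: metadata can be stripped when an
    image is re-encoded, and the markers alone prove nothing about
    authenticity. A proper check must parse and cryptographically
    validate the manifest.
    """
    return b"jumb" in data and b"c2pa" in data


def scan_file(path: str) -> bool:
    """Read an image file from disk and apply the heuristic above."""
    with open(path, "rb") as f:
        return looks_like_c2pa(f.read())
```

Because metadata-based provenance is easy to strip, approaches like this are best treated as one signal among several rather than a definitive label.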
While the coalition marks a significant step, notable absences raise questions about its comprehensiveness. For example, Midjourney, known for its advanced AI image generator, is not part of the group despite its potential impact on deceptive imagery. Midjourney has suggested it may prohibit political image generation during elections. Apple is also absent, though this is less surprising given that the company has yet to launch generative AI products and does not operate a social media platform.
While the principles outlined in the agreement are promising, critics are skeptical that voluntary commitments can be effective without strong enforcement mechanisms. Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center, noted that although the participating companies have a vested interest in fair elections, the lack of mandatory compliance could hinder the initiative's success.
AI-generated deepfakes have already appeared in electoral campaigns, including misleading advertisements in the run-up to the 2024 US presidential election. For instance, the Republican National Committee (RNC) released an ad featuring AI-generated images depicting President Biden and Vice President Harris, and other candidates have used similar tactics with only vague disclaimers about AI-generated content.
The Federal Communications Commission (FCC) has moved to curb the misuse of voice-cloning technology, ruling that robocalls using AI-generated voices are illegal under existing law. Meanwhile, the US Congress has yet to enact any significant AI legislation, although the European Union is advancing its own comprehensive AI regulatory framework.
As the influence of AI in elections grows, industry leaders acknowledge the need for responsibility. Microsoft Vice Chair Brad Smith emphasized the importance of ensuring that AI tools do not contribute to election deception, stating, “AI didn’t create election deception, but we must ensure it doesn’t help deception flourish.”