Tech Giants Unite in Voluntary Pledge Against Election Deepfakes

Major Tech Companies Pledge to Combat Election-Related Deepfakes Amid Growing Pressure From Policymakers

At the Munich Security Conference today, major tech players including Microsoft, Meta, Google, Amazon, Adobe, and IBM signed an accord aimed at tackling AI-generated deepfakes that could mislead voters. Thirteen additional companies joined them, among them AI startups OpenAI, Anthropic, Inflection AI, ElevenLabs, and Stability AI; social media platforms X (formerly Twitter), TikTok, and Snap; chipmaker Arm; and cybersecurity firms McAfee and Trend Micro.

The signatories committed to detecting and labeling misleading political deepfakes as they emerge and circulate on their platforms. They also pledged to share best practices with one another and to deliver “swift and proportionate responses” when such content begins to spread. Importantly, the companies stressed the need to weigh context, striving to protect “educational, documentary, artistic, satirical, and political expression” while remaining transparent about their policies on deceptive electoral content.

Critics may dismiss the accord as largely symbolic, since its measures are entirely voluntary. Still, the initiative reflects the tech industry's wariness of regulatory scrutiny in a pivotal election year, one in which 49% of the world's population will head to the polls in national elections.

“There’s no way the tech sector can protect elections from this new form of electoral abuse on its own,” stated Brad Smith, vice chair and president of Microsoft, in a press release. “To ensure the future integrity of elections, we will need new multistakeholder collaborations. It’s clear that safeguarding elections requires collective effort.”

While there is currently no federal law in the U.S. prohibiting deepfakes, ten states have enacted laws criminalizing their use, with Minnesota leading the way by specifically targeting deepfakes in political campaigning. Federal agencies are also moving to counter their spread. The FTC recently announced plans to amend an existing rule that forbids impersonating businesses or government bodies so that it covers all consumers, including politicians. And the FCC moved to outlaw AI-voiced robocalls by reinterpreting an existing rule against artificial and prerecorded voice message spam.

In the European Union, the proposed AI Act would require all AI-generated content to be clearly labeled as such. Meanwhile, the bloc's Digital Services Act requires the tech industry to curb deepfakes in their various forms.

Despite these efforts, deepfakes continue to proliferate. Data from Clarity, a deepfake detection firm, shows the number of deepfakes created has risen 900% year over year.

Recent incidents underscore the stakes. Last month, AI-generated robocalls mimicking U.S. President Joe Biden’s voice tried to discourage people from voting in New Hampshire’s primary election. And in September, just days before Slovakia’s parliamentary elections, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise beer prices and manipulate the election outcome.

A recent YouGov poll indicated that 85% of Americans are either very or somewhat concerned about the proliferation of misleading video and audio deepfakes. Another survey by the Associated Press-NORC Center for Public Affairs Research found that nearly 60% of adults believe AI tools will exacerbate the spread of false and misleading information during the upcoming 2024 U.S. election cycle.
