EU Drafts Election Security Guidelines Targeting Political Deepfakes by Tech Giants

The European Union has initiated a consultation on proposed election security measures targeting major online platforms like Facebook, Google, TikTok, and X (formerly Twitter). This initiative includes recommendations designed to reduce democratic risks associated with generative AI and deepfakes, while also addressing established concerns over content moderation, political advertising transparency, and media literacy. The primary objective of this guidance is to ensure that technology giants are vigilant about various election-related risks that may emerge on their platforms, particularly due to more accessible AI tools.

The EU's election security guidelines focus on roughly two dozen prominent platforms and search engines currently categorized under the updated e-commerce regulations, known as the Digital Services Act (DSA).

Following the viral growth of generative AI technologies last year, which saw tools like OpenAI's ChatGPT achieve widespread recognition, concerns about advanced AI systems—like large language models (LLMs) capable of generating convincing text, imagery, audio, or video—have intensified. A surge of generative AI products from established tech companies, such as Meta and Google, has further escalated these concerns, as their platforms engage billions of users globally.

The draft consultation highlights that recent advancements in generative AI allow for the creation and distribution of synthetic content that could mislead voters or warp electoral processes by generating deceptive materials regarding political figures, events, or public opinion. It warns that generative AI systems are also prone to generating erroneous information, or "hallucinations," that misrepresent reality and can mislead voters.

Misleading voters doesn't necessarily require sophisticated AI systems; some politicians proficiently disseminate "fake news" with mere rhetoric. Additionally, malicious actors can manipulate digital media using rudimentary editing tools to create misleading political content that can seamlessly spread across social media, exacerbated by reactive user engagement and bots that amplify divisive messaging.

A recent example to consider is Meta's Oversight Board's decision regarding how the company addressed an edited video of President Joe Biden. The board recommended that Meta revise its "incoherent" policies around manipulated videos, which are currently moderated differently based on whether the content was AI-generated or merely edited.

The EU's guidance on election security goes beyond just AI-generated fakes. The bloc emphasizes that platforms must also tackle the risks associated with disseminating such content, not just its creation.

Best Practices Suggested

One key recommendation in the consultation draft is that platforms clearly label generative AI and deepfake content. These labels should be both "prominent" and "efficient," ensuring that they accompany the content wherever it is shared. The guidelines specify that such labeling is crucial for content that resembles real individuals, events, or places or misrepresents facts.

Furthermore, platforms are encouraged to provide accessible tools that enable users to label AI-generated content.

The draft guidance also recommends drawing best practices from the recently proposed AI Act and the non-legally binding AI Pact, stressing the importance of labeling deepfakes and employing advanced technical solutions to mark AI-generated content effectively, enabling detection by platforms.

The draft guidelines recommend that tech giants implement "reasonable, proportionate, and effective" risk mitigation measures designed specifically for both the creation and potential widespread dissemination of AI-generated fakes. Suggestions include employing watermarking and metadata labeling to make AI-generated content easily distinguishable for users. This approach should also apply to any manipulated media, especially when it involves political candidates or parties.

Platforms should adapt their content moderation systems to recognize these watermarks and indicators, and collaborate with generative AI providers to ensure reliable detection of such markers. Additionally, they are encouraged to foster technological advancements that improve the effectiveness of detection tools.

As the DSA's scope expands to encompass a broad range of digital businesses starting later this month, nearly two dozen platforms with over 45 million monthly active users in the region are already subject to its regulations. This includes major platforms like Facebook, Instagram, Google Search, TikTok, and YouTube.

These platforms face added obligations compared to smaller services, including measures to address systemic risks that their operations may pose to democratic processes. For instance, Meta may soon have to clarify its stance on handling political fakes on Facebook and Instagram, particularly in regions impacted by the DSA. Notably, breaches of DSA regulations could incur penalties of up to 6% of the company's global annual revenue.

Additional recommendations for platforms under the DSA concerning election security include taking "reasonable efforts" to ensure that information generated by AI is based on reliable electoral sources, such as official information from electoral authorities. The guidelines also stress the need for any AI-generated quotes or references to maintain accuracy and avoid distortion, which the EU hopes will mitigate "hallucination" effects.

Platforms must notify users of potential inaccuracies in AI-generated content and direct them to trustworthy information sources. They should also establish safeguards to thwart the creation of false content that could heavily influence user behavior, according to the draft.

Among the safety measures proposed is "red teaming"—proactively identifying and testing potential vulnerabilities. The draft suggests conducting internal and external red-teaming exercises focused on electoral processes before releasing generative AI systems to the public, alongside a staggered release strategy to better control unintended consequences.

VLOPs and VLOSEs must establish "appropriate performance metrics" to assess the safety and factual accuracy of their generative AI responses to electoral inquiries and continuously monitor these systems, making adjustments as necessary.

To prevent misuse of generative AI for deceptive electoral purposes, platforms are advised to incorporate safety features like prompt classifiers and content moderation filters. They should proactively identify harmful prompts that violate their electoral-related terms of service.

For AI-generated text, the current draft recommends that platforms indicate the sources used to generate information, facilitating user verification and contextual understanding. This may involve footnote-style citations that accompany generative AI outputs in sensitive contexts like elections.

Support for external researchers is also highlighted in the recommendations, which aligns with the DSA’s broader obligations to enable data access for studying systemic risks. Researchers are encouraged to develop specific tools to analyze AI-generated content, thereby enhancing understanding of election-related risks.

Moreover, the draft emphasizes the need for platforms to modify their advertising systems to reflect potential risks associated with generative AI in advertisements. This includes requiring clear labeling of AI-generated content used in ads or promoted posts.

The EU's final decisions regarding the recommendations for election integrity will be outlined in forthcoming guidelines. While platforms can choose not to adopt these recommendations, they must comply with the binding provisions of the DSA, which may trigger scrutiny of their alternative strategies. Companies must be equipped to justify their approaches to the Commission, which is responsible for implementing guidelines and enforcing DSA regulations.

The EU has confirmed that these new election security guidelines mark the first initiative under Article 35 of the DSA, which focuses on mitigating systemic risks. The guidelines aim to offer platforms foundational practices and strategies to safeguard the integrity of democratic processes.

With a crucial European Parliament election looming in early June, the EU is prioritizing electoral integrity. The draft outlines expectations for platforms to implement robust preparations, as this election presents a significant evaluation of the resilience of democratic processes. Therefore, it is anticipated that the final guidelines will be published before the summer.

Thierry Breton, the EU’s Commissioner for Internal Market, stated: “With the Digital Services Act, Europe is the first continent to have legislation addressing systemic risks posed by online platforms to our democratic societies. The year 2024 is pivotal for elections, which is why we are fully leveraging the DSA’s tools to ensure these platforms meet their obligations and safeguard against election manipulation, while also protecting freedom of expression.”

Most people like

Find AI tools in YBX

Related Articles
Refresh Articles