EU Increases Oversight of Major Platforms to Address GenAI Risks Before Elections

The European Commission has issued a series of formal requests for information (RFIs) to major tech companies, including Google, Meta, Microsoft, Snap, TikTok, and X, regarding their management of risks associated with generative AI. These requests pertain to popular platforms such as Bing, Facebook, Google Search, Instagram, Snapchat, TikTok, YouTube, and X, and are made under the Digital Services Act (DSA), the EU's updated rulebook for e-commerce and online governance. As Very Large Online Platforms (VLOPs), these services are required to assess and mitigate systemic risks in addition to complying with the full range of DSA obligations.

In a press release on Thursday, the Commission outlined its inquiries, specifically seeking details on each platform’s risk mitigation strategies related to generative AI technology. Key concerns include the occurrence of “hallucinations,” where AI generates inaccurate information; the viral spread of deepfakes; and automated manipulations that could mislead voters.

“The Commission is also requesting information and internal documents on risk assessments and mitigation strategies concerning generative AI’s impact on electoral processes, the spread of illegal content, the protection of fundamental rights, gender-based violence, safeguarding minors, and mental health,” the announcement stated. It highlighted that the inquiries focus on both the generation and dissemination of generative AI content.

During a briefing with journalists, EU officials announced plans for a series of stress tests following Easter. These tests will assess the platforms' preparedness to handle generative AI risks, particularly the potential surge in political deepfakes ahead of the upcoming June European Parliament elections. “We aim to hold platforms accountable for their readiness to manage any incidents that might arise leading up to the elections,” a senior Commission official, who wished to remain anonymous, remarked.

The EU is responsible for ensuring that VLOPs adhere to the DSA’s stringent regulations, with a particular emphasis on election security. As part of this initiative, the Commission has been consulting on election security protocols for VLOPs to develop formal guidance. Today's inquiries are designed to help shape that guidance, and the platforms have been given until April 3 to respond to the urgent request regarding election protection. However, the EU aspires to finalize its election security guidelines even sooner, by March 27.

The Commission emphasized that the cost of generating synthetic content is decreasing significantly, raising the risk of misleading deepfakes being created during election periods. This urgency underscores the need for major platforms to adequately manage the risks of political misinformation.

A recent agreement reached at the Munich Security Conference to combat deceptive AI use in elections, supported by some of the same platforms now targeted by the Commission’s RFIs, falls short of the EU’s expectations.

An official from the Commission stated that the upcoming election security guidelines will implement more robust measures. These will include the DSA’s clear due diligence rules, combined with over five years of experience from the non-binding Code of Practice Against Disinformation, which the EU plans to formalize as a Code of Conduct under the DSA. Additionally, transparency labeling and AI model marking rules are on the horizon under the forthcoming AI Act.

The EU’s overarching aim is to create a comprehensive enforcement framework that can be activated during election periods.

The Commission's recent RFIs also cover a wider range of generative AI risks beyond voter manipulation, addressing concerns related to deepfake pornography and other malicious synthetic content, whether in visual, video, or audio forms. These inquiries reflect the EU's enforcement priorities under the DSA, which also emphasize the risks of illegal content, such as hate speech, and the protection of children.

Platforms must respond to these other generative AI RFIs by April 24.

Moreover, the EU is paying attention to smaller platforms where misleading or harmful deepfakes may surface, along with smaller AI tool developers that facilitate the generation of synthetic media at lower costs. Although these smaller entities fall outside the Commission's direct DSA oversight of VLOPs, the regulatory strategy seeks to apply pressure indirectly through larger platforms that act as amplifiers or distribution channels. Self-regulatory measures, such as the previous Disinformation Code and the forthcoming AI Pact, will also play a crucial role following the expected imminent adoption of the AI Act.

The EU’s draft election security guidelines for tech giants specifically target issues like political deepfakes, aiming to enhance transparency and accountability in the digital landscape.
