EU Urges Stronger Safeguards for Generative AI Amid Deepfake Election Risks

The European Union has issued warnings about the potential dangers posed by widely accessible generative AI tools to free and fair discourse in democratic societies. Vera Jourova, the bloc’s commissioner for values and transparency, has emphasized that AI-generated disinformation could threaten the integrity of elections, especially with the upcoming pan-European vote for a new European Parliament next year.

In a recent speech on the EU’s voluntary Code of Practice on Disinformation, Jourova acknowledged the initial measures taken by several mainstream platforms to address AI-related risks, including safeguards designed to inform users about the “synthetic origin of online content.” However, she stressed that further action is imperative.

“These initiatives must continue and accelerate, given the heightened risk of sophisticated AI products creating and spreading disinformation, particularly in the electoral context,” she cautioned. “I call on platforms to remain vigilant and establish effective safeguards during elections.”

Jourova is scheduled to meet with OpenAI representatives later today to discuss these concerns. OpenAI, the creator of ChatGPT, is not yet a signatory to the EU’s anti-disinformation Code and may face increasing pressure to collaborate on this initiative.

The Impact of AI on Elections

The commissioner’s comments on generative AI follow her call to action earlier this summer, when she urged platforms to label deepfakes and other AI-generated material, advocating a dedicated track for addressing “AI production.” She remarked that machines should not be granted free speech rights.

Upcoming EU legislation, known as the EU AI Act, is anticipated to impose transparency obligations on makers of generative AI technologies, such as AI chatbots, requiring them to disclose when content has been machine-generated. While this draft law is still under negotiation and won’t take effect for several years, the Code serves as an interim measure to promote proactive deepfake disclosures among signatories.

Following last year’s efforts to strengthen the anti-disinformation Code, the Commission has indicated that compliance with this non-binding Code will be viewed favorably under the more stringent legal requirements of the Digital Services Act (DSA). The DSA obligates very large online platforms (VLOPs) and very large online search engines (VLOSEs) to assess and mitigate societal risks associated with their algorithms, including disinformation.

“National elections and the upcoming EU elections will provide a critical test for the Code, and platforms must not fail this test,” Jourova stated today, cautioning: “Platforms must take their responsibilities seriously, especially in light of DSA obligations to mitigate election-related risks.”

Compliance with the DSA is now mandatory for designated VLOPs. The intention is for the Code to evolve into a Code of Conduct under the DSA’s co-regulatory framework for addressing disinformation risks.

Recent Reports on Disinformation

Today, a second round of reports from disinformation Code signatories was published, covering the first half of 2023. So far only a handful of reports, from key players including Google, Meta, Microsoft, and TikTok, are available for download on the EU’s Disinformation Code Transparency Center, though the Commission has billed this batch as the most comprehensive set of reports since the Code’s inception in 2018.

The Code now has 44 signatories across sectors, spanning not just major social media platforms and search engines but also players in the advertising industry and civil society organizations involved in fact-checking.

Google: In its report, Google highlighted recent advancements in large-scale AI models, acknowledging the ensuing discussions about the social impacts of AI and concerns regarding misinformation. As an early adopter of generative AI through its Bard chatbot, Google emphasized its commitment to responsible AI development and outlined initiatives such as new watermarking technologies and metadata integration into its generative models.

Microsoft: As a significant investor in OpenAI, Microsoft has integrated generative AI features into its search engine. Its report discusses the “Responsible AI Principles” and the establishment of a global governance roadmap for AI deployment. Microsoft plans to enhance user understanding of the information they encounter and has formed partnerships (such as with Truepic and Reporters Sans Frontières) to combat manipulated media.

TikTok: TikTok’s report addressed AI-generated content with a focus on maintaining the integrity of its service. The platform revised its community guidelines to require users to disclose AI-generated content. TikTok also reported progress on fact-checking efforts related to the Russian-Ukrainian conflict and committed to improving policy enforcement on synthetic media.

Meta: The report from Meta acknowledged the implications of generative AI on disinformation management. The company expressed a desire to collaborate with governmental and academic partners to develop solutions for tackling AI-generated misinformation. Meta is also initiating a Community Forum on Generative AI to gather user feedback on new technologies.

Addressing Kremlin Propaganda

Jourova also underscored the urgency of countering the proliferation of Kremlin propaganda, especially ahead of next year’s EU elections, which raise the specter of heightened Russian interference. She warned that the Russian state is actively waging a war of ideas that distorts the truth to portray democracy unfavorably.

“Platforms must be acutely aware of this context—especially with the upcoming elections—where malicious actors may exploit platform features to manipulate public sentiment,” she stated.

According to early analyses of Big Tech's reporting on the Code, YouTube has already shut down over 400 channels linked to coordinated influence operations tied to the Russian Internet Research Agency. The EU also noted that TikTok's fact-checking spans multiple languages and regional content.

In her address, Jourova reiterated her call for consistent moderation and enhanced fact-checking efforts, particularly in smaller member states. She criticized platforms for limiting access to data, emphasizing the need for transparency and researcher empowerment to analyze disinformation flows.

Twitter (now X), once a signatory to the disinformation Code, has diverged from EU expectations since Elon Musk’s acquisition of the platform and has been rated poorly in assessments of its handling of disinformation.

The EU’s executive body is now tasked with overseeing VLOPs’ compliance with the DSA and has the authority to impose fines on violators, further emphasizing the importance of transparency and accountability in combating disinformation online.

The EU is advocating for platforms to clearly label AI-generated content to combat disinformation effectively.
