EU Issues Election Security Guidelines for Social Media Platforms and Entities Covered by the Digital Services Act (DSA)

On Tuesday, the European Union (EU) released draft election security guidelines for large online platforms with more than 45 million monthly active users in the region that fall under the Digital Services Act (DSA). The guidelines call on tech companies to mitigate systemic risks, including political deepfakes, while safeguarding fundamental rights such as freedom of expression and privacy. Platforms in scope include Facebook, Google Search, Instagram, LinkedIn, TikTok, YouTube, and X.

The European Commission has designated election integrity as a priority area for enforcing the DSA on very large online platforms (VLOPs) and very large online search engines (VLOSEs). Alongside their broader content governance obligations, these companies must proactively identify and mitigate risks of information manipulation that threaten democratic processes in Europe.

According to the EU's election security guidelines, tech giants are expected to step up their efforts to protect democratic elections. That means deploying robust content moderation across the region's many official languages, staffing adequately to respond to information-related risks, and acting on findings from third-party fact-checkers, all under the threat of significant fines for non-compliance.

Platforms must strike a delicate balance in moderating political content: distinguishing legitimate political satire, which is protected as free expression, from harmful disinformation designed to influence voter behavior. The latter is treated as a systemic risk under the DSA, requiring swift identification and mitigation. The guidelines call for “reasonable, proportionate, and effective” mitigation measures alongside compliance with the DSA's broader content moderation rules.

The urgency behind the Commission's efforts stems from the upcoming European Parliament elections scheduled for June. The Commission ran a consultation on the draft guidelines last month and has signaled that it intends to rigorously test platforms' election readiness next month.

User Control Over Algorithmic Feeds

Central to the EU’s guidelines is the expectation that major social media and tech platforms offer users meaningful control over algorithmic and AI-driven recommender systems. The guidelines emphasize the importance of user agency in shaping their content exposure: “Recommender systems can significantly influence the information landscape and public opinion.” Providers are encouraged to ensure these systems prioritize user choice while considering media diversity and pluralism.

Platforms are also expected to down-rank election-related disinformation using “clear and transparent methods,” for example by demoting content that fact-checkers have rated false or content from accounts with a track record of spreading misinformation.
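
As a minimal, hypothetical sketch of what such a transparent demotion could look like, assuming documented, fixed penalty multipliers rather than silent removal (the signals and weights below are assumptions, not values from the guidelines):

```python
# Illustrative only: published, auditable demotion factors applied to a
# ranking score. Content is down-ranked, not removed.
FACT_CHECKED_FALSE_PENALTY = 0.2   # rated false by an accredited fact-checker
REPEAT_SPREADER_PENALTY = 0.5      # account with a record of spreading misinformation

def adjusted_score(base_score: float,
                   fact_checked_false: bool,
                   repeat_spreader: bool) -> float:
    """Apply fixed, documented demotion multipliers to a ranking score."""
    score = base_score
    if fact_checked_false:
        score *= FACT_CHECKED_FALSE_PENALTY
    if repeat_spreader:
        score *= REPEAT_SPREADER_PENALTY
    return score
```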

Furthermore, platforms must take measures to keep their algorithmic systems from disseminating generative AI-based disinformation, such as political deepfakes. Regular assessments of recommender engines for electoral risks are essential, as is transparency about how these AI systems are designed and function. The EU advises platforms to adopt rigorous testing practices to improve their ability to identify and mitigate such risks.

Focus on Local Context and Resources

The guidelines explicitly recommend that platforms dedicate resources to analyzing “local context-specific risks” and ensure their internal teams have expertise relevant to each electoral context, including local language skills. Platforms are expressly urged to stand up dedicated internal teams ahead of elections, resourced in proportion to the risks identified.

Staffing recommendations advocate for hiring specialists with local knowledge across various relevant areas, such as content moderation, cybersecurity, and threat analysis. This approach aims to counter the trend of centralizing resources, which may overlook the need for localized expertise.

The EU recommends that platforms put these risk mitigations in place one to six months before an electoral period and keep them running for at least a month after the vote. With risk expected to peak in the run-up to election day, platforms are urged to act decisively against disinformation campaigns.

Addressing Hate Speech and Misinformation

The EU guidelines encourage platforms to reference existing frameworks, such as the Code of Practice on Disinformation, to enhance their mitigation strategies. They emphasize the importance of providing users with access to official electoral information through various channels, like banners or links, directing them to authoritative sources.

It is crucial that the approach to mitigating systemic risks recognizes the potential impact of illegal content, including hate speech, on democratic discourse. The Commission cautions against measures that might silence vulnerable voices in debates.

The guidelines further advocate for running media literacy campaigns and offering contextual information to users, utilizing tools like fact-checking labels and reliable indicators of information source authenticity. The EU is particularly concerned about the detrimental effects of misinformation and hate speech, which could exacerbate societal polarization and undermine democratic engagement.

Enhancing Transparency and Accountability

For political advertising, the guidelines point platforms toward the EU's incoming political ads transparency regulation and encourage them to prepare for compliance now. Platforms should clearly label political ads, disclose who paid for them, maintain public ad repositories, and verify the identities of political advertisers.
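
To make the repository requirement concrete, here is a hypothetical sketch of one entry in a public political-ad archive; the schema and field names are assumptions for illustration, not a format prescribed by the guidelines or the incoming regulation:

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class PoliticalAdRecord:
    """One entry in a public political-ad repository (hypothetical schema)."""
    ad_id: str
    sponsor: str            # disclosed sponsor name
    sponsor_verified: bool  # whether the advertiser's identity was verified
    label: str              # user-facing label, e.g. "Political advertisement"
    first_shown: date
    last_shown: date
    amount_spent_eur: float

# Example entry as it might be exported from a public repository.
record = PoliticalAdRecord(
    ad_id="ad-001",
    sponsor="Example Party",
    sponsor_verified=True,
    label="Political advertisement",
    first_shown=date(2024, 5, 1),
    last_shown=date(2024, 6, 6),
    amount_spent_eur=12500.0,
)
print(json.dumps(asdict(record), default=str, indent=2))
```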

Additionally, the guidance highlights the need for systems to demonetize disinformation and stresses the importance of granting free data access to third parties examining election risks. Platforms are encouraged to foster collaboration with oversight bodies and civil society experts to share insights on election security.

When high-risk incidents occur, platforms should activate an internal response mechanism that involves senior leadership, ensuring clear accountability. After the vote, the EU recommends that platforms publicly review their performance during the electoral period, incorporating third-party evaluations to enhance transparency.

While these guidelines are not binding, platforms that choose alternative approaches to election threats must be able to demonstrate that their measures are equally effective. Falling short risks DSA enforcement, including fines of up to 6% of global annual turnover for confirmed violations, a strong incentive to comply and to strengthen defenses against political disinformation.

With formal adoption of the guidelines expected in April, once all language versions are available, attention now turns to the European Parliament elections on June 6–9, 2024.
