TikTok to Launch Localized Election Resources Across the EU
TikTok is set to introduce localized election resources within its app next month, aiming to reach users across all 27 Member States of the European Union. The initiative is part of TikTok's strategy to combat disinformation risks around this year's European Parliament elections and to guide users toward “trusted information.”
“We will unveil a local language Election Centre in-app for each of the 27 EU Member States,” TikTok announced today. “These Election Centres will empower our community to distinguish fact from fiction by providing access to reliable and authoritative information, in collaboration with local electoral commissions and civil society organizations.”
As part of its commitment to election integrity, TikTok will label videos related to the European elections, directing users to the corresponding Election Centres. Additionally, the platform plans to add reminders to relevant hashtags, encouraging users to adhere to community rules, verify facts, and report content that violates its guidelines. This announcement comes from a blog post detailing TikTok's preparations for the 2024 European elections.
The blog post further addresses the risks of disinformation, particularly regarding covert influence operations that aim to manipulate opinions during elections through fake accounts and inauthentic content. TikTok has pledged to introduce “dedicated covert influence operations reports” to enhance transparency, accountability, and collaboration across the industry regarding such operations. These reports will launch in the coming months, likely within TikTok's existing Transparency Center.
Moreover, TikTok will roll out nine new media literacy campaigns in the region, adding to the 18 launched last year and bringing the total to 27 initiatives aimed at promoting informed voter engagement across all EU Member States. The platform is also expanding its network of local fact-checking partners, which currently comprises nine organizations covering 18 languages. (The EU has 24 official languages, plus a further 16 recognized languages.)
However, TikTok has not announced new measures to address election security risks posed by AI-generated deepfakes. Recently, the EU has heightened its focus on generative AI and political deepfakes, calling for platforms to implement safeguards against such disinformation.
The blog post, attributed to Kevin Morgan, TikTok's head of safety & integrity for Europe, the Middle East, and Africa, notes that generative AI presents “new challenges around misinformation.” The platform prohibits “manipulated content that could mislead,” including AI-generated content depicting public figures endorsing political views. However, Morgan offered no specifics on how effective TikTok currently is at detecting or removing political deepfakes uploaded in violation of this rule.
Morgan clarified that TikTok requires creators to label realistic AI-generated content and mentioned a recently launched tool that enables users to apply manual labels to deepfakes. Still, the post lacks details on how TikTok enforces its deepfake labeling requirements or controls deepfake risks in relation to elections.
The only additional detail TikTok offered on this front was a general pledge: “As technology evolves, we will continue to enhance our efforts, including collaborating with industry partners on content provenance.”
We have reached out to TikTok with several questions about its preparation for European elections, including the specific areas of focus within the EU and any gaps in language, fact-checking, and media literacy initiatives. We will update this post with any response. (Update: See the end of this post for TikTok's responses.)
New EU Mandates on Disinformation
Elections for a new European Parliament are expected in early June, and the EU is putting significant pressure on social media platforms to step up their preparedness. Since last August, the EU has had new legal mechanisms in place to compel action from around two dozen larger platforms, which must adhere to stringent requirements under its updated online governance rulebook.
Previously, the EU relied on self-regulation through the Code of Practice Against Disinformation to encourage industry action against disinformation. However, it has been vocal for years about the inadequate responses from the signatories of this voluntary initiative, which includes TikTok and most major social media platforms (except X/Twitter, which withdrew last year).
The EU Disinformation Code was launched in 2018 as a set of voluntary standards; it was strengthened in 2022 with more detailed expectations and commitments, incorporating a wider array of stakeholders involved in the disinformation ecosystem.
While the Code remains non-legally binding, the European Commission, the EU's executive tasked with enforcing the Digital Services Act (DSA), has indicated that adherence to the Code will be considered when assessing compliance with the DSA's legally binding requirements. The DSA obligates major platforms, including TikTok, to identify and mitigate systemic risks stemming from their technologies, such as election interference.
The Commission regularly reviews the performance of Code signatories, often admonishing platforms to enhance their moderation capabilities and invest in fact-checking, particularly in smaller EU Member States. Typically, platforms respond to negative feedback with claims of increased action, only for the same script to play out again several months later.
This cycle of criticism may shift now that the DSA, which is already in force for larger platforms, mandates action on disinformation risks. Accordingly, the Commission is drawing up election security guidelines aimed at the firms designated as very large online platforms (VLOPs) or very large online search engines (VLOSEs), which carry a legal duty to mitigate disinformation threats.
Non-compliant platforms could face penalties of up to 6% of their global annual revenue, a prospect that has prompted renewed focus from tech giants on societal disinformation challenges they have often sidestepped in pursuit of user engagement and growth.
The Commission oversees DSA enforcement for VLOPs/VLOSEs and will ultimately determine whether TikTok and its peers have made adequate efforts to combat disinformation risks.
With these announcements, TikTok appears to be scaling up its approach to regional election security and disinformation risks while striving to fill gaps noted by the Commission. The new Election Centres, localized in all official EU languages across the Member States, could contribute meaningfully to combating election interference. Their success will largely depend on how effectively they encourage users to critically evaluate politically charged content by connecting them with authoritative sources.
The expansion of media literacy campaigns across all EU Member States also addresses a frequent Commission concern, though it remains uncertain whether all campaigns will occur before the European elections (we have requested clarification).
Other aspects of TikTok’s strategy seem less robust. For example, in its most recent Disinformation Code report to the Commission, the platform noted that it had expanded its synthetic media policy to cover AI-generated or AI-modified content, while also stating its intent to strengthen enforcement of that policy. Today’s announcement provided no updates on those enforcement capabilities.
In earlier reports, TikTok said it intended to explore new tools to improve detection and enforcement of its synthetic media rules, alongside user education efforts. However, specific progress in this area remains elusive, pointing to a broader challenge across platforms: a lack of effective methods for detecting deepfakes amid the proliferation of misleading AI-generated content. That gap may ultimately require other policy interventions to adequately confront the risks posed by AI-generated content.
Regarding TikTok's focus on user education, it has not clarified whether its forthcoming media literacy campaigns will address AI-related risks. We have sought more information in this regard.
TikTok initially joined the EU’s Disinformation Code in June 2020 but has faced increasing scrutiny and mistrust due to security concerns surrounding its China-based parent company. With the DSA now in effect and a significant election year imminent, TikTok and other platforms are expected to remain under the Commission's watch regarding disinformation risks.
Although X, owned by Elon Musk, is the first platform to be formally investigated over its compliance with DSA obligations, TikTok is also in the spotlight.
Update: TikTok did not respond to all our inquiries, including details on its disinformation expenditures in the EU compared to the $2 billion budget it mentioned for global initiatives this year. However, it confirmed that the Election Centres will be available in all official EU languages, aiming to connect users with trusted information on voting processes and provide media literacy tips. Users will access these Centres through prompts related to election content.
The nine media literacy campaigns will occur this year, with some—though not all—scheduled before the European elections, addressing topics like election misinformation and critical thinking skills. TikTok mentioned that its content moderation teams cover at least one official language in each of the 27 EU Member States, and it plans to continue expanding its fact-checking resources across Europe.
While TikTok did not furnish data on AI deepfake removals, it pointed to quarterly updates in its Transparency Report regarding synthetic media rule violations. The company disputes suggestions that it is not advancing in its efforts against AI-generated disinformation, highlighting last autumn's introduction of labels for creators to identify synthetic media. It also mentioned testing an auto-labeling feature for AI-generated content.
Moreover, TikTok confirmed that numerous social media companies, including itself, are collaborating on an accord to combat the deceptive use of AI targeting voters, with further details expected to be revealed soon at the Munich Security Conference.