Former President Donald Trump recently shared a series of AI-generated images aimed at boosting his presidential campaign, including a fabricated endorsement from pop star Taylor Swift. The posts illustrate how Trump may use generative AI to spread election disinformation, a practice that is difficult to regulate because legal precedent has long allowed candidates to mislead in political advertisements. They follow Trump's accusation that Vice President Kamala Harris used AI to fabricate a rally crowd.
Among the images Trump posted is one showing a figure resembling Harris from behind as she addresses a crowd in Chicago, where the Democratic National Convention is taking place; a communist hammer and sickle looms prominently in the background. Another is a collection of user-generated posts promoting "Swifties for Trump," including an AI-generated depiction of Swift dressed as Uncle Sam with the message, "Taylor wants you to vote for Donald Trump." Trump captioned the compilation, "I accept!"
Experts suggest that Trump's posts may not violate the emerging state laws against election deepfakes. Robert Weissman, co-president of Public Citizen, explains that around 20 states have implemented regulations prohibiting AI-generated false images in elections, but these laws typically focus on depictions that convincingly represent a person saying or doing something, requiring a level of plausibility.
Currently, no federal restrictions exist on deepfakes in elections, aside from the Federal Communications Commission's ban on AI-generated robocalls. Public Citizen has urged the Federal Election Commission to curb candidates' ability to misrepresent their opponents using AI, but existing regulations may not cover obviously exaggerated images like those of Harris or Swift. Swift, however, might have legal grounds to claim misuse of her likeness under California's right-of-publicity law.
Court rulings have often extended First Amendment protections even to intentional falsehoods, particularly those made by political candidates. Weissman notes that any legislative measures targeting AI deepfakes would likely leave many uses unregulated, since legal action would require proving harm to voters.
While private platforms can act against misleading generative AI content without government involvement, enforcement has been inconsistent. X’s policy on synthetic and manipulated media aims to prevent posts that could deceive and cause harm, but application of this policy has been selective. Meanwhile, Trump's favored platform, Truth Social, has minimal content regulations.
Weissman comments on the implications of Trump's actions, stating, “It’s very hard to have a democratic society if people can’t believe the things that they see and hear with their own eyes.”