Generative AI Threatens 2024 US Elections: Risks and Concerns
Generative AI is set to complicate the 2024 US elections, introducing risks such as chatbot-generated misinformation and deepfakes. Nathan Lambert, a machine learning researcher at the Allen Institute for AI and co-host of The Retort AI podcast, emphasized that political dynamics will likely impede AI regulation in the coming year.
“I don’t anticipate any significant AI regulations in the US in 2024 due to the election year’s heated atmosphere,” Lambert stated. He noted that the elections will heavily influence the narrative around AI usage, particularly how candidates respond and how the media portrays AI's role in spreading misinformation.
As tools like ChatGPT and DALL-E become more integrated into election strategies, Lambert warned, “It’s going to be a hot mess,” whether AI usage is attributed to campaigns, malicious actors, or companies like OpenAI.
Concerns About AI in Political Campaigns
Despite the elections being nearly a year away, the early application of AI in political campaigns has raised significant concerns. A recent ABC News report highlighted Florida Governor Ron DeSantis’ campaign utilizing AI-generated images and audio of Donald Trump during the summer.
Furthermore, an AP-NORC poll found that 58% of adults believe AI tools will exacerbate the spread of false and misleading information in next year’s elections.
In response, Big Tech companies are acting to address these concerns. Google recently announced plans to restrict election-related prompts in its chatbot, Bard, and its Search Generative Experience in the lead-up to the presidential election. These restrictions are expected to be implemented by early 2024.
Meta has also pledged to prohibit political campaigns from utilizing new generative AI advertising products. Advertisers on Facebook and Instagram must now disclose when AI tools alter or generate election ads. Meanwhile, OpenAI has reportedly revamped its approach to eliminating misinformation from ChatGPT and other products amid growing concerns about disinformation leading up to the elections.
However, Wired highlighted issues with Microsoft’s Copilot, which has circulated conspiracy theories and misinformation, indicating potential systemic problems.
The Serious Implications of Generative AI for Democracy
According to Lambert, it may be “impossible to keep generative AI information as sanitized as necessary” for a fair election process.
This concern transcends the scope of the 2024 Presidential race, according to Alicia Solow-Niederman, an associate professor of law at George Washington University Law School. She argued that generative AI tools pose a significant threat to democracy by enabling misinformation and disinformation campaigns.
Solow-Niederman referenced legal scholars Danielle Citron and Robert Chesney, who coined the term “liar’s dividend” to describe this dynamic: “In a world where distinguishing truth from falsehood is challenging, trust erodes, undermining our electoral system and ability to self-govern,” she warned.