The Federal Communications Commission (FCC) is exploring a requirement that political advertisements disclose when they feature AI-generated content. Chairwoman Jessica Rosenworcel has proposed a rule that would mandate transparency in political ads on both radio and television, ensuring that viewers and listeners know if AI was involved in their production. The proposal covers ads from individual candidates as well as those released by political parties.
The motivation behind this initiative stems from the belief that voters have a right to be informed about the use of AI in political messaging, especially with the presidential election on the horizon. “As AI tools become more accessible, we want to ensure that consumers are fully informed when this technology is utilized,” said Rosenworcel. “I am advocating for swift action on this proposal to uphold transparency in political ads.”
The FCC is seeking feedback from stakeholders on how to define AI-generated content and whether the disclosure requirements should extend to broadcasters and cable operators. Importantly, the rule would not prohibit the use of AI in political advertising; the agency acknowledges that AI is expected to play a significant role in shaping political communications in the 2024 election and beyond.
However, the use of AI-generated content raises concerns about misleading information being presented to voters. The agency highlights the risks posed by deepfakes—manipulated images, videos, or audio recordings that could misrepresent individuals or events. These concerns underscore the importance of transparency in political advertising and its impact on the electoral process.
In a related context, the FCC has previously taken action against the deceptive use of AI in robocalls. After an AI-generated voice clone of President Biden was used in robocalls to mislead New Hampshire voters before the state's presidential primary, the agency proposed a $6 million fine against the perpetrator.
Examples of AI-generated political content have already surfaced. During the early Republican primaries, the Ron DeSantis campaign released an attack ad featuring AI-generated images of former President Donald Trump with Dr. Anthony Fauci. Additionally, the Republican National Committee produced a video with AI-generated visuals depicting a dystopian future should Joe Biden be re-elected in 2024.
As voters prepare for the 2024 presidential election, UC Berkeley professor Hany Farid is building a platform to track deepfakes in political advertising, reflecting growing concern over the integrity of political discourse in the age of AI. The initiative aims to foster awareness and accountability as the technology continues to evolve within the political landscape.