The U.S. Federal Communications Commission (FCC) has unanimously voted to ban robocalls that use AI-generated voices, regardless of whether they are intended to cause harm. The ruling, which classifies AI-generated voices as “artificial” under the Telephone Consumer Protection Act, takes effect immediately and gives state attorneys general stronger tools to pursue violators. Previously, authorities could only target callers whose use of AI-generated voices caused actual harm, such as fraudulent schemes. Now, the mere act of using an AI-generated deepfake voice in a robocall is itself illegal.
The FCC stated, “This action now makes the act of using AI to generate the voice in these robocalls itself illegal, expanding the legal avenues for law enforcement.” Ahead of the decision, a coalition of attorneys general from 26 states had urged the FCC to take decisive action against the use of AI in telemarketing, emphasizing the need to protect consumers from deceptive practices. Pennsylvania Attorney General Michelle Henry stressed that advancing technologies should not be harnessed to “prey upon, deceive, or manipulate consumers.”
The urgency behind the ruling has been underscored by recent incidents involving audio deepfakes. Notably, in late January a synthetic version of President Biden’s voice was used in robocalls that misled New Hampshire voters, urging them to skip the Democratic presidential primary and instead save their votes for the November general election. The incident heightened concerns about the growing misuse of voice deepfakes, which have been used to imitate celebrities, politicians, and even family members.
In a related controversy, explicit deepfake images of singer Taylor Swift recently circulated on social media. In response, X (formerly Twitter) temporarily blocked related searches; even so, one post containing the images reportedly drew 47 million views.
On January 30, a bipartisan group of U.S. senators introduced the DEFIANCE Act, legislation that would allow victims of explicit deepfakes to seek civil penalties against those who create or distribute such material. The bill reflects a growing push to build legal frameworks that protect individuals from the evolving threats posed by AI-generated content.