Robocalls are already a nuisance, but the rise of AI-generated voices, like those mimicking President Biden, makes the problem worse. In response, the FCC is proposing to rule the use of voice cloning technology in robocalls fundamentally illegal, which would make it easier to prosecute those behind these fraudulent operations.
You might wonder why such a measure is necessary when robocalls already seem to be illegal. In fact, they are not categorically banned: while many automated calls are unwanted, some serve legitimate purposes, so authorities can only step in when a clear violation occurs.
Take the recent incident involving fake Biden calls in New Hampshire that attempted to discourage voting. The state’s attorney general confirmed it was “an unlawful attempt to disrupt the New Hampshire Presidential Primary Election and to suppress New Hampshire voters.” Voter suppression is illegal under New Hampshire law, and the offenders will presumably be charged with that violation, among others, but prosecutors first need credible evidence of a crime before they can act.
If using voice cloning technology in an automated call is illegal in itself, holding robocallers accountable becomes much simpler. "That’s why the FCC is moving to classify this emerging technology as illegal under existing legislation, empowering State Attorneys General with new tools to combat these scams and safeguard consumers," said FCC Chairwoman Jessica Rosenworcel in a recent news release. The Commission had previously indicated it was looking into the issue as it gained attention.
The FCC aims to investigate the risks posed by AI-enhanced robocalls. It currently relies on the Telephone Consumer Protection Act (TCPA) to penalize robocallers and other telemarketing scammers. While the TCPA prohibits “artificial” voices, it remains unclear whether cloned voices fall into that category; a company might, for example, use a generated voice of its CEO for legitimate reasons.
However, legitimate uses of the technology are far less common and less pressing than the fraudulent ones, so the FCC plans to issue a Declaratory Ruling classifying AI-powered voice cloning as “artificial” under the law.
As technology related to telephony, messaging, and generative voice continues to evolve rapidly, the legal landscape is also changing. This means it may not always be clear what constitutes illegal activity, or why some blatantly illegal calls seem to evade consequences. The legal framework is a work in progress.
Update: FCC spokesperson Will Wiquist informed me that the proposal will be handled internally and voted on at the Commissioners’ discretion; it will only be made public if and when it is formally adopted.