FTC Proposes Changes to Rules Aimed at Tackling Deepfakes

In response to the rising threat of deepfakes, the Federal Trade Commission (FTC) is planning to amend an existing regulation that prohibits the impersonation of businesses and government agencies, expanding the ban to protect all consumers. Depending on the final wording of the rule and the feedback received from the public, the new regulation could also make it illegal for Generative AI (GenAI) platforms to offer goods or services that they know, or reasonably should know, are being used to harm consumers through impersonation.

“Fraudsters are leveraging AI tools to impersonate individuals with frightening accuracy and on a much broader scale,” FTC Chair Lina Khan stated in a press release. “With the surge in voice cloning and AI-driven scams, safeguarding Americans from impersonation fraud is more vital than ever. Our proposed enhancements to the impersonation rule aim to strengthen the FTC’s efforts in tackling AI-enabled scams that target individuals.”

“Fraudsters are employing voice cloning and other AI technologies to impersonate people with unsettling precision and on a grand scale. The @FTC is proposing to extend its impersonation rule to include individual impersonation, imposing significant penalties on these fraudsters.” https://t.co/8ON0G63ZjL — Lina Khan (@linakhanFTC) February 15, 2024

The threat of deepfakes extends beyond public figures like Taylor Swift. Online romance scams involving deepfaked personas are becoming increasingly prevalent, and scammers have also impersonated company employees to defraud corporations. A recent YouGov poll revealed that 85% of Americans are either very concerned or somewhat concerned about the proliferation of misleading audio and video deepfakes. Another survey by The Associated Press-NORC Center for Public Affairs Research indicated that nearly 60% of adults anticipate that AI tools will exacerbate the spread of false and misleading information during the 2024 U.S. election cycle.

Recently, my colleague Devin Coldewey reported on the FCC's decision to classify AI-generated robocalls as illegal by revising an existing regulation that bans artificial and pre-recorded message spam. This move came in the wake of a phone campaign that used a deepfaked voice of President Biden to discourage New Hampshire residents from voting. The FTC's recent actions, combined with the FCC's changes, highlight the federal government's growing efforts to combat the dangers of deepfakes and related technologies.

Currently, there is no comprehensive federal law that explicitly bans deepfakes. High-profile victims, such as celebrities, may turn to existing legal protections like copyright law, rights of publicity, and various tort claims (such as invasion of privacy and intentional infliction of emotional distress) to seek recourse. However, these existing laws can be time-consuming and complex to litigate.

In light of the lack of federal legislation, ten states have already enacted laws that criminalize deepfakes, primarily focusing on non-consensual pornography. As deepfake-generating technologies evolve in sophistication, it is likely that these laws will be updated to cover a broader range of deepfake uses, with more states enacting similar measures. Notably, Minnesota's law already addresses deepfakes used in political campaigns.
