A bipartisan coalition of senators is advancing the 'NO FAKES Act,' a bill that would impose hefty penalties on those who produce AI-generated deepfakes without the subject's consent. Spearheaded by Senators Chris Coons (D-Del.), Marsha Blackburn (R-Tenn.), Amy Klobuchar (D-Minn.), and Thom Tillis (R-N.C.), the initiative addresses rising concerns over digital replicas that can convincingly mimic an individual's appearance and voice.
Deepfakes are synthetic media that reproduce a person's likeness with striking realism, often without their knowledge or approval. The proposed legislation would cover individuals both living and deceased, extending protections for up to 70 years after a person's death and reaching public figures and private citizens alike. Notably, the bill includes exemptions for news reporting, documentaries, parody, scholarship, satire, and commentary.
Under the NO FAKES Act, violators could face damages of $5,000 per offense or be required to compensate the victim for actual damages incurred, whichever is greater. Where the wrongdoing is proven to have been carried out with "malice, fraud, or oppression," courts may also award punitive damages and the victim's reasonable attorney fees.
This legislative push comes in the wake of celebrity concerns over unauthorized deepfakes being commercialized. Recently, actor Tom Hanks and talk show host Gayle King alerted their followers to the misuse of their likenesses to promote products, a dental plan in Hanks's case and a weight loss product in King's. These incidents underscore an alarming trend in which artificial intelligence is used to exploit public figures for commercial gain without disclosure or consent.
"AI technologies must evolve alongside our legal framework," stated Klobuchar. "We need established parameters to safeguard individuals from having their voices and images replicated without permission." The senators recognize the urgent need to update regulations amid the rapid advancement of generative AI technologies, which have led to an explosion of such unauthorized reproductions.
A notable example that illustrates this issue is the song 'Heart on My Sleeve,' which used deepfake technology to replicate the voices of popular artists Drake and The Weeknd. The track garnered significant attention on platforms like YouTube and Spotify before being removed.
Ongoing negotiations between the actors' union and Hollywood studios have also seen friction, particularly over actor compensation and the use of digital likenesses in entertainment. By contrast, the writers' union recently reached a three-year agreement that provides increased pay and benefits and stipulates that AI cannot write or revise literary material, nor can writers' work be used to train AI models without consent. Writers may, however, use AI tools if the studio consents.
As legislative efforts gain momentum and industry discussions intensify, it becomes increasingly clear that the intersection of artificial intelligence and personal rights is a critical issue that demands immediate attention and comprehensive solutions.