A study by the American Sunlight Project (ASP) found that more than two dozen members of Congress have been targeted by malicious deepfake videos, the vast majority of them women. The researchers identified more than 35,000 instances of nonconsensual intimate imagery (NCII) depicting 26 members of Congress (25 women and one man) on deepfake websites. Most of this content was removed quickly after researchers notified the affected lawmakers.
ASP founder Nina Jankowicz noted that the internet has opened new avenues for harms that disproportionately target women and marginalized communities, and that little policy currently exists to restrict the creation and spread of such content.
The study found that younger members of Congress are more likely to be targeted, but gender is the most significant factor: female lawmakers are 70 times more likely than their male counterparts to be targeted. And although affected offices took action, rapid removal does not prevent the content from being shared or uploaded again.
The research indicates that nearly 16% of current female members of Congress (about 1 in 6) have been victims of AI-generated nonconsensual intimate imagery. Jankowicz herself has been a victim of deepfake abuse and has publicly discussed her experience, emphasizing that even copyright claims cannot fully control the spread of these videos.
ASP is calling for federal legislation to address this issue. The Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024 (DEFIANCE Act) would allow individuals to sue those who create, share, or receive such images. Another bill, the Take It Down Act, would impose criminal liability for such activities and require tech companies to remove deepfake content. Both bills have received bipartisan support in the Senate but still face hurdles related to free speech and technical policy in the House.
Insights
Scholars and advocates like Jankowicz argue that the rapid proliferation of deepfake technology underscores the urgent need for comprehensive legal frameworks. These frameworks should not only address the immediate harm caused by deepfakes but also consider long-term societal impacts, including mental health effects on victims and broader implications for public trust and discourse.
Moreover, Jankowicz points out that while some progress has been made in removing harmful content, the ease with which perpetrators can generate and distribute deepfakes remains a significant challenge. This raises concerns about the potential for blackmail and geopolitical exploitation, particularly affecting policymakers who may be targeted for their roles.
To combat these threats, ASP recommends a multi-faceted approach involving stricter regulations, better enforcement mechanisms, and increased public awareness. By fostering collaboration between government bodies, tech companies, and advocacy groups, they aim to create a safer digital environment where individuals, especially women and marginalized groups, can participate without fear of harassment or exploitation.
In conclusion, addressing the growing threat of deepfakes requires not just legislative action but also a concerted effort to promote ethical standards and responsible use of AI technologies. Only through collective efforts can we mitigate the risks and ensure that technological advancements benefit society as a whole.