If you checked in on X, the social media platform formerly known as Twitter, in the past 24 to 48 hours, you likely encountered AI-generated deepfake images and videos of Taylor Swift. The explicit, nonconsensual images depicted the singer in sexualized scenarios with fans of the Kansas City Chiefs, the NFL team of her boyfriend, Travis Kelce.
This nonconsensual imagery sparked outrage among Swift's fans, with the hashtag #ProtectTaylorSwift trending alongside "Taylor Swift AI" as news outlets around the world covered the incident. Meanwhile, X struggled to remove the content and prevent its reposting in what amounted to a game of whack-a-mole.
The situation has reignited discussion among U.S. lawmakers about regulating the rapidly evolving generative AI market. Yet critical questions remain about how to enforce such regulation without stifling innovation or infringing on the First Amendment protections that cover parody, fan art, and other expression involving public figures.
What Tools Were Used to Create the AI Deepfakes?
It remains unclear which specific AI tools were used to generate the Swift deepfakes. Leading platforms like Midjourney and OpenAI’s DALL-E 3 strictly prohibit the creation of explicit or suggestive content. According to Newsweek, the account @Zvbear on X claimed responsibility for posting some of the deepfakes but later made their account private.
Investigations by independent tech outlet 404 Media traced the images back to a Telegram group that reportedly used "Microsoft's AI tools," specifically Microsoft Designer, which is powered by DALL-E 3 and carries similar content restrictions.
While Stability AI's own policies are similarly strict, its open-source model Stable Diffusion allows users to generate virtually any content, including sexually explicit material. This flexibility has raised concern: platforms such as Civitai have been flagged for facilitating the creation of nonconsensual AI imagery. While Civitai has announced efforts to combat this misuse, it has not been linked to the Swift deepfakes.
Stability AI's hosted implementation of Stable Diffusion on Clipdrop likewise prohibits explicit content. Nevertheless, users have persistently found ways around these safeguards, contributing to the recent flood of Swift deepfake images.
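To make that distinction concrete, here is a minimal sketch, assuming the Hugging Face diffusers library and the publicly released Stable Diffusion v1.5 weights (illustrative choices, not confirmed to be involved in this incident). In a hosted service, the safety filter runs on a server the user never touches; in the open-source distribution, it is just an optional pipeline component.

```python
import torch
from diffusers import StableDiffusionPipeline

# The default pipeline bundles a safety checker that blacks out images
# it classifies as NSFW and reports the result per image.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

result = pipe("a watercolor painting of a lighthouse at dusk")
result.images[0].save("lighthouse.png")
print(result.nsfw_content_detected)  # e.g. [False]

# Because the weights are open, anyone running the model locally can
# simply drop that component; no server-side filter stands in the way:
# pipe = StableDiffusionPipeline.from_pretrained(
#     "runwayml/stable-diffusion-v1-5", safety_checker=None
# )
```

The commented-out lines are the crux: a hosted service like Clipdrop can enforce its filter because it controls the machines that run the model, but anyone with the downloaded weights can omit the checker entirely, which is why downstream safeguards prove so easy to bypass.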
Even as AI gains acceptance in consensual creative projects, such as HBO's True Detective: Night Country, misuse like this could tarnish the technology's public image and invite stricter regulation.
Potential Legal Actions
A report from The Daily Mail indicates that Taylor Swift is "furious" about the distribution of the explicit images and is considering legal action. It remains unclear whether that action would target Celeb Jihad, the website that hosted the images; the AI tools that facilitated their creation; or the individuals responsible for making them.
This incident has heightened concerns regarding generative AI tools and their potential to create damaging imagery depicting real individuals, fueling calls from U.S. legislators for more stringent regulations.
Tom Kean, Jr., a Republican Congressman from New Jersey, recently introduced two bills: the AI Labeling Act and the Preventing Deepfakes of Intimate Images Act. Both proposals aim to enhance regulation in the AI space.
The AI Labeling Act would require AI-generated content to carry a clear notice of its artificial origin, though it is questionable how much labeling alone would do to stop the spread of explicit content. Meta and OpenAI have already implemented similar measures to inform users when imagery is AI-generated.
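The bill does not prescribe a mechanism, but one common approach is to embed the disclosure in the image file's metadata. A minimal sketch using Python's Pillow library (the tag names here are hypothetical) shows both how easy such a label is to attach and how easily it disappears:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for a freshly generated AI image.
image = Image.new("RGB", (512, 512), color="gray")

# Attach a disclosure label as PNG text chunks (tag names are hypothetical).
label = PngInfo()
label.add_text("ai_generated", "true")
label.add_text("generator", "example-model-v1")
image.save("labeled.png", pnginfo=label)

# The label survives an ordinary file copy...
print(Image.open("labeled.png").text)  # {'ai_generated': 'true', ...}

# ...but not a format conversion, screenshot, or re-encode:
Image.open("labeled.png").save("stripped.jpg")  # JPEG drops PNG text chunks
print(Image.open("stripped.jpg").info.get("ai_generated"))  # None
```

This fragility is one reason labeling alone is unlikely to deter bad actors: the notice travels beside the pixels rather than in them, so any re-encoding silently discards it.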
The second bill, co-sponsored by Democratic Congressman Joe Morelle, would amend the Violence Against Women Act to let victims of nonconsensual deepfakes pursue legal action against creators and platforms, with potential damages of $150,000 and prison terms of up to 10 years for offenders.
While these bills may not directly assist Swift in her case, they could pave the way for better protections for future victims of AI-generated deepfakes.
For either bill to become law, it must advance through the relevant committees, pass a vote of the full House of Representatives, and be matched by companion legislation passing in the Senate. Both bills are currently at the introductory stage.
Congressman Kean's Statement on the Incident
In response to the deepfake incident, Congressman Kean stated, “AI technology is advancing faster than the necessary guardrails. Whether the victim is Taylor Swift or any young person across our country, we need to establish safeguards to combat this alarming trend. My bill, the AI Labeling Act, would be a significant step forward.”
This incident, along with similar occurrences, highlights the urgent need for legislative action to address the growing concerns surrounding AI-generated content and its implications for privacy and personal safety.