Players gonna play. Haters gonna hate. But when it comes to the pornographic AI-generated deepfakes of Taylor Swift, the situation is alarming enough to have sent Elon Musk scrambling to hire 100 more content moderators for X and pushed Microsoft to enhance guardrails on its Designer AI app. To AI companies, I have a simple message: No, you cannot just ‘shake it off.’
You might wish to shake it off and keep cruising, insisting you won’t stop grooving. But as Taylor Swift herself reminds us, “now we got problems.”
AI companies, are you ready for it?
Marc Andreessen’s "Techno-Optimist Manifesto" proclaimed, “Technology is the glory of human ambition and achievement.” OpenAI's mission emphasizes developing artificial general intelligence (AGI) for the benefit of humanity, while Anthropic expresses confidence in creating reliable and steerable AI systems. Meanwhile, Meta’s chief AI scientist, Yann LeCun, recently claimed that “the world didn’t end” in the five years since OpenAI initially withheld GPT-2 over safety concerns: “In fact, nothing bad happened,” he posted.
Sorry, Yann, but bad things are happening with AI. That doesn’t negate the good developments, nor does it dampen the broader optimism about where the technology is headed. But real people are already experiencing AI’s negative impacts, and the "normies" often grasp this reality better than those inside the AI industry. AI companies need to acknowledge these concerns honestly and explain how they intend to address them.
Failing to do so could push them toward the disillusionment cliff I discussed last October. Alongside AI’s rapid advances come numerous challenges, from election misinformation and AI-generated porn to job displacement and plagiarism. AI offers remarkable potential for humanity, but companies must do a better job of communicating that vision.
Right now, they also struggle to convey how they will fix the problems that already exist. As Swifties will recognize, “now we got problems…you made a really deep cut.”
I’m rooting for the AI anti-hero.
I am genuinely enthusiastic about AI; the field is thrilling and full of promise. Yet it can be exhausting to root for what many view as a morally ambiguous anti-hero. Sometimes I wish AI’s leaders would openly own their shortcomings: “I’m the problem, it’s me.”
That kind of self-reflection is vital. Regardless of the good intentions of AI researchers, executives and policymakers, the Taylor Swift AI deepfake incident is only the tip of the iceberg. Millions of women and girls face the threat of being targeted by AI-generated porn. Experts anticipate that AI will make problems like election misinformation worse in 2024, and many workers will blame AI for their layoffs, whether or not it is actually the cause.
People are increasingly cynical about AI. This reaction can frustrate those who recognize its capacity to address significant challenges.
If AI companies do not find a way to move forward that respects and protects the humans they hope will embrace this technology, they may find themselves facing undeniable fallout. If that happens—baby, now we got bad blood.