An artificial intelligence revolution is underway. Recently, the Hollywood actors' union joined striking writers on the picket lines, the first time in more than six decades that both unions have gone on strike at the same time. AI sits at the center of the dispute, as creative professionals voice their concerns. Writers oppose studios using AI to generate scripts, while actors reject proposals that would let companies scan their likenesses and reuse these "deepfake" reproductions indefinitely without consent or compensation.
This surge of activism stems from a genuine fear of being replaced by technology, and from a sense of helplessness. The tech industry has historically faced minimal regulation, allowing AI companies to build and deploy potentially harmful products with little oversight. That status quo is changing. The emergence of generative AI has spurred U.S. lawmakers to push for AI-related legislation, and although new regulations will take time to arrive, existing laws already give individuals substantial legal avenues to challenge AI companies when their rights are violated.
Recently, I highlighted how several companies are facing lawsuits and investigations that could reshape how AI is developed and deployed, pushing it toward greater fairness and equity. For instance, the Federal Trade Commission (FTC) has opened an investigation into OpenAI to determine whether the company breached consumer protection laws by using people's online data to train its widely used chatbot, ChatGPT. Meanwhile, artists, writers, and the image company Getty Images are suing AI firms such as OpenAI, Stability AI, and Meta for allegedly training their models on copyrighted works without permission or payment.
Comedian and writer Sarah Silverman has joined the fight for copyright protection against AI companies. Both the FTC's probe and the ongoing lawsuits center on data practices, which often involve people's personal information and artists' creative work. In the class action lawsuits against GitHub, Microsoft, OpenAI, Stability AI, and Meta, lawyer Matthew Butterick, who represents artists and writers (including Silverman), argues that these cases will significantly shape how AI companies operate within the law.
AI companies have plenty of choices about how they build their models and which data they use, but whether they will make those choices with ethics in mind remains uncertain. Courts may soon force them to be transparent about how their models are built and what data goes into their training sets. Greater transparency is crucial to dispelling the myth that AI's capabilities are somehow magical.
The ongoing strikes, investigations, and legal battles could open the door for artists, actors, and writers to be properly compensated, through licensing and royalty systems, when their work is used as training data for AI models. To me, these legal challenges are part of a broader societal struggle: they will determine how much power we hand to private companies and how much control we keep in the AI era. That is a battle worth fighting hard.