A new bipartisan initiative has emerged in the Senate aimed at introducing federal transparency standards for AI-generated content, particularly focusing on deepfakes. The Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act seeks to empower content creators by offering legal protections against unauthorized usage of their works in artificial intelligence systems.
Under the proposed legislation, companies that develop generative tools capable of producing images and written content would be required to attach provenance information, or metadata, detailing the origins of any generated content. This requirement would enable rightsholders, including journalists, artists, and songwriters, to easily verify whether their original works were used without consent in the creation of new content. Lawmakers emphasize that these provisions will empower creators to safeguard their work and set terms of use, including compensation mechanisms.
Additionally, the COPIED Act would prohibit tampering with a piece of content's provenance information. This prohibition would extend to all entities, including search engines and social media platforms, barring them from altering or removing the associated metadata. The Federal Trade Commission (FTC) and state attorneys general would be tasked with enforcing the new rules.
Content creators would also retain the right to pursue legal action against users or platforms that utilize their work without permission to fabricate deepfakes. Moreover, the National Institute of Standards and Technology (NIST) would be responsible for developing standards related to digital watermarking and synthetic content detection, alongside cybersecurity measures to thwart malicious efforts to alter digital watermarks.
The COPIED Act is spearheaded by Senators Martin Heinrich, Maria Cantwell, and Marsha Blackburn, all of whom have highlighted the critical need for transparency in AI-generated materials. “The bipartisan COPIED Act will provide much-needed transparency around AI-generated content,” stated Cantwell. “This legislation will also empower creators, including local journalists, artists, and musicians, to regain control over their work through a structured provenance and watermarking process.”
The introduction of this bill reflects a growing concern among rightsholders regarding the protection of their content against generative AI systems. Prominent media organizations, such as The New York Times and various local newspapers, have initiated lawsuits against OpenAI, alleging widespread copyright infringement linked to unauthorized data scraping for model training.
The COPIED Act seeks to fortify legal protections for content creators and enhance the transparency of AI-generated outputs. Key industry stakeholders, including SAG-AFTRA, the News/Media Alliance, and the National Association of Broadcasters, have expressed support for the measure. Duncan Crabtree-Ireland, SAG-AFTRA's national executive director, underscored the act's urgency, stressing that transparency is needed to protect performers' economic interests and reputations.
The Recording Industry Association of America (RIAA) also backs the COPIED Act, emphasizing the plight of artists facing competition from AI-generated content that exploits copyrighted material without consent. “Leading tech companies often withhold essential information regarding how their models are created and trained, profiting from the unlicensed use of copyrighted works,” noted Mitch Glazier, RIAA’s chair and CEO.
Through the COPIED Act, the Senate aims to strike a balance between fostering innovation in AI technologies and safeguarding the rights of content creators in an increasingly digital landscape, ensuring that intellectual property is respected and protected.