Researchers Reveal Current AI Watermarks Are Easily Removable

A traditional watermark, typically a visible logo or pattern, serves as a deterrent to counterfeiting on items ranging from currency to postage stamps. You might recognize watermarks in your graduation photos as well. In the realm of artificial intelligence, watermarking takes on an intriguing twist. It enables computers to identify whether text or images are AI-generated.

So, why is watermarking important? The rise of generative AI has made it easy to produce deepfakes and misinformation at scale. While traditional watermarks are visible, AI-generated content typically calls for invisible watermarks to combat misuse effectively. Major tech companies including Google, OpenAI, Meta, and Amazon have committed to developing watermarking technologies to tackle misinformation.

Researchers at the University of Maryland (UMD) set out to explore how easily bad actors can manipulate watermarks. Soheil Feizi, a UMD professor, expressed skepticism about the reliability of current watermarking applications. His team found that existing methods could be easily bypassed, and that fake watermarks could be added to genuine, human-made images with little effort. However, the team also developed a watermark that is nearly impossible to remove without degrading the underlying content, which could aid in detecting intellectual-property theft.

In a collaborative study between the University of California, Santa Barbara, and Carnegie Mellon University, researchers demonstrated that simulated attacks could effectively strip watermarks from images. They identified two classes of watermark removal: destructive and constructive. Destructive attacks alter the image itself, for example by adjusting brightness or contrast or applying JPEG compression, which removes the watermark but leaves the image noticeably degraded. Constructive attacks are subtler, using light processing such as Gaussian blur to remove the watermark while largely preserving image quality.
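To make the two attack styles concrete, here is a toy sketch in plain Python operating on a single row of grayscale pixel values. It is not the researchers' actual pipeline, and the function names, parameter values, and quantization step are illustrative assumptions: `adjust` and `quantize` mimic a destructive attack (brightness/contrast shifts plus crude JPEG-style quantization), while `box_blur` stands in for a mild constructive blur.

```python
def adjust(pixels, brightness=1.2, contrast=1.1):
    """Destructive step: shift contrast around mid-gray, then scale brightness."""
    out = []
    for v in pixels:
        v = (v - 128) * contrast + 128   # contrast around mid-gray
        v = v * brightness               # brightness scaling
        out.append(max(0, min(255, int(v))))  # clamp to valid 8-bit range
    return out

def quantize(pixels, step=32):
    """Destructive step: coarse quantization, a crude stand-in for lossy JPEG."""
    return [min(255, (v // step) * step) for v in pixels]

def box_blur(pixels, radius=1):
    """Constructive step: 1-D box blur, a rough proxy for a light Gaussian blur."""
    n = len(pixels)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(pixels[lo:hi]) // (hi - lo))  # average over the window
    return out

row = [0, 64, 128, 192, 255, 200, 100, 50]
print(quantize(adjust(row)))  # destructive: values shifted and coarsened
print(box_blur(row))          # constructive: values gently smoothed
```

The point of the sketch is the trade-off the researchers describe: the destructive path visibly changes pixel values across the board, while the blur only nudges neighboring values toward each other, so any watermark encoded in fine pixel-level detail is disturbed either way.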

Despite the current limitations of watermarking in AI-generated content, the landscape is evolving. As digital watermarking faces challenges from hackers, new tools like Google’s SynthID—an identification system for generative art—are under development. These innovations come at a crucial time, especially with the 2024 U.S. presidential election on the horizon. AI-generated content could significantly influence political opinions, raising concerns about its potential for spreading misinformation. The Biden administration has acknowledged these risks, highlighting the urgent need to address the disruptive possibilities of artificial intelligence.
