DeepMind and Google Cloud Collaborate to Introduce Watermarking for AI-Generated Images

In collaboration with Google Cloud, Google DeepMind (Google's AI research division) is introducing a tool designed to watermark and identify AI-generated images. The tool, named SynthID and currently available in beta to select Vertex AI customers, embeds an imperceptible digital watermark directly into an image's pixels, leaving it invisible to the human eye but detectable by an algorithm. For now, SynthID works only with Imagen, Google's text-to-image model, which is accessible exclusively through Vertex AI.

Previously, Google indicated it would integrate metadata to mark visual media generated by AI models. SynthID advances this concept significantly.

“While generative AI can unleash incredible creative potential, it also introduces risks such as the potential spread of false information—either intentionally or unintentionally,” DeepMind stated in a blog post. “Identifying AI-generated content is crucial for informing users when they engage with generated media and combating misinformation.”

DeepMind asserts that SynthID, which was co-developed with Google Research, remains effective even when images are altered—whether through applying filters, changing colors, or compressing files. The tool utilizes two distinct AI models, one for watermarking and another for detection, which were trained together on a broad spectrum of images.
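DeepMind has not published the details of those models, so the following is only a toy sketch of the general idea of pixel-level watermarking: a signal is written into pixel values too subtly for a viewer to notice, and a matching detector reads it back out. The least-significant-bit scheme, the 8-bit mark, and both function names below are invented for illustration; SynthID's learned approach is far more robust than this.

```python
import numpy as np

# Illustrative only: SynthID uses a pair of jointly trained neural networks,
# not the simple least-significant-bit scheme shown here.

WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical 8-bit mark

def embed(image: np.ndarray, mark: np.ndarray = WATERMARK) -> np.ndarray:
    """Hide the mark in the least significant bits of the first len(mark) pixels."""
    out = image.copy().reshape(-1)
    out[: mark.size] = (out[: mark.size] & 0xFE) | mark  # overwrite LSBs only
    return out.reshape(image.shape)

def detect(image: np.ndarray, mark: np.ndarray = WATERMARK) -> float:
    """Return the fraction of watermark bits recovered (1.0 = perfect match)."""
    bits = image.reshape(-1)[: mark.size] & 1
    return float((bits == mark).mean())

# A flat grey "image": embedding changes each affected pixel by at most 1,
# which is imperceptible to the human eye.
img = np.full((4, 4), 128, dtype=np.uint8)
marked = embed(img)
print(detect(marked))  # 1.0 on the untouched watermarked image
```

A scheme this naive is destroyed by the very edits SynthID is said to survive (compression, color shifts); training the embedder and detector together is what buys that robustness.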

While SynthID cannot identify watermarked images with 100% certainty, it can distinguish images that may contain a watermark from those that are highly likely to contain one.
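In other words, detection is graded rather than a hard yes/no. A minimal sketch of what such banding might look like, assuming a detector that emits a score in [0, 1]; the thresholds and labels below are invented for illustration and are not SynthID's actual output:

```python
# Hypothetical sketch: the 0.9 / 0.5 thresholds are assumptions, not
# SynthID's published behavior.
def confidence_label(score: float) -> str:
    """Map a detector score in [0, 1] to a coarse confidence band."""
    if score >= 0.9:
        return "watermark likely present"
    if score >= 0.5:
        return "watermark possibly present"
    return "watermark unlikely"

print(confidence_label(0.95))  # watermark likely present
print(confidence_label(0.60))  # watermark possibly present
```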

“SynthID isn’t infallible against extreme manipulation, but it represents a promising technical solution for enabling individuals and organizations to responsibly engage with AI-generated content,” DeepMind explained in the blog post. “This tool also has the potential to evolve alongside other AI modalities beyond imagery, such as audio, video, and text.”

Watermarking for generative art is not new. The French startup Imatag, established in 2020, offers a similar watermarking tool that it claims is unaffected by edits such as resizing, cropping, and compression; Steg.AI likewise employs an AI model to apply watermarks resilient to alterations.

However, the urgency for tech companies to establish clear indicators of AI-generated works is growing. Recently, China's Cyberspace Administration mandated that generative AI providers label generated content—including text and images—without hindering user experience. Additionally, U.S. Senator Kyrsten Sinema (I-AZ) highlighted the importance of transparency in generative AI, advocating for the use of watermarks in recent Senate committee hearings.

At its annual Build conference in May, Microsoft pledged to implement cryptographic methods for watermarking AI-generated images and videos. Concurrently, Shutterstock and the generative AI startup Midjourney embraced guidelines for embedding markers that denote content created by AI tools. OpenAI’s DALL-E 2, a text-to-image model, also includes a small watermark in the bottom right corner of its generated images.

Despite this progress, a universal watermarking standard—both for creating and detecting watermarks—remains elusive. SynthID, like many other proposed technologies, currently remains limited to images generated by the Imagen model. DeepMind has indicated plans to potentially make SynthID available to third-party developers in the future; however, the adoption of this technology by third parties, particularly those developing open-source AI image generators that lack certain safeguards, remains uncertain.
