Google is expanding access to Imagen 2, its advanced AI model for creating and editing images from text prompts. The expanded availability applies to Google Cloud customers using Vertex AI who have been approved for access. However, the company has not disclosed the data used to train the new model, nor has it provided a way for creators who may have unknowingly contributed to that data to opt out or seek compensation.
Imagen 2, which was officially previewed at Google I/O in May, was developed using technology from Google DeepMind, the company's premier AI lab. Google claims the enhanced model offers "significantly" better image quality than its predecessor, though, notably, it withheld sample images ahead of the announcement. One of the headline new features is the ability to render text and logos, making the model particularly useful for applications like advertising. "If you want to create images with a text overlay — for example, promotional content — you can accomplish that," said Google Cloud CEO Thomas Kurian during a recent press briefing.
Text and logo generation puts Imagen 2 on par with other leading image-generating models, such as OpenAI's DALL-E 3 and Amazon's Titan Image Generator. Imagen 2 distinguishes itself with multilingual support, rendering text in Chinese, Hindi, Japanese, Korean, Portuguese, English, and Spanish, with more languages planned for 2024. It can also overlay logos onto existing images, adding to its versatility.
“Imagen 2 can generate emblems, lettermarks, and abstract logos, along with overlaying these designs on products, clothing, and business materials,” stated Vishy Tirumalasetty, head of generative media products at Google, in a blog post issued before today’s announcement.
Thanks to new training and modeling techniques, Imagen 2 can interpret longer, more detailed prompts and provide comprehensive responses about different elements in an image. These techniques also improve its multilingual capabilities, allowing it to understand a prompt in one language and generate results in another.
Imagen 2 uses SynthID, a technique developed by DeepMind, to embed invisible watermarks in generated images. These watermarks are designed to survive edits such as color adjustments and filters, but detecting them requires a Google-provided tool that is not available to third parties. As concern grows over AI-generated misinformation online, the feature may nonetheless ease some apprehensions.
While Google has opted not to disclose the data used for training Imagen 2, this lack of transparency is not entirely unexpected. The legal landscape surrounding whether GenAI companies can commercialize models trained on publicly accessible or copyrighted data remains unsettled. Relevant lawsuits are currently unfolding, with companies claiming protection under fair use doctrine, but resolution may take time.
For now, Google is being cautious by not revealing specifics—contrasting with its earlier approach for the first-generation Imagen, which disclosed its use of the public LAION dataset. LAION, however, is known to harbor problematic content, including private medical images and copyrighted materials, which could tarnish Google’s reputation.
Unlike some competitors, such as Stability AI and OpenAI, which provide options for creators to opt out of training datasets, Google—along with several rivals, including Amazon—does not offer an opt-out mechanism or creator compensation. Instead, Google has established an indemnification policy that protects approved Vertex AI customers from copyright claims related to both the use of training data and outputs generated by Imagen 2.
Concerns about regurgitation — where a generative model produces a near-exact copy of a training example — remain a legitimate risk for enterprise users and developers. An academic study previously showed that the first-generation Imagen was susceptible to this, generating identifiable images of real individuals and copyrighted artistic works under certain prompting conditions.
A recent survey of Fortune 500 companies found that nearly one-third cited intellectual property as their top concern about using generative AI. Another poll found that nine out of ten developers weigh IP protection heavily when deciding whether to adopt generative AI. Google aims to address these apprehensions with its updated indemnification policy, which previously did not cover Imagen outputs. The concerns raised by creators and content contributors, however, remain unaddressed in this latest release.