OpenAI has announced an update to its flagship app, ChatGPT, along with the integrated AI image generator, DALL-E 3. This update introduces new metadata tagging that enables users and organizations to identify AI-generated imagery.
The announcement follows closely on the heels of a similar initiative from Meta, which is labeling AI images created with its own generator, Imagine, across Instagram, Facebook, and Threads.
According to OpenAI, the new metadata, based on C2PA specifications, will allow anyone—content platforms and distributors alike—to recognize images generated by its products. The update is already live for web users and will be rolled out to all mobile users by February 12.
Additionally, OpenAI points users to a site called Content Credentials Verify, where they can upload images to check whether they were generated by its tools. This check only applies to images created after the update; previously generated images will not include the new metadata.
What is C2PA?
The Coalition for Content Provenance and Authenticity (C2PA) is a collaborative initiative established in February 2021 to develop technical standards for certifying the source and history of media content. Backed by prominent companies such as Adobe, Microsoft, and The New York Times, its mission is to combat misinformation and online content fraud. In January 2022, C2PA released the first version of its technical specification, which defines how provenance metadata can be embedded in media files, including AI-generated images, so that their origin can be traced under certain circumstances.
Recent events, including the spread of nonconsensual deepfake content targeting celebrities and everyday social media users, have underscored the urgency of C2PA's work. Earlier this year, OpenAI committed to applying C2PA standards as part of its effort to curb disinformation ahead of the 2024 elections.
How OpenAI is Implementing C2PA
OpenAI is embedding metadata, in the form of an electronic "signature", into AI-generated image files. However, the company acknowledges on its help site that this metadata can be removed, either inadvertently or deliberately. Most social media platforms strip metadata from uploaded images, for example, and actions as simple as taking a screenshot eliminate it entirely. The absence of metadata therefore does not prove that an image was not created with ChatGPT or DALL-E 3.
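To make that caveat concrete, the sketch below (Python, using the Pillow library) shows how simply re-encoding an image's pixels drops embedded provenance data, much as a screenshot does, and why a crude byte-scan for a C2PA marker can only ever be a heuristic. The file names are placeholders, and this is an illustration of the general point rather than OpenAI's or C2PA's actual tooling.

```python
# Illustrative sketch only: file names are placeholders, and the byte-scan is a
# rough heuristic, not real C2PA verification (which requires parsing and
# validating the embedded manifest).
from PIL import Image  # pip install Pillow


def has_c2pa_marker(path: str) -> bool:
    """Heuristically check whether the raw bytes contain a C2PA/JUMBF label."""
    with open(path, "rb") as f:
        data = f.read()
    # C2PA manifests are carried in JUMBF boxes; these labels typically appear
    # in files that include a manifest.
    return b"c2pa" in data or b"jumb" in data


def reencode(src: str, dst: str) -> None:
    """Re-save only the pixels; Pillow does not copy ancillary metadata by
    default, which is roughly what screenshotting or platform re-encoding does."""
    Image.open(src).convert("RGB").save(dst, format="PNG")


if __name__ == "__main__":
    print("marker before re-encode:", has_c2pa_marker("dalle_image.png"))
    reencode("dalle_image.png", "reencoded.png")
    print("marker after re-encode:", has_c2pa_marker("reencoded.png"))
```

Running this on an image that carries a manifest would report the marker before re-encoding and not after, mirroring the behavior OpenAI describes.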
Moreover, this metadata is not visible unless a user inspects the file's properties. In contrast, Meta is developing a public-facing AI labeling scheme that adds visual identifiers, such as a sparkles icon, to clearly mark AI-generated images. That feature is still being designed and will begin rolling out in the coming months, relying on C2PA standards alongside the IPTC Photo Metadata Standard.
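For reference, the IPTC Photo Metadata Standard mentioned above defines a "digital source type" term, trainedAlgorithmicMedia, for media produced by generative models. The snippet below is a minimal sketch, not Meta's implementation, that looks for that term in a file's embedded metadata; a production check would parse the XMP/IPTC data properly rather than scanning raw bytes, and the filename is a placeholder.

```python
# Minimal sketch, not Meta's implementation: look for the IPTC digital source
# type term used for generative-AI media inside a file's embedded metadata.
# A real check would parse the XMP packet instead of scanning raw bytes.
def looks_ai_labeled(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    # "trainedAlgorithmicMedia" is the IPTC term for media created by a
    # generative model; its presence here is only a hint, not proof.
    return b"trainedAlgorithmicMedia" in data


if __name__ == "__main__":
    print(looks_ai_labeled("photo.jpg"))  # placeholder filename
```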
This initiative underscores both companies' commitments to transparency and authenticity in AI-generated content, reflecting the growing concerns surrounding misinformation and digital integrity.