On Tuesday, Google announced plans to enhance its search platform to help users identify which images are generated or edited by AI. Over the coming months, Google will label AI-generated and modified images in the "About this image" section in search results, Google Lens, and the Circle to Search feature on Android. Similar notifications may also appear on other Google platforms, such as YouTube, with more details to be shared later this year.
Notably, only images containing C2PA metadata will be marked as AI-generated or AI-edited. C2PA, the Coalition for Content Provenance and Authenticity, is a consortium that develops technical standards for tracing an image's history, including the hardware and software used to capture or create it. Major companies, including Google, Amazon, Microsoft, OpenAI, and Adobe, back C2PA, but its standards have yet to achieve widespread adoption.
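For a sense of what that metadata looks like on disk: in JPEG files, C2PA manifests travel as JUMBF boxes carried in APP11 marker segments. The sketch below is a rough detection heuristic only (the function name has_c2pa_hint and the approach are illustrative, not part of any official SDK); genuine verification also means validating the manifest's cryptographic signatures, for which C2PA publishes open-source tooling.

```python
import struct
import sys

# Heuristic sketch, not a C2PA validator: C2PA manifests are embedded in
# JPEGs as JUMBF boxes inside APP11 (0xFFEB) marker segments, so an APP11
# payload mentioning the "jumb" box type or the "c2pa" label is a strong
# hint that provenance data is present.

def has_c2pa_hint(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()

    if data[:2] != b"\xff\xd8":        # not a JPEG (no SOI marker)
        return False

    pos = 2
    while pos + 4 <= len(data):
        if data[pos] != 0xFF:          # lost marker sync; give up
            break
        marker = data[pos + 1]
        if marker == 0xDA:             # SOS: compressed image data begins
            break
        # Segment length is big-endian and includes its own two bytes.
        (length,) = struct.unpack(">H", data[pos + 2 : pos + 4])
        payload = data[pos + 4 : pos + 2 + length]
        if marker == 0xEB and (b"jumb" in payload or b"c2pa" in payload):
            return True                # APP11 segment carrying JUMBF/C2PA
        pos += 2 + length

    return False

if __name__ == "__main__":
    path = sys.argv[1]                 # e.g. photo.jpg
    found = "found" if has_c2pa_hint(path) else "not found"
    print(f"{path}: C2PA metadata {found}")
```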
Media reports have highlighted C2PA's struggles with adoption and interoperability: only a handful of generative AI tools, plus cameras from Leica and Sony, currently implement its specifications. Moreover, like any metadata, C2PA data can be stripped, altered, or corrupted to the point of being unreadable. Some prominent generative AI tools, such as xAI's Grok chatbot, which uses the Flux model for image generation, do not attach C2PA metadata at all, partly because the standard lacks developer support.
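That fragility is easy to demonstrate: most image pipelines that decode and re-encode a file keep only the pixels. A two-line sketch using Pillow (the file names are hypothetical), which, like many imaging libraries, does not carry APP11/JUMBF segments through a plain re-save:

```python
from PIL import Image  # pip install Pillow

# Decoding and re-encoding keeps the pixels but silently discards the
# APP11 segments holding any C2PA manifest in the source file.
with Image.open("labeled_by_ai.jpg") as im:   # hypothetical input file
    im.save("stripped.jpg", quality=95)       # pixels survive; provenance doesn't
```

A screenshot or a crop has the same effect, which is why provenance schemes that rely on metadata alone are easy to defeat.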
Despite these limitations, experts argue that imperfect measures are better than none, especially given how quickly deepfake technology is spreading. Surveys show that most people worry about being deceived by deepfakes and about AI accelerating the spread of misinformation. By one estimate, scams involving AI-generated content are projected to grow 245% from 2023 to 2024, and Deloitte expects deepfake-related losses to climb from $12.3 billion in 2023 to $40 billion by 2027.