In recent years, the rapid advancement of AI has produced an explosion of AI-generated content: hyper-realistic images, video, and text. This surge has raised serious concerns about misinformation and deception, complicating our ability to distinguish reality from fabrication.
The apprehension that we are being overwhelmed by synthetic content is justified. Since 2022, AI users have collectively created more than 15 billion images; for context, it took photographers roughly 150 years to produce that many photographs before 2022.
The sheer volume of AI-generated content presents challenges we are only beginning to understand. Historians may soon need to treat the post-2023 internet as fundamentally different from what came before, much as atmospheric nuclear testing altered carbon-14 levels and left radiocarbon dating treating the pre-bomb era as a separate baseline. Google Image searches increasingly return AI-generated results, and footage of alleged war crimes in the Israel/Gaza conflict is being dismissed as AI-generated when it is not.
Embedding Signatures in AI Content
Deepfakes, synthetic media created with machine learning, mimic human faces, expressions, and voices. The recent unveiling of Sora, OpenAI’s text-to-video model, underscores how quickly generated video is becoming indistinguishable from physical reality. In light of growing concerns, tech giants are taking steps to mitigate potential misuse of AI-generated content.
In February, Meta announced that it would label images created with its AI tools across Facebook, Instagram, and Threads, using visible markers, invisible watermarks, and embedded metadata to indicate their artificial origin. Google and OpenAI followed with similar measures to embed ‘signatures’ in AI-generated content.
These initiatives are supported by the Coalition for Content Provenance and Authenticity (C2PA), which aims to trace the origins of digital files and differentiate between genuine and manipulated content. While these efforts promote transparency and accountability in content creation, the question remains: Are they sufficient to safeguard against potential misuse of this evolving technology?
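To make these mechanisms concrete, below is a minimal sketch of the two simplest ideas involved: attaching a provenance note to an image file and hiding an invisible watermark in its pixels. It is an illustration only, using the Pillow library; the field name "ai_provenance" and the least-significant-bit scheme are assumptions made for this example, not how Meta, Google, or the C2PA standard actually work (C2PA relies on cryptographically signed manifests), and a naive mark like this would not survive recompression or resizing.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_metadata(src_path: str, dst_path: str, provenance: str) -> None:
    """Attach a provenance note as a PNG text chunk (readable by anyone who inspects the file)."""
    img = Image.open(src_path)
    info = PngInfo()
    info.add_text("ai_provenance", provenance)  # "ai_provenance" is a made-up field name for this sketch
    img.save(dst_path, "PNG", pnginfo=info)

def embed_lsb_watermark(src_path: str, dst_path: str, message: str) -> None:
    """Hide a short ASCII message in the least significant bit of the red channel."""
    img = Image.open(src_path).convert("RGB")
    pixels = img.load()
    # Message bits plus a null byte as a terminator.
    bits = "".join(f"{byte:08b}" for byte in message.encode("ascii")) + "00000000"
    width, height = img.size
    if len(bits) > width * height:
        raise ValueError("image too small for message")
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the lowest red bit
    img.save(dst_path, "PNG")  # lossless format; a lossy save would destroy the mark

def read_lsb_watermark(path: str) -> str:
    """Recover the hidden message by reading red-channel LSBs until the null terminator."""
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    width, height = img.size
    out, byte = bytearray(), 0
    for i in range(width * height):
        x, y = i % width, i // width
        byte = (byte << 1) | (pixels[x, y][0] & 1)
        if (i + 1) % 8 == 0:
            if byte == 0:
                break
            out.append(byte)
            byte = 0
    return out.decode("ascii")

# Example: embed_lsb_watermark("photo.png", "labelled.png", "generated-by:model-x")
#          read_lsb_watermark("labelled.png")  -> "generated-by:model-x"
```

The fragility of such simple marks is precisely why the industry efforts described above layer visible labels, robust invisible watermarks, and signed metadata together rather than relying on any single technique.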
Who Determines What’s Real?
A critical issue with detection tools is whether they can be universally effective without being exploited by those who control them. That raises the more pressing question: who has the authority to define what is real? Answering it is essential before we can genuinely grapple with the potential of AI-generated content.
The Edelman Trust Barometer 2023 reveals significant public skepticism about how institutions manage technological innovation. Globally, people are nearly twice as likely to say innovation is poorly managed (39%) as to say it is well managed (22%), and many express concern about the pace of technological change and its implications for society.
This skepticism is compounded by the fact that, as watermarking and detection tools improve, the generative techniques they are meant to police evolve in step. Rebuilding public trust in technological innovation is crucial if watermarking measures are to be effective.
As we’ve seen, that trust is not easily earned. Google’s Gemini, for instance, was criticized for bias in its image generation, forcing the company to pause the feature and apologize. Such incidents leave a lasting mark on public perception.
Need for Transparency in Technology
Recently, a video of OpenAI’s CTO, Mira Murati, went viral after she was unable to say what data had been used to train Sora. Given how much a model’s behavior depends on its training data, it is concerning that the company’s CTO could not provide that clarity, and her dismissal of follow-up questions raised further red flags. Transparency must be prioritized in the tech industry.
Moving forward, establishing standards for transparency and consistency is imperative. Public education about AI tools, clear labeling practices, and accountability for faults are all critical components for fostering a trustworthy environment. Communication about issues that arise is equally essential.
Without these measures, watermarking risks being a superficial fix that fails to tackle the underlying problems of misinformation and eroding trust in digital content. Deepfake election interference is already emerging as a significant issue in the world of generative AI, and with a substantial portion of the global population heading to the polls, addressing it is vital for the future of content authenticity.
Elliot Leavy is the founder of ACQUAINTED, Europe’s first generative AI consultancy.