Last week, OpenAI launched its GPT Store, allowing third-party creators to showcase and monetize custom chatbots (GPTs). But the company's January 2024 announcements have not stopped there.
On Monday, OpenAI released a blog post outlining new safeguards for its AI tools, particularly focusing on its image generation model, DALL-E, and citation practices in ChatGPT. This initiative aims to combat disinformation ahead of the numerous elections scheduled globally later this year.
“Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to ensure our technology is not used in a way that could undermine this process,” the blog states.
Among the current safeguards is a “report” function that lets users flag potential violations involving custom GPTs, including those that impersonate real individuals or institutions, a practice that violates OpenAI’s usage policies.
Anticipated New Measures
OpenAI’s blog reveals that ChatGPT users will soon be able to access real-time news reporting from around the world, complete with attribution and links, enhancing the credibility of information surfaced through the chatbot. The enhancement aligns with OpenAI’s partnerships with publishers such as the Associated Press and Axel Springer, the home of Politico and Business Insider.
Notably, OpenAI will implement image credentials from the Coalition for Content Provenance and Authenticity (C2PA). The initiative attaches cryptographically signed provenance metadata, known as Content Credentials, to AI-generated content so that its origin can be verified later. OpenAI plans to integrate C2PA credentials into DALL-E 3 imagery early this year, though a specific date has yet to be announced.
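Unlike a pixel-level watermark, a C2PA credential lives in the image file's metadata as a signed manifest, which means it can be read by any tool that understands the standard but can also be stripped when a file is re-encoded or screenshotted. As a rough illustration only, and not an OpenAI or C2PA tool, the Python sketch below checks whether a file even appears to carry such a manifest by looking for the standard's "c2pa" label in the raw bytes; genuine verification requires C2PA software such as the open-source c2patool, which also validates the manifest's cryptographic signature.

```python
# Illustrative heuristic only (not an OpenAI or C2PA tool): C2PA manifests are
# embedded in image metadata and include the ASCII label "c2pa", so scanning the
# raw bytes gives a crude hint that credentials may be present. Real verification
# must parse and cryptographically validate the manifest, e.g. with c2patool.
import sys
from pathlib import Path

def appears_to_carry_c2pa_manifest(image_path: str) -> bool:
    """Return True if the file's raw bytes contain a C2PA label (presence check only)."""
    return b"c2pa" in Path(image_path).read_bytes()

if __name__ == "__main__":
    for path in sys.argv[1:]:
        if appears_to_carry_c2pa_manifest(path):
            print(f"{path}: C2PA label found (signature not verified)")
        else:
            print(f"{path}: no C2PA label found, or the metadata was stripped")
```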
Additionally, OpenAI previewed its “provenance classifier,” a tool designed to detect images generated by DALL-E. Initially mentioned during the DALL-E 3 launch in fall 2023, this tool will allow users to upload images to determine their AI origins.
“Our internal testing has revealed encouraging results, even with images subjected to typical modifications,” the blog mentions. “We plan to roll it out to our first group of testers—including journalists, platforms, and researchers—for feedback soon.”
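OpenAI has not published a public API for the classifier, and access is limited to its first group of testers. The sketch below is therefore purely hypothetical, using an invented endpoint URL and invented response fields, only to illustrate what an upload-and-classify flow of this kind could look like from a tester's side.

```python
# Purely hypothetical sketch: OpenAI has not published an API for its provenance
# classifier. The endpoint URL and response fields below are invented placeholders
# meant only to illustrate an upload-and-classify workflow.
import requests

HYPOTHETICAL_ENDPOINT = "https://example.com/v1/provenance/classify"  # not a real URL

def classify_image(path: str, api_key: str) -> dict:
    """Upload an image and return the (hypothetical) classification result."""
    with open(path, "rb") as f:
        response = requests.post(
            HYPOTHETICAL_ENDPOINT,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"image": f},
            timeout=30,
        )
    response.raise_for_status()
    # e.g. {"dalle_generated": true, "confidence": 0.94} -- invented fields
    return response.json()
```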
The Role of AI in Political Campaigning
As political organizations like the Republican National Committee (RNC) in the U.S. increasingly use AI for messaging, including impersonating opponents, the effectiveness of OpenAI’s safeguards in curbing anticipated waves of digital disinformation remains uncertain. Nevertheless, OpenAI is clearly positioning itself as a proponent of truth and accuracy amid concerns over the misuse of its tools.