In response to a report from The Wall Street Journal suggesting that OpenAI has developed a tool capable of reliably identifying essays written by ChatGPT, the company has shared details of its text watermarking research. According to the report, internal debate over whether to release the tool has kept it from the public, even though it is reportedly "ready."
OpenAI stated, "Our teams have developed a text watermarking method that we continue to evaluate as we explore alternatives." The company explained that watermarking is just one of several approaches, along with classifiers and metadata, that it is examining as part of extensive research into text provenance.
While OpenAI says the watermarking method has proven highly accurate in certain scenarios, it is less robust against specific manipulations, such as running the text through a translation system, rewording it with another generative model, or inserting special characters between words and then removing them. OpenAI also raised the concern that watermarking could disproportionately affect certain user groups, noting, “For example, it could stigmatize the use of AI as a helpful writing tool for non-native English speakers.”
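OpenAI has not published how its method works, but publicly described text watermarking schemes typically bias generation toward a pseudo-random "green list" of tokens and later test how often those tokens appear. The sketch below is a minimal, hypothetical detector statistic in Python, not OpenAI's implementation; the names is_green, green_fraction, and green_ratio are invented for illustration. It also suggests why the manipulations above undermine detection: translation, rewording, or an insert-then-strip round trip replaces the token sequence that carried the signal, pushing the statistic back toward chance.

```python
import hashlib

def is_green(prev_token: int, token: int, green_ratio: float = 0.5) -> bool:
    """Pseudo-randomly assign a token to the 'green list', seeded by the
    token that precedes it (a hypothetical partition for illustration)."""
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).hexdigest()
    return int(digest, 16) / 2**256 < green_ratio

def green_fraction(token_ids: list[int]) -> float:
    """Fraction of tokens landing in the green list. Text generated with a
    bias toward green tokens scores well above 0.5; unwatermarked text, or
    watermarked text that has been translated or reworded, scores near 0.5."""
    if len(token_ids) < 2:
        return 0.0
    hits = sum(is_green(p, t) for p, t in zip(token_ids, token_ids[1:]))
    return hits / (len(token_ids) - 1)
```

In a scheme like this, a detector would flag text whose green fraction is statistically far above chance; any edit that rewrites enough of the tokens erases that margin, which is consistent with the weaknesses OpenAI describes.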
In its blog post, OpenAI emphasized that it is weighing these potential risks carefully. The company has also prioritized releasing authentication tools for audiovisual content. An OpenAI spokesperson added that the company is taking a "deliberate approach" to text provenance because of the complexities involved and the likely impact on the broader ecosystem beyond OpenAI's own operations.