AI Leaders, Including OpenAI and Google, Commit to Adding Watermarks to AI-Generated Content

According to a report by Reuters, the U.S. government announced that several leading artificial intelligence companies have voluntarily committed to enhancing AI safety through measures such as watermarking AI-generated content. The seven participating companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—have pledged to improve the security and transparency of their systems, including allowing third-party experts to review their models.

In a statement to TechRadar, the U.S. government underscored the responsibility of tech developers to ensure product safety. "To fully harness the potential of artificial intelligence, the U.S. government encourages the industry to uphold the highest standards, ensuring that innovation does not undermine the rights and safety of the American people," it stated.

As part of their commitments, these companies will perform both internal and external security testing of their AI systems prior to public release. They will also share vital information with industry stakeholders, government agencies, academia, and the public to effectively manage AI risks. Additionally, they plan to invest in cybersecurity and internal threat controls to protect proprietary and unreleased model weights critical for generative AI operations.

The firms will facilitate third-party investigations and report any identified security vulnerabilities. To build public trust, they aim to implement methods to notify users when they encounter AI-generated content, possibly through watermarks or other indicators. Furthermore, the companies will prioritize research on societal risks associated with AI models, including racial discrimination, bias, and privacy concerns.

It is important to note that these commitments are voluntary and carry no specified penalties for noncompliance, which raises questions about how consistently they will be implemented.
