OpenAI and Anthropic have agreed to give the US government access to major new AI models before their public release in order to strengthen safety oversight. Both companies signed memorandums of understanding with the US AI Safety Institute, granting it access to these models before and after launch. The collaboration aims to evaluate safety risks and address potential issues before they reach the public.
This initiative comes as federal and state legislators weigh how to regulate AI without stifling innovation. California lawmakers recently passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which would require AI companies to take specific safety precautions before training advanced foundation models. The bill has drawn opposition from prominent AI firms, including OpenAI and Anthropic, over concerns about its impact on smaller open-source developers; it has since been revised and now awaits the signature of Governor Gavin Newsom.
Meanwhile, the White House is actively seeking voluntary commitments from major companies to prioritize AI safety. Several leading firms have pledged to enhance cybersecurity efforts, conduct research on discrimination, and develop watermarking techniques for AI-generated content.
Elizabeth Kelly, Director of the US AI Safety Institute, remarked that these new agreements mark “an important milestone as we work to responsibly steward the future of AI.”