OpenAI Promises U.S. AI Safety Institute Exclusive Early Access to Its Upcoming Model

OpenAI CEO Sam Altman has announced that the company is collaborating with the U.S. AI Safety Institute, a federal entity focused on evaluating and mitigating risks associated with AI technologies. The partnership will grant the institute early access to OpenAI's next generative AI model for safety testing. Altman shared the update in a post on X late Thursday, although specifics were sparse. The agreement, along with a similar partnership struck with the U.K.'s AI safety authority in June, appears intended to counter the perception that OpenAI has deprioritized AI safety in favor of advancing more powerful generative technologies.

In May, OpenAI disbanded a unit dedicated to developing controls to prevent “superintelligent” AI systems from operating uncontrollably. Reporting indicated that the company had shifted focus from safety research to launching new products, prompting the resignations of the team's co-leads: Jan Leike, who now heads safety research at Anthropic, and OpenAI co-founder Ilya Sutskever, who went on to launch the safety-focused startup Safe Superintelligence Inc.

Facing increasing criticism, OpenAI announced that it was removing the non-disparagement clauses that had discouraged whistleblowing and committed to creating a safety commission. The company also pledged to dedicate 20% of its computing resources to safety research, the same commitment it had made to the now-disbanded safety team but reportedly never fulfilled. Altman reaffirmed both the 20% pledge and the nullification of the non-disparagement terms for current and former staff.

Despite these assurances, some critics remained unconvinced, especially after OpenAI staffed the safety commission entirely with company insiders and reassigned a senior AI safety executive in a recent organizational shake-up. Five senators, including Hawaii Democrat Brian Schatz, formally raised concerns in a letter questioning OpenAI's practices. OpenAI's chief strategy officer, Jason Kwon, responded by asserting the company's commitment to rigorous safety measures throughout its development process.

The timing of the collaboration with the U.S. AI Safety Institute raises eyebrows, particularly following OpenAI's endorsement earlier this week of the Future of Innovation Act, a proposed Senate bill that would empower the Safety Institute to set standards and guidelines for AI development. Taken together, these moves could be read as an attempt by OpenAI to influence the regulatory frameworks taking shape around AI.

Notably, Altman serves on the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board, which provides vital recommendations for the secure development and deployment of AI in critical infrastructure. Furthermore, OpenAI has significantly ramped up its federal lobbying efforts, spending $800,000 in the first half of 2024 compared to $260,000 throughout 2023.

The U.S. AI Safety Institute, housed within the Commerce Department’s National Institute of Standards and Technology, collaborates with a consortium of companies, including Anthropic and major tech players such as Google, Microsoft, Meta, Apple, Amazon, and Nvidia. The group is charged with advancing initiatives outlined in President Joe Biden’s October 2023 AI executive order, focusing on guidelines for AI red-teaming, capability evaluations, risk management, safety protocols, and watermarking of synthetic content.
