Sam Altman: Upcoming OpenAI Model to Undergo U.S. Government Safety Checks First

Amid rising concerns about the safety of advanced artificial intelligence systems, OpenAI CEO Sam Altman announced that the company’s next major generative AI model will undergo safety checks with the U.S. government before its release.

In a post on X, Altman shared that OpenAI has been collaborating with the U.S. AI Safety Institute—a federal agency—on an agreement that grants early access to its new foundation model. This partnership aims to advance AI evaluation science and ensure robust safety measures.

OpenAI’s Commitment to Safety

Altman emphasized that OpenAI has revised its non-disparagement policies, enabling current and former employees to voice concerns about the company's practices freely. Furthermore, the company is committed to dedicating at least 20% of its computing resources to safety research.

Concerns from U.S. Senators

As OpenAI has emerged as a leader in the AI industry with products like ChatGPT and the recently launched SearchGPT, its aggressive release pace has sparked controversy. Critics, including former safety leaders, have accused the company of sidelining safety in favor of speed.

In response to these concerns, five U.S. senators recently wrote to Altman, questioning OpenAI's dedication to safety and raising allegations that the company used non-disparagement agreements to retaliate against former employees who voiced concerns. The senators noted that ongoing safety concerns appear to contradict OpenAI's stated commitment to responsible AI development.

Response from OpenAI's Leadership

According to Bloomberg, OpenAI's chief strategy officer Jason Kwon replied, reaffirming the company's mission to develop AI that serves humanity’s best interests. He highlighted OpenAI’s commitment to implementing rigorous safety measures throughout its processes and maintaining transparency with employees.

Kwon reiterated plans to allocate 20% of computing power to safety research, rescind non-disparagement clauses to promote open dialogue, and work with the AI Safety Institute on safe model releases.

Altman further echoed this commitment on X, though he did not disclose specific details about the collaboration with the AI Safety Institute.

Collaborating for Safer AI

The AI Safety Institute, part of the National Institute of Standards and Technology (NIST), was established during the U.K. AI Safety Summit to tackle the potential risks associated with advanced AI, including national security and public safety concerns. It collaborates with over 100 tech companies, including Meta, Apple, Amazon, Google, and OpenAI, to promote safety in AI development.

Importantly, OpenAI is not solely partnering with the U.S. government. It has also established a similar safety review agreement with the U.K. government.

Safety Concerns Escalate

Safety concerns intensified in May when Ilya Sutskever and Jan Leike, the co-leaders of OpenAI's superalignment team focused on developing safety systems for superintelligent AI, abruptly resigned. Leike openly criticized the company for prioritizing flashy products over essential safety measures.

Following their departures, reports indicated the disbandment of the superalignment team. Despite these setbacks, OpenAI has continued its product releases while bolstering its commitment to safety through in-house research and the formation of a new safety and security committee. This committee, chaired by Bret Taylor (OpenAI board chair), includes notable members such as Adam D’Angelo (CEO of Quora), Nicole Seligman (former Sony Global General Counsel), and Sam Altman.

Through these efforts, OpenAI aims to address the critical safety concerns surrounding advanced AI technologies, striving for a balance between innovation and responsible development.
