Biden Names Leaders for AI Safety Institute Amid Ongoing NIST Funding Concerns

Today, the Biden administration appointed Elizabeth Kelly, a key White House aide instrumental in the President’s AI Executive Order (EO), as the director of the newly established U.S. AI Safety Institute (USAISI) at the National Institute of Standards and Technology (NIST). Elham Tabassi has also been appointed as chief technology officer.

NIST, part of the U.S. Department of Commerce, released its highly anticipated AI risk management framework in January 2023 and was tasked with evaluating and auditing AI technologies under the White House’s AI EO. This includes developing guidelines for conducting AI red-teaming tests to ensure the deployment of safe, secure, and trustworthy AI systems.

The leadership appointments mark the first significant announcement regarding the USAISI since its establishment in November 2023 to support the responsibilities outlined in the AI Executive Order. Nevertheless, details about the institute's operations and funding remain scarce, particularly as NIST, which has approximately 3,400 staff and an annual budget of just over $1.6 billion, faces funding challenges.

A bipartisan group of senators requested $10 million to establish the USAISI within NIST in the fiscal 2024 funding legislation, but the status of this request is unclear. Additionally, in December, members of the House Science Committee criticized NIST for its lack of transparency and for not announcing a competitive process for planned AI research grants related to the USAISI. The lawmakers expressed concern over a proposed AI research partnership between NIST and the RAND Corporation, a think tank connected to the tech industry and the controversial “effective altruism” movement.

In their letter, the lawmakers noted challenges within the AI safety research field, stating, “findings within the community are often self-referential and lack the quality that comes from external critiques.” They highlighted significant disagreements over the scope and definitions within AI safety research.

Rumman Chowdhury, who previously led responsible AI work at Accenture and headed Twitter’s META (Machine Learning Ethics, Transparency, and Accountability) team, discussed the USAISI leadership appointments, noting that while anticipation for updates on NIST and the USAISI is high, the pace of communications is typical given the thorough vetting process involved. Chowdhury emphasized her confidence in NIST’s impartiality, stating, “They’ve been doing this kind of work for a long time... They’re not appointed, so this is not at the whims of whoever is in power.”

However, she stressed that funding remains a critical issue for the USAISI. “This is an unfunded mandate via the executive order,” Chowdhury explained, pointing out the challenges of securing legislative funding in a polarized political landscape. While the UK has allocated £100 million for AI safety, U.S. funding remains uncertain.

Regarding the recent Senate funding request, she said she was aware of it but had not seen any indication that the money had materialized. Chowdhury also highlighted the AI Safety Institute Consortium, which invites organizations to contribute their expertise in AI governance, safety, and responsible AI. Consortium membership fees are set at $1,000 per year, though she was unaware of NIST’s plans concerning research grants.

Calls for funding have persisted. As early as April 2023, Anthropic advocated for $15 million to support NIST’s AI measurement and standards initiatives, framing the request as a straightforward way for policymakers to maintain U.S. leadership in critical technology development.

Despite these challenges, Chowdhury remains optimistic about the USAISI's future within NIST. “I think Elham is one of the most level-headed people I know... We need someone focused on getting the work done,” she said. She also noted that the positioning of the USAISI within NIST sends a positive message about prioritizing scientific integrity over political influence.
