White House Launches New AI Safety Consortium: 200+ Organizations Collaborate to Test and Evaluate AI Models

Biden Administration Launches US AI Safety Institute Consortium

Following the appointment of a senior White House aide as director of the newly established US AI Safety Institute (USAISI) at the National Institute of Standards and Technology (NIST), the Biden Administration unveiled the US AI Safety Institute Consortium (AISIC), touted as “the first-ever consortium dedicated to AI safety.”

This collaboration features over 200 member organizations, including major tech giants like Google, Microsoft, and Amazon, as well as leading large language model (LLM) companies such as OpenAI, Cohere, and Anthropic. The consortium also encompasses research labs, academic teams, civil society organizations, state and local governments, and nonprofits.

According to a NIST blog post, the AISIC represents the largest assembly of test and evaluation teams to date, aiming to establish foundational measurement science in AI safety. Operating under the USAISI, the consortium will support key initiatives outlined in President Biden’s Executive Order, including developing guidelines for red-teaming, capability evaluations, risk management, safety and security protocols, and watermarking synthetic content.

Formation Details

NIST first announced the consortium on October 31, 2023, one day after President Biden signed his AI Executive Order. According to the NIST website, participation was open to organizations able to contribute expertise, products, data, or models. Selected participants pay an annual fee of $1,000 and must enter into a Consortium Cooperative Research and Development Agreement (CRADA) with NIST.

Guidelines for AI Safety

Consortium members will contribute to developing guidelines and tools in nine areas:

1. Create tools and best practices for developing and deploying AI safely, securely, and reliably.

2. Establish guidance for identifying and evaluating potentially harmful AI capabilities.

3. Promote secure-development practices for generative AI, with special consideration for dual-use foundation models.

4. Provide resources to create and maintain testing environments.

5. Develop practices for effective red-teaming and privacy-preserving machine learning techniques.

6. Design tools for authenticating digital content.

7. Define criteria for necessary AI workforce skills, including risk identification, management, testing, and validation.

8. Explore the interplay between society and technology, focusing on how people engage with AI in diverse contexts.

9. Offer guidance on managing interdependencies among AI actors across the AI lifecycle.

Funding Uncertainties

Despite the announcement, questions remain about how the USAISI will be funded. Reports indicate that since the White House's initial announcement in November, few details have been shared about the institute's operational framework or funding sources. The gap is notable because NIST's entire annual budget is just over $1.6 billion, and the agency is already known to be resource-constrained.

In January, a bipartisan group of senators asked the Senate Appropriations Committee to allocate $10 million to help establish the USAISI, but the status of that request remains unclear. Lawmakers on the House Science Committee have also raised concerns about NIST's transparency and the lack of a competitive process for awarding research funds related to the new institute.

Commenting on the funding challenges, Rumman Chowdhury, a leading figure in AI ethics who previously worked at Accenture and Twitter, pointed out that the USAISI operates under an unfunded mandate created by executive order. She said she understood the political complexities impeding legislative action, but stressed that the initiative's lack of funding remains significant.
