In advance of a meeting between Vice President Kamala Harris and leaders from four of America's top AI technology companies (Alphabet, OpenAI, Anthropic, and Microsoft), the Biden Administration announced a series of actions aimed at addressing the risks associated with emerging technologies. The initiative includes a $140 million investment through the National Science Foundation (NSF) to establish seven new National AI Research Institutes.
Additionally, the Administration secured commitments from leading AI firms to participate in a public evaluation of their systems during DEFCON 31, and directed the Office of Management and Budget (OMB) to draft policy guidance on the federal government's use of AI.
A senior administration official explained, "The Biden-Harris administration has been proactive on these issues since before the introduction of the latest generative AI products." The Administration's "AI Bill of Rights" blueprint was unveiled last October to ensure that AI design, development, and deployment protect the rights of the American public. The official emphasized the importance of clarifying values and preserving common-sense protections during this period of rapid innovation.
The announcement and the AI Bill of Rights blueprint offer clear guidelines for companies, policymakers, and developers to mitigate risks to consumers. The federal government already has the authority to protect citizens and hold companies accountable, as the FTC has demonstrated, but the official noted, "There's much more that can be done to ensure we get AI right."
The new National AI Research Institutes will foster collaboration among academia, the private sector, and the government, focusing on ethical applications in areas such as climate, agriculture, energy, public health, education, and cybersecurity.
The White House representative stated, "We also need tech companies to be our partners in this effort. They have a fundamental responsibility to ensure their products are safe and protect people's rights before they are launched to the public."
During Thursday’s meeting, the Vice President will engage with tech leaders to discuss the potential risks of current and upcoming AI developments and highlight their role in promoting responsible innovation. The goal is to work collaboratively to safeguard the American public while leveraging the benefits of new technologies.
The Administration has also secured commitments from seven leading AI companies (Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI) to allow public evaluation of their AI systems at DEFCON 31 (August 10-13). Attendees will have the opportunity to assess whether these models align with the principles outlined in the AI Bill of Rights.
Finally, the OMB plans to release draft policy guidance in the coming months on the federal government's use of AI technologies, establishing specific policies for federal agencies and inviting public feedback before the guidance is finalized.
These initiatives mark significant steps towards responsible innovation in AI, ensuring that advancements enhance lives while safeguarding individual rights and safety.