Just days after President Joe Biden issued a comprehensive executive order on AI development, Vice President Kamala Harris announced six additional machine learning initiatives at the UK AI Safety Summit. Key highlights included the creation of the United States AI Safety Institute (US AISI), the first draft policy guidance on federal AI use, and a commitment to responsible military applications of AI technology.
Harris emphasized the importance of ethical AI adoption: "President Biden and I believe that all leaders—from government to civil society and the private sector—have a moral duty to ensure AI is developed in ways that protect the public and guarantee equitable access to its benefits." She warned that while AI can be transformative, it also poses significant risks, including advanced cyber threats and the potential for AI-generated bioweapons.
While the summit's agenda centered on the existential threats posed by generative AI, Harris pushed for a broader framing. "To define AI safety, we must address the full spectrum of risks — from threats to humanity as a whole to dangers faced by individuals, communities, and vulnerable populations," Harris stated. "To ensure AI is safe, we must manage all these dangers."
Announcing the establishment of the US AISI within the National Institute of Standards and Technology (NIST), Harris outlined its responsibilities. The institute will develop guidelines, benchmark tests, and best practices for evaluating potentially dangerous AI systems, including conducting red-team exercises as highlighted in Biden's executive order. The AISI will also provide technical guidance to lawmakers and law enforcement on various AI-related issues, such as content authenticity and discrimination mitigation.
In addition, the Office of Management and Budget (OMB) will soon release a draft policy guidance for public comment on government AI use. This guidance, building on the Blueprint for an AI Bill of Rights, aims to advance responsible AI innovation while ensuring transparency and protecting federal employees from heightened surveillance and job displacement. Public comments can be submitted at ai.gov/input.
Harris also noted progress on the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, which the US issued in February and which has since drawn 30 signatories committed to responsible military AI development, leaving some 165 nations yet to sign on.
The administration is also launching a virtual hackathon aimed at reducing the harm caused by AI-enabled phone and internet scams. Participants will develop AI models to combat robocalls and robotexts, particularly those targeting vulnerable populations such as the elderly.
Content authentication is a major focus for the Biden-Harris administration. The executive order outlined initiatives led by the Commerce Department to validate content produced by the White House through collaboration with industry groups like C2PA. This effort includes establishing industry norms and voluntary commitments from major AI firms. Harris called for global collaboration in creating standards for authenticating government-produced content, stating, "These voluntary commitments are an initial step toward a safer AI future, with more to come."
She concluded by stressing the need for legislation that bolsters AI safety without hindering innovation: "As history has shown, without regulation and strong oversight, technology companies may prioritize profit over the well-being of their customers, community security, and democratic stability."