As the AI Safety Summit begins in Seoul, South Korea, this week, the United Kingdom, one of the co-hosts, is ramping up its efforts in AI safety. The AI Safety Institute, established in November 2023 with the aim of assessing and mitigating risks associated with AI platforms, is set to open a new office in San Francisco.
This strategic move is designed to position the Institute closer to the heart of AI innovation. The San Francisco Bay Area hosts leading companies such as OpenAI, Anthropic, Google, and Meta, all pioneers in developing foundational AI technologies.
Foundational models serve as the essential building blocks for generative AI services and a variety of applications. Interestingly, despite having signed a Memorandum of Understanding (MOU) with the United States to collaborate on AI safety initiatives, the U.K. has opted to establish a presence in the U.S. to address these critical issues. “Having a team on the ground in San Francisco will provide access to the headquarters of many AI companies,” said Michelle Donelan, U.K. Secretary of State for Science, Innovation, and Technology, in an interview. “While many of these firms have operations in the U.K., we see this presence as vital for accessing a broader talent pool and enhancing collaboration with the United States.”
This proximity to leading AI innovators not only serves to better understand ongoing developments but also elevates the U.K.'s visibility among these influential companies. This is particularly important as the U.K. views AI as a significant opportunity for economic growth and investment.
Given the recent upheaval at OpenAI regarding its Superalignment team, establishing a presence in the Bay Area seems timely. Currently, the AI Safety Institute, which launched in November 2023, operates with a modest staff of 32 employees—small in comparison to the tech giants investing billions in AI development.
One of the Institute's recent achievements is the release of "Inspect," its first set of safety testing tools for foundational AI models introduced earlier this month. Donelan described this release as “phase one.” Benchmarking models has proven to be a complex task, and current engagement with companies is largely voluntary and inconsistent. According to a senior official at a U.K. regulator, there is currently no legal requirement for companies to have their models reviewed, and many firms may hesitate to undergo evaluation prior to release. This could create a situation where risks are identified too late.
Donelan emphasized that the AI Safety Institute is actively developing strategies to engage with AI companies for evaluation purposes. “Our evaluation process is evolving,” she stated. “With each evaluation, we will refine and enhance the process further.”
In Seoul, one of the key objectives of the conference is to present the Inspect tools to regulators in hopes they will adopt them as well. “With our evaluation system in place, phase two must focus on ensuring AI safety across all sectors of society,” she added.
In the long run, Donelan anticipates the U.K. will create more legislation regarding AI. However, echoing Prime Minister Rishi Sunak’s stance, she noted the importance of gaining a thorough understanding of AI risks before proceeding with legislation. “We are committed to understanding the full scope of the issues at hand,” she remarked, referencing the Institute’s recent international AI safety report, which highlighted significant gaps in current research and the need for more global research endeavors.
“Creating legislation takes about a year in the U.K. If we had started legislation before organizing the AI Safety Summit last November, we would still be in the drafting phase with little to show for it,” Donelan explained.
Ian Hogarth, Chair of the AI Safety Institute, reiterated the importance of an international approach to AI safety, emphasizing collaboration with other nations to identify risks associated with advanced AI technologies. “Today represents a pivotal moment for advancing our agenda, and we are excited to expand our operations in an area rich with tech talent while building on the expertise our London team has developed since inception.”