Countries Pledge to Share AI Risk Thresholds at Seoul AI Safety Summit for Enhanced Global Cooperation

The recent Seoul AI Safety Summit concluded with a decisive commitment from nations around the globe to share risk thresholds for the development and deployment of foundation models. The gathering, a follow-up to last November's AI Safety Summit, culminated in the signing of the Seoul Ministerial Statement, an agreement that aims to quantify potential AI risks and define what constitutes a "severe risk."

Countries including Australia, Canada, Chile, France, Germany, India, Indonesia, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, Nigeria, New Zealand, the Philippines, the Republic of Korea, Rwanda, Saudi Arabia, Singapore, Spain, Switzerland, Türkiye, Ukraine, the United Arab Emirates, the U.K., and the U.S., together with the EU, endorsed the statement. The agreement focuses in particular on risks associated with foundation models, advanced AI systems capable of processing a diverse range of inputs such as text and images.

Participants acknowledged that severe risks could arise if a foundation model were exploited by malicious actors to facilitate access to chemical or biological weapons. Delegations also committed to partnering with leading technology firms to establish and publish risk frameworks for foundation models ahead of the AI Action Summit scheduled for early 2025 in France.

Lee Jong Ho, Korea’s Minister of Science and ICT, emphasized the significance of the summit: “Through this AI Seoul Summit, 27 nations and the EU have set the goals of AI governance as safety, innovation, and inclusion. Governments, industry leaders, academia, and civil society from various countries are united in advancing our global AI safety capabilities and striving for sustainable AI development.”

Insights from the summit will feed into a report on the science of AI safety, a document intended to capture the knowledge shared among participants and to inform policymakers and technology developers worldwide.

Michelle Donelan, U.K. Secretary of State for Technology, remarked on the importance of the agreements reached in Seoul: “These developments signify the onset of Phase Two in our AI Safety agenda, wherein we adopt actionable steps to enhance resilience against AI risks and deepen our understanding of the scientific frameworks necessary for a unified approach to AI safety in the future. For companies, this means establishing risk thresholds beyond which they will refrain from releasing their models, while countries will work in tandem to delineate severe risk thresholds.”

The collective efforts made during the Seoul AI Safety Summit mark a significant step forward in bolstering international cooperation in AI governance, paving the way for safer, more responsible development in this rapidly evolving field.
