Seoul Summit: Global Leaders and Corporations Unite for AI Safety Commitment

Government officials and AI industry leaders convened on Tuesday to discuss essential safety measures in the rapidly evolving field of artificial intelligence (AI) and to establish an international safety research network.

Nearly six months after the inaugural global summit on AI safety at Bletchley Park in England, Britain and South Korea are hosting the AI Safety Summit this week in Seoul. This summit highlights the new challenges and opportunities presented by the emergence of AI technology.

On Tuesday, the British government announced a groundbreaking agreement among 10 nations and the European Union to form an international network, modeled after the U.K.’s AI Safety Institute—the world's first publicly supported organization aimed at enhancing AI safety science. This network will foster a common understanding of AI safety, aligning its efforts with research, standards, and testing protocols. The countries involved in this agreement are Australia, Canada, the EU, France, Germany, Italy, Japan, Singapore, South Korea, the U.K., and the U.S.

During the opening day of the AI Summit in Seoul, global leaders and influential AI companies participated in a virtual meeting led by U.K. Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol. The discussion centered on AI safety, innovation, and inclusion.

The leaders affirmed the broader Seoul Declaration, which calls for enhanced international collaboration in developing AI to tackle significant global issues, uphold human rights, and bridge digital divides, all while ensuring that AI remains "human-centric, trustworthy, and responsible."

"AI is an incredibly exciting technology, and the U.K. has spearheaded global efforts to harness its potential by hosting the world’s first AI Safety Summit last year," Sunak stated in a government announcement. "However, to reap the benefits of this technology, we must prioritize safety. That’s why I’m pleased to announce an agreement today for a network of AI Safety Institutes."

Just last month, the U.K. and the U.S. solidified a partnership through a memorandum of understanding to collaborate on research, safety evaluations, and best practices concerning AI safety.

The newly announced agreement follows the world's first AI Safety Commitments made by 16 prominent AI companies, including Amazon, Anthropic, Cohere, Google, IBM, Inflection AI, Meta, Microsoft, Mistral AI, OpenAI, Samsung Electronics, Technology Innovation Institute, xAI, and Zhipu.ai, the latter being a Chinese company backed by Alibaba, Ant Group, and Tencent.

These AI companies, based in countries including the U.S., China, and the United Arab Emirates (UAE), have committed to ensuring that no model or system is developed or deployed if safety measures cannot mitigate risks to acceptable levels, according to the U.K. government's statement.

“It’s a world first to have so many leading AI companies from diverse regions agreeing to shared commitments on AI safety,” Sunak emphasized. “These commitments guarantee that the world’s foremost AI entities will prioritize transparency and accountability in their efforts to create safe AI solutions.”
