U.K. Prime Minister Rishi Sunak emphasized the need for a balanced perspective on artificial intelligence (AI) as he prepares for the upcoming AI Safety Summit on November 1 and 2. Speaking at the Royal Society, the nation’s leading scientific institution, he acknowledged the significant risks associated with AI, particularly its potential misuse by malicious actors. However, he urged against adopting an excessively alarmist stance.
“Currently, the entities developing AI are also the ones responsible for testing its safety, and even they often lack a complete understanding of their models' future capabilities,” Sunak remarked. “There are compelling incentives to compete by creating the most advanced models swiftly, leading us to question whether we should trust them to assess their own outcomes.”
Sunak opposed hasty regulatory measures that could stifle innovation, stating, “How can we create effective laws around something we do not fully comprehend? Instead, we are fostering world-leading capabilities within the government to understand and evaluate AI model safety.” The U.K. government's approach to AI legislation has been notably less stringent than the European Union's: it set out its requirements in a white paper and delegated rule-making to regulators in specific domains.
In June, during the AI Summit London conference, the country’s AI minister indicated that forthcoming regulations would complement technical standards and assurance techniques, signaling potential enhancements in oversight.
**A Collaborative Approach to AI Safety**
Sunak expressed his intent to work collaboratively with other nations on AI safety, rather than adopting an adversarial posture. This collaborative spirit is underscored by the invitation extended to China for the summit, despite ongoing tensions between Beijing and the U.S. He articulated the importance of engaging diverse perspectives, asserting, “A serious strategy for AI must involve dialogues with all of the world’s leading AI powers.”
China has implemented stringent regulations for its AI firms, requiring a security review by the national data oversight body before new generative AI models can be released to the public. Nonetheless, the Chinese government has indicated its willingness to participate in international discussions on AI oversight, although President Xi Jinping emphasized a focus on national security.
Deputy Prime Minister Oliver Dowden confirmed that China has accepted the summit invitation, though he noted that confirmation from all invited participants was still pending.
Sunak aims for the summit to cultivate a shared understanding of AI risks and seeks consensus on the first international statement regarding these challenges. He envisions the establishment of a “truly global expert panel” on AI, comprising nominations from attending countries and organizations, tasked with publishing a comprehensive ‘State of AI Science’ report.
“Our success hinges on collaboration with AI companies. As technology advances, we must ensure our collective understanding of the risks adapts in tandem,” he stated. This initiative aligns with calls from notable figures, including U.N. Secretary-General António Guterres, to create a global regulatory body for monitoring AI safety, akin to the International Atomic Energy Agency (IAEA).
**Industry Perspectives and Concerns**
As Sunak promotes the vision of the U.K. as a leading AI superpower, the government has allocated $120 million to a group that will advise on AI matters, featuring Turing Award recipient Yoshua Bengio among its ranks. The summit will host prominent industry leaders, including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis, alongside high-profile politicians like U.S. Vice President Kamala Harris.
Maya Dillon, Head of AI at Cambridge Consultants, emphasized the need for cooperation amid the AI revolution, stating, “The challenge is not just to embrace AI but to direct its course thoughtfully. This revolution ought to intertwine business success with societal well-being, requiring genuine collaboration.”
Paul Henninger, Head of Connected Technology at KPMG U.K., noted that while the summit could spark collaborative risk assessment strategies, organizations will also expect regular updates as technology continues to evolve.
Despite the ambitious goals set for the summit, Chris Royles, EMEA Field CTO at Cloudera, viewed the quest for comprehensive regulations as a lofty aspiration. He suggested that businesses should concentrate on training their AI models with trusted data sources. Similarly, Fabien Rech, General Manager at Trellix, advocated for prioritizing security measures in AI development to bolster confidence and protect against cyber threats.
**Highlighting AI Risks**
Before Sunak's address, the U.K. government released a paper delineating potential AI risks, which included:
- **Societal Harms**: The generation of misinformation and deepfakes, job disruptions due to automation, and algorithmic biases that could lead to unfair outcomes.
- **Misuse Risks**: The potential for AI technologies to facilitate the creation of weapons or enhance the effectiveness of cyberattacks and disinformation efforts.
- **Loss of Control Risks**: Concerns regarding humans relinquishing decision-making powers to misaligned AI systems and advanced agents seeking to increase their influence.
The report also outlined pervasive challenges that could heighten these risks, such as the complexities of designing safe AI systems, evaluating their safety, and ensuring accountability in their deployment.
In response to the report, Sjuul van der Leeuw, CEO of Deployteq, commended the U.K. government's serious approach to AI safety, recognizing the significant opportunities AI offers across various industries, contingent upon effective regulation and guidance from policymakers.