The U.K. government has announced an investment of nearly £90 million ($113 million) to establish nine new research hubs dedicated to advancing responsible artificial intelligence (AI), in collaboration with the United States. The hubs will build on British expertise in fields including health care, chemistry, and mathematics.
In addition, the Arts and Humanities Research Council will allocate £2 million ($2.5 million) to new initiatives that define and implement responsible AI practices in education, policing, and the creative industries. A further £19 million ($23.8 million) will fund 21 projects to develop machine learning and AI solutions that are both effective and ethically applied.
To support these efforts, the government plans to establish a steering committee, set to launch in the spring, to guide regulatory activity on AI. The announcement follows the establishment of the £100 million ($125.5 million) AI Safety Institute at last year’s AI Safety Summit; the institute is responsible for evaluating the risks posed by emerging AI technologies.
Tamara Quinn, an IP and AI partner at law firm Osborne Clarke, said that while the announcement offers direction, it may not meet public expectations. “After nine months considering this transformative technology, the government's response might be seen as underwhelming,” she commented.
Today’s announcement centers primarily on existing regulators using their current powers to address AI. Given the anticipated timeline for new legislation, with a general election on the horizon and priority placed on three digital technology bills (the Digital Markets, Competition and Consumers Bill; the Data Protection and Digital Information Bill; and the Media Bill), the chances of significant legislative change seem limited.
To bolster oversight, the U.K. government has earmarked £10 million ($12.5 million) to prepare and upskill regulators as they take on responsibilities linked to AI governance. The funding will help regulators develop research and practical tools for monitoring AI-related risks. Regulatory bodies such as Ofcom and the Competition and Markets Authority must publish their approach to managing AI by April 30, outlining the AI-related risks specific to their domains, assessing their existing skill sets, and setting out their plans for regulating AI over the coming year.
Michelle Donelan, Secretary of State for Science, Innovation and Technology, remarked, “AI is evolving rapidly, but we have demonstrated that we can adapt just as quickly. By adopting a flexible, sector-specific approach, we are proactively managing the associated risks.”
However, concerns have been raised about the government’s current strategy. A recent House of Lords report found the focus overly concentrated on large language models, stressing the need to broaden efforts toward seizing opportunities while addressing immediate security and societal risks. The House of Lords Communications and Digital Committee cautioned that without widening its safety initiatives, the U.K. risks falling behind its global competitors, losing influence, and becoming overly reliant on foreign technology companies for critical advances in AI.
As the U.K. aspires to cement its position as a global leader in AI, Prime Minister Rishi Sunak previously hosted the AI Safety Summit, bringing global leaders together to forge a shared approach to responsible AI. The government’s commitment to building one of Europe’s most powerful supercomputers further underscores that ambition, as does its collaboration with leading AI experts, including Turing Award laureate Yoshua Bengio, who is advising the Prime Minister on the future of AI development.