UK Government Unveils $100M+ Initiative to Boost ‘Responsible’ AI Research and Development

The U.K. government is releasing its response to an AI regulation consultation that began last March, in which it expressed a preference for relying on existing laws and regulators, supplemented by “context-specific” guidance, for light-touch oversight of the rapidly evolving tech sector. The full response was published later the same morning. In a press release preceding the publication, the Department for Science, Innovation and Technology (DSIT) positions the plan as a step towards enhancing U.K. “global leadership” through targeted measures, including over £100 million (~$125 million) in additional funding aimed at strengthening AI regulation and spurring innovation.

According to DSIT, an extra £10 million (~$12.5 million) will be allocated to help regulators “upskill” for their expanded responsibilities. The funding is intended to help them apply existing sectoral rules to AI developments and enforce current laws against AI applications that breach them, which may require building bespoke technical tools. DSIT noted, “The fund will support regulators in creating innovative research and tools to mitigate risks in sectors like telecoms, healthcare, finance, and education. For instance, it could fund new technical tools to analyze AI systems.” However, the department did not say whether any of the money would go towards recruiting additional staff.

The government also announced a substantial £90 million (~$113 million) investment to establish nine research hubs across the U.K., aiming to promote homegrown AI innovation in key areas such as healthcare, mathematics, and chemistry. The size of this allocation highlights the government's prioritization of domestic AI development over the comparatively modest resources earmarked for AI safety.

While DSIT confirmed that the £10 million fund for expanding regulatory capabilities has yet to be set up, a spokesperson emphasized the importance of proceeding carefully to achieve the intended objectives and secure value for taxpayers’ money. Additionally, the £90 million for the research hubs will be distributed over five years, beginning February 1. The spokesperson said investments in individual hubs range from £7.2 million to £10 million but did not disclose details about the six hubs beyond the three areas already named.

Notably, the government is maintaining its stance on refraining from introducing new AI legislation for the time being. DSIT stated, “The U.K. government will not rush to legislate, avoiding ‘quick-fix’ rules that may quickly become outdated.” Instead, the strategy empowers existing regulators to address AI risks in a focused manner.

In an executive summary of the consultation response, Secretary of State for Science, Innovation and Technology, Michelle Donelan, acknowledged that the complexities of AI will eventually necessitate legislative action worldwide once the risks are better understood. She suggested that “targeted binding requirements” may be needed to ensure accountability among the handful of tech giants developing highly capable AI systems and to ensure the safety of their technologies. For now, however, no binding requirements have been proposed, as they would necessitate new legislation.

The implication of Donelan's remarks is that, as AI capabilities and societal impacts grow, some mandatory measures will eventually be needed across jurisdictions to tackle potential AI-related harms and assure public safety. In her view, though, rushing to act before the risks are properly understood could squander the benefits of the technology: “We will take our time to get this right — we will legislate when we are confident that it is the right choice.”

This cautious approach is understandable given that the current government is preparing for an election this year, which polls suggest it may struggle to win. Consequently, with time running short in the Parliament, the ability to enact legislation on complex subjects like AI appears limited.

In contrast, the European Union has recently finalized its own risk-based framework for regulating “trustworthy” AI, setting the stage for enforcement later this year. This divergence highlights the U.K.'s strategy of opting against stringent legislative measures on AI, suggesting an effort to create a more inviting environment for tech developers, despite the EU's commitment to legal clarity for businesses and its own suite of AI support initiatives.

“The U.K.’s flexible regulatory framework allows regulators to quickly respond to emerging risks while providing developers with the freedom to innovate and grow,” DSIT proclaimed. Specifically aimed at enhancing business confidence, the release noted that key regulators, including Ofcom and the Competition and Markets Authority (CMA), have been tasked with outlining their approach to AI regulation by April 30. This initiative is expected to clarify AI-related risks in their sectors and detail plans for managing these risks in the coming year, implying that AI developers must remain vigilant regarding evolving regulatory priorities.

Prime Minister Rishi Sunak appears to be cultivating relationships within the tech sector, frequently engaging with technology leaders and hosting a “global AI safety summit” at Bletchley Park. This administration's choice to delay implementing any stringent new rules seems fitting under the current circumstances.

In parallel, the government has emphasized its urgency to distribute taxpayer funding to stimulate domestic AI innovation. Alongside the aforementioned £90 million for research hubs, additional funding of £2 million through the Arts & Humanities Research Council (AHRC) aims to define what responsible AI entails across various sectors, such as education and policing. This is part of the AHRC's existing Bridging Responsible AI Divides (BRAID) initiative.

Moreover, £19 million will be allocated for 21 projects focused on creating “trusted and responsible AI and machine learning solutions” to enhance the deployment of AI technologies and boost productivity. DSIT mentioned that these initiatives will derive funding from the Accelerating Trustworthy AI Phase 2 competition, supported by the U.K. Research & Innovation (UKRI) Technology Missions Fund.

Donelan emphasized the U.K.'s innovative approach to AI regulation, positioning the country as a global leader in both AI safety and development. She expressed a personal commitment to AI's potential to revolutionize public services and the economy, ultimately leading to advancements that could combat serious health conditions.

Today's funding announcements reflect an additional investment beyond the previous £100 million dedicated to the AI safety taskforce, now termed the AI Safety Institute, which is focused on foundational AI models. Concerns have surfaced regarding the absence of a robust peer review system in the government’s approach to funding AI projects. However, a DSIT spokesperson reassured that UKRI continues its standard competitive funding processes, where proposals are scrutinized by independent experts.

DSIT is collaborating with regulators to finalize the specific details surrounding project oversight to ensure that the transformative possibilities of AI can be realized while simultaneously addressing associated risks.

On foundational model safety, DSIT’s release indicated that the AI Safety Institute aims to enhance the U.K.'s ability to evaluate AI technologies through international collaboration. Additionally, a further £9 million investment via the International Science Partnerships Fund aims to unite U.K. and U.S. researchers focused on developing reliable and trustworthy AI solutions.

DSIT’s release also signals the government's intention to eventually propose targeted binding requirements for organizations developing advanced general-purpose AI systems, so that they can be held accountable for the safety of their technologies, though no such requirements have been tabled yet. Overall, the announcement underscores an ongoing commitment to navigating the nuanced landscape of AI regulation while fostering innovation and addressing potential risks.
