EU and US Prepare to Unveil Collaborative Efforts on AI Safety, Standards, and Research & Development

The European Union and the United States are poised to announce a significant collaboration on artificial intelligence (AI) at this Friday's meeting of the EU-U.S. Trade and Technology Council (TTC). A senior European Commission official, briefing journalists ahead of the event, pointed to a growing spirit of cooperation between lawmakers on both sides of the Atlantic. The joint effort aims to navigate the challenges and opportunities presented by advanced AI technologies, notwithstanding the current dominance of U.S. companies such as OpenAI in the AI landscape.

Established in the wake of the Trump administration, the TTC serves as a platform for EU and U.S. lawmakers to discuss critical trade and tech policy issues. Friday's gathering will be the sixth meeting since the forum began its work in 2021, and it carries particular weight given upcoming elections on both sides of the Atlantic. Concern that a second Trump presidency could disrupt future EU-U.S. cooperation may motivate legislators to seize current opportunities for collaboration.

According to the senior Commission official, "We will certainly announce the establishment of an AI Office and the [U.S.] AI Safety Institute," referring to the forthcoming EU oversight body created under the EU AI Act, the bloc's comprehensive, risk-based framework for regulating AI applications that is due to come into force later this year, and to its U.S. counterpart. The anticipated agreement emphasizes a “collaboration or dialogue” between the two AI oversight bodies, aimed at strengthening how each exercises its oversight and regulatory powers over AI technologies.

A key aspect of the expected EU-U.S. AI partnership will be standardization: the two sides plan to draw up a joint "AI roadmap" to develop foundational standards that guide future development. The partnership will also address "AI for public good," promoting collaborative research initiatives aimed at applying AI in developing countries and the Global South.

The official noted a mutual recognition of the tangible benefits AI technologies can offer to developing regions, particularly in sectors like healthcare, agriculture, and energy. This focus on fostering AI in these areas underscores the potential for transatlantic collaboration to yield meaningful impacts in the near future.

The landscape of AI is evolving, with the U.S. no longer viewing AI solely as a trade concern. "Through the TTC, we can effectively communicate our policies and demonstrate to our American counterparts that our goals align," the official remarked. This alignment is evident through the AI Act and the U.S. Executive Order focused on mitigating AI-related risks while facilitating the adoption of AI in both economies.

Earlier this week, the U.S. and U.K. solidified a partnership on AI safety, but the impending EU-U.S. collaboration promises to be broader in scope, encompassing not just safety protocols and standardization but also joint efforts to support “public good” research initiatives.

The discussion also touched on emerging technology cooperation, particularly in electronic identity (e-ID). The EU has been developing an e-ID proposal for several years, and there are indications that the U.S. is keen to explore the extensive business opportunities offered by the EU's electronic identity wallet.

Furthermore, the official highlighted a growing consensus between the EU and U.S. on tackling platform power, an area where the EU has already passed significant legislation such as the Digital Markets Act (DMA). “We observe many commonalities between EU laws like the DMA and recent antitrust cases in the United States,” the official noted, emphasizing the potential for mutually beneficial outcomes.

In a related development, the U.S.-U.K. AI memorandum of understanding, signed this week, outlines plans to enhance cooperation on AI safety, including national security and broader societal concerns. The agreement includes at least one joint testing initiative focused on a widely accessible AI model, with possibilities for personnel exchanges between the two nations’ AI safety institutes to share expertise.

The U.S.-U.K. arrangement anticipates broader information sharing on AI capabilities, risks, and fundamental research concerning AI safety and security. This collaboration aims to establish a unified approach to AI safety testing, encouraging researchers on both sides of the Atlantic to converge on a common scientific foundation.

Last summer, ahead of a global AI summit, the U.K. government secured commitments from major AI companies, including Anthropic, Google DeepMind, and OpenAI, for priority access to their models to support research on evaluation and safety. The U.K. also announced a £100 million investment in an AI safety taskforce focused on foundation models.

At the U.K.’s AI Safety Summit last November, U.S. Commerce Secretary Gina Raimondo announced the creation of a U.S. AI Safety Institute, following an executive order on AI; the institute is intended to collaborate closely with AI safety bodies established by other governments.

While neither the U.S. nor the U.K. has proposed comprehensive legislation on AI safety thus far, the EU leads the charge in legislative efforts. Nonetheless, the momentum toward cross-border collaboration in AI is unmistakable.
