EU AI Act Negotiations Face Tough Challenges
Discussions among European Union lawmakers regarding a risk-based framework for regulating artificial intelligence (AI) are at a critical juncture. During a roundtable convened by the European Center for Not-For-Profit Law (ECNL) and the civil society association EDRi, Brando Benifei, a Member of the European Parliament (MEP) and one of the Parliament's co-rapporteurs on the AI Act, characterized the ongoing talks as “complicated” and “difficult.”
These closed-door discussions, known as “trilogues,” are pivotal in shaping EU legislation. Key points of contention include the prohibition of certain AI practices (notably, Article 5’s list of banned uses), fundamental rights impact assessments (FRIAs), and national security exemptions. Benifei indicated that MEPs have established firm positions on these matters and are seeking concessions from the Council, which has yet to yield significantly. “We cannot compromise on the protection of citizens’ fundamental rights,” he stated. “While we aim to finalize this by early December, we won’t rush to conclude if it means conceding on these critical issues.”
Civil society representatives also shared their concerns about the negotiations' progress. Sarah Chander, EDRi's senior policy adviser, expressed skepticism about how key civil society recommendations designed to protect fundamental rights against AI risks are being received. She pointed to significant opposition among Member States to a complete ban on remote biometric identification systems, and to the lack of consensus on the registration and risk classification of high-risk AI applications in law enforcement.
“We must ensure that the AI Act effectively protects fundamental rights and democratic freedoms,” emphasized Benifei. “I believe we are on the right path, but we cannot give governments flexibility on sensitive issues.” The trilogue discussions involve parliamentarians, representatives of EU Member States (via the Council of the EU), and the EU's executive body, the Commission, which originally proposed the draft legislation. However, such negotiations often struggle to achieve a balanced compromise and can be hindered by deep-rooted disagreements, as seen with the still-stalled ePrivacy Regulation.
Transparency remains a significant issue within trilogues, with increasing concerns that tech policies are becoming prime targets for industry lobbyists seeking to influence legislation. The AI Act has attracted considerable lobbying efforts from both US tech giants and European startups wanting to rival their American counterparts.
Industry Lobbying and Regulatory Challenges
Benifei highlighted that regulating generative AI and foundational models has become another divisive issue among EU lawmakers, driven by aggressive lobbying from industry stakeholders. “There’s a substantial amount of pressure and lobbying aimed at Member States from the industry,” he explained, emphasizing the need to uphold high regulatory standards.
Recent reports by Euractiv revealed that discussions among a technical group of the European Council faltered when representatives from France and Germany opposed MEPs' proposals for a structured regulation of foundational models. Notably, French AI startup Mistral is reportedly leading resistance against stricter regulations, with German startup Aleph Alpha also actively lobbying governments for leniency regarding generative AI governance.
Corporate Europe Observatory confirmed that France and Germany are pressing for a regulatory exemption for foundational models. Bram Vranken, a representative of the organization, criticized extensive lobbying by Big Tech: “While these companies publicly advocate for the regulation of dangerous AI, they privately pursue a laissez-faire approach that allows them to dictate the rules.”
Mistral’s CEO Arthur Mensch acknowledged his company’s efforts to persuade lawmakers against imposing obligations on foundational model creators, clarifying that they are not obstructing progress. “We maintain that regulating foundational models doesn't make sense; regulations should focus on applications instead,” Mensch stated.
Aleph Alpha had not responded to a request for comment on the lobbying allegations by the time of writing. Max Tegmark, president of the Future of Life Institute, raised alarms about potential regulatory capture: “Allowing Big Tech to exempt foundational models could undermine the EU AI Act's integrity, negating years of hard work and leaving European companies vulnerable to lobbying from both local startups and US firms.”
With influential Member States like France reluctant to budge, uncertainty looms over the Council's stance on foundational models. An EU source acknowledged that the outstanding issues remain “tough points” for Member States, which have shown minimal flexibility. Still, there is cautious optimism for a resolution at the trilogue on December 6, as discussions continue among Member State representatives and the Spanish presidency seeks a revised negotiation mandate.
Next Steps and Fundamental Rights Impact Assessments
Benifei expressed hope for a compromise on FRIAs, indicating that MEPs are pushing for robust protections for fundamental rights in AI applications. He noted that data protection laws already encourage proactive evaluations of potential risks and suggested FRIAs should similarly prompt developers to assess the impact of their AI tools on democratic freedoms.
Despite significant opposition from the private sector, which views FRIAs as burdensome, Benifei believes maintaining civil society pressure on governments is crucial to prevent negotiations from stalling. Lidiya Simova, policy advisor to MEP Petar Vitanov, underscored that diminishing obligations on private companies to conduct FRIAs would weaken their effectiveness.
Simova cautioned against further downgrades that might dilute the intent of FRIAs, stating, “If obligations lack meaningful consequences, what value do they hold?” She remarked that difficulties in reaching consensus on the AI Act are not merely about individual issues but also stem from broader structural challenges. “The ongoing effort to integrate fundamental rights within product safety legislation complicates a timely resolution,” she noted.
If negotiations falter, the EU's ambition to establish itself as a leader in AI governance could be derailed by the pressing timeline as European elections approach. Establishing a robust legal framework for AI was flagged as a priority by European Commission President Ursula von der Leyen in 2019, leading to the Commission's initial legislative proposal in April 2021. The urgency has intensified since the rise of generative AI and ChatGPT, which has also heightened the tech industry's resistance to increased regulatory oversight.
The next trilogue meeting on December 6 is crucial. Failure to reach an agreement there would significantly compress the remaining negotiating window, particularly with impending elections set to reshape the EU's political landscape. Should negotiations extend into next year, the changed dynamics under a new Council presidency would present additional challenges. The current Commission's record in advancing digital regulation is notable, but whether it can successfully navigate the complexities of AI regulation remains uncertain.
The discourse from yesterday’s roundtable highlighted the intricate global landscape where regulatory actions in one jurisdiction affect others, underscoring that without swift, meaningful regulation, foundational challenges in AI governance will persist.
This report includes additional comments from Max Tegmark and further remarks from Mensch in response to follow-up queries.