On December 8, the European Parliament and the 27 member states reached a historic agreement on regulations that will establish the world’s first comprehensive framework governing artificial intelligence (AI). The deal came after intense negotiations over contentious issues, including law enforcement’s use of facial recognition technology and the regulation of generative AI.
Thierry Breton, the EU's Internal Market Commissioner, celebrated the milestone, tweeting, "Historic! The EU becomes the very first continent to set clear rules for the use of AI. The AI Act is much more than a rulebook - it's a launchpad for EU startups and researchers to lead in the global AI race." The Act permits biometric identification systems under strict conditions while prohibiting social scoring and AI that exploits individuals’ vulnerabilities. Importantly, it empowers consumers with the right to file complaints and receive "meaningful" explanations regarding AI-related decisions affecting them.
Penalties for violations of the Act range from €7.5 million (about $8 million) or 1.5% of global revenue to €35 million (approximately $38 million) or 7% of revenue, depending on the infringement and the size of the company, underscoring the strict enforcement mechanisms designed to ensure compliance. In a statement, the European Parliament noted that the regulation aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI applications, while simultaneously fostering innovation and solidifying Europe’s leadership in the AI sector. The new rules categorize AI systems based on their potential risks and the extent of their impact.
Now, the next step involves a formal vote by the European Parliament, anticipated to occur early next year. Yann LeCun, Chief AI Scientist at Meta, expressed support for the agreement, commending the French, German, and Italian governments for their commitment to open-source models. The new regulations offer broad exemptions for open-source projects, a crucial development for many innovators in the field.
Amanda Brock, CEO of OpenUK, expressed cautious optimism regarding potential exemptions for nonprofit organizations that generate funds through open-source software. "If this is indeed true, it would be a significant victory for the open-source communities," she noted.
In the days leading up to this agreement, negotiations were intense, extending beyond the December 6 deadline initially set for finalizing the EU AI Act. Key topics of contention included the governance of generative AI systems, such as ChatGPT, and the use of AI in biometric surveillance. Reports indicate that lawmakers engaged in over 20 hours of continuous debates to reach consensus on these critical issues. The pressing nature of the discussions even led Thierry Breton to remark on social media, "New day, same trilogue."
The AI Act marks a transformative step in how AI technology will be regulated, aiming to balance safety and innovation. Biometric identification systems are classified as 'high-risk' under the legislation, and their use by law enforcement in publicly accessible spaces remained a sticking point during negotiations, with some member states advocating exceptions for national security and military purposes.
The introduction of new rules covering foundation models, including advanced systems like GPT-4 and Gemini, highlights the evolving landscape of AI governance in Europe. The December 6 deadline also marked one year since the Council of the EU adopted its position on the bill, and the final agreement represents a significant convergence of regulatory efforts aimed at shaping a responsible AI future.