EU Legislators Strike Historic Late-Night Agreement on Pioneering Global AI Regulations

After nearly three days of intensive final negotiations, European Union lawmakers have secured a political agreement on a risk-based framework for regulating artificial intelligence (AI). Originally proposed in April 2021, the legislation has undergone complex three-way negotiations between the EU institutions, and their successful conclusion means a comprehensive EU AI law is finally on the horizon.

In a press conference held in the small hours between Friday and Saturday, local time, key representatives from the European Parliament, the Council, and the Commission, the EU's co-legislators, celebrated the agreement as a hard-fought milestone and a historic accomplishment.

European Commission President Ursula von der Leyen, who prioritized the development of regulations for "trustworthy" AI upon assuming office in late 2019, took to X to commend the announcement as a "global first."

Prohibitions

While the full details of the agreement will take some weeks to finalize and publish, a press release from the European Parliament indicates a total ban on AI usage for several applications, including:

- Biometric categorization based on sensitive traits (e.g., political or religious beliefs, sexual orientation, and race).

- Unrestricted scraping of facial images from the internet for facial recognition databases.

- Emotion recognition in workplaces and educational institutions.

- Social scoring linked to personal behaviors or qualities.

- AI systems designed to manipulate human behavior against people's free will.

- AI technologies exploiting individuals' vulnerabilities, such as age, disability, or economic status.

Remote biometric identification technology in public areas will not be entirely banned; however, safeguards and limited exceptions have been established. This includes requirements for prior judicial approval and strictly defined usage for serious crimes.

The use of retrospective (non-real-time) biometric AI will focus on targeted searches for individuals convicted or suspected of serious offenses. Real-time applications will be restricted in both time and location, applicable only for:

- Targeted searches of abduction, trafficking, or sexual exploitation victims.

- Prevention of specific imminent terrorist threats.

- Identification of individuals suspected of serious crimes like terrorism, trafficking, or robbery.

The Council's press release clarifies that the regulation will not encroach on areas outside EU jurisdiction, nor will it affect member states’ national security competencies or military AI systems. Furthermore, AI systems used solely for research and innovation, or by individuals for non-professional purposes, are exempt.

Civil society groups have expressed skepticism, voicing concerns that the limitations on state agencies’ biometric identification usage insufficiently protect human rights. Digital rights organization EDRi, which advocated for a comprehensive ban on remote biometrics, acknowledged the deal's minor human rights advancements but criticized it as a “shell” of the necessary AI law.

Rules for ‘High Risk’ and General Purpose AIs

The agreement also sets forth obligations for AI systems categorized as "high risk," which pose significant potential harm to health, safety, fundamental rights, democracy, and the environment.

Parliament successfully included mandatory fundamental rights impact assessments, extending them to the insurance and banking sectors. AI systems that influence electoral outcomes are likewise classified as high-risk.

A "two-tier" regulation system will apply to general-purpose AI systems, including the foundation models that underpin generative AI applications such as ChatGPT. The deal mandates transparency measures for "low tier" AIs, requiring developers to produce technical documentation and publish detailed summaries of training content to ensure compliance with EU copyright law. "High-impact" general-purpose AIs (defined by extensive compute use) face more stringent compliance requirements.

The document stipulates that these advanced models must conduct model evaluations, assess and mitigate systemic risks, perform adversarial testing, report serious incidents, ensure cybersecurity, and report on energy efficiency. Until harmonized EU standards are published, GPAIs deemed to pose systemic risk may rely on codes of practice to demonstrate compliance.

The Commission announced an AI Pact to bridge the gap before the AI Act takes effect. The Act itself exempts R&D activities and imposes lighter requirements on open-source models.

The regulatory framework also promotes the establishment of regulatory sandboxes to assist startups and SMEs in training AI systems before market launch.

Penalties and Implementation Timeline

Penalties for non-compliance range from €7.5 million or 1.5% of global annual turnover up to €35 million or 7%, depending on the violation and the company's size.

The Council's terms dictate that the most severe penalty (7%) applies to violations involving banned AI applications, while the lowest fines (1.5%) apply to the supply of incorrect information. Fines of 3% can be imposed for breaches of the Act's other obligations. The agreement also allows for more proportionate caps on fines for SMEs and startups, easing the burden on smaller AI enterprises.
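The tiered caps described above can be sketched in a few lines of Python. This is an illustrative model only, not legal advice: the `max_fine` helper and the pairing of a €15 million fixed amount with the 3% tier are assumptions for the sketch, since the press accounts quote only the percentages for that middle tier. Each cap is commonly reported as the greater of a fixed euro amount or a percentage of global annual turnover.

```python
def max_fine(violation: str, global_turnover_eur: float) -> float:
    """Hypothetical helper: upper fine cap as the greater of a fixed
    amount or a share of global annual turnover, per reported tier."""
    tiers = {
        "banned_application": (35_000_000, 0.07),     # prohibited AI practices
        "other_breach": (15_000_000, 0.03),           # assumed fixed amount for the 3% tier
        "incorrect_information": (7_500_000, 0.015),  # supplying incorrect information
    }
    fixed_cap, pct = tiers[violation]
    return max(fixed_cap, pct * global_turnover_eur)

# A firm with €1bn global turnover deploying a banned application:
# 7% of €1bn (€70m) exceeds the €35m floor, so the higher figure applies.
print(max_fine("banned_application", 1_000_000_000))  # → 70000000.0
```

The "whichever is higher" structure means the fixed amounts act as floors for large companies: once turnover is big enough, the percentage dominates.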

The new law will be implemented in phases after adoption: prohibitions apply after six months, transparency and governance requirements after 12 months, and all remaining requirements after 24 months. The full impact of the EU AI Act may therefore not be felt until 2026.

Spain’s Secretary of State for Digital and AI, Carme Artigas, hailed the agreement as "the biggest milestone in the history of digital information in Europe." She emphasized that it is the first AI regulation of its kind anywhere in the world, and that it aims to support European development in the field.

Co-rapporteurs Dragoș Tudorache and Brando Benifei from the European Parliament expressed their commitment to enacting legislation that fosters a "human-centric approach," ensuring the protection of fundamental rights while promoting technological innovation.

While the EU celebrates the achievement, formal steps remain before finalization, including votes in Parliament and the Council. However, the political deal resolves the divisions that had stalled talks, paving the way for the EU AI Act's passage in the coming months.

The Commission is already initiating steps for the agreement’s implementation, establishing an AI Office that will coordinate with Member State enforcement bodies and guide oversight of advanced AI models. A panel of independent experts will provide insights into GPAI regulations.

Despite pushback from some quarters, particularly France and AI startups such as Mistral, the agreed deal retains obligations for general-purpose AI, though they are tiered according to the models' capabilities.

In summary, the EU's groundbreaking regulatory framework for AI marks a significant step toward establishing safeguards and promoting responsible innovation, even as stakeholders continue to analyze its broader implications.
