EU Council Approves Finalized Risk-Based Framework for AI Regulations

Final Approval of EU AI Act: A Landmark Step for Artificial Intelligence Regulation

European Union lawmakers have officially given the green light to the bloc’s pioneering, risk-based regulation of artificial intelligence (AI). The law is the first comprehensive framework of its kind globally and aims to set a robust standard for how AI is governed.

The European Parliament passed the legislation in March, paving the way for the Council of the European Union to give its final approval. The new law will be published in the EU’s Official Journal in the coming days and will enter into force 20 days later. Its provisions then apply in phases: bans on prohibited practices take effect after six months, rules for general-purpose AI after 12 months, and most remaining obligations after 24 months, with some high-risk requirements applying only after 36 months.

The EU AI Act employs a risk-based regulatory approach, categorizing uses of AI according to the potential harm they pose. It outright bans certain “unacceptable risk” use cases, including cognitive manipulation and social scoring. It also designates certain applications as “high risk,” such as biometric identification and facial recognition, along with AI used in sensitive areas like education and employment. To put high-risk systems on the EU market, developers will need to register them and comply with rigorous risk- and quality-management obligations.

Conversely, applications such as chatbots fall into a “limited risk” category and face only lighter transparency obligations, such as disclosing to users that they are interacting with an AI. The legislation also addresses the fast-growing field of generative AI by setting rules for “general-purpose AIs” (GPAIs), which include the models underpinning tools like OpenAI’s ChatGPT. Most GPAIs will face only light transparency requirements, but models trained above a set computing threshold (10^25 floating-point operations) are presumed to pose a “systemic risk” and will face stricter obligations.
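To make the tiered structure concrete, here is a purely illustrative Python sketch, not an official tool or legal test, that maps a hypothetical use case to the Act’s broad risk tiers and flags a general-purpose model as presumed “systemic risk” when its training compute exceeds the 10^25 FLOP threshold. The field names, category lists, and simplified logic are assumptions made for illustration only.

```python
# Illustrative sketch only: a toy classifier mapping hypothetical use-case
# descriptors to the EU AI Act's broad risk tiers. The fields and the
# simplified tier logic are assumptions for illustration, not legal advice.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk (banned)"
    HIGH = "high risk (registration + risk/quality management)"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk (no specific obligations)"


# Hypothetical example categories drawn from the article's summary.
BANNED_PRACTICES = {"cognitive manipulation", "social scoring"}
HIGH_RISK_CATEGORIES = {"biometric identification", "facial recognition",
                        "education", "employment"}

# The Act presumes "systemic risk" for general-purpose models trained with
# more than 10^25 floating-point operations.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25


@dataclass
class UseCase:
    practice: str                 # e.g. "chatbot", "social scoring"
    domain: str = "general"       # e.g. "education", "employment"
    is_gpai: bool = False         # general-purpose AI model?
    training_flops: float = 0.0   # cumulative training compute, in FLOPs


def classify(use_case: UseCase) -> RiskTier:
    """Assign a toy risk tier following the Act's broad categories."""
    if use_case.practice in BANNED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if use_case.practice in HIGH_RISK_CATEGORIES or use_case.domain in HIGH_RISK_CATEGORIES:
        return RiskTier.HIGH
    if use_case.practice == "chatbot":
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


def is_systemic_risk_gpai(use_case: UseCase) -> bool:
    """Flag a GPAI model presumed to pose systemic risk by training compute."""
    return use_case.is_gpai and use_case.training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


if __name__ == "__main__":
    examples = [
        UseCase(practice="social scoring"),
        UseCase(practice="biometric identification"),
        UseCase(practice="chatbot", is_gpai=True, training_flops=5e25),
    ]
    for ex in examples:
        label = classify(ex).value
        if is_systemic_risk_gpai(ex):
            label += " [systemic-risk GPAI]"
        print(f"{ex.practice}: {label}")
```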

Mathieu Michel, Belgium's Secretary of State for Digitization, highlighted the significance of the AI Act, stating, “This landmark law, the first of its kind globally, tackles a pressing technological challenge while unlocking opportunities for our economies. The Act reinforces Europe’s commitment to trust, transparency, and accountability in emerging technologies, ensuring they can thrive and drive innovation.”

Moreover, the law establishes a governance framework for AI, including an AI Office within the European Commission to oversee enforcement. An advisory AI Board, made up of representatives from EU member states, will assist the Commission in applying the AI Act consistently, much as the European Data Protection Board does for the GDPR. The Commission will also be supported by a scientific panel of independent experts to help monitor compliance and an advisory forum to provide technical expertise.

Standards organizations will be crucial in defining the detailed requirements AI developers must meet, reflecting the EU’s long-standing approach to product regulation. With the legislation finalized, expect the industry’s efforts to pivot from lobbying against the law to shaping the standards that will govern AI development.

Additionally, the AI Act encourages the establishment of regulatory sandboxes to facilitate the development and real-world testing of innovative AI applications.

While the EU AI Act represents a significant step in regulating artificial intelligence, developers should also be aware of existing laws that may affect them, such as copyright laws, GDPR, the EU’s online governance policies, and various competition laws.

