Expert Insights: What You Need to Know About EU AI Act Compliance

On December 8, 2023, European Union policymakers reached a significant milestone by agreeing on the Artificial Intelligence Act (AI Act), the world’s first comprehensive regulatory framework for artificial intelligence. A leader in global digital and data governance, the EU has consistently set standards on issues ranging from data privacy to targeted advertising practices. After a lengthy legislative journey that began in April 2021, the AI Act is poised to transform AI governance, extending the ‘Brussels Effect’ to elevate AI standards across the global economy.

The official text is still undergoing final legal revisions, but EU leaders expect the historic AI Act to be formally adopted in April 2024, with phased implementation beginning in 2026.

### What is the EU AI Act?

The EU AI Act is a groundbreaking legal framework, introduced by the European Commission in April 2021, aimed at ensuring the safety, accountability, and transparency of AI systems in the EU market. Employing a risk-based approach, the Act will oversee AI developers, distributors, importers, and users according to the potential adverse impact of their AI systems: the greater the potential harm of an AI application, the stronger the oversight required.

This legislation is reminiscent of the EU’s General Data Protection Regulation (GDPR), which has shaped global privacy standards since its adoption in 2016 and enforcement in 2018.

### Who is Impacted by the AI Act?

The AI Act delineates specific definitions and responsibilities for various entities involved in the AI ecosystem, including developers, distributors, and users. Notably, even companies based outside the EU may fall under the Act's jurisdiction if their AI systems yield outputs utilized within the EU. For instance, a South American firm developing an AI system that affects EU residents could be subject to the Act.

The AI Act will not apply to military or national security applications, but it outlines strict conditions for law enforcement’s use of remote biometric identification systems.

The draft AI Act, which the European Parliament adopted in June 2023, highlights two key categories of players:

- **Provider**: This term refers to developers who offer AI systems under their own branding, whether for free or for payment. An example of a provider would be OpenAI with ChatGPT in the EU market.

- **Deployer**: This term replaces “User” from earlier drafts and describes entities that utilize AI systems for professional or business purposes.

The compliance obligations rest primarily with the AI Provider, mirroring how the GDPR places accountability on data controllers. However, Deployers will far outnumber Providers, and for high-risk systems in particular, Deployers will be pivotal in managing AI-related risks.

### Risk Classification of AI Practices

With a focus on protecting end-user rights and safety, the AI Act categorizes AI systems into four risk classifications, summarized here with an illustrative triage sketch after the list:

1. **Unacceptable AI**: AI practices that threaten democratic values or fundamental rights, such as social credit scoring and intrusive biometric surveillance, are banned outright.

2. **High-Risk AI**: AI systems that could cause significant harm to critical sectors—like infrastructure, employment, and public health—are considered high-risk. Developers of such systems must comply with stringent requirements including conformity assessments, registration, data governance, and cybersecurity protocols.

3. **Low-Risk AI**: Systems posing limited risks, such as basic chatbots, are subject to transparency measures: users must be informed when they are interacting with an AI system or viewing AI-generated content.

4. **Minimal or No Risk AI**: This category covers applications such as automated summarizers and spam filters, which pose little or no risk to users and face no new obligations.
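
For teams building an internal view of where their systems sit, these tiers map naturally onto a small lookup structure. The Python sketch below is purely illustrative: the example use cases and their assigned tiers are assumptions for demonstration, and actual classification depends on the Act’s final annexes and legal review.

```python
# Hypothetical triage helper mapping example use cases to the AI Act's four
# risk tiers described above. The mapping is an assumption for illustration,
# not an official classification.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "conformity assessment, registration, data governance, cybersecurity"
    LOW = "transparency: disclose AI interaction and AI-generated content"
    MINIMAL = "no new obligations"

# Assumed examples only.
EXAMPLE_USE_CASES = {
    "social_credit_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LOW,
    "email_spam_filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    # Default conservatively to HIGH when a use case is unknown.
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.HIGH)
    return f"{use_case}: {tier.name} -> {tier.value}"

for uc in EXAMPLE_USE_CASES:
    print(obligations_for(uc))
```

Defaulting unknown use cases to the high-risk tier is a deliberately conservative choice: it forces a human review rather than silently under-classifying a system.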

### Penalties for Non-Compliance

Violating the provisions of the AI Act can result in substantial penalties, ranging from €7.5 million or 1.5% of global annual revenue up to €35 million or 7% of global annual revenue, depending on the severity of the breach. The 2023 Parliament version proposes the following potential fines (an illustrative fine-ceiling calculation follows the list):

- **Unacceptable AI**: Up to 7% of global annual revenue, an increase from the 6% in the Commission’s original proposal.

- **High-Risk AI**: Up to €20 million or 4% of global revenue.

- **General-Purpose AI (e.g., ChatGPT)**: Fines up to €15 million or 2% of global revenue, with special provisions for generative AI applications.

- **Providing Incorrect Information**: Fines of up to €7.5 million or 1.5% of global annual revenue.
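
Each tier caps the fine at the greater of a fixed amount or a share of global annual revenue, so the applicable ceiling reduces to a simple maximum. The sketch below is illustrative only, not legal advice: the tier figures are taken from the numbers quoted above (using the €35 million figure for the unacceptable tier), and `annual_revenue_eur` is a hypothetical input.

```python
# Illustrative only: AI Act fine ceilings are the GREATER of a fixed amount
# or a share of global annual revenue. Figures mirror the numbers quoted
# above and may differ in the final adopted text.

FINE_TIERS = {
    "unacceptable_ai": (35_000_000, 0.07),        # up to €35M or 7%
    "high_risk_ai": (20_000_000, 0.04),           # up to €20M or 4%
    "general_purpose_ai": (15_000_000, 0.02),     # up to €15M or 2%
    "incorrect_information": (7_500_000, 0.015),  # up to €7.5M or 1.5%
}

def max_fine_eur(violation: str, annual_revenue_eur: float) -> float:
    """Return the fine ceiling for a violation tier and a company's revenue."""
    fixed_cap, revenue_share = FINE_TIERS[violation]
    return max(fixed_cap, revenue_share * annual_revenue_eur)

# A firm with €2B global revenue facing a high-risk violation:
# 4% of €2B = €80M, which exceeds the €20M fixed amount.
print(f"€{max_fine_eur('high_risk_ai', 2_000_000_000):,.0f}")  # €80,000,000
```

The revenue-based cap is what makes the Act bite for large firms: for any company with more than €500 million in revenue, the 4% high-risk ceiling already exceeds the €20 million fixed amount.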

### Preparing for the AI Act

To navigate the AI Act effectively, organizations must adopt a comprehensive governance strategy that includes robust internal controls and supply chain management. Companies developing or deploying high-risk AI systems should conduct thorough assessments of their operations, starting with the following questions (a minimal inventory sketch follows the list):

- Which departments are utilizing AI tools?

- Are these tools processing proprietary information or sensitive personal data?

- Do these use cases fall under unacceptable, high, or low-to-no-risk categories as defined by the AI Act?

- Is the company acting as an AI Provider or as an AI Deployer?

- What stipulations exist in vendor agreements concerning data protection and compliance?
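
One practical way to work through these questions is to maintain a structured inventory of every AI system in use. The sketch below is a minimal assumed schema, in Python for consistency with the earlier examples; all field and function names are hypothetical and should be adapted to your own governance tooling.

```python
# Minimal sketch of an internal AI-system inventory covering the assessment
# questions above. All field names are hypothetical, not mandated by the Act.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    department: str                  # which department uses the tool
    processes_sensitive_data: bool   # proprietary or personal data involved?
    risk_tier: str                   # "unacceptable" | "high" | "low" | "minimal"
    role: str                        # "provider" or "deployer" under the Act
    vendor_clauses: list[str] = field(default_factory=list)  # data-protection terms

def needs_priority_review(record: AISystemRecord) -> bool:
    """Flag systems that would trigger the Act's strictest obligations."""
    return record.risk_tier in {"unacceptable", "high"}

inventory = [
    AISystemRecord("resume-screener", "HR", True, "high", "deployer",
                   ["GDPR data processing addendum", "audit rights"]),
    AISystemRecord("spam-filter", "IT", False, "minimal", "deployer"),
]

print([r.name for r in inventory if needs_priority_review(r)])  # ['resume-screener']
```

Even a simple record like this answers most of the questions above at a glance, and it gives legal and compliance teams a single place to track which systems need conformity assessments or updated vendor agreements.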

The EU's advance on the AI Act could trigger significant shifts in global legislative approaches to managing AI risks. Coupled with the U.S. AI Executive Order announced in October 2023, this pivotal agreement signals a transformative era in how businesses can harness AI technology responsibly and effectively.
