EU's AI Act Takes Effect: What You Need to Know

It’s official: The European Union's risk-based regulation governing artificial intelligence (AI) applications will take effect on Thursday, August 1, 2024.

This marks the beginning of a series of staggered compliance deadlines that will apply to various AI developers and applications. Most provisions will be fully enforced by mid-2026. However, the first deadline arrives in just six months, introducing bans on specific high-risk AI uses, including the deployment of remote biometrics for law enforcement in public spaces.

Under this regulation, the majority of AI applications are classified as low or no risk and therefore fall outside its purview. However, a select group of AI applications are deemed high risk. These include technologies like biometric systems, facial recognition software, and AI applications in sectors such as healthcare and employment. Developers of these high-risk applications must adhere to strict risk and quality management obligations, which include conducting pre-market conformity assessments and possibly facing regulatory audits. Additionally, high-risk systems utilized by public sector authorities or their suppliers must be registered in an EU database.

There is also a "limited risk" category that applies to technologies like chatbots and tools capable of generating deepfakes. These technologies will need to fulfill certain transparency requirements to prevent user deception.

Penalties for non-compliance are structured in tiers: violations of banned AI applications may incur fines of up to 7% of global annual turnover; breaches of other obligations can lead to fines of up to 3%; and supplying incorrect information to regulators may result in fines of up to 1.5%.
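The tiered caps above are simple percentages of global annual turnover. As a rough illustration, the arithmetic can be sketched as follows; the tier names, function name, and structure here are hypothetical, not part of any official tooling, and only the percentages come from the Act as described above.

```python
# Maximum fine as a fraction of global annual turnover, per violation tier.
# Percentages per the AI Act's tiered penalty structure.
FINE_TIERS = {
    "banned_application": 0.07,      # use of a prohibited AI practice
    "other_obligation": 0.03,        # breach of other obligations
    "incorrect_information": 0.015,  # supplying wrong info to regulators
}

def max_fine(tier: str, global_annual_turnover: float) -> float:
    """Return the upper bound of the fine for a given violation tier."""
    return FINE_TIERS[tier] * global_annual_turnover

# Example: a company with EUR 2 billion in global annual turnover
print(max_fine("banned_application", 2_000_000_000))  # 7% of turnover
```

Note that these are ceilings, not fixed amounts; actual fines would be set by regulators case by case.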

An essential aspect of the regulation concerns developers of general-purpose AI (GPAI) models. The EU has adopted a risk-based approach here too: most GPAI developers will face only light transparency requirements, though they must provide a summary of their training data and establish policies to comply with copyright law. Only a select group of the most powerful models, defined as those trained using more than 10^25 FLOPs of compute, are additionally required to implement risk assessment and mitigation measures.
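The compute cutoff is a bright-line rule: cross the training-compute threshold and the extra duties apply. A minimal sketch of that check, with a hypothetical function name not drawn from any official source:

```python
# Training-compute threshold for the Act's extra GPAI duties, per the
# figure cited above (10^25 floating-point operations).
SYSTEMIC_RISK_FLOP_THRESHOLD = 10**25

def has_extra_gpai_duties(training_flops: float) -> bool:
    """True if a GPAI model's training compute exceeds the Act's threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(has_extra_gpai_duties(2e25))  # True: above the threshold
print(has_extra_gpai_duties(1e24))  # False: an order of magnitude below
```

In practice, estimating a model's total training FLOPs is itself non-trivial, which is one reason the accompanying Codes of Practice matter.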

While enforcement of the AI Act's general provisions is delegated to member state authorities, the regulations concerning GPAIs are enforced at the EU level. The specific requirements for GPAI developers continue to be formulated, as Codes of Practice are still in development. Earlier this week, the AI Office, which oversees strategic operations related to AI, opened a consultation process for stakeholders in this rule-making endeavor, aiming to finalize the Codes by April 2025.

In a recent primer on the AI Act, OpenAI, the creator of the GPT models that power ChatGPT, expressed its commitment to collaborating “closely with the EU AI Office and other relevant authorities as the new law is implemented in the coming months.” The company plans to produce technical documentation and guidance for users and developers of its GPAI models.

If your organization is working out how to comply with the AI Act, the first step is to classify any AI systems that may be in scope: identify the GPAI and other AI technologies in use, determine how each is classified, and understand the obligations attached to your specific use cases. OpenAI emphasizes that organizations should also determine whether they are a provider or a deployer of these AI systems, since the obligations differ. Given the complexity of these issues, consulting legal counsel is advisable.

The precise stipulations for high-risk AI systems under the Act are still being developed in collaboration with European standards bodies. The Commission has allotted these bodies until April 2025 to finalize their proposals, after which an evaluation will occur. The standards must receive EU endorsement before they can be implemented by developers.

