The finalized text of the EU AI Act, the European Union's groundbreaking regulation governing artificial intelligence, has officially been published in the bloc's Official Journal. The law enters into force on August 1, 20 days after that publication. Its provisions will become fully applicable to AI developers 24 months later, by mid-2026. However, the regulation takes a phased approach to implementation, with a series of compliance deadlines arriving before that date, and others falling after 2026 as different legal provisions take effect.
In December of last year, EU lawmakers reached a political agreement on this first comprehensive framework for AI. The regulation assigns obligations to AI developers based on use cases and the risks they pose. Most uses of AI, deemed low-risk, will not be regulated at all, while a small number of use cases will be banned outright.
“High-risk” applications, including biometric AI and AI used in law enforcement, employment, education, and critical infrastructure, will be permitted, but developers of such systems must meet requirements focused on areas such as data quality and bias mitigation. A third risk tier imposes lighter transparency obligations on makers of AI tools such as chatbots.
Developers of general-purpose AI (GPAI) models, such as OpenAI's GPT technology, will also face transparency requirements. The most powerful GPAIs, which the Act identifies by a training-compute threshold (models trained using more than 10^25 floating-point operations are presumed to pose systemic risk), may additionally be required to carry out systemic risk assessments. Heavy lobbying by parts of the AI industry, backed by some Member State governments, sought to dilute the obligations on GPAIs out of concern that the law could hamper Europe's ability to cultivate homegrown AI giants capable of competing with rivals in the U.S. and China.
Phased Implementation Timeline
The first key deadline falls six months after the law enters into force, when the list of prohibited AI applications takes effect in early 2025. The banned, "unacceptable risk" uses include social credit scoring systems of the kind associated with China; untargeted scraping of the internet or CCTV footage to build facial recognition databases; and the use of real-time remote biometric identification in public places by law enforcement, except in narrowly specified scenarios, such as searches for missing persons.
The next critical milestone arrives nine months after entry into force, around May 2025, when codes of practice must be ready for developers of in-scope AI applications. The EU's AI Office, a body established to build out the ecosystem and oversee compliance, is responsible for producing these codes. However, questions remain over who will actually draft them. Recent reports indicate that the EU has been seeking consultancy firms for the task, raising concerns among civil society groups that AI industry stakeholders could shape the eventual rules. Following pressure from Members of the European Parliament (MEPs) for a more inclusive process, the AI Office will also issue a call for expressions of interest from stakeholders wishing to contribute to drafting the codes for general-purpose AI models.
Another significant deadline lands 12 months after entry into force, in August 2025, when transparency requirements for GPAIs begin to apply. Meanwhile, a specific subset of high-risk AI systems has been granted the most generous compliance window, 36 months, giving them until 2027 to meet their obligations; other high-risk systems must comply within 24 months.