On August 1, 2024, the European Union's Artificial Intelligence Act (AI Act) officially entered into force. The legislation is the world's first comprehensive regulatory framework for artificial intelligence and a significant step in the EU's effort to manage the rapid growth of AI applications. The Act was approved by the EU Council on May 21, published in the Official Journal of the European Union on July 12, and took effect 20 days later.
The AI Act's provisions will be implemented in phases, giving businesses a transition period to adapt their systems. Some rules take effect six or 12 months after the Act's entry into force, while most will apply from August 2, 2026. The regulations follow a risk-based approach: different AI applications face different levels of oversight depending on the risk they pose to society.
A first set of rules prohibiting certain AI systems takes effect in February 2025. These rules ban AI applications that exploit personal vulnerabilities, indiscriminately scrape facial images from the internet or surveillance footage, or build facial recognition databases without consent. From August 2025, general-purpose AI models will face new obligations, including a requirement that AI-generated content such as images, audio, and video carry clear labels so it can be readily identified. This measure aims to address public concerns about misinformation and election interference.
Additionally, the AI Act imposes strict transparency obligations on high-risk AI systems, most of which apply from August 2026, while keeping the requirements for general-purpose AI models lighter. High-risk categories include autonomous vehicles, medical devices, loan decision-making systems, educational assessment tools, and remote biometric identification systems.
Enforcement of the AI Act will be robust and multifaceted. Each of the EU's 27 member states will designate national regulatory bodies to ensure compliance, with the authority to conduct audits, request documentation, and order corrective actions. The European Artificial Intelligence Board will coordinate these agencies' work, ensuring the rules are applied consistently across the EU.
Non-compliant companies face severe penalties: fines of up to €35 million or 7% of global annual turnover, whichever is higher. The EU asserts that the AI Act addresses the distinct risks of artificial intelligence and safeguards public rights while complementing the General Data Protection Regulation (GDPR), which took effect in May 2018.
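To make the penalty cap concrete, here is a minimal sketch of the "whichever is higher" rule; the function name is hypothetical, and the €35 million and 7% figures are simply the Act's top penalty tier as described above.

```python
def max_ai_act_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of the AI Act's top penalty tier: the greater of
    EUR 35 million or 7% of a company's global annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a firm with EUR 1 billion in turnover, 7% is EUR 70 million,
# which exceeds the EUR 35 million floor, so the larger figure applies.
print(max_ai_act_fine(1_000_000_000))  # 70000000.0
```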
Tanguy Van Overstraeten, Head of Media and Technology at global law firm Noël & Partners, describes the AI Act as "the world's first such legislation," noting its potential impact on many businesses, particularly those developing AI systems or deploying them in certain circumstances. Charlie Thompson, an executive at business software company Appian, highlighted that the Act's reach extends beyond the EU: it affects any organization doing business in the EU market or with its citizens, meaning the rules may apply to most tech companies worldwide.
Technology giant Meta has already restricted the availability of its AI models in Europe, although the move may not be directly related to the AI Act. Eric Loeb, Executive Vice President for Government Affairs at Salesforce, commented that Europe's risk-based regulatory framework fosters innovation while prioritizing the safe development and deployment of technology, and suggested that other governments should consider these rules when shaping their own policies. Jamil Jiva, Global Head of Asset Management at fintech company Linedata, emphasized that the EU understands significant fines for noncompliance are essential if regulation is to have impact, mirroring the approach it took with the GDPR.