The European Parliament has approved comprehensive legislation for regulating artificial intelligence (AI), marking a significant milestone nearly three years after the initial draft proposal. The AI Act received broad support, with 523 votes in favor, 46 against, and 49 abstentions.
The aim of these regulations is to safeguard fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI applications while fostering innovation and positioning Europe as a global AI leader. The act outlines specific obligations for AI systems based on their potential risks and impacts.
Although not yet law, the AI Act is undergoing final lawyer-linguist checks and awaits formal endorsement by the European Council. It is expected to become law before the European Parliament elections in early June. Most provisions will take effect 24 months after the act enters into force, but bans on prohibited applications will apply within six months.
Key prohibitions include biometric categorization systems based on sensitive traits and the untargeted scraping of images from CCTV or the internet for facial recognition databases. Applications involving social scoring, emotion recognition in educational and workplace settings, and AI that manipulates behavior or exploits vulnerabilities will also be banned. Predictive policing based solely on personal characteristics, such as inferred sexual orientation or political opinions, is similarly prohibited. However, law enforcement may use biometric identification systems under specific circumstances, such as locating missing persons or preventing terrorist threats, with prior approval.
High-risk AI applications in fields like law enforcement and healthcare must adhere to strict guidelines. They cannot discriminate and must comply with privacy regulations. Developers are required to ensure that their systems are transparent, safe, and user-friendly. Lower-risk applications, such as spam filters, face lighter obligations, but developers must still inform users when they are interacting with an AI system or viewing AI-generated content.
Generative AI and manipulated media will face additional regulations, requiring clear labeling for deepfakes and AI-generated images or videos. AI models must also comply with copyright laws. Rightsholders can choose to reserve their rights to prevent text and data mining, except for scientific research purposes. However, AI models designed strictly for research and prototyping are exempt from this requirement.
The most advanced general-purpose and generative AI models (those trained using more than 10^25 floating-point operations, or FLOPs, of computing power) are classified as posing systemic risks. This category includes powerful models such as OpenAI's GPT-4 and Google's Gemini. Providers of these models must evaluate and mitigate risks, report significant incidents, disclose their systems' energy consumption, adhere to cybersecurity standards, and conduct rigorous evaluations.
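For a rough sense of what the 10^25 FLOP threshold means in practice, the sketch below applies the common rule of thumb that training compute is roughly 6 × parameters × training tokens. Neither this approximation nor the example model sizes appear in the AI Act; they are purely illustrative assumptions.

```python
# Illustrative only: estimate whether a hypothetical training run crosses
# the AI Act's 10^25 FLOP systemic-risk threshold, using the widely cited
# approximation that training compute ~= 6 * parameters * training tokens.
THRESHOLD_FLOPS = 1e25

def training_flops(parameters: float, tokens: float) -> float:
    """Rough training-compute estimate via the 6ND rule of thumb."""
    return 6 * parameters * tokens

def exceeds_threshold(parameters: float, tokens: float) -> bool:
    """True if the estimated compute meets or exceeds 10^25 FLOPs."""
    return training_flops(parameters, tokens) >= THRESHOLD_FLOPS

# A hypothetical 1-trillion-parameter model trained on 10 trillion tokens:
flops = training_flops(1e12, 1e13)
print(f"{flops:.1e} FLOPs, systemic-risk threshold crossed: "
      f"{exceeds_threshold(1e12, 1e13)}")  # 6.0e+25 FLOPs, ... True
```

Under these assumptions, frontier-scale training runs clear the threshold comfortably, while a 1-billion-parameter model trained on 100 billion tokens (about 6 × 10^20 FLOPs) falls far below it.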
Violations of the AI Act can result in severe penalties, including fines of up to €35 million ($51.6 million) or up to seven percent of global annual turnover, whichever is greater. The act applies to any model operating within the EU, meaning U.S.-based AI providers must comply when doing business in Europe. Although OpenAI's CEO, Sam Altman, previously suggested the company might withdraw from Europe if the AI Act passed, he later clarified that there are no plans to do so.
To enforce the AI Act, each EU member state will establish its own AI regulatory body, and the European Commission will create an AI Office to evaluate models and monitor systemic risks. Providers of general-purpose models deemed to pose systemic risks will collaborate with this office to develop codes of conduct.