As governments worldwide seek to regulate AI, the European Union is leading the way with groundbreaking legislation aimed at imposing strict limits on the technology. Recently, the European Commission outlined a comprehensive regulatory framework that categorizes AI software into four distinct risk tiers, applying varying levels of regulation accordingly.
At the highest tier are systems deemed to present an "unacceptable" risk to individual rights and safety. These algorithms, which include any AI enabling governments and companies to implement social scoring, would be outright banned under the proposed legislation.
The next tier encompasses high-risk AI systems, which cover a broad range of software and account for much of the proposed regulation. The Commission would require these systems to undergo strict oversight, focusing on factors such as the datasets used for training, the level of human supervision required, and how information is presented to end users. This category includes AI used in law enforcement and all forms of remote biometric identification. Notably, police use of biometric identification in public spaces would be prohibited, with limited exceptions for national security.
The legislation also defines a category for limited-risk AI, such as chatbots. These programs would be required to disclose their AI nature, enabling users to make informed choices about their interactions. Lastly, the Commission addresses minimal-risk AI, a category it anticipates will cover the vast majority of AI systems. Examples include spam filters, for which no additional regulations are planned.
"AI is a means, not an end," stated Internal Market Commissioner Thierry Breton. "Today's proposals aim to enhance Europe's position as a global hub for AI excellence, ensure that AI respects our values, and harness its potential for industrial use."
The proposed legislation, which is likely to face years of debate before implementation, could impose fines of up to six percent of a company's global sales for non-compliance. The EU already enforces some of the world's strictest data privacy rules under GDPR, and it may take a similarly assertive approach to content moderation and antitrust regulation.