After more than 20 hours of negotiations over how to regulate artificial intelligence (AI), European Union lawmakers have reached a preliminary agreement on one of the most contentious aspects of the file: the rules for foundation models and general-purpose AI (GPAI), according to a leaked proposal.
In recent weeks, French AI startup Mistral has led a strong push for a full regulatory carve-out for foundation models and GPAIs. EU lawmakers, however, appear to have resisted the industry lobbying to let market forces dictate the course, opting instead for the tiered approach to regulating these advanced AIs that was floated earlier this year.
Notably, the proposal includes a partial exemption from certain obligations for GPAI systems released under free and open-source licenses, meaning those whose model architecture, weights, and usage information are made publicly available. Models classified as "high risk" do not qualify for the exemption. Reuters has also reported on these partial exceptions for open-source advanced AIs.
Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties, noted that the open-source exemption is further narrowed by commercialization: once such an open-source model is made available on the market, the exemption could cease to apply. "Thus, there's a possibility that the law could impact Mistral, depending on how terms like 'make available on the market' or 'putting into service' are interpreted," he remarked.
The preliminary agreement also retains a classification for GPAIs posing "systemic risk," determined by criteria such as a model's "high-impact capabilities." A model meets this designation if the cumulative compute used for its training exceeds 10^25 floating-point operations (FLOPs), a threshold few existing models are thought to surpass. Only a handful of cutting-edge GPAIs would therefore face upfront obligations to proactively mitigate systemic risks, suggesting Mistral's lobbying has softened the regulatory blow.
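For a sense of scale, here is a minimal back-of-envelope sketch of how a training run might be compared against that threshold. It assumes the widely cited rule of thumb that transformer training compute is roughly 6 × parameters × training tokens; both that approximation and the example figures (a 70-billion-parameter model trained on 2 trillion tokens) are illustrative assumptions, not numbers from the leaked proposal.

```python
# Back-of-envelope check against the proposed 10^25 FLOP threshold.
# Assumes the common ~6 * N * D approximation for transformer training
# compute; the model size and token count below are hypothetical.

THRESHOLD_FLOPS = 1e25  # proposed "high-impact capabilities" cutoff

def training_flops(params: float, tokens: float) -> float:
    """Estimate total training compute via the ~6 * N * D rule of thumb."""
    return 6 * params * tokens

# Hypothetical example: 70B parameters, 2T training tokens.
estimate = training_flops(70e9, 2e12)  # ~8.4e23 FLOPs
print(f"Estimated compute: {estimate:.2e} FLOPs")
print("Above threshold" if estimate > THRESHOLD_FLOPS else "Below threshold")
```

Under these assumptions even a large present-day model lands well below 10^25 FLOPs, which illustrates why only a small number of frontier systems would be captured by the systemic-risk tier.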
The proposal ties systemic risk to the high-impact capabilities of GPAIs, taking into account their reach and scale in the EU market and their potential negative effects on public health, safety, and fundamental rights.
Under the preliminary agreement, GPAI providers classified with systemic risk must undertake evaluations using standardized protocols, document and report significant incidents promptly, conduct adversarial tests, maintain an adequate level of cybersecurity, and disclose energy consumption estimates for their models.
The AI Office will determine which GPAIs are classified as posing systemic risk, acting either on its own initiative or in response to alerts from a scientific panel. Developers whose models qualify must notify the Commission "without delay and in any event within two weeks."
Furthermore, the proposal empowers the Commission to adopt delegated acts to adjust the thresholds for the systemic-risk classification.
Providers of GPAIs that fall below the systemic-risk threshold still face general obligations, including testing and evaluating the model and maintaining technical documentation for regulatory authorities. They must also give downstream deployers (AI application developers) a detailed overview of the model's capabilities and limitations to help them comply with the AI Act.
The proposal requires foundation model makers to put in place policies that respect EU copyright law, particularly the ability of rightsholders to opt out of text and data mining. They must also make public a sufficiently detailed summary of the training data used to build the model, using a template to be provided by the proposed AI Office.
This copyright disclosure requirement applies even to open-source models, marking another exception to their carve-out from regulatory obligations. The proposal also references codes of practice that GPAIs, specifically those with systemic risk, may rely on to demonstrate compliance until a "harmonized standard" is adopted.
The AI Office is expected to help develop these codes, while the Commission plans to issue standardization requests six months after the regulation takes effect, focusing on documentation and reporting improvements for AI systems' energy and resource management.
Trilogue discussions on the AI Act resumed this week, with the European Commission pushing for a resolution to the contentious regulatory framework. Agreement on every element of the package is needed for the law to pass, so the fate of the AI Act remains uncertain amid ongoing wrangling over sensitive issues such as the use of biometric surveillance by law enforcement.
Recent updates from the bloc's internal market commissioner, Thierry Breton, pointed to continued progress, with talks set to resume at 9 a.m. Brussels time. The Commission remains focused on finalizing the risk-based AI rulebook it first proposed in April 2021, but compromise between the co-legislators, the Council and the Parliament, will be essential for success.
Stay tuned for more developments on the EU’s AI Act discussions as they unfold.