The recent passage of the EU AI Act signals a new era of oversight for 'general purpose' AI models, particularly concerning the computational power used to train them. According to Rachel Appleton, public policy lead at Anthropic, the legislation focuses on models like OpenAI's GPT-4, Google's Gemini and Anthropic's own Claude, along with other prominent large language and multimodal models that serve as foundations for smaller, customized models built through additional training.
"Interestingly, the EU has identified that any General Purpose AI (GPAI) model requiring computational power of 10^25 FLOPs or more carries systemic risks," Appleton shared during a fireside chat at SXSW 2024. "This encompasses many of the models available on the market today." Such models are now subject to stringent transparency and disclosure mandates, including requirements for model evaluation, cybersecurity measures, red teaming, and impact assessments.
Companies that have already launched GPAI models on the market have a three-year window to comply with the AI Act's provisions, while those whose models are still in development get a shorter grace period of one to two years. Importantly, open-source models are exempt unless they qualify as GPAI or are classified as "unacceptable or high risk."
Appleton also noted the EU's plan to establish an AI Office dedicated to governing GPAI models, describing it as a significant undertaking. Non-compliance can result in hefty penalties: up to 7% of a company's global annual revenue or a flat €35 million ($38.3 million), whichever is higher. Fines will be adjusted based on the specific risks associated with the AI model.
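The "whichever is higher" rule can be stated as a simple formula, sketched below with a made-up revenue figure; in practice, actual fines are calibrated to the model's specific risks rather than set at this ceiling.

```python
# Headline penalty ceiling under the AI Act as described above: the
# higher of 7% of global annual revenue or a flat EUR 35 million.
# The revenue figures are made-up examples.

FLAT_FINE_EUR = 35_000_000
REVENUE_SHARE = 0.07

def penalty_ceiling(global_annual_revenue_eur: float) -> float:
    """Return the maximum possible fine for a given global annual revenue."""
    return max(REVENUE_SHARE * global_annual_revenue_eur, FLAT_FINE_EUR)

print(f"EUR {penalty_ceiling(2_000_000_000):,.0f}")  # 7% applies: EUR 140,000,000
print(f"EUR {penalty_ceiling(100_000_000):,.0f}")    # flat floor applies: EUR 35,000,000
```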
In stark contrast, the U.S. currently lacks a comprehensive legal framework for regulating AI. President Biden's executive order from October 2023 introduced over 100 directives aimed at federal agencies, making it one of the largest executive orders in history. "One intriguing aspect is that the Biden administration invoked wartime powers under the Defense Production Act to mandate that developers of dual-use foundation models trained at 10^26 FLOPs or higher report their training runs to the Department of Commerce," Appleton explained. Because such models raise national security, public health and economic security concerns, their developers must also report safety test results to the U.S. government.
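Notably, the two compute triggers sit an order of magnitude apart, so a training run can cross the EU's systemic-risk line without reaching the U.S. reporting line. The short sketch below uses a hypothetical run size to show the gap.

```python
# The EU systemic-risk threshold (10^25 FLOPs) and the U.S. executive
# order's reporting threshold (10^26 FLOPs) differ by 10x, so a run in
# between triggers the former but not the latter. Run size is hypothetical.

EU_THRESHOLD_FLOPS = 1e25   # EU AI Act, GPAI systemic risk
US_THRESHOLD_FLOPS = 1e26   # U.S. executive order, dual-use reporting

run_flops = 3e25  # hypothetical training run
print("EU systemic-risk obligations:", run_flops >= EU_THRESHOLD_FLOPS)  # True
print("U.S. reporting requirement:", run_flops >= US_THRESHOLD_FLOPS)    # False
```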
However, any new administration could easily overturn Biden's executive order. In Congress, there is a glimmer of bipartisan support for AI regulation, with several initiatives under discussion. Notably, Senate Majority Leader Chuck Schumer (D-NY) is convening forums to explore the socioeconomic implications of GPAI models.
**Key Legal Concerns Surrounding AI Models**
During the SXSW session, Janel Thamkul, deputy general counsel at Anthropic, outlined four primary legal challenges facing GPAI models:
1. **Data Privacy and Security:**
The collection and use of personal data, especially publicly available information, raise significant privacy concerns. Thamkul pointed out that both U.S. and European regulators are trying to strike the right balance between leveraging this information and protecting user privacy, a balance made harder by how intensively new technologies like AI consume data.
2. **Algorithmic Bias and Discrimination:**
AI models can inadvertently perpetuate social biases and generate harmful content, including disinformation and offensive material. Thamkul suggested that diversifying training datasets can improve model outputs and help mitigate these risks.
3. **Intellectual Property Rights:**
The U.S. Patent and Trademark Office and the U.S. Copyright Office have so far held that works must be predominantly human-created to be patentable or copyrightable, though other jurisdictions may take different positions on this issue.
4. **Liability and Accountability:**
Establishing accountability for harmful outputs generated by AI models remains a complex challenge. Thamkul emphasized the need for greater model transparency, noting that Anthropic is advancing research in mechanistic interpretability to support this objective.
As the landscape of AI regulation continues to evolve, the implications for developers and users alike will be significant, shaping the future of AI technology and its integration into society.