Understanding the EU AI Act's Uncertainty: Insights from the OpenAI Controversy

The EU AI Act, expected to be a landmark piece of legislation, is currently facing uncertainty due to disputes over the regulation of foundation models—large-scale AI models like GPT-4, Claude, and Llama.

Recently, the French, German, and Italian governments have pushed for limited regulation of foundation models, a move many attribute to heavy lobbying from Big Tech and from open-source AI companies such as Mistral, which is advised by Cédric O, France's former digital minister. Critics argue this approach could undermine the integrity of the EU AI Act.

Advocates for stricter regulation of foundation models have responded vigorously. A coalition of German and international AI experts, including prominent researchers Geoffrey Hinton and Yoshua Bengio, recently published an open letter urging the German government not to exempt these models from the EU AI Act, warning that such an exemption would jeopardize public safety and harm European businesses.

Additionally, French experts have contributed to the conversation with a joint op-ed in Le Monde, expressing strong opposition to Big Tech’s attempts to dilute this crucial legislation during its final phases.

So, why is the EU AI Act encountering resistance at this advanced stage? More than two and a half years after the European Commission proposed the initial draft, and following extensive negotiations, the Act is now in the trilogue phase, where EU legislators negotiate the bill's final details. Negotiators hope to reach an agreement by the end of 2023, ahead of the 2024 European Parliament elections.

The recent turmoil at OpenAI offers insight into the dynamics shaping these discussions. The dramatic board upheaval in which CEO Sam Altman was briefly fired exposed contrasting views within the organization that mirror the broader debate over AI regulation in the EU: some board members prioritized commercial opportunity and the pursuit of artificial general intelligence (AGI), while others voiced deep concerns about the safety implications of such high-risk technology.

The board members advocating caution were connected to the Effective Altruism movement, which has also been influential in lobbying around the EU AI Act. Reports indicate that this community has allocated substantial resources to raise awareness about the existential risks posed by AI.

Conversely, Big Tech, including OpenAI, has actively lobbied against stringent regulations. Sam Altman, while publicly advocating for global AI governance, has sought to weaken certain provisions of the EU's proposed regulations to ease the compliance burden on his company.

Gary Marcus has pointed to these developments, arguing that the chaos at OpenAI underscores the need for rigorous oversight rather than trust in Big Tech self-regulation. He supports the European Parliament's tiered approach, warning that reducing key components of the EU AI Act to a self-regulation exercise would have dire global consequences.

Brando Benifei, a leading European Parliament negotiator, echoed this sentiment, stating that the unpredictability surrounding Altman's actions illustrates the dangers of relying on voluntary industry agreements.

Is the EU AI Act genuinely at risk? According to German consultant Benedikt Kohn, ongoing negotiations are crucial, with the next trilogue scheduled for December 6. He cautions that a failure to reach an agreement could severely undermine the EU's aspirations as a global leader in AI regulation.
