On August 1, 2024, the EU’s Artificial Intelligence Act officially entered into force, a significant milestone as the world’s first comprehensive legislation regulating artificial intelligence. The act will be implemented in stages: some provisions take effect six or twelve months after its entry into force, while most of its rules apply from August 2, 2026.
With the rapid emergence of new AI applications and business models, artificial intelligence has become a driving force in the latest technological revolution and industrial transformation. However, effectively harnessing AI poses new challenges for global governance. In response to this rapid development, Western nations, led by the US and EU, are accelerating their legislative initiatives and collaborating to establish a regulatory framework for AI, thus vying for influence in shaping international standards.
The US approach to AI regulation is relatively fragmented, focused primarily on ensuring orderly development and maintaining the industry’s global competitiveness. Multiple guidance documents have been issued, along with an executive order on the safe, secure, and trustworthy development and use of AI, which includes provisions aimed at protecting personal privacy, advancing fairness and civil rights, and ensuring responsible government use of AI. At the same time, the US regulatory landscape has an exclusionary bent, with strict controls on foreign entities' access to critical AI models and new rules limiting American investment in China's AI sector.
The EU, by contrast, places a stronger emphasis on digital sovereignty and privacy protection. The EU Artificial Intelligence Act is notably stringent, sorting AI systems into four risk levels and imposing corresponding restrictions and regulatory obligations, with substantial fines for non-compliance. While their governance strategies differ, Western nations are stepping up dialogue to coordinate policies and rules. In 2022, major economies including the US, EU, UK, Australia, and Japan released the Declaration for the Future of the Internet, and in 2023 they reached an AI agreement focused on public-interest cooperation across five key areas, including climate change.
In April 2024, the US and EU issued a joint statement underlining the importance of AI collaboration in addressing global challenges, while the US and UK announced a partnership on the science of AI safety. In July 2024, competition regulators from the US, UK, and EU issued a joint statement on strengthened AI oversight, committing to maintain a fair and open market environment. The statement highlighted three major competition risks associated with AI: concentrated control of critical inputs, the entrenchment or extension of market power, and harmful arrangements among key players, raising concerns about large tech companies' dominance over essential resources such as chips, data, and algorithms.
While developed economies strive for regulatory alignment, challenges remain. The US, as the leader in AI technology, focuses more on preserving a favorable competitive environment and building industry "moats" to bolster its domestic industry. The EU, by contrast, is struggling to keep its competitive edge in AI: reports point to slow progress in AI adoption, and the European Court of Auditors has noted that the EU is lagging behind the US and China in the race.
Confronted with these challenges, the EU hopes to steer global digital rule-making as a way to catch up in AI. It is also intensifying scrutiny of foreign AI firms: antitrust authorities are preparing to reexamine Microsoft’s partnership with OpenAI and to look into arrangements between Google and Samsung.
As the US and EU work to shape the regulatory framework, there is a risk that global AI policies will fragment, hindering the development of a cohesive governance structure and deepening the "digital divide" between developed and developing nations. Experts warn that dominance of AI development by any single nation could give rise to new forms of digital colonialism, underscoring the need for balanced global cooperation on AI. The prevailing view is that isolationist approaches to a global challenge will ultimately prove counterproductive; as political scientist Ian Bremmer emphasizes, AI thrives on open collaboration and the free flow of scientific research.