A recent report by Ernst & Young (EY) on the global artificial intelligence (AI) regulatory landscape has garnered renewed attention following President Biden's executive order, which aims to monitor and regulate AI risks while maximizing the technology's benefits.
Published last month under the title “The Artificial Intelligence (AI) Global Regulatory Landscape: Policy Trends and Considerations to Build Confidence in AI,” the report serves as a comprehensive guide for policymakers and businesses navigating a complex global regulatory environment.
The report analyzes the regulatory activities of eight major jurisdictions: Canada, China, the European Union, Japan, Korea, Singapore, the United Kingdom, and the United States. Despite their diverse regulatory contexts, these jurisdictions share common goals in AI governance: minimizing risks while enhancing societal benefits. They also align with the OECD AI Principles endorsed by the G20 in 2019, which emphasize human rights, transparency, risk management, and ethical considerations.
However, the report highlights significant differences and challenges in global AI regulation. The European Union has taken a proactive stance, proposing an AI Act that imposes mandatory requirements for high-risk AI applications, including biometric identification and critical infrastructure. In contrast, China is moving to regulate core AI functionalities like content recommendation and facial recognition, while the U.S. has historically adopted a lighter regulatory approach focused on voluntary industry guidelines and sector-specific regulations.
The global AI regulatory landscape is continually evolving. Issued after the EY report's publication, President Biden's executive order significantly alters the U.S. position: developers of the most powerful AI systems must now share safety test results with the federal government and notify it of potential risks, including threats to national security and public health. This marks a clear shift from the voluntary guidelines described above.
Additionally, the U.K. government has published an AI White Paper setting out a principles-based framework built on five cross-sectoral principles: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. In contrast to the EU's legislation-led model, the U.K. expects existing sector regulators to apply these principles within their own remits.
These developments underscore how quickly global AI regulation is shifting and why policymakers and businesses must stay informed and adapt as new rules take effect. While the EY report remains a vital resource for understanding this landscape, it will likely require updates as further rules and initiatives emerge.
The EY report outlines several key trends and best practices in AI regulation that are still relevant:
- A risk-based approach tailored to the specific use case and risk profile of AI systems.
- Consideration of sector-specific risks, such as those in healthcare, finance, and transportation.
- Initiatives addressing AI’s impacts on related policy areas, including data privacy and cybersecurity.
- The use of regulatory sandboxes to collaboratively shape AI rules with stakeholders.
In conclusion, the EY report emphasizes the importance of continuous dialogue among government officials, corporate leaders, and other stakeholders to balance regulation with innovation. It offers a roadmap for navigating the complexities of AI regulation and encourages collaboration to close the AI confidence gap, prevent policy fragmentation, and unlock AI’s full potential. The report is essential reading for anyone seeking to understand the ethical challenges and dynamic policy landscape surrounding AI on a global scale.