The Supreme Court has significantly diminished federal agencies' regulatory powers, as highlighted by Morning Brew.
Just months ago, momentum for AI regulation was building, as evidenced by the U.K.'s AI Safety Summit, the Biden Administration's AI Executive Order, and the EU AI Act. However, recent judicial developments and shifting political dynamics are casting doubt on the future of AI regulation in the U.S. This article examines the ramifications of these changes and the obstacles ahead.
The Supreme Court’s ruling in Loper Bright Enterprises v. Raimondo curtails federal agencies' authority to regulate industries, including AI. By overturning the 40-year-old Chevron deference precedent, the Court has shifted the power to interpret ambiguous Congressional statutes from agencies to the judiciary.
Agency Expertise vs. Judicial Oversight
Many existing laws, particularly those concerning technology and the environment, lack specificity, relying on agencies for interpretation. This ambiguity is often deliberate for political or practical reasons. Now, federal judges can more readily overturn regulatory decisions grounded in these vague laws, potentially hindering AI regulation. While proponents argue this ensures consistent legal interpretations, the reality is that in fast-evolving fields like AI, agencies typically possess greater expertise. For instance, the Federal Trade Commission (FTC) focuses on AI-related consumer protection, the Equal Employment Opportunity Commission (EEOC) clarifies AI use in hiring, and the Food and Drug Administration (FDA) oversees AI in medical devices.
These agencies are staffed with AI specialists, whereas the judiciary lacks such expertise. Despite this, the Court maintains that “… agencies have no special competence in resolving statutory ambiguities. Courts do.”
Challenges and Legislative Needs
Loper Bright Enterprises v. Raimondo may weaken the establishment and enforcement of AI regulations. According to the New Lines Institute, the ruling means agencies must explain complex technical details in terms that non-specialists, including judges, can follow in order to defend their regulations.
Justice Elena Kagan's dissent highlighted this concern, arguing the Court's majority assumed a regulatory role it is ill-equipped to fulfill. During oral arguments, she reinforced the need for informed decision-making on AI regulation, asserting that knowledgeable parties should drive these discussions.
If Congress intends for federal agencies to guide AI regulations, it must clearly state this in any forthcoming legislation. Ellen Goodman, a Rutgers University law professor, emphasized that “the solution was always getting clear legislation from Congress, but that’s even more true now.”
Political Landscape
However, there’s no certainty Congress will make such provisions. The Republican Party’s recent platform calls for repealing the existing AI Executive Order, stating the intention to remove constraints on AI innovation imposed by “Radical Leftwing ideas.” Analyst Lance Eliot notes this likely includes eliminating AI-related reporting and evaluation requirements.
Influential figures, like tech entrepreneur Jacob Helberg, argue that existing laws sufficiently govern AI, cautioning that excess regulation could impede U.S. competitiveness. Still, the Loper Bright ruling undermines the regulatory framework those same laws were meant to support.
In place of the current Executive Order, the Republican platform advocates for AI development rooted in free speech and human flourishing. Reports indicate that Trump allies are working to establish a new framework that would prioritize “America first in AI,” potentially reducing regulations perceived as burdensome.
Regulatory Outlook
Regardless of the political landscape, the U.S. will face a transformed AI regulatory environment. The Supreme Court's decision raises serious concerns about the ability of specialized agencies to enforce effective AI regulations, which could slow or obstruct necessary oversight.
A change in leadership could also reshape regulatory approaches. Should conservatives prevail, expect a more lenient regulatory environment that favors innovation, diverging sharply from the U.K.’s pledge to introduce binding regulations and the EU's comprehensive AI Act.
The cumulative effects of these shifts could lead to diminished global consistency in AI regulations. This fragmentation may hinder international collaboration, complicating research partnerships and data-sharing agreements while also impacting global AI standards. Though looser regulations might encourage innovation in the U.S., they also raise ethical, safety, and employment concerns that could erode public trust in AI technologies.
In response to weakened regulations, major AI companies might proactively collaborate on ethical standards and safety guidelines, prioritizing transparency and auditability in their systems to foster trust and responsible development.
Ultimately, a period of heightened uncertainty in AI regulation lies ahead. As political dynamics shift, policymakers and industry leaders must work together to ensure that AI development remains ethical, safe, and broadly beneficial to society.
Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.