Biden's AI Executive Order: Recognized for Scope But Lacking Depth Without Supporting Legislation

The Biden administration unveiled its long-awaited executive order on artificial intelligence (AI) today, timed to coincide with an international AI safety summit in the U.K. But experts and stakeholders caution that the scope of what the president can enact without legislative backing is limited.

This executive order arrives as governments worldwide strive to navigate the complexities and risks associated with AI, a technology that has proven an elusive target for regulators. The U.S. and EU have so far avoided hasty actions that could stifle innovation, but protracted negotiations risk an inaction that allows abuse and exploitation to flourish.

Biden’s executive order serves as a temporary measure that supports the "voluntary" practices many companies are currently adopting. Due to the limitations of executive power, the order focuses on sharing outcomes, establishing best practices, and providing clear guidelines.

At present, there are no legislative solutions addressing the potential risks and abuses of AI beyond existing tech regulations, which many believe to be insufficient. Federal actions against social media platforms and alleged monopolists like Amazon and Google have been inconsistent, although a more aggressive stance from the current FTC may alter this trend.

As for comprehensive legislation governing AI use, it appears no closer to fruition than it was several years ago. The rapid evolution of the industry and technology means that any regulations enacted could quickly become obsolete. Moreover, there is a lack of clarity on whether limits should be legislated at the federal level or be left to state regulations or specialized agencies.

One potential solution could be the establishment of a federal agency tasked with regulating AI. However, creating such an entity would require more than a simple executive order. In the meantime, the executive order directs several AI-focused teams, including one within the Department of Health and Human Services, to handle and assess reports of AI-related risks, particularly in the healthcare sector.

Senator Mark Warner of Virginia expressed that he was “impressed by the breadth” of the executive order but hinted at its lack of depth. “I appreciate sections that align with my initiatives on AI safety and security,” he stated. “However, many of these provisions merely scratch the surface, especially concerning healthcare and competition policy. Although this is a positive development, further legislative action is essential, and I will continue my efforts…”

Given the current political landscape and the upcoming contentious election period, passing any substantive legislation—especially a complex bill governing AI—seems unlikely.

Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, acknowledged the dual perspectives on the matter. “President Biden is sending a critical message that certain AI systems present immediate risks needing urgent attention. The administration is indeed moving in the right direction,” he noted. “However, today marks only the start of a lengthy regulatory journey that must ultimately ensure companies profiting from AI prove their products are safe and effective, similar to the obligations faced by manufacturers of pharmaceuticals, industrial chemicals, or aircraft. Without new resources from Congress, it remains uncertain whether the federal government can effectively assess the intricate processes involved in AI development or the adequacy of necessary safety testing.”

Sheila Gulati, co-founder of Tola Capital, highlighted that the executive order reflects a “clear intention to promote innovation while safeguarding citizens.” She stated, “It is vital that we do not stifle the agile innovation often seen in startups. By prioritizing AI explainability, adopting a risk-based approach where bias or harm could occur, and focusing on security and privacy, we are taking sensible steps forward. With this executive order and the implications set by NIST standards, we anticipate leadership from standards organizations over legislators in the immediate future.”

Moreover, it’s important to note that the federal government is a significant consumer of current AI and technology products. Companies eager to maintain this relationship will likely comply with the anticipated guidelines in the near term.

Bob Cattanach, partner at Dorsey and Whitney, pointed out that the timing of the executive order seems somewhat misaligned. "The Executive Order preempts Vice President Harris's address at the upcoming U.K.-hosted AI summit, indicating that the White House's concerns about the largely unregulated AI landscape were so urgent that it opted for unilateral action, at the risk of alienating key allies."

Still, fears of alienating allies may be overstated. It's worth remembering that the U.K. is not the EU, and the EU's collaborative processes will require years to finalize; the administration clearly prefers not to wait for that timeline. It might have been more cohesive and diplomatic for Harris to unveil the executive order at the summit itself, emphasizing the need for international harmony in AI regulation with the U.S. setting a modest example. Her remarks, which are expected to underscore this message, will be live-streamed on November 1.
