Biden’s Executive Order: Establishing AI Safety and Security Standards

U.S. President Joe Biden has signed an executive order (EO) aimed at establishing new standards for AI safety and security. This directive mandates that companies developing foundational AI models must notify the federal government and share all safety test results before these technologies are made available to the public.

The rapid growth of generative AI, exemplified by applications such as ChatGPT and the foundational models OpenAI built to power it, has ignited a global conversation about the guardrails needed to mitigate the risks of handing control to algorithms. In May, G7 leaders identified critical issues requiring attention as part of the Hiroshima AI Process, culminating in an agreement on guiding principles and a "voluntary" code of conduct for AI developers.

Recently, the United Nations (UN) announced the establishment of a new advisory body to examine AI governance. Meanwhile, the U.K. is hosting a global summit on AI safety at Bletchley Park this week, with U.S. Vice President Kamala Harris scheduled to address attendees.

The Biden-Harris Administration has focused on AI safety in the absence of legally binding regulations, securing voluntary commitments from major AI developers—including OpenAI, Google, Microsoft, Meta, and Amazon—as a precursor to this executive order.

“Safe, Secure, and Trustworthy AI”

The executive order specifically requires developers of the "most powerful AI systems" to provide safety test results and relevant data to the U.S. government. "As AI’s capabilities expand, so do its implications for the safety and security of Americans," the order states, emphasizing its aim to safeguard citizens from potential AI risks.

By grounding the new AI safety and security standards in the Defense Production Act of 1950, the order targets any foundational model that could jeopardize national security, economic stability, or public health, a scope that in practice covers most emerging foundational models. "These measures will ensure AI systems are safe, secure, and trustworthy before companies release them to the public," the order affirms.

Additionally, the order delineates plans to develop new tools and frameworks to ensure the safety and reliability of AI technologies. The National Institute of Standards and Technology (NIST) will be responsible for crafting new standards for extensive pre-release red-team testing. Other departments, such as Energy and Homeland Security, will assess AI-related risks to critical infrastructure.

The order also lays the groundwork for new directives aimed at mitigating specific AI risks, including measures against the misuse of AI in creating dangerous biological materials, efforts to combat AI-enabled fraud, and a cybersecurity program to find and fix vulnerabilities in critical software.

Addressing Equity and Civil Rights

The order also addresses equity and civil rights, acknowledging how AI can exacerbate discrimination and bias in fields like healthcare, justice, and housing, and it highlights the threats AI poses through workplace surveillance and job displacement. Some critics argue, however, that much of this content is advisory and lacks real enforceability: for instance, the order merely proposes developing best practices for the use of AI in sentencing, parole, risk assessments, and predictive policing.

While this executive order provides a framework for integrating safety and security into AI systems, its enforceability will likely require additional legislative action. Notably, the order raises concerns over data privacy, recognizing AI's capability to extract and misuse personal data at scale, a risk that developers may inadvertently heighten when training models on large datasets. The EO calls for Congress to enact bipartisan data privacy legislation to safeguard Americans' information and seeks federal support for privacy-preserving AI development techniques.

As Europe approaches the implementation of comprehensive AI regulations, it becomes evident that countries worldwide are striving to manage the societal impacts of AI technologies—a transformation akin to the upheaval witnessed during the industrial revolution. The effectiveness of President Biden's executive order in regulating major players like OpenAI, Google, Microsoft, and Meta will be critically observed in the coming months.
