The Biden administration is pushing to monitor open-weight AI models in order to gather the information needed for future regulation. However, it has yet to outline specific regulatory measures for these models.
In a recent report, the National Telecommunications and Information Administration (NTIA) emphasized the need to assess the risks posed by publicly available AI models, which could disrupt existing systems, arguing that such assessment is essential to heading off potential harms before they materialize.
Despite recognizing the importance of monitoring, the NTIA acknowledged that the U.S. currently lacks the capacity to address many of the risks associated with foundation models. To bridge this gap, the agency proposed focusing on three areas: gathering evidence on model capabilities to support risk monitoring, evaluating and comparing risk indicators, and developing targeted policies to mitigate those risks.
The NTIA defines open-weight models as foundation models whose weights, or parameters, are publicly released for anyone to download. These differ from open-source models, which are released under an open license that permits replication, a distinction some AI model developers blur when describing their products.
“The consideration of marginal risk helps avoid imposing unnecessarily strict restrictions on dual-use foundation models with widely available weights, especially when weighing benefits and risks against similar systems,” the NTIA stated.
The agency also acknowledged that open and closed models alike present risks that require management, though open models may pose distinct opportunities and challenges for reducing them.
The Biden administration’s focus on open models suggests a regulatory approach akin to the European Union’s AI Act, which the European Parliament adopted in March. The EU legislation regulates AI according to the risks of specific use cases rather than the models themselves; it establishes significant penalties, for instance, for prohibited practices such as certain uses of facial recognition. Given the EU’s approach, the U.S. may be weighing similar measures to address the potential hazards of public AI models.
Kevin Bankston, senior advisor on AI governance at the Center for Democracy & Technology, commended the NTIA for its cautious approach to regulating AI models. In an email, he stated, “The NTIA rightly concluded that there is insufficient evidence of novel risks from open foundation models to justify new restrictions on their distribution.”
For now, AI model developers need not be overly concerned, as the NTIA is still engaged in a comprehensive fact-finding mission.
Assaf Melochna, co-founder of AI company Aquant, noted in an email that the NTIA’s current observations do not significantly alter the landscape for model developers.
“Developers can continue to release their model weights at their own discretion, though they will face increased scrutiny,” Melochna explained. “The sector evolves rapidly, necessitating that federal agencies remain adaptable based on new findings.”