OpenAI's Bold Appeal: Prioritize Safety by Limiting Compute Power

Emerging AI regulations, such as the EU AI Act, focus predominantly on software: the models and the AI systems built on top of them. A prominent group of AI experts, including figures from OpenAI, argues that the focus should shift toward hardware, specifically the computing chips that fill data centers. Their argument centers on the premise that AI safety efforts gain more traction on these physical assets than on abstract inputs like data and algorithms, which can be easily duplicated and disseminated.

In a research paper titled "Computing Power and the Governance of Artificial Intelligence," co-authored by Turing Award recipient Yoshua Bengio along with specialists from the University of Cambridge, the Oxford Internet Institute, and the Harvard Kennedy School, the authors argue that a concentrated focus on hardware offers distinct advantages: greater regulatory visibility into AI capabilities and applications, better allocation of resources toward safe and beneficial uses, and stronger enforcement against "reckless or malicious" development or deployment.

The authors highlight the tangible nature of AI computing hardware, which is produced through a heavily concentrated supply chain. This physicality, they argue, allows for more effective regulation than is possible for intangibles. The paper outlines several proposals for better governance of AI technologies. Among these are "compute caps," which would impose built-in limits on the number of connections each chip can sustain. The paper also floats "start switches" for AI training, which would let developers restrict access to potentially dangerous data, in effect providing a digital means to veto the use of certain AI systems.
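To make the "compute cap" idea concrete, here is a minimal sketch of a per-chip cap on peer connections. This is an illustration under stated assumptions, not anything the paper specifies: the class name `InterconnectCap`, its methods, and the idea of enforcing the cap in software (rather than in silicon, as the proposal implies) are all hypothetical.

```python
class InterconnectCap:
    """Toy model of a per-chip cap on peer connections.

    Hypothetical illustration: the paper proposes the policy idea of
    "compute caps", not an API or an enforcement mechanism like this one.
    """

    def __init__(self, max_peers: int):
        self.max_peers = max_peers
        self.peers: dict[str, set[str]] = {}  # chip_id -> connected chip_ids

    def connect(self, a: str, b: str) -> bool:
        """Link two chips unless either has already reached its cap."""
        peers_a = self.peers.setdefault(a, set())
        peers_b = self.peers.setdefault(b, set())
        if len(peers_a) >= self.max_peers or len(peers_b) >= self.max_peers:
            return False  # cap reached: refuse to grow the cluster
        peers_a.add(b)
        peers_b.add(a)
        return True
```

With `max_peers = 8`, for example, each chip could hold at most eight direct links, constraining how tightly chips can be coupled into a single large training fabric.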

Another significant recommendation is a comprehensive registry for AI chips that would compel manufacturers, sellers, and resellers to report every transfer. Such a ledger would give regulators a precise picture of how much compute power each nation and corporation holds at any given time. The authors further propose that cloud computing providers regularly report on large-scale AI training operations, and they suggest general workload monitoring to bring greater transparency to AI processes.
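As an illustration of what such a registry could track, the toy sketch below records reported transfers and totals the rated compute held by each party. Every name here (`ChipTransfer`, `ChipRegistry`, the `flops` field) is a hypothetical stand-in; the paper does not prescribe a schema.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class ChipTransfer:
    chip_id: str   # unique identifier assigned at manufacture
    seller: str
    buyer: str
    flops: float   # rated compute of the chip, in FLOP/s


class ChipRegistry:
    """Toy ledger of reported chip transfers."""

    def __init__(self) -> None:
        self.transfers: list[ChipTransfer] = []
        self.holder: dict[str, str] = {}    # chip_id -> current holder
        self.rating: dict[str, float] = {}  # chip_id -> rated compute

    def report(self, t: ChipTransfer) -> None:
        """Record a transfer, updating who currently holds the chip."""
        self.transfers.append(t)
        self.holder[t.chip_id] = t.buyer
        self.rating[t.chip_id] = t.flops

    def compute_by_holder(self) -> dict[str, float]:
        """Total rated compute currently held by each party."""
        totals: dict[str, float] = defaultdict(float)
        for chip_id, holder in self.holder.items():
            totals[holder] += self.rating[chip_id]
        return dict(totals)
```

Aggregating over such a ledger is what would yield the precise insights the authors describe: every resale updates the current holder, so per-nation or per-corporation compute totals stay current.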

In their exploration of regulatory frameworks, the authors draw parallels between the governance of AI hardware and the safety protocols surrounding nuclear weapons. One noteworthy suggestion would require the consent of multiple parties before compute resources for high-risk AI systems can be unlocked. Haydn Belfield of Cambridge's Leverhulme Centre for the Future of Intelligence, a lead author of the report, emphasized the benefits of this approach, stating that "computing hardware is visible and quantifiable, allowing for restrictions to be imposed more effectively than with the increasingly virtual aspects of AI."
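The multiparty-consent idea maps naturally onto a quorum check, loosely analogous to the two-key controls used for nuclear weapons. The sketch below is a hypothetical illustration of that logic; the function name and parameters are invented for this example.

```python
def unlock_compute(approvals: set[str], authorized: set[str], quorum: int) -> bool:
    """Grant access only if enough distinct authorized parties consent.

    Hypothetical illustration of a k-of-n approval rule; the paper
    describes the governance idea, not this interface.
    """
    valid = approvals & authorized  # ignore sign-offs from unknown parties
    return len(valid) >= quorum


# Example: three regulators are authorized, and any two must consent.
regulators = {"reg_a", "reg_b", "reg_c"}
assert unlock_compute({"reg_a", "reg_b"}, regulators, quorum=2)
assert not unlock_compute({"reg_a"}, regulators, quorum=2)
```

A real deployment would presumably rest on cryptographic mechanisms rather than a software check like this, but the quorum logic is the same: no single party, including the hardware's owner, could unlock high-risk compute alone.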

While government initiatives such as the U.S. Executive Order on AI and China's Generative AI Regulation have begun to address the role of compute in AI governance, the authors call for further action. They contend that, much as international controls on nuclear supplies concentrate rigorous oversight on a few critical inputs, a dedicated focus on compute resources will be essential to regulating AI systems effectively and fostering a safer, more responsible technological landscape.
