The rapid advancement of artificial intelligence (AI) poses significant risks, prompting OpenAI leadership to call for an international regulatory body similar to those overseeing nuclear energy. At the same time, they stress that establishing such oversight must not be rushed.
OpenAI is actively seeking diverse perspectives on decisions affecting its AI technologies. At the recent AI Forward event in San Francisco, OpenAI President Greg Brockman outlined the company's global initiative for AI regulation. He introduced a collaborative project, akin to Wikipedia, designed to bring together people with differing viewpoints and foster consensus. "We are not simply sitting in Silicon Valley, drafting rules for the world. We are beginning to contemplate democratic decision-making," he stated.
Brockman also discussed an idea presented in a blog post, advocating for coordinated global efforts to ensure the safe development of AI. In this publication, OpenAI founders emphasized that the rapid pace of AI innovation might outstrip existing regulatory frameworks.
Since its launch on November 30, 2022, ChatGPT has emerged as the fastest-growing application in history, capable of generating authoritative-sounding essays from basic text prompts. Concerns about AI have intensified, especially regarding its potential to produce deepfakes and misinformation.
Returning to the Wikipedia model, Brockman proposed that an organization like the International Atomic Energy Agency (IAEA) could set restrictions on AI deployment, verify compliance with safety standards, and monitor resource usage. Other suggestions include a global agreement to limit the annual growth of advanced AI capabilities, or a collaborative project among major governments.
Recently, OpenAI CEO Sam Altman presented regulatory ideas to U.S. lawmakers, including licensing requirements for developing advanced AI models and frameworks for their governance. He is also engaging with European policymakers on these matters.