AI Regulation: A Double-Edged Sword
In a recent interview, Meta's Chief AI Scientist, Yann LeCun, expressed concern that premature regulation of artificial intelligence (AI) may inadvertently entrench the dominance of large tech companies and stifle competition. "Regulating AI too early is akin to attempting to govern jet planes back in 1925, before they were invented," he said. Such calls for regulation, he believes, stem from the arrogance of some tech companies that assert they alone are capable of developing AI safely.
LeCun emphasized that discussions of AI risk are premature: in his view, there is little point debating the dangers until we can build systems whose learning capabilities at least match a kitten's. He argues that current AI models are not as powerful as some researchers claim, suggesting that labs such as OpenAI and Google DeepMind underestimate the complexity of the problem. "These models lack an understanding of how the world operates, they don't possess planning abilities, and they cannot truly reason. We still don't have fully autonomous cars because no vehicle can teach itself to drive from about 20 hours of practice, something a 17-year-old can do," he added.
Despite these concerns, LeCun remains optimistic about the future of open-source AI models. He believes open-source development can strengthen competition and broaden participation in building and using AI systems. To critics who warn that powerful open models could fuel misinformation or even bioterrorism, he responds that this debate over controlling fast-moving technology has existed since the internet's inception and is simply resurfacing around AI. The essence of technological progress, he asserts, lies in openness and decentralization.
Meta's recently launched large language model, Llama, ships with openly available code and model weights and has seen widespread adoption in the open-source community; research from Stanford University ranks it among the most open of today's large models. (A short sketch below shows what this accessibility looks like in practice.)

The implications of large AI models, and the risks they may pose, have drawn growing attention in recent years. LeCun called fears of intelligent robots seizing control "absurd," a notion he says owes more to science fiction than to reality. Intelligence, he argues, is not inherently linked to a desire for domination, and developers can build moral considerations into systems to govern their behavior.
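Here is the sketch mentioned above: a minimal example of loading the openly released Llama 2 weights with the Hugging Face transformers library. The particular model ID, prompt, and authentication step are illustrative assumptions, not details from the interview.

```python
# Minimal sketch: load Meta's openly released Llama 2 weights via the
# Hugging Face `transformers` library. Assumes Meta's license has been
# accepted on huggingface.co and the environment is authenticated
# (e.g. via `huggingface-cli login`), since the checkpoint is gated.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # 7B-parameter base checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Run a short completion to confirm the weights work locally.
inputs = tokenizer("Open-source AI models allow", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights run locally, nothing here depends on a vendor API: the same few lines work for any other openly released checkpoint by swapping the model ID.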
He acknowledged the possibility that machines or AI could surpass human intelligence in many fields in the future. "There is no doubt that machines will become smarter than us. The real question is whether that is frightening or exciting. I find it exhilarating, because these powerful AI systems will help us tackle major challenges like climate change and curing disease."
By encouraging thoughtful discussion of AI regulation and of the balance of power among tech companies, we can pave the way for innovation that benefits society as a whole.