On the same day that the U.K. convened global corporate and political leaders at Bletchley Park for the AI Safety Summit, more than 70 people signed an open letter advocating a more transparent approach to AI development. “We are at a pivotal moment in AI governance,” the letter, issued by Mozilla, emphasizes. “To address both current and future risks posed by AI systems, we must prioritize openness, transparency, and broad accessibility on a global scale.”
The ongoing debate in the AI landscape mirrors one the software industry has had for decades: the tension between open-source and proprietary models, and the trade-offs each approach entails. Recently, Yann LeCun, Meta’s chief AI scientist, took to X to criticize efforts by companies like OpenAI and Google’s DeepMind to establish “regulatory capture of the AI industry” through lobbying against open AI research and development. He warned, “If your fear-mongering campaigns succeed, they will inevitably lead to a disaster: a select few companies will control AI.”
This concern resonates within the governance discussions arising from initiatives like President Biden’s executive order and this week’s AI Safety Summit in the U.K. Some leaders of major AI firms express alarm over the existential risks AI poses, arguing that open-source AI could be exploited by malicious actors to create dangerous applications, such as chemical weapons. Conversely, critics contend that such fear tactics serve to concentrate power in the hands of a few companies.
Proprietary control
The reality is likely more nuanced, yet it is this context that led dozens of individuals to sign an open letter today urging increased openness in AI development. The letter acknowledges, “Yes, publicly available models come with risks and vulnerabilities — AI models can be misused by malicious actors or deployed by inexperienced developers. However, history has repeatedly shown that the same risks apply to proprietary technologies. Access and scrutiny from the public ultimately make technology safer, not more perilous. The belief that stringent proprietary control of foundational AI models is the sole method for safeguarding against societal harm is naive at best, and dangerous at worst.”
LeCun, who has been with Meta for a decade, is among the signatories, joined by notable figures such as Google Brain and Coursera co-founder Andrew Ng, Hugging Face co-founder and CTO Julien Chaumond, and technologist Brian Behlendorf of the Linux Foundation.
The letter identifies three key areas where increased openness can facilitate safer AI development: promoting independent research and collaboration, enhancing public scrutiny and accountability, and lowering entry barriers for newcomers in the AI field. “History has shown that hastily implementing the wrong kind of regulation can lead to power concentrations that stifle competition and innovation,” it states. “Open models can foster informed debate and improve policy-making. If our goals are safety, security, and accountability, then openness and transparency are essential components to achieve them.”