A US AI policy expert recently remarked, “If you’re not exploring the influence of effective altruism (EA), you’re missing the story.” Reflecting on this, I realize I overlooked an important angle in my coverage last week.
Ironically, I thought my article on the challenge of securing large language model (LLM) weights was a straightforward win. The recent White House AI Executive Order requires foundation model companies to document ownership and security measures for dual-use foundation models, making the topic especially timely. In my piece, I interviewed Jason Clinton, Anthropic’s Chief Information Security Officer, who emphasized the critical need to secure the model weights for Claude, Anthropic’s LLM. He highlighted the danger posed by criminals, terrorists, and nation-states gaining access to these sophisticated models, noting that “if an attacker accessed the entire file, they could control the entire neural network.” Other frontier companies share these concerns; OpenAI’s new “Preparedness Framework” addresses the need to restrict access to sensitive model information.
I also spoke with Sella Nevo and Dan Lahav from the RAND Corporation, authors of a significant report titled Securing Artificial Intelligence Model Weights. Nevo, who leads RAND's Meselson Center, warned that AI models could soon hold substantial national security implications, including potential misuse in developing biological weapons.
The Web of Effective Altruism Connections in AI Security
Upon reflection, my article failed to address the intricate connections between the effective altruism community and the emerging field of AI security. This oversight is notable given the growing influence of EA, an intellectual movement emphasizing the use of reason and evidence to benefit humanity, particularly in preventing existential risks from advanced AI. Critics argue that EA's focus on such distant threats neglects pressing issues like bias, misinformation, and cybersecurity in AI development.
Recently, EA made headlines when OpenAI board members linked to the movement were involved in the firing of CEO Sam Altman, putting EA’s connections to high-stakes decision-making on public display.
Despite being aware of Anthropic’s ties to EA (FTX founder Sam Bankman-Fried once held a $500 million stake in the startup), I neglected to probe deeper into the EA implications for my story. Only after reading a Politico article that coincidentally appeared the next day did I learn of key connections between RAND and EA, including significant funding ties.
The Politico article revealed that RAND Corporation researchers were instrumental in shaping the White House's Executive Order on model weights, and that the organization received over $15 million from Open Philanthropy, an EA initiative backed by Facebook co-founder Dustin Moskovitz. Notably, RAND CEO Jason Matheny and senior scientist Jeff Alstott are recognized effective altruists with prior ties to the Biden Administration.
Insights from the Effective Altruism Community
In my follow-up conversation with Nevo, he noted that the strong presence of EA advocates in AI security should not be surprising. Historically, EA has been at the forefront of discussions on AI safety, meaning that anyone engaged in this field likely has encountered EA perspectives.
Nevo also expressed frustration with the Politico article’s tone, suggesting it unfairly implied wrongdoing while overlooking RAND’s longstanding role in providing valuable research for policymakers. He emphasized that neither he nor his center was involved in the Executive Order, and that its provisions on model security echoed voluntary commitments that leading AI companies had already made to the White House.
While the Meselson Center remains relatively obscure, Nevo indicated that it is one of many RAND research centers, focusing on bio-surveillance and AI's intersection with biological security.
The Importance of Effective Altruism in AI Security
Does EA's influence truly matter? Jack Nicholson’s iconic line, “You need me on that wall!” comes to mind: if we need dedicated individuals working in AI security, does the ideology they bring to the job matter?
For many advocating transparency and effective policy in AI, the answer is yes. As highlighted by Politico’s reporting on EA’s influence in Washington, these connections will significantly shape future policies, regulations, and AI development.
The US AI policy expert I spoke with observed that many in the policy sphere pay little attention to the ideological agendas at work in AI, and in doing so underestimate their impact.