In January, I interviewed Mark Beall, co-founder and former CEO of Gladstone AI, the consulting firm behind a recently released AI safety report commissioned by the State Department. The report's key recommendations, first reported by TIME, outline how the U.S. should address the significant national security risks posed by advanced AI.
During our conversation, I explored the growing debate among AI and policy leaders over the influence of effective altruism (EA) within AI security circles in Washington, D.C. Beall, a former head of AI policy at the U.S. Department of Defense, conveyed strong urgency about managing potentially catastrophic AI threats. Writing on social media, he emphasized the need for "common sense safeguards," warning that we must act before an AI-related disaster occurs.
For many, "AI safety" evokes concerns about existential risks, often fueled by belief systems like effective altruism. The Gladstone AI authors reported consulting over 200 government officials, experts, and personnel from leading AI labs such as OpenAI and Google DeepMind during their extensive research.
However, the report faced skepticism on social media. Nirit Weiss-Blatt, a communication researcher, highlighted co-author Edouard Harris's comments on the much-maligned "paperclip maximizer" problem, which some view as an extreme scenario. Aidan Gomez, CEO of Cohere, questioned the survey's legitimacy, suggesting that a social media poll could yield more representative results. William Falcon, CEO of Lightning AI, dismissed claims that open-source AI poses an extinction threat, arguing that closed-source AI is potentially more dangerous.
Interestingly, Beall's departure from Gladstone coincided with the launch of what he described as "the first AI safety Super PAC." Announced alongside the Gladstone report, the PAC aims to educate voters on AI policy, a sign of the issue's growing public salience. Beall said the Super PAC has secured initial funding and plans to raise millions more.
Co-founder Brendan Steinhauser's background as a Republican consultant underscores the PAC's bipartisan mission. Beall stated, "We want lawmakers from both sides to promote innovation and protect national security." Steinhauser brings two decades of experience in national politics to that goal.
While the simultaneous launch of the Super PAC and the Gladstone report may seem unusual, Jeremie Harris, co-founder of Gladstone, clarified that the report was commissioned by the State Department's Bureau of International Security and Nonproliferation to provide neutral expert analysis. He assured that Gladstone operates independently of any political affiliations or funding influences.
Beall acknowledged the significance of the Gladstone report but emphasized that "now, the real work begins." He called for Congress to pass legislation that will ensure a flexible, long-term approach to AI development. When asked about potential donors, Beall expressed hopes for building a diverse coalition, citing widespread national concern over catastrophic AI risks.
While public attention may also gravitate toward more immediate threats like deepfakes and election disinformation, it is clear that AI safety and security policy will be intertwined with funding and politics going forward.