Dozens of protesters gathered outside OpenAI's headquarters in San Francisco on Monday evening, voicing their concerns over the company's development of artificial intelligence (AI) as employees were leaving for the day.
The demonstration, organized by Pause AI and No AGI, urged OpenAI engineers to halt their work on advanced AI systems such as the chatbot ChatGPT. Their message was clear: stop pursuing artificial general intelligence (AGI) that could outpace human capabilities, and steer clear of military affiliations.
The protest was partly triggered by OpenAI’s recent decision to remove language from its usage policy that banned the military use of its AI technologies. Shortly after the policy change, reports surfaced indicating OpenAI had secured the Pentagon as a client.
“We demand that OpenAI end its relationship with the Pentagon and reject all military clients,” the event description stated. “If ethical boundaries can be altered for convenience, they cannot be trusted.”
Media outlets spoke with protest organizers to understand their goals and what success would look like.
“The mission of No AGI is to raise awareness about the dangers of developing AGI,” said Sam Kirchener, head of No AGI. “We should focus on initiatives like whole brain emulation that prioritize human thought in intelligence.”
Holly Elmore, lead organizer of Pause AI, expressed her group's desire for “a global, indefinite pause on the development of AGI until it is deemed safe.” She emphasized, “Ending military ties is a crucial boundary.”
The protest came amid growing public concern over the ethical implications of AI. OpenAI's engagement with the Pentagon has ignited debate about the militarization of AI and its potential consequences.
Protesters' fears center on AGI: machines able to perform human intellectual tasks at unprecedented speed and scale. Their concerns extend beyond job displacement and autonomous warfare to deeper anxieties about how power and decision-making would shift across society.
“If we succeed in building AGI, we risk losing essential meaning in our lives due to what’s called the psychological threat, where AGI handles everything,” Kirchener warned. “People will no longer find meaning in work, which is critical to our current society.”
Elmore added, “Self-regulation isn’t enough; there needs to be external oversight. OpenAI has frequently reversed its commitments—Sam Altman claimed in June that the board could fire him, yet by November, he couldn't be removed. Similar inconsistencies arise with their usage policy and military contracts—these policies seem ineffective if they allow unrestricted actions.”
While Pause AI and No AGI both want to stop AGI development, their ultimate positions differ. Pause AI is open to AGI being built if it can be done safely, while No AGI opposes it outright, citing the psychological threat and the loss of human meaning.
Both groups say this protest will not be their last, and that others concerned about AI risks can get involved through their websites and social media. For now, Silicon Valley continues its rapid march toward an uncertain AI future.