Silicon Valley Debates: Should AI Weapons Be Granted the Power to Decide Life and Death?

In late September, Brandon Tseng, co-founder of Shield AI, claimed that weapons in the U.S. would never become fully autonomous, meaning an AI algorithm would never make the final decision to take a life. "Congress doesn't want that," he stated, citing a general consensus against such technology.

However, just five days later, Palmer Luckey, co-founder of Anduril, publicly took a more open stance toward autonomous weapons. During a talk at Pepperdine University, he expressed skepticism about the arguments against them. "Our adversaries use compelling sound bites like, 'Can you agree that a robot should never decide who lives or dies?'" Luckey remarked. "But I ask, where's the moral high ground in a landmine that can't distinguish between a school bus filled with children and a Russian tank?"

Asked about the remarks, Shannon Prior, a spokesperson for Anduril, said Luckey was not advocating for robots that kill autonomously but was instead raising concerns about "bad actors using bad AI."

Historically, Silicon Valley has leaned toward caution regarding military technologies. Trae Stephens, Luckey’s co-founder, noted, “The technologies we’re developing enable humans to make informed decisions, ensuring there is accountability for any lethal decisions made.”

Anduril's team dismissed any suggestion of inconsistency between Luckey and Stephens, saying the point was that someone must be accountable for lethal decisions, not that a human must always pull the trigger.

The U.S. government's position appears similarly ambiguous. The U.S. military does not currently procure fully autonomous weapons. Though some argue that mines and missiles already operate autonomously, such weapons differ significantly from systems that can identify, target, and engage human targets without any human oversight.

The U.S. has not formally prohibited the development or sale of fully autonomous lethal weapons. Last year, the U.S. released updated guidelines for AI safety in military applications, endorsed by many allies, which mandate that top military officials approve any new autonomous weapons. Yet the guidelines remain voluntary, and officials have indicated that now is "not the right time" to consider a binding ban on such technologies.

Last month, Palantir co-founder and Anduril investor Joe Lonsdale expressed a willingness to consider fully autonomous weapons at a Hudson Institute event. He voiced frustration with framing the debate in strictly binary terms, offering a hypothetical in which China embraces AI weapons while the U.S. is forced to "press the button every time it fires," and argued for a more nuanced evaluation of AI applications in weaponry.

"If I impose a simplistic top-down rule without understanding the complexities of the battlefield, I could jeopardize our position in a conflict," he said.

Lonsdale maintained that defense tech firms should not dictate policies regarding lethal AI. "Our role is not to set policy; it is the duty of elected officials to do so, but they must understand the nuances to craft effective legislation," he explained. He also emphasized the need to explore varying degrees of autonomy in weapon systems. "This isn't a simple choice of 'fully autonomous or not.' There's an intricate spectrum regarding the roles of soldiers and weapon systems," he noted. "Before implementing rules, policymakers must grasp military dynamics and understand opposing strategies that could jeopardize American lives."

Activists and human rights organizations have long sought international bans on autonomous lethal weapons, a push the U.S. has resisted. However, the ongoing conflict in Ukraine may be shifting perspectives, providing valuable combat data and a testing ground for defense tech innovations. Presently, companies integrate AI into weapons systems that still require human oversight for lethal decisions.

Ukrainian officials are advocating for greater automation in their weaponry to gain an advantage over Russian forces. Mykhailo Fedorov, Ukraine’s digital transformation minister, declared in an interview with The New York Times, “We need maximum automation; these technologies are vital to our victory.”

For many stakeholders in Silicon Valley and Washington, D.C., a primary concern is that China or Russia may deploy fully autonomous weapons first, compelling the U.S. to follow suit. A Russian diplomat's comments during a UN AI arms debate last year highlighted this tension: “For us, the priorities are somewhat different.”

At the Hudson Institute event, Lonsdale emphasized the need for the tech sector to educate the Navy, the Department of Defense, and Congress about AI's potential, with an eye toward maintaining a competitive edge over China.

Both Lonsdale's and Luckey's companies are actively pressing Congress to hear them out: Anduril and Palantir have collectively spent over $4 million on lobbying this year, according to OpenSecrets.
