Bipartisan lawmakers in Congress have introduced legislation to prohibit artificial intelligence (AI) from launching nuclear weapons. While the Department of Defense already mandates that a human must be involved in such significant decisions, the Block Nuclear Launch by Autonomous Artificial Intelligence Act aims to formalize this policy. The bill seeks to ensure that federal funds cannot be used for an automated nuclear launch without "meaningful human control," thereby protecting future generations from potentially catastrophic outcomes.
The legislation was introduced by Senator Ed Markey (D-MA) and Representatives Ted Lieu (D-CA), Don Beyer (D-VA), and Ken Buck (R-CO). Notable Senate co-sponsors include Jeff Merkley (D-OR), Bernie Sanders (I-VT), and Elizabeth Warren (D-MA). Senator Markey emphasized the importance of human oversight in nuclear command, stating, "As we live in an increasingly digital age, we must ensure that humans alone have the authority to command, control, and launch nuclear weapons – not robots. Keeping humans in the loop for life-or-death decisions is essential, especially regarding our most dangerous weapons."
The rapid rise of AI technologies—like popular chatbots and advanced image generation tools—has raised concerns among experts, many of whom warn that without regulation the consequences could be severe. Cason Schmit, Assistant Professor of Public Health at Texas A&M University, noted the challenges lawmakers face in keeping pace with technological advancements. While Congress has not recently passed any AI-specific legislation, a coalition of tech leaders and AI experts called in March for a six-month moratorium on developing systems more powerful than GPT-4. In addition, the Biden administration has recently sought public feedback on potential AI regulations.
Representative Lieu voiced the ongoing uncertainty regarding AI’s societal role, stating, "It is our duty as Members of Congress to responsibly safeguard future generations from potentially devastating consequences. This bipartisan, bicameral Act will guarantee that a human being retains control over nuclear weapon deployment—never a robot. AI cannot replace human judgment in such critical areas."
In today's political climate, the passage of even straightforward measures is uncertain. However, the fundamental principle behind this proposal—"don’t let computers decide to obliterate humanity"—may serve as a critical benchmark for the U.S. government's readiness to address rapidly evolving technologies.