Bipartisan Effort to Prevent AI from Launching Nuclear Weapons

A bipartisan group of U.S. lawmakers is trying to prevent artificial intelligence from launching nuclear weapons without meaningful human control.

In the face of rapid advancements in artificial intelligence (AI) and a heightened global awareness of the potential risks it poses, a bipartisan group of U.S. lawmakers has introduced legislation to prevent AI from launching nuclear weapons without meaningful human control. The Block Nuclear Launch by Autonomous Artificial Intelligence Act seeks to ensure that human control remains a crucial factor in the decision to launch nuclear weapons.

The legislation comes at a time when AI’s role in society remains uncertain and its consequences potentially catastrophic. According to the 2023 AI Index Report published by the Stanford Institute for Human-Centered Artificial Intelligence, 36% of surveyed AI experts are concerned that automated systems could cause nuclear-level disasters.

The proposed bill aims to codify the Pentagon’s 2022 Nuclear Posture Review policy, which states that the U.S. will “maintain a human ‘in the loop’ for all actions critical to informing and executing decisions by the president to initiate and terminate nuclear weapon employment.” The legislation would prohibit the use of federal funds to launch nuclear weapons, or to select or engage targets for the purpose of launching them, by means of an autonomous AI system.

Sen. Ed Markey emphasized the importance of retaining human control over such life-or-death decisions, stating, “We need to keep humans in the loop on making life-or-death decisions to use deadly force, especially for our most dangerous weapons.” Rep. Ken Buck echoed Markey’s sentiment, arguing that “use of AI for deploying nuclear weapons without a human chain of command and control is reckless, dangerous, and should be prohibited.”

The introduction of this bill highlights the growing concern about the potential dangers of AI and other emerging technologies, such as lethal autonomous weapons systems.

The bill is being spearheaded by Sen. Ed Markey (D-Mass.) and Reps. Ted Lieu (D-Calif.), Don Beyer (D-Va.), and Ken Buck (R-Colo.).
