The US, the Russian Federation, and the People’s Republic of China (PRC) have all recognized the revolutionary promise of artificial intelligence (AI), with machines completing complex tasks and matching or exceeding human performance. In parallel, all three competitors are modernizing their nuclear forces. It is likely, as each seeks areas of advantage through AI, that they will explore nuclear applications. AI applications—in both nuclear operations and AI-enabled military capabilities more broadly—could increase or decrease nuclear risk.
Against this background, the US State Department’s Bureau of Arms Control, Verification, and Compliance (AVC) asked CNA to conduct research and analysis that would sharpen its understanding of how AI could impact nuclear risks. To that end, CNA addressed three questions:
- How are the US, Russia, and PRC using AI to enable their respective nuclear operations today?
- How might US, Russian, and PRC AI-enabled nuclear postures interact—especially during crises or conflict—in the circa 2035 timeframe? In what specific ways might AI increase or decrease nuclear risk?
- What steps can the US government take to mitigate AI-driven nuclear risks and/or capture any risk-reducing benefits of AI-enabled nuclear operations?
This project makes two basic contributions. The first is a deep exploration of the many complicated ways that AI could influence nuclear risk, going beyond what can be found in prior research on the topic. Building on that exploration, the second contribution is a set of recommendations that will help the US government mitigate the risks and capture the risk-reducing benefits of AI-enabled nuclear operations.
Starting from the observation that AI-enabled nuclear operations could have both positive and negative effects on overall nuclear risk, we identified mechanisms by which AI-enabled nuclear operations could increase or reduce nuclear risk, as well as mechanisms by which AI could have significant but uncertain impacts on nuclear risk. These mechanisms account not only for the technical characteristics of AI but also for the interface between humans and AI, the ways that AI can alter the behavior of human operators, and the ways AI might shape leaders’ decisions about nuclear use in crisis or war—specifically, the following:
- AI could increase nuclear risks as a result of three categories of challenges.
- AI technical challenges include the performance of specific AI systems, complex and unpredictable interactions among AI systems operating in a system of systems, shortcomings in AI training data, poor alignment between AI tools and tasks, and adversary action against AI systems.
- Human-factors challenges include human trust in AI, unskilled use of AI by operators, skill degradation, and decision-time compression.
- Risks from leader calculus center on the difficulty of assessing how AI could affect the military balance—which in turn shapes leaders’ choices.
- There are opportunities for AI to mitigate nuclear risks in four areas:
- Nuclear weapons surety
- Survivability and resilience of nuclear forces
- Leadership decision-time expansion
- Crisis and conflict de-escalation
- AI could also have significant effects on nuclear risk if used to improve capabilities in five areas. However, whether these improved capabilities increase or decrease nuclear risk would depend on the details of exactly how AI was used, by which actors, and to what ends. The five areas are as follows:
- Operations and maintenance of nuclear forces
- Performance of non-nuclear forces
- Performance of nuclear forces
- Analysis, planning, and decision support
- Active air and missile defense
Based on these findings, we identified three sets of steps that can promote the desirable risk-reducing benefits of AI-enabled nuclear operations and mitigate risks. These steps are nested, reflecting the fact that AI applications in the nuclear niche will be shaped by military applications of AI more broadly, as well as by the non-military AI ecosystem. Specifically, we propose the following:
- Focused risk mitigation for AI applications in nuclear operations. Because of the unique context and characteristics of nuclear operations and the high stakes involved, some risk mitigation steps should focus specifically on AI applications in nuclear operations.
- Applied efforts on risk mitigation of AI applications in military operations. The US military, like many militaries around the world, is seeking to apply AI to many functions involving conventional warfare. These general military applications will share many of the same challenges as applications to nuclear operations. This presents an opportunity for the parts of the military and government responsible for nuclear operations to work with the US military as a whole to reduce risks from military applications of AI overall.
- Basic research and practical solutions for fundamental sources of AI-related risks. Given the relative newness of modern AI techniques and a focus on commercial applications versus fundamental understanding and safety, there are many aspects of AI risks that are still not well understood. The US government can work with a wide array of partners—other governments, industry, and academia—to better understand these risks and to seek collective solutions to mitigate them. Such fundamental research would help reduce the risks of using AI in a wide range of fields, including nuclear operations.
DISTRIBUTION STATEMENT A. Approved for public release: distribution unlimited.
- Pages: 66
- Document Number: IRM-2023-U-035284-Final
- Publication Date: 4/17/2023