Larry Lewis and Andrew Ilachinski
Download full report

Countries around the world have taken early steps to incorporate artificial intelligence (AI) into military capabilities. Although militaries are seeking to leverage AI for greater effectiveness and efficiency, the idea of adapting AI to military applications has also created considerable controversy. Many concerns have been voiced, including potential bias and lack of fairness, as well as the need to maintain human judgment and responsibility in engagement decisions. That said, the chief concern in international discussions is whether military applications of AI could be inherently indiscriminate, unable to differentiate between valid military targets and civilians.

One way to answer this question is to look at specific military applications of AI, including autonomous systems, and examine both technical and operational considerations for how risks to civilians may arise and how they can be mitigated. For example, several presentations during the United Nations Convention on Certain Conventional Weapons meetings on lethal autonomous weapon systems featured examples of autonomous systems that could be used for military warfighting tasks in ways that complied with international law and did not represent an indiscriminate hazard to civilians. Similarly, a previous CNA report (AI Safety: An Action Plan) considered some additional military warfighting applications of AI and how risks to civilians from those applications could be minimized through both operational and technical mitigation steps.

Those discussions, however, address only one half of the twofold responsibility for civilian protection found in international humanitarian law (IHL): the negative responsibility that militaries should not direct attacks against civilians. The affirmative responsibility for militaries to take all feasible precautions to protect civilians from harm has been relatively neglected. With regard to AI and autonomy, states should not only be asking how they can meet their negative responsibility of ensuring that AI applications are not indiscriminate in warfare. They should also be asking: How can we use AI to protect civilians from harm? And how can AI be used to lessen the suffering, injury, and destruction of war?

This report represents a concrete first step toward answering these questions. We begin by framing the problems that lead to civilian harm. AI is a tool for solving problems; before we can understand how this tool should be used, we need to understand the problems to be solved. What problems need to be solved to better protect civilians or otherwise promote IHL's principle of humanity? Although the imperative to avoid civilian harm is universally acknowledged, the specific mechanisms by which such harm occurs have never been characterized in detail. How does civilian harm occur?

After synthesizing our body of work on civilian harm, including analysis of several thousand real-world incidents of civilian harm from military operations, we answer this question, presenting a framework that illustrates how civilian harm occurs. We then discuss how civilian harm can be mitigated, introducing a civilian protection life cycle that represents a comprehensive approach to mitigating harm. We also discuss examples of specific mitigation steps, illustrating the kinds of actions that are possible for meeting the goal of civilian harm mitigation.

We then present a model approach for identifying opportunities where AI could be used to help address the problem of civilian harm, using the civilian protection life cycle to illustrate potential actions. We find many opportunities for AI applications across the life cycle. This abundance of potential applications should not be surprising, because the problem of civilian harm can be viewed as a microcosm of the actions, behaviors, and policies associated with the much larger military operational space.

We also discuss specific potential applications of AI that address risk factors we have observed in real-world operations, leveraging techniques that exist today and in many cases have already been applied to other problems. Although no solution will eliminate the problem of civilian harm (military operations will always carry a non-zero risk to civilians), AI can be used to help address the patterns of harm we observe and reduce the likelihood of harm. We then discuss potential areas of focus that states could prioritize to reduce risks to civilians overall.


DISTRIBUTION STATEMENT A. Approved for public release: distribution unlimited.

Details

  • Pages: 82
  • Document Number: DOP-2021-U-030953-2Rev
  • Publication Date: 2/3/2022