
With the impending development of lethal autonomous weapon systems (LAWS), observers agree on one thing: autonomy will revolutionize warfighting. Beyond that, the various parties disagree on how to address the risks of inadvertent engagement by weapons that can make their own decisions. Nongovernmental organizations are particularly worried about civilian casualties, while military leaders harbor additional concerns about friendly fire.

In four years of meetings on this topic by the Convention on Certain Conventional Weapons (CCW), two patterns have emerged. One is a tendency to analyze risk through a largely theoretical framework, perhaps because the novelty of autonomous weapons makes it difficult to imagine how real-world lessons from conventional warfare would apply. The second is a widespread belief that the panacea for the risks of autonomy is human control, particularly over the final engagement decision: the trigger pull.

Drawing upon CNA’s long history of analyzing military operations, we apply empirical analysis of recent battlefield experience to identify patterns and draw insights into protecting civilians from LAWS. Those lessons suggest that a narrow, trigger-pull approach will fail to adequately shield civilians. Fortunately, the same empirical insights point toward a more comprehensive solution: a safety net woven from best practices in targeting, policy, and testing, together with consideration of operational context. Instead of emphasizing process considerations such as human control, this broader approach focuses on outcomes, namely the mitigation of inadvertent engagements.

Mitigating human fallibility

One clear lesson from recent military experience is that human judgment during the trigger-pull decision is not perfect. Misidentification accounted for about half of all U.S.-caused civilian casualties in Afghanistan, and specific examples are painfully abundant. In an area of daily attacks on coalition forces, girls using sickles to cut grass were misidentified as men with weapons. A sniper in the aftermath of a firefight mistook a farmer in a ditch for a combatant. And a helicopter crew thought it was preventing an expected attack when it took aim at a convoy carrying women and children. Soldiers themselves also fell victim to misidentification: in major combat operations in Iraq in 2003, for example, 17 percent of U.S. casualties resulted from fratricide. The fallibility of human judgment in real-world operations suggests that requiring a human in the loop for trigger-pull decisions will not eliminate the risks to civilians.

A more successful approach has been to reduce the number of decisions operators must make in the heat of the moment by front-loading critical tasks earlier in the wider targeting process. Beginning in 2009, the International Security Assistance Force in Afghanistan modified its policies and procedures to reduce the risk to civilians. Operational planning began to consider risk factors for civilian casualties more effectively. One example was a focus on pattern-of-life determinations, in which forces used intelligence and reconnaissance data to establish a baseline of normal civilian activity. An analysis of available data suggests that these mitigation efforts were a win-win, reducing civilian casualties at no apparent cost to mission effectiveness. In a similar vein, the CCW will find that the most effective exercise of human control over autonomous weapons takes place across the entire targeting process. At the same time, the CCW should also consider the role of autonomous technologies that contribute to targeting without making the final engagement decision.

Building the safety net

But even with a broader approach that includes the wider targeting process, the CCW risks missing important elements of a safety net against civilian casualties. CCW discussions of LAWS have often not addressed the question of how weapons are used. This context, consisting of both the operational environment and the mission, needs to be a key part of the evaluation process for the development and use of LAWS. For example, in some environments civilians will rarely be encountered or can be easily identified, including the underwater and air-defense domains. Another element of context is self-defense: should LAWS be treated differently when they act in defense of humans? One possibility is a crawl-walk-run approach, in which lethal autonomy is first pursued in less demanding missions and environments, where civilian casualties are less likely.

Many members of the CCW have also focused their attention on International Humanitarian Law (IHL) and the requirement that any use of autonomous weapons comply with it. While compliance is necessary, a cautionary tale about elevating rules above outcomes can be found in the early results from self-driving cars. A recent study found that self-driving cars were five times more likely to be in a crash than conventional vehicles. Yet the autonomous vehicles were never at fault; they strictly followed the rules of the road. What they did not do was anticipate the bad driving of humans. Just as the necessary outcome for self-driving cars is not rule-following but fewer crashes, a key desired outcome for autonomous weapons is the avoidance of inadvertent casualties. Designing systems to follow a set of rules, even IHL, is necessary but not sufficient for a comprehensive safety net against this risk.


DISTRIBUTION STATEMENT A. Approved for public release: distribution unlimited.

Details

  • Pages: 44
  • Document Number: DOP-2018-U-017258-Final
  • Publication Date: 3/1/2018