News Release

April 12, 2018

For Immediate Release
Contact: Fiona Gettinger, Communications Associate
gettingerf@cna.org, 703-824-2388

CNA Expert Speaks to U.N. Representatives About Lethal Autonomous Weapons

Geneva — At this week’s meeting of the U.N. Convention on Certain Conventional Weapons (CCW), Dr. Larry Lewis, director of the Center for Autonomy and Artificial Intelligence at CNA, warned that human intervention at the “trigger pull” will not eliminate the risks of autonomous weapons. The convention’s purpose is to “ban or restrict the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.” In recent years, the subject of lethal autonomous weapons systems (LAWS) has been a major focus of the CCW.

Lewis led an event at the convention titled “The Human-Machine Relationship: Lessons From Military Doctrine and Operations.” The event, organized by CNA and the University of Amsterdam, was attended by officials and diplomats, including the ambassador from the Netherlands. Lewis was joined by Merel Ekelhof, a Ph.D. researcher at VU University Amsterdam, and U.S. Air Force Lt. Col. Matt King.

Over the past few years, a consensus has grown within the CCW that human control over the targeting process is the solution to the risks posed by LAWS. Lewis argued that this approach is too narrow, because humans themselves are fallible.

To illustrate this, he guided his audience through a military incident in which humans made mistakes in the targeting process, with deadly results for civilians. In a 2010 incident in Uruzgan, Afghanistan, military helicopters were ordered to strike a group of SUVs approaching a U.S. position because a Predator drone crew believed they were an imminent threat. The drone’s crew failed to observe children in the vehicles, and the attack resulted in 23 civilian casualties.

While some groups have also discussed banning LAWS entirely, Lewis believes this would be a mistake. He suggested that those concerned about civilian casualties should rethink their image of “evil killer robots.” In fact, Lewis said, “You can actually create better humanitarian outcomes with AI.”

One area where AI could help with the targeting process is in limiting unforeseen consequences of military action. For example, in 2017 a Coalition airstrike targeted a bridge crossing the Euphrates River. The strike caused no civilian casualties, but the bridge carried a main water pipeline, so destroying it cut off access to Raqqa’s water supply. Machine learning can improve and expedite the pattern-of-life analysis that could prevent such unforeseen consequences.

Lewis, who has spent more than 20 years at CNA providing analysis to the military on issues such as civilian casualties and fratricide, recently published a report titled “Redefining Human Control: Lessons from the Battlefield for Autonomous Weapons.” Both the report and its accompanying press release are available on CNA’s website.

