For Immediate Release
Contact: Fiona Gettinger, Communications Associate
CNA Expert Speaks to U.N. Representatives About Lethal Autonomous Weapons
Geneva — At this week’s meeting of the U.N. Convention on Certain Conventional Weapons (CCW), Dr. Larry Lewis, director of the Center for Autonomy and Artificial Intelligence at CNA, warned that human intervention at the “trigger pull” will not eliminate the risks of autonomous weapons. The convention’s purpose is to “ban or restrict the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.” In recent years, the subject of lethal autonomous weapons systems (LAWS) has been a major focus of the CCW.
Lewis led an event at the convention titled “The Human-Machine Relationship: Lessons From Military Doctrine and Operations.” The event was organized by CNA and VU University Amsterdam and was attended by officials and diplomats, including the ambassador from the Netherlands. Lewis was joined by Merel Ekelhof, a Ph.D. researcher at VU University Amsterdam, and U.S. Air Force Lt. Col. Matt King.
Over the past few years, a consensus has grown within the CCW that human control over the targeting process can address the risks posed by LAWS. Lewis argued that this approach is too narrow, because humans themselves are fallible.
To illustrate the point, he walked his audience through a military incident in which human errors in the targeting process led to civilian casualties: the 2010 airstrike in Uruzgan, Afghanistan. Military helicopters were ordered to strike a group of SUVs approaching a U.S. position because a Predator drone crew believed the vehicles posed an imminent threat. The crew failed to observe children in the vehicles, and the attack resulted in 23 civilian casualties.
While some groups have called for banning LAWS entirely, Lewis believes a ban would be a mistake. He suggests that those concerned about civilian casualties should set aside the image of “evil killer robots.” In fact, Lewis said, “You can actually create better humanitarian outcomes with AI.”
One area where AI could help the targeting process is in limiting unforeseen consequences of military action. For example, in 2017 a Coalition airstrike targeted a bridge crossing the Euphrates River. The strike caused no civilian casualties, but the bridge carried a main water pipeline, so destroying it cut off access to Raqqa’s water supply. Machine learning can improve and expedite the pattern-of-life analysis that could prevent such unintended consequences.
Lewis, who has spent more than 20 years at CNA providing analysis to the military on issues such as civilian casualties and fratricide, recently published a report titled “Redefining Human Control: Lessons from the Battlefield for Autonomous Weapons.” Both the report and the press release are available on CNA’s website.
CNA is a nonprofit research and analysis organization dedicated to developing actionable solutions to complex problems of national importance. With nearly 700 scientists, analysts and professional staff, CNA takes a real-world approach to gathering data. Its one-of-a-kind field program places analysts on carriers and military bases, in squad rooms and classrooms, and working side-by-side with a wide array of government decision-makers around the world. In addition to defense-related matters for the U.S. Department of the Navy, CNA’s research portfolio includes criminal justice, homeland security, energy security, water resources, enterprise systems and data analysis, and education.
Note to writers and editors: CNA is not an acronym and is correctly referenced as "CNA, a research organization in Arlington, VA."