Checklist Advances the Ethical Use of Artificial Intelligence
CNA has introduced a process designed to identify and mitigate risks from the military use of artificial intelligence. The independent research organization has published a checklist for ethical autonomy.
The process could be used to implement Department of Defense policy on ethical artificial intelligence. It consists of 565 questions to be answered by technology developers and commanders before they develop or deploy intelligent autonomous systems. Each question is based on a risk that has already been identified — including risks raised by organizations opposed to the military use of autonomy.
Here are five examples of the 565 questions:
- Can the intelligent autonomous system recognize symbols that designate persons and objects protected from the use of force, such as a Red Cross or Red Crescent?
- Can the system, in the event that command and control is lost, abort its mission and return to base, or destroy itself?
- Did the test and evaluation procedure utilize Red Team attacks specifically designed to drive the system into inappropriate behaviors?
- Have all on-scene commanders, engineers, acquisition officials and supported troops received training in the Law of Armed Conflict, intelligent autonomous system ethics principles and current rules of engagement?
- Has the mission been analyzed to verify that no transfer of command and control over the system can occur without specific authorization by the person(s) designated to be accountable for its use?
If the answer to any question indicates an increased level of risk, the next step is to find an alternative, mitigate the risk, or determine whether the decision-maker is willing to accept responsibility for that risk.
“The DOD established five principles for ethical AI,” says the project leader, CNA Principal Research Scientist Michael Stumborg. “The risk assessment checklist we compiled supports those who must carry these principles into practice as they develop and acquire intelligent autonomous systems and deploy them on the battlefield.”
This CNA independent research effort is a complement to the strategic goals of the Naval Intelligent Autonomous Systems Strategy: exploring pathways to develop and field capabilities based on intelligent autonomous systems that preserve and maximize warfighting effectiveness while conforming to law, policy and ethical principles.
The complete report, “Dimensions of Autonomous Decision-Making: A First Step in Transforming the Policies and Ethics Principles Regarding Autonomous Systems into Practical System Engineering Requirements,” is available at CNA.org.
CNA is a nonprofit research and analysis organization dedicated to the safety and security of the nation. It operates the Center for Naval Analyses — the only Federally Funded Research and Development Center (FFRDC) serving the Department of the Navy — as well as the Institute for Public Research. CNA is dedicated to developing actionable solutions to complex problems of national importance. With nearly 700 scientists, analysts and professional staff, CNA takes a real-world approach to gathering data. Its one-of-a-kind field program places analysts on carriers and military bases, in squad rooms and crisis centers, working side-by-side with operators and decision-makers around the world. CNA supports naval operations, fleet readiness and great power competition. Its non-defense research portfolio includes criminal justice, homeland security and data management.
Note to writers and editors: CNA is not an acronym and is correctly referenced as "CNA, a research organization in Arlington, VA."