Larry Lewis

History is replete with examples of technology being leveraged for military advantage. The chariot, the crossbow, gunpowder, and nuclear weapons all brought revolutionary effects to the battlefield. Many observers believe Artificial Intelligence (AI) will have the same or greater effects on warfare. It is not surprising, then, that nations are planning to take advantage of this technology, and a growing number of countries now make AI central to their national security strategies.

As the US pursues AI, its plans and efforts to leverage the technology for military applications have encountered significant concerns. For example, Google announced that it would no longer support the Department of Defense’s (DOD’s) Project Maven, and some parties have urged a pre-emptive ban on lethal autonomous weapon systems, one possible application of AI to warfare. Central to these concerns are fears about the safety of military AI applications. The issue of safety is commonly raised in news coverage: when the media discusses military applications of AI, it often uses scenes and language from the movie The Terminator to capture the ubiquitous fear of machines running amok, endangering humans, and possibly ending the human race.

Hollywood depictions aside, it is important not to lose sight of DOD’s strong commitment to safety as a standard practice, including setting rigorous requirements, conducting test and evaluation, and performing legal reviews to ensure that weapons comply with international law. Consistent with this stance, safety holds a primary place in the DOD AI strategy, with one of the four lines of effort being AI ethics and safety.

That said, in current DOD AI efforts and in public discourse about DOD AI applications, safety can appear to be the forgotten stepchild of ethics. Ethics as a term is used to cover the broad set of concerns regarding military use of AI. The DOD’s Defense Innovation Board is developing a set of ethical principles to guide its use of AI, and the Joint AI Center has discussed the need for ethicists on its staff. We note that both are important steps for the responsible use of AI. The military’s use of AI must be consistent with American values and principles.

While there is a strong focus on ethics, issues of AI safety—including negative operational outcomes such as civilian casualties, fratricide, and accidental escalation leading to international instability and conflict—are also highly relevant when considering concerns over military use of AI. An ethics focus will not necessarily address these issues: the appropriate set of experts for an ethics discussion is substantively different from the expertise needed for pursuing AI safety. This report shows that such a safety discussion will need to be more technical and operational in nature. Given this distinction, where is safety in DOD’s initial steps? While recognizing both AI safety and ethics as key parts of its strategy, DOD has not yet taken the same kind of concrete institutional steps to reinforce the goal of AI safety as it has for AI ethics.

The non-existent threat of Terminators aside, AI safety is a conversation that is vital for DOD to have. We note that DOD has strong policies and practices in place for safety when incorporating new technologies. At the same time, there are also strategic reasons to pursue the topic of safety specifically for AI. These reasons include the following:

  • AI is fundamentally different from other technologies in its abilities, requirements, and risks.
  • DOD is operating in a different environment with AI where cooperation with industry is vital and safety is a particular concern in that relationship.
  • There is a need for appropriate trust in this new technology, which requires considering and managing safety risks and avoiding the two extremes of overly trusting the technology and eschewing its use.
  • To maximize the asymmetric advantage of alliances in an era of great power competition, it is important to align AI efforts and be interoperable with allies, including addressing their concerns about AI safety.

In this report, in light of the Navy’s stated commitment to using AI, and given the strategic importance of the issue of safety, we provide the Navy with a first step toward a more comprehensive approach to AI safety. We use a risk management approach to frame our treatment of AI safety risks: identifying risks, analyzing them, and then suggesting concrete actions for the Navy to begin addressing them. This includes two sets of safety risks: those associated with the technology of AI in general, and those associated with specific military applications of AI in the form of autonomy and decision aids. The first type of safety risk, being technical in nature, will require a collaborative effort with industry and academia to address effectively. The second type of risk, being associated with specific military missions, can be addressed with a combination of military experimentation, research, and concept development to find ways to promote effectiveness along with safety. For each type of risk, we use an example (bias for the first type and autonomy for the second) to show concrete ways of managing and reducing the risk of AI applications. We then discuss institutional changes that would help promote safety in the Navy’s AI efforts.

Download full report

DISTRIBUTION STATEMENT A. Approved for public release: distribution unlimited. 10/22/2019

Details

  • Pages: 54
  • Document Number: DOP-2019-U-021957-1Rev
  • Publication Date: 10/22/2019