Margarita Konaev

The role of artificial intelligence (AI) and autonomous systems in the war in Ukraine has attracted a great deal of attention in the media and from analysts tracking the use of tomorrow's technology in today's wars. Ukraine, with help from the United States, other North Atlantic Treaty Organization (NATO) partners, and a wide range of technology companies, is leveraging AI to continuously update its understanding of the battlefield, support decision-making, and gain an advantage in intelligence and operations. Less reliable information is available about Russia's use of AI and autonomous technologies. However, there is some evidence that Russian operators have tried using AI to enhance disinformation campaigns, while the Russian armed forces are extensively using loitering munitions to attack Ukrainian cities and block the Ukrainian military's counteroffensive.

Since Russia's full-scale invasion of Ukraine in February 2022, AI has also reportedly been deployed aboard drones to collect intelligence, carry out strikes, and process enemy battlefield communications, as well as in facial recognition, cyber defense, and other applications. In recent months, reports about AI on the battlefield have converged with widespread news coverage of breakthroughs in generative AI systems to create an impression that the technology is ubiquitous. A careful assessment of the topic, however, must acknowledge that AI is a relatively new technology that saw few battlefield deployments before the war in Ukraine; the scale and nature of AI deployment in this conflict is therefore unprecedented by default. Still, it is difficult to assess whether these applications and capabilities have been used only on a few occasions or deployed widely. It is also impossible to know, based on open-source materials, whether and what type of AI and autonomous technologies are being used in classified tasks and missions, and to what effect.

As such, it would be incorrect, or at least premature, to conclude that either Ukrainian or Russian forces are employing AI at scale. Rather, it is more likely that the use of AI and autonomous technologies in the war in Ukraine has been limited to certain use cases, tasks, and conditions. Ukraine has mobilized its impressive community of IT workers and software engineers to support the war effort, and many, if not most, of the country's drone companies and other AI startups are working closely with military units on the front lines. However, the more advanced capabilities—such as leveraging AI to collect, fuse, analyze, and exploit different types of commercial and classified data to enhance decision-making and guide targeting—have primarily been developed and deployed by US and allied forces positioned outside of Ukraine. These advanced capabilities are enabled by private sector companies that provide Ukraine and its allies with the data, equipment, and technological know-how to fight the Russian forces, while the companies gain operational experience and battlefield data to test and refine their products.

Although the war in Ukraine is showing how new technologies shape the battlefield in real time, it also highlights a longer-term trend of militaries around the world accelerating investment in the research and development of AI and autonomous technologies. Progress in these fields promises to reduce risk to deployed forces, minimize the cognitive and physical burden on warfighters, and significantly increase the speed of information processing, decision-making, and operations, among other advantages. Yet such technological breakthroughs, and the use of these applications and systems in contested environments, may also come with risks and costs—from ethical concerns about responsibility for the use of lethal force to the unexpected behavior of brittle and opaque systems.

Alongside the technological progress in AI and autonomous technologies, there is also a fast-growing body of research dedicated to studying the potential implications of these systems and capabilities for international security, strategic stability, and conflict dynamics. Reports about the use of AI and autonomous technologies in the war in Ukraine, although far from comprehensive, allow us to assess some of the hypotheses put forth in this literature. The remainder of this chapter proceeds in four parts. The first section reviews some of the existing arguments about the potential effects of AI and autonomous technologies on strategic stability and conflict dynamics. Drawing on open-source information, the second section offers a brief overview of the use of AI and autonomous technologies in the war in Ukraine. The third section considers how the use of AI and autonomous technologies in the war may have affected the risk of escalation from conventional war to the use of nuclear weapons and of the conflict spreading to other parties. The last section assesses how AI and autonomous technologies may affect strategic stability between the United States and Russia beyond the current Russo-Ukrainian war, focusing specifically on the potential role of confidence-building measures in minimizing the risk of inadvertent escalation.

Download full report

DISTRIBUTION STATEMENT A. Approved for public release: distribution unlimited.

Details

  • Pages: 34
  • Document Number: IOP-2023-U-036583-Final
  • Publication Date: 10/2/2023