Andy Ilachinski
Download full report

We are currently riding a fast-moving artificial intelligence (AI) wave, with reports of “breakthroughs” appearing almost daily. Yet it is important to remember that, since its inception in the 1950s, AI has evolved in fits and starts. A 1958 New York Times article (“New Navy Device Learns by Doing”) proclaimed that the Navy had an “embryo” of a “thinking machine” (called a “perceptron,” a precursor of today’s deep neural networks) “that it expects will be able to walk, talk, see, write, reproduce itself, and be conscious of its existence.” These grand expectations were soon dashed, however, and the first of two “AI winters” descended after fundamental limitations on what perceptrons can achieve in practice were discovered. A second wave of research followed in the 1980s, spurred by more capable neural networks and new methods such as reinforcement learning and rule-based expert systems (ES). But after a string of early successes (including “AIs” that defeated a human checkers champion and played backgammon at human-champion level), the second “AI winter” set in, this time spanning roughly a decade, from the mid-1990s through 2006. It was fueled mainly by a dearth of sufficiently powerful learning rules and computing power for neural networks, and by a lack of sustained progress in ES. A third wave of AI research, launched in 2006 and still underway, was spurred by a confluence of three factors: the exponentially falling cost of digital storage, coupled with exponential growth in available data; a new generation of fast learning algorithms for multilayer neural networks; and exponential growth in computing power, especially that of graphics processing units (GPUs), which speed up learning even more.

A plethora of new methods and “AI successes” have appeared in recent years, none more prominent than AlphaGo, a Go-playing AI developed by Google’s DeepMind, which defeated 18-time world champion Lee Sedol in March 2016. This was a landmark event because the number of possible positions in Go is so vast (far beyond that of chess) that it defies almost any practical measure of complexity. Indeed, prior to AlphaGo’s victory, most AI experts believed that no AI would defeat a highly ranked human Go player for another 15–20 years. As remarkable as that event was, less than 20 months later an improved version (AlphaGo Zero), which required no human player data at all and learned entirely from a few days of self-play, defeated AlphaGo one hundred games to zero! And in October 2019, DeepMind’s AlphaStar achieved a grandmaster rating in StarCraft II (which, unlike chess or Go, presents roughly 10^26 possible actions to choose from at any moment). Demonstrably, the pace of discovery and innovation in AI is accelerating.

Yet just as demonstrably, such otherwise laudable “successes” mask an Achilles heel that afflicts many of today’s state-of-the-art AIs and portends a larger set of major technical challenges: namely, their “black-box-like” impenetrability and brittleness. They are “impenetrable” because once they make a “decision,” it is generally hard, if not impossible, to determine the reason(s) why they made it. Efforts to develop self-explaining AI systems are underway but nascent at best (and are likely to prove as difficult and elusive as “explainable humans”). But how does one trust a system that cannot be understood? AI systems are also generally “brittle” because, even when performing at superhuman levels at games like Go or StarCraft II, they flail when the conditions under which they were trained are changed even slightly (by, say, adding a single new row or column to a Go board). While such “flailing-until-retrained” behavior is of little consequence for game-playing AIs in research labs, it has already proven disastrous in real-world settings (e.g., in 2016, a Tesla autopilot that had failed to recognize a white truck against a bright sky, an “environmental exemplar” that was not in the system’s training set, crashed into the truck and killed the driver). A major and thus far unsolved challenge for state-of-the-art AI systems is an inherent fragility, or vulnerability to adversarial attacks, in which minor changes to images, text documents, or even sound waves (small enough to be imperceptible to humans) cause the AI to fail, with possibly catastrophic consequences.
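As a concrete illustration of this fragility, the sketch below implements the fast gradient sign method (FGSM), one of the simplest and best-known adversarial attacks, against a toy NumPy “classifier.” It is an illustrative sketch only, not an example from the report: the one-layer model, its weights, and the “image” are all invented, and a real attack would target a trained deep network.

```python
# Illustrative sketch: the fast gradient sign method (FGSM) against a toy,
# made-up logistic-regression "image classifier" built with NumPy only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "trained" binary classifier over a flattened 8x8 image.
w = rng.normal(size=64)   # invented weights
b = 0.1                   # invented bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # Probability that the input belongs to class 1.
    return sigmoid(w @ x + b)

def loss_grad_wrt_input(x, y_true):
    # Gradient of the binary cross-entropy loss with respect to the
    # *input* pixels: dL/dx = (p - y_true) * w for this linear model.
    return (predict(x) - y_true) * w

# A clean "image" and the label the model currently assigns to it.
x_clean = rng.normal(size=64)
y_label = 1.0 if predict(x_clean) >= 0.5 else 0.0

# FGSM: nudge every pixel by epsilon in the direction that increases the loss,
# pushing the prediction toward the opposite class.
epsilon = 0.25
x_adv = x_clean + epsilon * np.sign(loss_grad_wrt_input(x_clean, y_label))

print("clean prediction:      ", predict(x_clean))
print("adversarial prediction:", predict(x_adv))
```

In this linear toy, each pixel moves by at most epsilon, yet the model’s logit shifts by epsilon times the sum of the absolute weights, a quantity that grows with the input dimension. The same effect, compounded in high-dimensional deep networks, is why pixel-level noise that is imperceptible to a human can flip a classifier’s output.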

Today’s AI is “narrow AI,” not “artificial general intelligence” (or AGI) that matches or exceeds human understanding of, and performance on, general intellectual tasks (e.g., the fictional HAL 9000 in 2001: A Space Odyssey). While true AGI is, at best, decades off (and may, indeed, never materialize), AGI is, regrettably, how today’s narrow AI is often erroneously perceived. The truth is that today’s state-of-the-art AIs are far from panaceas for general problems.1 Narrow AI performs well (often at superhuman levels) only on well-defined, focused tasks and applications (e.g., speech recognition, image classification, and game playing). The more ill-defined and “messy” the problem (think: real-world military operations, with all their myriad entwined layers of complexity), the more difficult, and far riskier, the application of state-of-the-art AI and machine learning (ML) techniques.


Distribution: Approved for Public Release; Distribution Unlimited. Request additional copies of this document through inquiries@cna.org.

Details

  • Pages: 150
  • Document Number: DOP-2020-U-028073-Final
  • Publication Date: 10/1/2020