
AI with AI

Episode 2.1: AI with AI: Quickly Followed by DARPA’s Counter-Balrog Challenge

Welcome to Version 2.0 of AI with AI! Dave starts off by trying to explain the weird podcast titles, and he plugs Andy's (@ai_ilachinski) and his own (@crypticnarwhal) Twitter accounts. Andy and Dave then get down to business, discussing Britain's "successful" trials of an AI system ("SAPIENT") that scans urban battlefields to identify enemy movements; the IEEE's launch of an ethics certification program for autonomous and intelligent systems; the U.S. Department of Energy's $218M investment in Quantum Information Science; and DARPA's announcement of the Subterranean Challenge, which seeks technologies to augment underground operations, and about which Dave makes a dire prediction of Tolkien proportions!

Andy and Dave then delve greedily and deeply into a series of topics on counter-AI. They start by discussing Dedrone, which has developed a capability to detect and track swarms of robots and drones. Researchers in Korea use an AI-enabled drone to herd flocks of birds, diverting them from designated airspace. Researchers at the University at Albany, working with GE, demonstrate the ability to attack object detectors (Faster Regional Convolutional Neural Networks, or Faster R-CNN) using imperceptible patches in the background of an image; and researchers at the Georgia Institute of Technology, working with Intel, announce ShapeShifter, a targeted physical adversarial attack on the Faster R-CNN object detectors found in state-of-the-art systems (such as the current generation of self-driving vehicles).
On the defensive side, Luca de Alfaro at the University of California, Santa Cruz, has published research on building neural networks with inherent resistance to adversarial attacks by reducing the networks' "local linearity." After a quick look at work from Google Research on simplifying and compacting neural networks for resource-constrained devices, without floating-point operations or multiplications, Andy recommends a paper on learning causality; August Cole's Angry Trident is the story of the week; Interpretable Machine Learning (by Molnar) is the book of the week, along with Pattern Classification by Duda, Hart, and Stork; and Cristopher Moore explores the limits of computation in a two-part video series.

CNA Office of Communications

John Stimpson, Communications Associate