
AI with AI

AI with AI explores the latest breakthroughs in artificial intelligence and autonomy, as well as their military implications. Join experts Andy Ilachinski and David Broyles as they explain the latest developments in this rapidly evolving field.

The views expressed here are those of the commentators and do not necessarily reflect the views of CNA or any of its sponsors. Episodes are recorded a week prior to release, and new episodes are released every Friday.

Episode 13

Jan 19, 2018

Andy and Dave discuss a newly announced method of attack on the speech-to-text capability DeepSpeech, which introduces noise to an audio waveform so that the AI does not hear the original message, but instead hears a message that the attacker intends. They also discuss the introduction of probabilistic models to AI as a way for AI to "embrace uncertainty" and make better decisions (or perhaps doubt whether or not humans should remain alive). And finally, Andy and Dave discuss some recent applications of AI to different areas of scientific study, particularly in the examination of very large data sets.
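The attack described above works by optimizing a small perturbation of the audio waveform until the model transcribes the attacker's message instead of the original. A minimal sketch of the underlying idea, using a toy linear two-class model in place of the real speech model (the actual attack optimizes against DeepSpeech's CTC loss on audio; everything here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 8))         # frozen toy "model": logits = W @ x
x = rng.normal(size=8)              # clean input (stands in for the waveform)

orig = int(np.argmax(W @ x))        # what the model "hears" without noise
target = 1 - orig                   # what the attacker wants it to hear

# For a linear model, the smallest perturbation that flips the decision
# lies along g = W[target] - W[orig]; scale it just past the boundary.
g = W[target] - W[orig]
gap = (W[orig] - W[target]) @ x     # current margin of the original class
delta = (gap + 1e-3) * g / (g @ g)  # small, targeted "noise"

adv = x + delta
print(int(np.argmax(W @ adv)) == target)          # model now hears the target
print(np.linalg.norm(delta) / np.linalg.norm(x))  # noise size relative to input
```

For a linear model the minimal targeted perturbation has this closed form; against a deep network like DeepSpeech the same quantity is found iteratively by gradient descent on the input.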



From images to voice

AI systems that doubt themselves: AI will make better decisions by embracing uncertainty

AI for science


Paul Scharre’s testimony before the House Armed Services Subcommittee on Emerging Threats and Capabilities (9 Jan 2018): China’s Pursuit of Emerging and Exponential Technologies. Watch clip. Transcript.

The documentary about Google DeepMind's 'AlphaGo' algorithm is now available on Netflix

Episode 12

Jan 12, 2018

Andy and Dave discuss “Tacotron 2,” the latest text-to-speech capability from Google that produces results nearly indistinguishable from human speech. They also discuss efforts at Google to create a Neural Image Assessment (NIMA), which can not only evaluate the quality of an image but also be trained to rate the aesthetics of an image (as defined by the user). And after a look at some of the AI predictions for 2018, they play a musical game with two pieces of music: can Andy guess which piece Dave wrote, and which the AI composer AIVA, the Artificial Intelligence Virtual Artist, wrote?


Episode 11

Jan 5, 2018

It’s a smorgasbord of topics, as Andy and Dave discuss: the “AI 100” top companies report; the implications of Google’s new AI Research Center in Beijing; a workshop from the National Academy of Science and the Intelligence Community Studies Board on the challenges of machine generation of analytic products from multi-source data; Ethically Aligned Design and the IEEE; Quantum Computing; and finally, some Kasparov-related materials.



CB Insights (market analysis firm): AI 100: The Artificial Intelligence Startups Redefining Industries

Google Opens an AI Research Center In Beijing

(Workshop) National Academy of Science / Intelligence Community Studies Board - Challenges in Machine Generation of Analytic Products from Multi-Source Data

Ethically Aligned Design (EAD) – IEEE – toward a global, multilingual collaboration

Quantum Computing + machine learning: A Startup Uses Quantum Computing to Boost Machine Learning

  • Related: IZBM announces 50-Qubit quantum computer on 10 Nov; caveat (as for all state-of-the-art q-computers: the quantum state is preserved for 90 microseconds—a record for the industry, but still an extremely short period of time. IBM Raises the Bar with a 50-Qubit Quantum Computer

Microsoft releases a preview of a “Quantum Development Kit” (integrated with Visual Studio) (Video)


Kasparov on Deep Learning in chess

Episode 10

Dec 29, 2017

Andy and Dave continue their discussion on the 31st Annual Conference on Neural Information Processing Systems (NIPS), covering Sokoban, chemical reactions, and a variety of video disentanglement and recognition capabilities. They also discuss a number of breakthroughs in medicine that involve artificial intelligence: a robot passing a medical licensing exam, an algorithm that can diagnose pneumonia better than expert radiologists, a venture between GE Healthcare and NVIDIA to tap into volumes of unrealized medical data, and deep-brain stimulation. Finally, for reading material and reference, Andy recommends a technical lecture on reinforcement learning, as well as two books on robot ethics.



NASA announcement on Dec. 14

Follow-up on AlphaGo (by DeepMind): AlphaGo Teach

31st Annual Conference on Neural Information Processing Systems (NIPS)

Imagination-Augmented Agents for Deep Reinforcement Learning

(IBM) Predicting outcomes of chemical reactions (Video)

DrNET: Unsupervised Learning of Disentangled Representations from Video


Several Milestones in Artificial Intelligence Were Just Reached in Medicine


Rich Sutton ("Father" of reinforcement learning, Department of Computing Science, University of Alberta) – 1.5hr technical lecture on a reinforcement-learning technique called temporal-difference learning (TDL)
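Temporal-difference learning, the subject of Sutton's lecture, updates a value estimate after every step toward a "bootstrapped" target (immediate reward plus the discounted estimate of the next state), rather than waiting for an episode's final outcome. A minimal TD(0) sketch on the classic five-state random walk (the environment and constants here are illustrative, not taken from the lecture):

```python
import numpy as np

rng = np.random.default_rng(1)
V = np.zeros(7)          # states 0..6; 0 and 6 are terminal (value stays 0)
alpha, gamma = 0.1, 1.0  # step size and discount

for _ in range(5000):
    s = 3                               # every episode starts in the middle
    while s not in (0, 6):
        s2 = s + rng.choice((-1, 1))    # random walk: step left or right
        r = 1.0 if s2 == 6 else 0.0     # reward only at the right terminal
        # TD(0) update: nudge V(s) toward the bootstrapped target
        V[s] += alpha * (r + gamma * V[s2] - V[s])
        s = s2

print(np.round(V[1:6], 2))  # settles near the true values 1/6 .. 5/6
```

Under a random policy the true values of states 1 through 5 are 1/6 through 5/6, and the TD(0) estimates fluctuate around them; the same one-step update underlies much of modern reinforcement learning.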


Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, edited by Patrick Lin, Keith Abney, and Ryan Jenkins, Oxford University Press

Towards a Code of Ethics for Artificial Intelligence, Paula Boddington, Springer-Verlag

Episode 9

Dec 22, 2017

After some brief speculation on the announcement from NASA (which took place as this podcast was being recorded), and a quick review of AlphaGo Teach, Andy and Dave discuss the 31st Annual Conference on Neural Information Processing Systems (NIPS). With over 8,000 attendees, 7 invited speakers, and seminar and poster sessions, NIPS provides insight into the latest and greatest developments in deep learning, neural nets, and related fields.


Episode 8

Dec 15, 2017

Andy and Dave discuss how DeepMind's AI continues to bust through the record books while AlphaZero takes one step closer to world domination (of all board games). After a brief discussion on protein folding, they discuss the "AI Index," which seeks to measure the evolution and advances in AI over time.


Episode 7

Dec 8, 2017

Andy and Dave discuss a market analysis report that identifies where the Department of Defense is spending money in artificial intelligence, big data, and the cloud. They also elaborate on the challenge of "catastrophic forgetting," and a 4-year program at DARPA that seeks to develop "Lifelong Learning Machines," which can continuously apply the results of past experiences. After a conversation about SquishedNets, they cover a Harvard research paper that asserts the need for AI to have explanatory capabilities and accountability.
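Catastrophic forgetting, which the DARPA Lifelong Learning Machines program targets, shows up even in a one-parameter model: fitting task B by plain gradient descent erases what the model learned for task A, because nothing in the update anchors the old weights. A deliberately tiny, hypothetical sketch:

```python
# One weight, two tasks: plain sequential gradient descent "forgets" task A.
def sgd(w, pairs, lr=0.1, epochs=100):
    for _ in range(epochs):
        for x, y in pairs:
            w -= lr * 2 * x * (w * x - y)   # gradient step on (w*x - y)^2
    return w

task_a = [(1.0, 2.0), (2.0, 4.0)]    # task A: y = 2x
task_b = [(1.0, -1.0), (2.0, -2.0)]  # task B: y = -x

w = sgd(0.0, task_a)
err_a_before = sum((w * x - y) ** 2 for x, y in task_a)  # near zero
w = sgd(w, task_b)                   # naive sequential training on task B
err_a_after = sum((w * x - y) ** 2 for x, y in task_a)   # now large
print(err_a_before < 1e-6, err_a_after > 1.0)  # prints: True True
```

Lifelong-learning approaches such as elastic weight consolidation counter this by penalizing movement of the weights that earlier tasks depended on.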


Episode 6a & 6b

Nov 24, 2017

Dr. Larry Lewis joins Andy and Dave to discuss the U.N. Convention on Conventional Weapons, which met in mid-November with a "mandate to discuss" the topic of lethal autonomous weapons. Larry provides an overview of the group's purpose, the group’s schedule and discussions, the mood and reaction of various parts of the group, and what the next steps might be.



November 13-17 meeting of the Convention on Conventional Weapons (CCW) Group of Governmental Experts (GGE) on lethal autonomous weapons systems (86 countries)

22 countries now support a prohibition, with Brazil, Iraq, and Uganda joining the list of ban endorsers during the GGE meeting. Cuba, Egypt, Pakistan, and other states that support the call to ban fully autonomous weapons also forcefully reiterated the urgent need for a prohibition.

States will take a final decision on the CCW’s future on this challenge, including 2018 meeting duration and dates, at the CCW’s annual meeting on Friday, 24 November.

2017 Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS) – links

Group of Governmental Experts on Lethal Autonomous Weapons Systems (links to docs)

Recaps of the UN CCW meetings Nov 13 – 17 (by Autonomous Weapons):

  • “The vast majority of CCW high contracting parties participating in this meeting do want concrete action. The majority of those want a legally binding instrument, while others prefer—at least for now—a political declaration or other voluntary arrangements. However, China, Japan, Latvia, Republic of Korea, Russia, and the United States made it clear that they do not want to consider tangible outcomes at this time.”
  • Autonomous Weapons recap
  • Stop Killer Robots recap


Slaughterbots – Future of Life Institute “mini movie” on why autonomous weapons ought to be banned (postscript by Stuart Russell, AI researcher):

Related: In Aug 2017, Elon Musk led 116 AI experts in an open letter calling for a ban on killer robots. Read.

Episode 5

Nov 17, 2017

Andy and Dave discuss the recent meeting in Geneva of the UN Convention on Conventional Weapons, which convened to lay the groundwork for discussing the role of lethal autonomous weapons. They also discuss a new technique, called Capsule Networks, that aims to improve recognition of an object despite changes in its spatial orientation. Andy and Dave conclude with a discussion of why fruit flies are so awesome.


Episode 4

Nov 10, 2017

Andy and Dave discuss MIT efforts to create a tool to train AIs, in this case, using another AI to provide the training. They discuss efforts to crack the "cocktail party" dilemma of picking out individual voices in a noisy room, as well as an AI that can "upres" photographs with remarkable use of texture (that is, take a lower-resolution photo and enlarge it in a realistic way). Finally, they discuss the latest MIT Tech Review magazine, which focused on AI.


Episode 3

Nov 10, 2017

Andy and Dave follow up on the discussion of AlphaGo Zero and the never-before-seen patterns of play that the AI discovered, and the implications of such discoveries (which seem to be the "norm" for AI). They also discuss Google's AutoML project, which applies machine learning to help improve machine learning.


Episode 2

Nov 3, 2017

Andy and Dave discuss the late-breaking news of AlphaGo Zero, a new iteration of the Go-playing AI, which surpassed its predecessor in about 3 days of learning, using only the basic rules of Go (as opposed to the 6+ months of training for the original, which used thousands of human games as examples).



AlphaGo Zero beats AlphaGo 100-0 after 3 days of training (compared to several months for the original AlphaGo), and without any human intervention or human game-play data! Read: Technology Review and Nature


AlphaGo Documentary - Local screening in Reston, VA

Episode 1

Nov 3, 2017

In the inaugural podcast for AI with AI, Andy provides an overview of his recent report on AI, Robots, and Swarms, and discusses the bigger picture of the development and breakthroughs in artificial intelligence and autonomy. Andy also discusses some of his recommended books and movies.





  • When Will AI Exceed Human Performance? - Survey of 352 experts who had published at recent AI conferences (Oxford, Yale, and the Future of Life Institute)
  • AI Progress Measurement - Measuring the Progress of AI Research
  • New Theory Cracks Open the Black Box of Deep Learning - Tishby / information bottleneck
  • "The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts." - analogy to renormalization (as used in statistical physics), may lead to better understanding and new architectures
  • Forget Killer Robots—Bias Is the Real AI Danger - Technology Review (MIT)
  • An AI developed at Vanderbilt University in Tennessee to identify cases of colon cancer from patients’ electronic records performed well at first, but it was discovered that the AI had "learned" to associate confirmed cases with the specific clinic to which those patients were sent.
  • Counterargument by Peter Norvig (Google's AI research director and co-author of the standard text Artificial Intelligence: A Modern Approach)
  • "Since humans are not very good at explaining their decision-making either...the performance of an AI system could be gauged simply by observing its outputs over time"
  • If these AI bots can master the world of StarCraft, they might be able to master the world of humans (Artificial Intelligence and Interactive Digital Entertainment (AIIDE) StarCraft AI Competition at Memorial University in Newfoundland)
  • "StarCraft [is] complex enough to be a good simulation of real life...It's like playing soccer while playing chess."