
AI with AI

AI with AI explores the latest breakthroughs in artificial intelligence and autonomy, as well as their military implications. Join experts Andy Ilachinski and David Broyles as they explain the latest developments in this rapidly evolving field.

The views expressed here are those of the commentators and do not necessarily reflect the views of CNA or any of its sponsors. Episodes are recorded a week prior to release, and new episodes are released every Friday.

Episode 18

Feb 23, 2018

In another smattering of topics, Andy and Dave discuss the latest insight into the dispersion of global AI start-ups, as well as AI talent. They also describe a commercially available drone that can navigate landscapes and obstacles as it tracks a target. And they discuss an AI algorithm with “social skills” that can teach humans how to collaborate. After chatbots and Deep TAMER, Andy and Dave discuss a few recent videos, including one about door-opening dogs; and Dave has a meltdown as he fails to recall The Day the Earth Stood Still, instead substituting a different celestial body. Klaatu barada nikto.


Breaking News

Artificial Intelligence Trends To Watch In 2018

Follow-up to podcast #17 – DroNet: Learning to Fly by Driving

Topics

Tencent says there are only 300,000 AI engineers worldwide, but millions are needed

AI algorithm with ‘social skills’ teaches humans how to collaborate: Cooperating with machines

Human-machine collaborative chatbot, Evorus

Deep TAMER: Interactive Agent Shaping in High-Dimensional State Spaces

(Google/DeepMind) IMPALA: Scalable Distributed DeepRL in DMLab-30

Books

Stanislaw Lem short story: “The Upside-Down Evolution”

Artificial Intelligence and Games, by Georgios N. Yannakakis and Julian Togelius, 2018 (hardcopy)

Video

Intel's Winter Olympics 1,218-Drone Light Show

Boston Dynamics crosses new threshold with door-opening dog (SpotMini)


Episode 17

Feb 16, 2018

Andy and Dave start this week’s episode with a superconducting ‘synapse’ that could enable powerful future neuromorphic supercomputers. They discuss an attempt to use AI to decode the mysterious Voynich manuscript, and then move on to Hofstadter’s take on the shallowness of Google Translate (with mention of the ELIZA effect). After discussing DroNet’s drones that can learn to fly by watching a driving video, and updating the Domain-Adaptive Meta-Learning discussion where a robot can learn a task by watching a video, they close with some recommendations of videos and books, including Lem’s ‘Golem XIV.’



Episode 16a & 16b

Feb 9, 2018

Andy and Dave welcome back Larry Lewis, the Director for CNA's Center for Autonomy and Artificial Intelligence, and welcome Merel Ekelhof, a Ph.D. candidate at VU University Amsterdam and visiting scholar at Harvard Law School. Over the course of this two-part series, the group discusses the idea of "meaningful human control" in the context of the military targeting process, the increasing role of autonomous technologies (and that autonomy is not simply an issue "at the boom"), and the potential directions for future meetings of the U.N. Convention on Certain Weapons.



Episode 15

Feb 2, 2018

Andy and Dave discuss two recent AI announcements that employ generative adversarial networks: an AI algorithm that can crack classic encryption ciphers (without prior knowledge of English), and an AI algorithm that can "draw" (generate) an image based on simple text instructions. They start, however, with a discussion on the recent rash of autonomous (and semi-autonomous) vehicle incidents, and they also discuss "brain-on-a-chip" hardware, as well as a robot that can learn to do tasks by watching video.


Breaking News

Tesla ‘on Autopilot’ slams into parked fire truck on California freeway

People Keep Confusing Their Teslas for Self-Driving Cars

Waze unable to explain how car ended up in Lake Champlain

Tesla Bears Some Blame for Self-Driving Crash Death, Feds Say

Tesla Autopilot crash caught on dashcam shows how not to use the system

Topics

(Google/University of Toronto) AI code decryption

(MIT) Artificial synapse created for "brain-on-a-chip" hardware

(Microsoft) Text-to-Image Generation – AI that draws what it is instructed to draw

(Google/University of Southern California) Robot learning from video

Miscellaneous Links

Point / Counterpoint "debate" on Slaughterbots discussed in podcast #5 – recall that Slaughterbots is Future of Life Institute’s "mini movie" on why autonomous weapons ought to be banned


Episode 14

Jan 26, 2018

Andy and Dave cover a series of topics that connect with the broader "meta" questions about the role and nature of AI. They begin with Google's Cloud AutoML announcement, which offers ways to more easily build your own AI. They discuss the announcement of AIs that "defeated" humans on a Stanford University reading comprehension test, and the misrepresentation of that achievement. They discuss deep image reconstruction, with a neural net that "reads minds" by piecing together images from a human's visual cortex. And they close with discussions of Gary Marcus's recent article, which offers a critical appraisal of deep learning, and a recent paper suggesting that convolutional neural nets may not be as good at "grasping" higher-level abstract concepts as is typically believed.


Breaking News

Google announces Cloud AutoML

Topics

AI has "defeated" humans on a Stanford University reading comprehension test

Deep image reconstruction: Japanese-designed NN can "read minds"

Gary Marcus (NYU Professor and Founder of Uber-owned ML startup Geometric Intelligence) publishes Deep Learning: A Critical Appraisal

Video

Artificial intelligence debate at New York University between Yann LeCun and Gary Marcus: Does AI Need More Innate Machinery? (2 hrs)


Episode 13

Jan 19, 2018

Andy and Dave discuss a newly announced method of attack on the speech-to-text capability DeepSpeech, which introduces noise to an audio waveform so that the AI does not hear the original message, but instead hears a message that the attacker intends. They also discuss the introduction of probabilistic models to AI as a way for AI to "embrace uncertainty" and make better decisions (or perhaps doubt whether or not humans should remain alive). And finally, Andy and Dave discuss some recent applications of AI to different areas of scientific study, particularly in the examination of very large data sets.
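Conceptually, the attack optimizes a small additive perturbation so that the model's output matches an attacker-chosen target. The loop can be sketched on a toy model; everything below is an illustrative assumption (a linear "classifier" stands in for the full speech-to-text network, and all names and numbers are made up), not the published attack itself:

```python
import numpy as np

# Toy sketch of a targeted adversarial perturbation. A linear "model" stands in
# for a real speech-to-text network like DeepSpeech; the real attack optimizes
# through the whole network.
rng = np.random.default_rng(0)
w = rng.normal(size=64)          # fixed "model": sign(w . x) is its output "label"
x = rng.normal(size=64)          # the original "waveform"
target = -np.sign(w @ x)         # the attacker wants the opposite output

delta = np.zeros_like(x)         # additive "noise" to be optimized
for _ in range(500):
    if target * (w @ (x + delta)) > 1.0:   # target reached with a margin
        break
    delta += 0.01 * target * w   # gradient step toward the target output

assert np.sign(w @ (x + delta)) == target  # the model now "hears" the target
print(np.abs(delta).max())       # the per-sample perturbation stays modest
```

The essential point the episode makes survives even in this toy: a perturbation that is small relative to the signal can nonetheless be steered, by gradient descent, to produce whatever output the attacker intends.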


Topics

From images to voice

AI systems that doubt themselves: AI will make better decisions by embracing uncertainty

AI for science

Video

Paul Scharre’s testimony before the House Armed Services Subcommittee on Emerging Threats and Capabilities (9 Jan 2018): China’s Pursuit of Emerging and Exponential Technologies. Watch clip. Transcript.

The documentary about Google DeepMind's 'AlphaGo' algorithm is now available on Netflix


Episode 12

Jan 12, 2018

Andy and Dave discuss “Tacotron 2,” the latest text-to-speech capability from Google that produces results nearly indistinguishable from human speech. They also discuss efforts at Google to create a Neural Image Assessment (NIMA), that not only can evaluate the quality of an image, but can also be trained to rate the aesthetics (as defined by the user) of an image. And after a look at some of the AI predictions for 2018, they play a musical game with two pieces of music – can Andy guess which piece Dave wrote, and which the AI composer AIVA, the Artificial Intelligence Virtual Artist, wrote?



Episode 11

Jan 5, 2018

It’s a smorgasbord of topics, as Andy and Dave discuss: the “AI 100” top companies report; the implications of Google’s new AI Research Center in Beijing; a workshop from the National Academy of Science and the Intelligence Community Studies Board on the challenges of machine generation of analytic products from multi-source data; Ethically Aligned Design and the IEEE; Quantum Computing; and finally, some Kasparov-related materials.


Topics

CB Insights (market analysis firm): AI 100: The Artificial Intelligence Startups Redefining Industries

Google Opens an AI Research Center In Beijing

(Workshop) National Academy of Science / Intelligence Community Studies Board - Challenges in Machine Generation of Analytic Products from Multi-Source Data

Ethically Aligned Design (EAD) – IEEE – toward a global, multilingual collaboration

Quantum Computing + machine learning: A Startup Uses Quantum Computing to Boost Machine Learning

  • Related: IBM announces 50-qubit quantum computer on 10 Nov; caveat (as for all state-of-the-art quantum computers): the quantum state is preserved for 90 microseconds, a record for the industry but still an extremely short period of time. IBM Raises the Bar with a 50-Qubit Quantum Computer

Microsoft releases a (preview of a) “Quantum Development Kit” (integrates with Visual Studio) (Video)

Book/Video

Kasparov on Deep Learning in chess


Episode 10

Dec 29, 2017

Andy and Dave continue their discussion on the 31st Annual Conference on Neural Information Processing Systems (NIPS), covering Sokoban, chemical reactions, and a variety of video disentanglement and recognition capabilities. They also discuss a number of breakthroughs in medicine that involve artificial intelligence: a robot passing a medical licensing exam, an algorithm that can diagnose pneumonia better than expert radiologists, a venture between GE Healthcare and NVIDIA to tap into volumes of unrealized medical data, and deep-brain stimulation. Finally, for reading material and reference, Andy recommends a technical lecture on reinforcement learning, as well as two books on robot ethics.


Topics

NASA announcement on Dec. 14

Follow-up on AlphaGo (by DeepMind): AlphaGo Teach

31st Annual Conference on Neural Information Processing Systems (NIPS)

Imagination-Augmented Agents for Deep Reinforcement Learning

(IBM) Predicting outcomes of chemical reactions (Video)

DrNET: Unsupervised Learning of Disentangled Representations from Video

NASNet

Several Milestones in Artificial Intelligence Were Just Reached in Medicine

Video

Rich Sutton ("Father" of reinforcement learning, Department of Computing Science, University of Alberta) – 1.5hr technical lecture on a reinforcement-learning technique called temporal-difference learning (TDL)
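For listeners unfamiliar with the technique, temporal-difference learning updates a value estimate toward a one-step bootstrapped target rather than waiting for an episode's final outcome. A minimal TD(0) sketch on the classic five-state random walk (the example, parameters, and function name are our illustrative assumptions, not taken from the lecture):

```python
import random

def td0_random_walk(episodes=5000, alpha=0.1, gamma=1.0, seed=0):
    """TD(0) value estimation on a 5-state random walk: states 0..4, terminal
    exits on both ends; reward +1 for exiting right, 0 for exiting left."""
    rng = random.Random(seed)
    n = 5
    V = [0.5] * n  # initial value estimates for the non-terminal states
    for _ in range(episodes):
        s = n // 2  # each episode starts in the middle state
        while True:
            s_next = s + (1 if rng.random() < 0.5 else -1)
            if s_next < 0:        # exit left: reward 0, terminal value 0
                target = 0.0
            elif s_next >= n:     # exit right: reward 1, terminal value 0
                target = 1.0
            else:                 # non-terminal: bootstrap from V[s_next]
                target = gamma * V[s_next]
            # TD(0) update: nudge V[s] toward the one-step bootstrapped target
            V[s] += alpha * (target - V[s])
            if s_next < 0 or s_next >= n:
                break
            s = s_next
    return V

values = td0_random_walk()
print([round(v, 2) for v in values])  # true values are 1/6, 2/6, ..., 5/6
```

With a constant step size the estimates fluctuate around the true values rather than converging exactly, which is the usual trade-off Sutton discusses between tracking and convergence.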

Books

Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, edited by Patrick Lin, Keith Abney, and Ryan Jenkins, Oxford University Press

Towards a Code of Ethics for Artificial Intelligence, Paula Boddington, Springer-Verlag


Episode 9

Dec 22, 2017

After some brief speculation on the announcement from NASA (which was being held at the same time as this podcast was recorded), and a quick review of AlphaGo Teach, Andy and Dave discuss the 31st Annual Conference on Neural Information Processing Systems (NIPS). With over 8,000 attendees, 7 invited speakers, and seminar and poster sessions, NIPS provides insight into the latest and greatest developments in deep learning, neural nets, and related fields.



Episode 8

Dec 15, 2017

Andy and Dave discuss how DeepMind's AI continues to bust through the record books while AlphaZero takes one step closer to world domination (of all board games). After a brief discussion on protein folding, they discuss the "AI Index," which seeks to measure the evolution and advances in AI over time.



Episode 7

Dec 8, 2017

Andy and Dave discuss a market analysis report that identifies where the Department of Defense is spending money in artificial intelligence, big data, and the cloud. They also elaborate on the challenge of "catastrophic forgetting," and a 4-year program at DARPA that seeks to develop "Lifelong Learning Machines," which can continuously apply the results of past experiences. After a conversation about SquishedNets, they cover a Harvard research paper that asserts the need for AI to have explanatory capabilities and accountability.



Episode 6a & 6b

Nov 24, 2017

Dr. Larry Lewis joins Andy and Dave to discuss the U.N. Convention on Conventional Weapons, which met in mid-November with a "mandate to discuss" the topic of lethal autonomous weapons. Larry provides an overview of the group's purpose, the group’s schedule and discussions, the mood and reaction of various parts of the group, and what the next steps might be.


Topics

November 13-17 meeting of the Convention on Conventional Weapons (CCW) Group of Governmental Experts (GGE) on lethal autonomous weapons systems (86 countries)

22 countries now support a prohibition, with Brazil, Iraq, and Uganda joining the list of ban endorsers during the GGE meeting. Cuba, Egypt, Pakistan, and other states that support the call to ban fully autonomous weapons also forcefully reiterated the urgent need for a prohibition.

States will take a final decision on the CCW’s future work on this challenge, including 2018 meeting duration/dates, at the CCW’s annual meeting on Friday, 24 November.

2017 Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS)/links

Group of Governmental Experts on Lethal Autonomous Weapons Systems (links to docs)

Recaps of the UN CCW meetings Nov 13 – 17 (by Autonomous Weapons):

  • “The vast majority of CCW high contracting parties participating in this meeting do want concrete action. The majority of those want a legally binding instrument, while others prefer—at least for now—a political declaration or other voluntary arrangements. However, China, Japan, Latvia, Republic of Korea, Russia, and the United States made it clear that they do not want to consider tangible outcomes at this time.”
  • Autonomous Weapons recap
  • Stop Killer Robots recap

Video

Slaughterbots – Future of Life Institute “mini movie” on why autonomous weapons ought to be banned (postscript by Stuart Russell, AI researcher):

Related: In Aug 2017, Elon Musk led 116 AI experts in an open letter calling for a ban on killer robots. Read.


Episode 5

Nov 17, 2017

Andy and Dave discuss the recent U.N. Convention on Conventional Weapons meeting in Geneva, which laid the groundwork for discussing the role of lethal autonomous weapons. They also discuss a new technique, called Capsule Networks, that aims to improve recognition of objects despite changes in spatial orientation. Andy and Dave conclude with a discussion of why fruit flies are so awesome.



Episode 4

Nov 10, 2017

Andy and Dave discuss MIT efforts to create a tool to train AIs, in this case, using another AI to provide the training. They discuss efforts to crack the "cocktail party" problem of picking out individual voices in a noisy room, as well as an AI that can "upres" photographs with remarkable use of texture (that is, taking a lower-resolution photo and enlarging it in a realistic way). Finally, they discuss the latest MIT Tech Review magazine, which focused on AI.



Episode 3

Nov 10, 2017

Andy and Dave follow up on the discussion of AlphaGo Zero and the never-before-seen patterns of play that the AI discovered, and the implications of such discoveries (which seem to be the "norm" for AI). They also discuss Google's AutoML project, which applies machine learning to help improve machine learning.



Episode 2

Nov 3, 2017

Andy and Dave discuss the late-breaking news of AlphaGo Zero, a new iteration of the Go playing AI, which surpassed its predecessor AI in about 3 days of learning, using only the basic rules of Go (as opposed to the 6+ months of the original, using thousands of games as examples).


Topics

AlphaGo Zero beats AlphaGo 100-0 after 3 days of training (compared to several months for the original AlphaGo), without any human intervention or human game-play data! Read: Technology Review and Nature

Video

AlphaGo Documentary - Local screening in Reston, VA


Episode 1

Nov 3, 2017

In the inaugural podcast for AI with AI, Andy provides an overview of his recent report on AI, Robots, and Swarms, and discusses the bigger picture of the development and breakthroughs in artificial intelligence and autonomy. Andy also discusses some of his recommended books and movies.


Books

Movies & TV

AI/general

  • When Will AI Exceed Human Performance? - Survey of 352 experts who had published at recent AI conferences (Oxford, Yale, and the Future of Life Institute)
  • AI Progress Measurement - Measuring the Progress of AI Research
  • New Theory Cracks Open the Black Box of Deep Learning - Tishby / information bottleneck
  • "The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts." - analogy to renormalization (as used in statistical physics), may lead to better understanding and new architectures
  • Forget Killer Robots—Bias Is the Real AI Danger - Technology Review (MIT)
  • An AI developed at Vanderbilt University in Tennessee to identify cases of colon cancer from patients’ electronic records performed well – at first – but it was discovered that the AI "learned" to associate confirmed cases with the specific clinic to which those patients were sent.
  • Counterargument (by Peter Norvig: Google's AI research director, co-author of standard text: Artificial Intelligence: A Modern Approach)
  • "Since humans are not very good at explaining their decision-making either...the performance of an AI system could be gauged simply by observing its outputs over time"
  • If these AI bots can master the world of StarCraft, they might be able to master the world of humans (Artificial Intelligence and Interactive Digital Entertainment (AIIDE) StarCraft AI Competition at Memorial University in Newfoundland)
  • "StarCraft [is] complex enough to be a good simulation of real life...It's like playing soccer while playing chess."
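The bottleneck analogy quoted above has a precise formulation. Tishby's information-bottleneck objective seeks a compressed representation \(T\) of the input \(X\) that remains maximally informative about the label \(Y\):

```latex
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```

where \(I(\cdot\,;\cdot)\) denotes mutual information and the multiplier \(\beta\) trades compression (small \(I(X;T)\)) against predictive power (large \(I(T;Y)\)).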

AI/military