AI with AI

AI with AI explores the latest breakthroughs in artificial intelligence and autonomy, as well as their military implications. Join experts Andy Ilachinski and David Broyles as they explain the latest developments in this rapidly evolving field.

The views expressed here are those of the commentators and do not necessarily reflect the views of CNA or any of its sponsors. Episodes are recorded a week prior to release, and new episodes are released every Friday. Recording and engineering provided by Jonathan Harris.

Season 3

Episode 3.4

November 15, 2019

Facebook announces the Deepfake Detection Challenge, a rolling contest to develop technology to detect deepfakes. The US Senate passes the Deepfake Report Act, bipartisan legislation to understand the risks posed by deepfake videos. And US Representatives Hurd and Kelly announce a new initiative to develop a bipartisan national AI strategy with the Bipartisan Policy Center. In research, AI allows a paralyzed person to “handwrite” using his mind. From the University of Grenoble, a paralyzed man is able to walk using a brain-controlled exoskeleton. From the Moscow Institute of Physics and Technology, researchers use a neural network to reconstruct human thoughts from brain waves in real time using electroencephalography. A report from Elsa Kania and Sam Bendett looks at technology collaborations between Russia and China in A New Sino-Russian High-Tech Partnership. In another response to the National Security Commission on AI, Margarita Konaev publishes With AI, We’ll See Faster Fights, But Longer Wars in War on the Rocks. James, Witten, Hastie, and Tibshirani release An Introduction to Statistical Learning. The Open Science Framework makes THINGS available, an object concept and object image database of nearly 14 GB, with over 1,800 object concepts and more than 26,000 naturalistic object images. And finally, Janelle Shane explains why the danger of AI is Weirder Than You Think.

for related materials.


Deepfake Detection Challenge (DFDC) – Initial Dataset Released
Senate Passes the Deepfake Report Act
New AI Initiative with the Bipartisan Policy Center (BPC)

2018 Whitepaper ("Rise of the Machines: Artificial Intelligence and its Growing Impact on U.S. Policy"):

Bipartisan Policy Center (BPC)


AI allows paralyzed person to ‘handwrite’ with his mind
Paralyzed man walks again with brain-controlled exoskeleton
Neural network reconstructs human thoughts from brain waves in real time

Reports of the Week

A New Sino-Russian High-Tech Partnership

Latest Response to Call for Ideas of National Security Commission on AI

Free Book of the Week

An Introduction to Statistical Learning with Applications in R

Resource of the Week

THINGS object concept and object image database

Videos of the Week

The Danger of AI Is Weirder Than You Think – TED Talk

Episode 3.3

November 8, 2019

In news items, Microsoft wins the bid for the Pentagon’s $10B Joint Enterprise Defense Infrastructure (JEDI) contract. DARPA’s Spectrum Collaboration Challenge (SC2), which aimed to create devices that work together to optimize spectrum use, names GatorWings (from the University of Florida) as the winner. A report from the Stanford University Institute for Human-Centered AI calls for the US Government to invest $120B in the nation’s AI ecosystem over the next 10 years. And CSET provides a translation of Russia’s National AI Strategy. In research, Google announces Quantum Supremacy: Sycamore, its 53-qubit computer, takes 200 seconds to perform a calculation that Google claims a classical computer “cannot” (estimating it would take 10,000 years). In response, IBM postulates that a classical computer could take advantage of hard drive space to do the calculation in a couple of days. In reports, the Center for Security and Emerging Technology (CSET) publishes an examination of China’s Access to Foreign AI Technology, particularly noting that China’s “copycat” reputation oversimplifies its indigenous science and technology capacity and ability to innovate. Geist and Blumenthal from RAND pen “Military Deception: AI’s Killer App” for War on the Rocks, in response to the National Security Commission on AI’s call for ideas. Stuart Russell releases Human Compatible, in which he describes his approach to avoiding the threat of superhuman AI destroying civilization, built on machines that maintain inherent uncertainty about the human preferences they are required to satisfy. For resources, Nikola Plesa provides a centralized list of the biggest datasets available for machine learning. And “Bosstown Dynamics” by Corridor Digital provides a humorous look at military robots.


Episode 3.2

November 1, 2019

Andy and Dave discuss the AI-related supplemental report to the President’s Budget Request. The California governor signs a bill banning facial recognition use by the state’s law enforcement agencies. The 2019 Association of the US Army meeting focuses on AI. A DoD panel discussion explores the Promise and Risk of the AI Revolution. And the 3rd Annual DoD AI Industry Day will be 13 November in Silver Spring, MD. Researchers at the University of Edinburgh, the University of Cambridge, and Leiden University announce using a deep neural network to solve the chaotic 3-body problem, providing accurate solutions up to 100 million times faster than a state-of-the-art solver. Research from MIT uses a convolutional neural network to recover or recreate probable ensembles of dimensionally collapsed information (such as a video collapsed into a single image). Kate Crawford and Meredith Whittaker take a look at 2019 and the Growing Pushback Against Harmful AI. Air University Press releases AI, China, Russia, and the Global Order, edited by Nicholas Wright, with contributions from numerous authors, including Elsa Kania and Sam Bendett. Michael Stumborg from CNA pens a response to the National Security Commission’s request for ideas, on AI’s Long Data Tail. Deisenroth, Faisal, and Ong make their Mathematics for Machine Learning available. Melanie Mitchell pens AI: A Guide for Thinking Humans. An article in the New Yorker by John Seabrook examines the role of AI/ML in writing, with The Next Word. And the Allen Institute for AI updates its Semantic Scholar, which now includes more than 175 million scientific papers across even more fields of research.



Supplemental Report to the President’s Budget Request: AI Related
CA Governor Signs Bill Banning Facial Recognition Use By State's Law Enforcement Agencies
2019 AUSA Warriors Corner - AI
DoD Panel Discussion: Promise and Risk of the AI Revolution
3rd Annual DoD AI Industry Day


Newton vs the machine: solving the chaotic 3-body problem using deep neural networks
Visual Deprojection: Probabilistic Recovery of Collapsed Dimensions
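The 100-million-fold speedup claimed in the 3-body paper above is measured against classical numerical integration. As a toy illustration of what such a solver does (a deliberately crude sketch with invented values, not the paper’s integrator or network), a fixed-step Euler scheme for Newtonian gravity looks like this:

```python
def step(bodies, dt=0.01, g=1.0):
    """Advance a list of (mass, (x, y), (vx, vy)) bodies one Euler step.

    Real solvers use adaptive, high-order integrators; millions of such
    steps per trajectory are what a trained network can short-circuit.
    """
    accs = []
    for i, (mi, pi, vi) in enumerate(bodies):
        ax = ay = 0.0
        for j, (mj, pj, vj) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = pj[0] - pi[0], pj[1] - pi[1]
            r3 = (dx * dx + dy * dy) ** 1.5
            ax += g * mj * dx / r3  # Newtonian attraction toward body j
            ay += g * mj * dy / r3
        accs.append((ax, ay))
    return [(m, (p[0] + v[0] * dt, p[1] + v[1] * dt),
                (v[0] + a[0] * dt, v[1] + a[1] * dt))
            for (m, p, v), a in zip(bodies, accs)]

# Two equal masses on a symmetric orbit (illustrative numbers only).
bodies = [(1.0, (-1.0, 0.0), (0.0, 0.1)),
          (1.0, (1.0, 0.0), (0.0, -0.1))]
bodies = step(bodies)  # one step; total momentum remains (0, 0)
```

Because the pairwise forces are equal and opposite, total momentum is conserved exactly even by this crude scheme, which makes it a handy sanity check.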

Reports – and Videos – of the Week

AI in 2019: A Year in Review - The Growing Pushback Against Harmful AI


Artificial Intelligence, China, Russia, and the Global Order

 (291 page) Report

Response to the “National Security Commission on AI Request for Ideas” of the Week

See You in a Month: AI’s Long Data Tail

Free Book of the Week

Mathematics for Machine Learning

Newly Released Book of the Week

Artificial Intelligence: A Guide for Thinking Humans

Mainstream Article of the Week

The Next Word: Where will predictive text take us?

Resources of the Week

Allen Institute for AI - Semantic Scholar (Update)

Episode 3.1

October 25, 2019

Welcome to Season 3.0! Andy and Dave discuss the AI in Advancement Advisory Council’s State of AI Advancement report, which takes a look at the impact of AI on roles within advancement. Researchers at Fudan and the Changchun Institute of Optics announce a 500 MP camera (with associated cloud-powered AI) capable of identifying a face among tens of thousands. The U.S. National Science Foundation announces the National AI Research Institutes program, which anticipates approving $120M in grants next year. A recent solicitation from the Defense Innovation Unit seeks to understand trends in world events. And the JAIC has a new website. In research, OpenAI announces Dactyl, a robot hand capable of solving a Rubik’s Cube, as part of an effort to build a general-purpose robot (transferring learning from simulation to the real world) that is robust to perturbations such as broken fingers or intrusions by plush giraffes. Research accepted to ICLR 2020 demonstrates the application of deep learning to symbolic mathematics. Dan Gettinger of Bard College publishes The Drone Databook, cataloging the drones of 101 countries. The Carnegie Endowment for International Peace takes a look at the origins of AI surveillance technology in use around the globe. The Oliver Wyman Forum measures Global Cities’ AI Readiness, and Oxford Insights updates its Government AI Readiness Index. Arthur I. Miller publishes The Artist in the Machine, while Marcus du Sautoy takes a look at The Creativity Code: Art and Innovation in the Age of AI. Lex Fridman and Gary Marcus have a discussion on AI. And Alexa will soon channel the voice of Samuel L. Jackson.



Announcements / News
AAAC Issues First State of Artificial Intelligence in Advancement Report

(27 page) Report:

China’s New 500MP ‘Super Camera’ Can Identify a Face Among Tens of Thousands

New NSF Program: National Artificial Intelligence Research Institutes


Program Solicitation:

Recent Defense Innovation Unit (DIU) Solicitation

CSO summary:

Joint AI Center (JAIC) has a New Website

Solving Rubik’s Cube with a Robot Hand

Nontechnical summary:

(51 page) Technical paper:

OpenAI’s summary (with visualizations):

(4 min) Video Demo:

Other project videos:

Skynet Today: OpenAI's dexterous robotic hand — separating progress from PR

Deep Learning For Symbolic Mathematics

Report of the Week
The Drone Databook

 (353 page) Report:

Survey of the Week
The Global Expansion of AI Surveillance

 (42 page) Report:

Full Index:

AI-Readiness Reports of the Week
Global Cities' AI Readiness Index

Government Artificial Intelligence Readiness Index 2019

Books of the Week
The Artist in The Machine – by Arthur I. Miller

The Creativity Code: Art and Innovation in the Age of AI – by Marcus du Sautoy

Video of the Week
Discussion Between Lex Fridman and Gary Marcus

 (1.5 hr) Video:

Fun Fact of the Week
Samuel L. Jackson to Invade Your Home As the New Voice of Amazon’s Alexa

Season 2

Episode 2.43B

October 18, 2019

This week, Microsoft Research and the University of Montreal show that machines can learn through interactive language by answering questions (question answering with interactive text, or QAit). The Allen Institute for AI’s Aristo system, a suite of eight solvers, can pass (90%+) the New York 8th Grade Regents science exams (for non-diagram, multiple-choice questions), and can exceed 83% on the 12th grade exam, though Melanie Mitchell suggests the achievement may not be as profound as it seems. A “meta-research” paper from Milan and Klagenfurt takes a broader look at neural network research and highlights concerns of reproducibility (or lack thereof) as well as utility (or lack thereof, since simple heuristic methods can outperform the neural networks). From a workshop organized by Max Tegmark and Emilia Javorsky, a group of diverse authors produces a “possibility of a middle road” look at roadmapping a way ahead for autonomous weapon systems. An opinion piece from Zachary Kallenborn on War on the Rocks looks at What If the US Military Neglects AI? A paper in Nature provides an overview of open-ended evolution as a part of artificial life. Gary Marcus and Ernest Davis publish a book, Rebooting AI: Building AI We Can Trust. The 57th Annual Meeting of the Association for Computational Linguistics occurred at the end of July, and Kate Koidan provides a summary of the top trends. The IEEE ranks robot creepiness with the top 100 creepy robots. Booz Allen releases a documentary on the Dawn of Generation AI. And the Naval Facilities Engineering and Expeditionary Warfare Center (NAVFAC EXWC) will host an industry day conference on cyber, control systems, and machine learning in December.



Interactive Language Learning by Question Answering
From 'F' to 'A' on the N.Y. Regents Science Exams: An Overview of the Aristo Project

Meta-Research Papers of the Week

Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches
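The paper’s headline finding is that simple non-neural baselines often win. The simplest such baseline, recommending the globally most popular items to everyone, fits in a few lines; this is a hypothetical sketch with invented data, not the paper’s code:

```python
from collections import Counter

def top_popular(interactions, k=3):
    """Recommend the k items with the most interactions overall.

    `interactions` is a list of (user, item) pairs. The baseline ignores
    personalization entirely, yet analyses like the one above found this
    kind of heuristic competitive with several neural recommenders.
    """
    counts = Counter(item for _, item in interactions)
    return [item for item, _ in counts.most_common(k)]

interactions = [
    ("u1", "A"), ("u2", "A"), ("u3", "A"),  # item A: 3 interactions
    ("u1", "B"), ("u2", "B"),               # item B: 2 interactions
    ("u3", "C"),                            # item C: 1 interaction
]
print(top_popular(interactions, k=2))  # ['A', 'B']
```

Beating this baseline is the minimum bar a proposed neural recommender should clear in a properly tuned evaluation.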

Position Papers of the Week

Autonomous Weapon Systems: A Roadmapping Exercise
What if the US Military Neglects AI? AI Futures and US Incapacity

Survey Papers of the Week

Neural interfaces – 2019 report
An Overview of Open-Ended Evolution

Book of the Week

Rebooting AI: Building Artificial Intelligence We Can Trust

Conference “Catch Up” of the Week

57th Annual Meeting of the Association for Computational Linguistics (ACL)

Robot Creepiness of the Week

IEEE Ranks Robot Creepiness: Sophia Is Not Even Close to the Top


Full ranked list (224 robots) – page also includes lists for “top rated” and “most wanted”

Video of the Week

The Dawn of Generation AI

(50 min) Video

Upcoming Conferences

NAVFAC EXWC INDUSTRY DAY - Cyber, Control Systems and Machine Learning

Episode 2.43A

October 11, 2019

Andy and Dave discuss the U.S. Air Force’s recently released AI strategy. NATO releases a draft report on the implications of AI for NATO forces. A report collects 2,602 uses of AI for social good. And the California legislature bans facial recognition for police body cameras. In research, OpenAI takes a multi-agent game of hide-and-seek to 11, and discovers emergent tool use as the hiders and seekers try to gain advantages. Research from the Freie Universitat Berlin samples equilibrium states of many-body systems, using deep learning to speed up sampling calculations.



Announcements / News

Air Force releases 2019 Artificial Intelligence Strategy 
Artificial Intelligence: Implications for NATO’s Armed Forces – Revised Draft Report
2,602 uses of AI for social good, and what we learned from them 
California legislature bars facial recognition for police body cameras  


Emergent Tool Use from Multi-Agent Interaction
Boltzmann generators: Sampling equilibrium states of many-body systems with deep learning

Episode 2.42

October 4, 2019

Two special guests join Andy and Dave for a discussion about research in AI and autonomy. First, Dr. Andrea Gilli is a researcher at the NATO Defense College in Rome, where he works on defense innovation, military transformation, and armed forces modernization. And second, Ms. Zoe Stanley-Lockman is a fellow at the Maritime Security Programme of the Institute of Defence and Strategic Studies at the S. Rajaratnam School of International Studies in Singapore, where she is researching, among other things, the role of ethics in AI.



Andrea Gilli

“Andrea Gilli is an affiliate at CISAC and a Researcher at the NATO Defense College in Rome where he works on defense innovation, military transformation and armed forces modernization. Andrea holds a PhD in Social and Political Science from the European University Institute (EUI) in Florence. In 2015 he was awarded the European Defence Agency and Egmont Institute’s bi-annual prize for the best dissertation on European defense, security and strategy. Andrea has provided consulting services to both private and public organizations, including the EU Military Committee and the U.S. Department of Defence's Office of Net Assessment, and worked and conducted research for or been associated with several institutions, including the Royal United Services Institute, the European Union Institute for Security Studies, the Saltzman Institute for War and Peace Studies at Columbia University in New York, the Center for International Security and Cooperation at Stanford University and the Belfer Center for Science and International Affairs at the John F. Kennedy School of Government of Harvard University. Andrea’s research has been published or is forthcoming in International Security, Security Studies, The RUSI Journal, and Washington Post’s Monkey Cage.”

Zoe Stanley-Lockman

“Zoe Stanley-Lockman is an Associate Research Fellow in the Maritime Security Programme of the Institute of Defence and Strategic Studies (IDSS) at the S. Rajaratnam School of International Studies (RSIS). Previously she was a Visiting Fellow in the Military Transformation Programme at the RSIS. Zoe holds a Master’s degree in International Security with a concentration in Defence Economics from Sciences Po Paris and a Bachelor’s degree from Johns Hopkins University. Prior to joining the RSIS, she spent two years at the European Union Institute for Security Studies (EUISS), first as a Junior Analyst and then as the Institute’s Defence Data Research Assistant, researching defence-industrial issues, arms exports, innovation, and military capability development. Throughout her studies, Zoe’s practical experience included working on dual-use export controls with the US government and consulting for defence contractors.”

Episode 2.41

September 27, 2019

Andy and Dave discuss research from DeepMind, University College London, and Oxford, which shows that human mental replay spontaneously reorganizes experience, implied by abstract knowledge, and which further suggests AI could use this approach to learn and improve. In other research, adversarial triggers cause natural language processing algorithms (such as GPT-2) to generate incorrect sentiment analysis, or to generate racist output (even in non-racial contexts). And researchers from Dalian, Peng Cheng, and the City University of Hong Kong create a segmentation method for visual classifiers to identify and process mirrors and reflective surfaces, which may otherwise cause confusing results. FutureGrasp provides a report giving an overview of State initiatives in AI. An article in Nature examines the global landscape of AI ethics guidelines. Patrick Walker pens War Without Oversight: Challenges to the Deployment of Autonomous Weapon Systems. Springer Nature publishes “the first research book generated using machine learning,” on lithium-ion batteries. Henrik Saetra publishes The Ghost in the Machine, on what it means to be human in the age of AI/ML. The Alife 2019 conference provides open access to its 2019 proceedings. And Mackmyra Whisky announces the world’s first AI-created whisky.



Human Replay Spontaneously Reorganizes Experience
Universal Adversarial Triggers for Attacking and Analyzing NLP
Where Is My Mirror?

Report of the Week

AI: An Overview of State Initiatives

Survey Paper of the Week

The global landscape of AI ethics guidelines

(Human) Book of the Week

War Without Oversight: Challenges to the Deployment of Autonomous Weapon Systems

(AI/ML) Book of the Week

Lithium-Ion Batteries: A Machine-Generated Summary of Current Research

Opinion Essay on Being Human in an Age of AI/ML of the Week

The Ghost in the Machine

Conference Proceedings of the Week (Open-Access)

Alife 2019 Proceedings

Not-Entirely-Silly-AI-Silliness and Video of the Week

The world’s first AI-created whisky

Episode 2.40

September 20, 2019

Andy and Dave discuss the establishment of the Artificial Intelligence and Technology Office under the U.S. Department of Energy. DARPA announces Context Reasoning for Autonomous Teaming (CREATE), a new program to investigate teaming between groups of systems that have limited centralized coordination. Defense One and Nextgov sponsored a one-day “Genius Machines” conference in Hawaii, where it was revealed that AI is being developed to predict Chinese and Russian movement in the Pacific. MIT Lincoln Lab releases a large data set for public safety, which includes images of flooding and other disasters. And a video appears to show a Tesla driver asleep in a moving car. Finally, Russia expert Sam Bendett joins Andy and Dave to discuss his latest article in Defense One, on the draft of the Russian AI strategy.


Announcements / News

U.S. Secretary of Energy Rick Perry Stands Up Office for Artificial Intelligence and Technology
New DARPA Program: Context Reasoning for Autonomous Teaming (CREATE)
Genius Machines (#19): Great Powers and the Pacific Edge
Large Scale Organization and Inference of an Imagery Dataset for Public Safety
Video Appears to Show Tesla Driver Asleep in Moving Car

Sam Bendett’s Interview

Sneak Preview: First Draft of Russia’s AI Strategy

Episode 2.39

September 13, 2019

Andy and Dave discuss the Joint Artificial Intelligence Center's efforts to tackle deep fakes through DARPA's Media Forensics program, as well as the announcement that the JAIC's biggest project for FY20 will include "AI for maneuver and fires." Intel reveals its first AI chips, on the Nervana Neural Network Processor line, with one to train AI systems and another to handle inference. Cerebras Systems announces the world's largest chip, with 1.2 trillion transistors and 400,000 cores. A Russian Soyuz spacecraft docks with the International Space Station, carrying Roscosmos's Skybot F-850 humanoid robot. Researchers at Hong Kong University of S&T demonstrate an all-optical neural network for deep learning. Researchers at MIT and Tubingen identify four types of neuronal cells based on their electrical spiking activity. And a larger team of researchers, primarily based in China, unveil the Tianjic chip, a hybrid that combines computer science (with a binary focus) with neuroscience (with a neural burst and spike focus) on one chip. In the book of the week, K. Eric Drexler of Oxford publishes a large report on Reframing Superintelligence. An article from Melanie Mitchell in Popular Computing in 1985 seems hardly out of place in 2019 with its look at what people were predicting for the future. A report from PAX surveys the tech sector's stance on lethal autonomous weapons. The Intelligence Community Studies Board releases the proceedings of a workshop on Robust Machine Learning Algorithms and Systems for Detection and Mitigation of Adversarial Attacks and Anomalies. Jonathan Clifford pens a piece in War on the Rocks on how "AI will change war, but not in the way you think." In a video, Elon Musk and Jack Ma discuss AI at the World AI Conference in Shanghai. And the Australian Defence College will host a seminar on Science Fiction and the Future of War on 3 October 2019.


Announcements / News

Tackling Deepfakes: DARPA’s Media Forensics (MediFor) program
“AI for Maneuver and Fires” program will be JAIC’s “biggest project” in FY-20
Intel reveals first AI chips
TensorFlow Optimizations for the INTEL Xeon Scalable Processor
Cerebras Systems’ enormous chip w/1.2 trillion transistors to turbocharge AI applications
Russian humanoid robot makes its way to the International Space Station


Researchers demonstrate all-optical neural network for deep learning
Four Types of Brain Cells Identified Based on Electrical Spiking Activity
Towards artificial general intelligence with hybrid Tianjic chip architecture

Book of the Week

Reframing Superintelligence

“Classic” Hype

AI and the Popular Press (1985) – Melanie Mitchell

Report of the Week

Don’t be Evil? A Survey of the tech sector’s stance on lethal autonomous weapons

Book of the Week

Robust Machine Learning Algorithms and Systems for Detection and Mitigation of Adversarial Attacks and Anomalies: Proceedings of a Workshop

Opinion Piece of the Week

AI will change war, but not in the way you think

Video of the Week

Elon Musk and Jack Ma discuss AI at the World Artificial Intelligence Conference in Shanghai

Upcoming Conference

Science Fiction and the Future of War Seminar

Episode 2.38

September 6, 2019

Happy 100th Episode to AI with AI! Andy and Dave celebrate the 100th episode of the AI with AI podcast, starting with a new theme song, inspired by the Mega Man series of games. Andy and Dave take the time to look at the past two years of covering AI news and research, including at how the podcast has grown from the first season to the second season. They also take a look back at some of the recurring themes and favorite topics, including GPT2 and the Lottery Ticket hypothesis, among many others; they also look forward to (hopefully!) all the latest and greatest news to come. Throughout this episode, we hear from listeners, supporters, and colleagues who have appeared on the podcast. Here’s to another 100, and thanks for listening!


Andy’s Favorites

Technology « Robotics « Neuroscience/Neurobiology « Artificial NNs
ML applied to ML / dealing with larger (unspecifiable) computational space
Not solving a given (narrow) “problem,” per se, but in support of a larger enterprise
Pushing the “envelope” of basic theory of ML/AI
Probing fundamental limits


Episode 2.37B

August 30, 2019

Researchers at Berkeley, Washington, and Chicago identify “natural adversarial” examples that cause classifier accuracy to degrade significantly, likely due to an over-reliance on color, texture, and background cues. Andy and Dave then discuss a series of events following a Nature paper on the application of deep learning to aftershock patterns of earthquakes, wherein other researchers raised questions about the research (one demonstrating that a simple logistic regression does better; another showing that the original researchers included their test data set in their training data set). A new study by the Insurance Institute for Highway Safety shows that drivers overestimate the capability of vehicle automated systems, with Tesla’s Autopilot leading the rest in overestimation. Goodfellow, Bengio, and Courville publish their 800-page tome on Deep Learning. The Classic Paper of the Week comes from Pattie Maes and Rodney Brooks, who published Learning to Coordinate Behaviors in 1990. The video presentation of the octopus research makes the video of the week. NASA streams 24/7 with OUTERHELIOS, a neural network trained on Coltrane to produce non-stop free jazz (though the feed may now be “static only”).
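The second critique of the aftershock paper, test data appearing in the training data, is mechanical to check for when samples carry unique identifiers. A hypothetical sketch (event IDs invented for illustration):

```python
def leakage_report(train_ids, test_ids):
    """Count test examples that also appear in the training set.

    Any overlap inflates reported accuracy, which is the flaw critics
    found in the aftershock study's evaluation.
    """
    overlap = set(train_ids) & set(test_ids)
    frac = len(overlap) / len(test_ids) if test_ids else 0.0
    return len(overlap), frac

train = ["eq001", "eq002", "eq003", "eq004"]
test = ["eq003", "eq005"]
print(leakage_report(train, test))  # (1, 0.5) -- half the test set leaked
```

Running such a check before reporting results is cheap insurance against exactly this reproducibility failure.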

Click here to leave a testimonial for the 100th episode!


Episode 2.37A

August 24, 2019

The National Security Commission on AI solicits creative and original ideas to challenge the status quo assumptions on maintaining US global leadership in AI. Researchers at MIT and Colgate publish an engineering concept that would use superconducting nanowires to mimic artificial neurons in a way that would theoretically match the energy efficiency of brains. Microsoft invests $1B in OpenAI to create brain-like machines. A proposed bill would prohibit the use of facial recognition technology for all public housing units that receive funding from the Department of Housing and Urban Development. Researchers at the University of Washington, Seattle demonstrate that octopuses’ arms are capable of making decisions without input from their brains, with more than 350 million of their 500 million neurons located in the arms. Google DeepMind uses a generative adversarial model to generate fictional videos with DVD-GAN.



Episode 2.36B

August 16, 2019

The University of Singapore creates an artificial skin that can sense temperature, pressure, and humidity. The International Center for Ethics in the Sciences and Humanities releases its Evaluation of (AI) Guidelines. A report from FutureGrasp takes a global look at the AI initiatives (or lack thereof) of States. Hayden Klok and Yoni Nazarathy release a draft of Statistics with Julia. Meta-Academy provides learning plans and resources for learning about topics, from beginner to advanced. Claude Shannon’s 1948 paper “A Mathematical Theory of Communication” makes Andy’s Classic Paper of the Week. Stephen Wolfram’s testimony on AI before the US Senate Commerce Committee becomes available, including his blog write-up about the testimony. And Fedor Kitashov publishes an essay on using AI to restore and colorize photos.



An AI that Can Visualize Objects Using Touch
Artificial Skin Can Sense Temperature, Pressure, and Humidity

Reports of the Week

The Ethics of AI Ethics: An Evaluation of Guidelines
AI: An Overview of State Initiatives

Book of the Week

Statistics with Julia: Fundamentals for Data Science, Machine Learning, and AI

Resources of the Week

Meta-Academy Roadmaps – A Package Manager for Knowledge

Classic Paper of the Week

"A Mathematical Theory of Communication" by Claude Shannon
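Shannon’s central quantity is the entropy H = -sum(p * log2(p)), the average number of bits needed per symbol; a minimal illustration:

```python
import math

def shannon_entropy(probs):
    """Entropy in bits: H = -sum(p * log2(p)) over nonzero probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))   # 1.0  (a fair coin: one bit per toss)
print(shannon_entropy([0.25] * 4))   # 2.0  (a fair four-sided die: two bits)
```

Skewed distributions give lower values, which is why predictable sources compress well, the core insight of the 1948 paper.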

Video of the Week

Stephen Wolfram’s testimony about AI at a hearing of the US Senate Commerce Committee’s Subcommittee on Communications, Technology, Innovation and the Internet

Interesting Site of the Week

A Technical Look at Creating an AI to Restore and Colorize Photos

Episode 2.36A

August 9, 2019

Andy and Dave discuss the Digital Modernization Strategy that the US Department of Defense released on 12 July 2019. Todd Austin at the University of Michigan presents research at a conference on Morpheus, a project to create a chip that randomizes elements of its code, in an attempt to slow would-be hackers. Also in chip-related news, Intel introduces Pohoiki Beach, a new 8-million-neuron neuromorphic system with 64 Loihi research chips, with expectations of producing a system capable of simulating 100 million neurons by the end of 2019. Baylor College of Medicine, in collaboration with the University of California and Second Sight Medical Products, announces Project Orion, an implant that transmits video images directly to the visual cortex, bypassing the eye and optic nerve. And the Naval Information Warfare Systems Command and PEO C4I announce the AI Applications to Autonomous Cybersecurity (AI ATAC) challenge, a contest for using AI/ML to bolster network security operations. Research from the University of Wisconsin, Madison, demonstrates that optical waves passing through a nanophotonic medium can perform artificial neural computing – here, that a sheet of glass can identify numbers by “looking,” or in this case, by making use of bubbles and other impurities in the glass to function as a neural processor. Research from Stanford creates a convolutional neural network that can play Go without game tree search, more closely mimicking a human-level understanding and approach.



DOD Releases Digital Modernization Strategy
Chip News #1: Morpheus
Chip News #2: Intel Introduces New Self-Learning 64-Chip Neuromorphic System – Pohoiki Beach
Neural Implant Sends Camera Feed Into Blind People’s Brains - Orion
Artificial Intelligence Applications to Autonomous Cybersecurity (AI ATAC) Challenge


AI made from a sheet of glass recognizes numbers just by “looking”
Playing Go without Game Tree Search Using Convolutional Neural Networks

Episode 2.35B

August 2, 2019

Continuing in research, Andy and Dave discuss work from Imperial College and the Samsung AI Centre that can take a single image of any face and, using a GAN, create realistic speech-driven facial animations. From the Conference on Computer Vision and Pattern Recognition, researchers create an algorithm that can learn individual styles of conversational gesture, and then produce plausible gestures to accompany other audio input. And research in Nature examines 3.3 million materials-science abstracts with unsupervised word embeddings to capture “latent knowledge.” The survey paper of the week looks at the reproducibility of machine learning in health-related fields, and finds that health consistently lags behind other subfields of machine learning. Safety First for Automated Driving identifies the guiding principles for autonomous cars to be safe, with input from 11 authors; among other findings, the report notes that verification and validation of these systems is still lacking in the existing literature. The Berkman Klein Center at Harvard compiles an infographic on all of the published AI “principles” from governments, industry, and other organizations. The “classic paper” of the week is Alan Turing’s 1948 paper on “Intelligent Machinery.” The 36th International Conference on Machine Learning releases over 150 videos from its June session. CognitionX 2019 releases a video on managing security in an insecure world. Manlio de Domenico and Hiroki Sayama (and many others!) provide an interactive site for explaining and exploring complexity. Wendy Anderson and August Cole explore what war in the late 2020s might look like for the Secretary of Defense, in The Secretary of Hyperwar. And for click-bait of the week, astrophysicists get “baffled” by their own AI simulation of the universe.



Realistic Speech-Driven Facial Animation with GANs
Learning Individual Styles of Conversational Gesture
Unsupervised word embeddings capture latent knowledge from materials science literature

Automated Cognome Construction and Semi-automated Hypothesis Generation (2012)

Mining / predicting astronomical research (2012)

Automated Hypothesis Generation Based on Mining Scientific Literature (2014)

Latent pattern discovery using semi-metric behavior in document networks metrics (2002)

Survey of the Week

Reproducibility in Machine Learning for Health (ML4H)

Report of the Week

Companies spell out guiding principles for autonomous cars to be safe

Infographic of the Week

Principled AI: A Map of Ethical and Rights-Based Approaches

Classic Paper of the Week

"Intelligent Machinery" by Alan Turing

Videos of the Week

Video Collection from ICML 2019

Recent Advances in Population-Based Search for Deep Neural Networks: Quality Diversity, Indirect Encodings, and Open-Ended Algorithms

Managing Security in an Insecure World: Balancing the Need For Privacy and Security

Resources of the Week

Complexity Explained

Short Story of the Week

The Secretary of Hyperwar: OpTempo at Machine Speed

“Click bait” of the Week – But that also links to interesting work!

“Astrophysicists baffled by their own AI simulation of the universe”

Episode 2.35A

July 26, 2019

Andy and Dave discuss a scathing report on Scotland Yard’s facial recognition software, which researchers at the University of Essex found to have an 81% error rate (though the Met Police say it has an error rate of 0.1%). In related news, Axon announced that it will ban the use of facial recognition systems on its devices; Axon supplies 47 of the 69 largest police agencies in the U.S. with body cameras and software. DARPA announces the Intent-Defined Adaptive Software (IDAS) program, an attempt to reduce the need for manual software modifications. NIST posts the first draft guideline for developing AI technical standards. Elon Musk says that Neuralink is almost ready for its first human volunteers; Neuralink uses ultrafine threads that can be implanted into the brain to detect the activity of neurons. And the Bank of England announced that Alan Turing will appear on the new £50 note. In research, Andy and Dave discuss Pluribus, the latest AI for multiplayer poker from CMU and Facebook AI, which won a 12-day poker marathon in 6-player no-limit Texas hold’em; the AI runs on two Intel processors and a “modest” 128 GB of memory during play.



Scathing Report of Scotland Yard’s Facial Recognition Software
A Major Police Body Cam Company Just Banned Facial Recognition
New DARPA Program: Intent-Defined Adaptive Software (IDAS)
Follow-up (to an earlier follow-up) to NIST’s RFI about Federal Engagement in AI Standards
Elon Musk Says Neuralink Almost Ready for 1st Human Volunteers!
Alan Turing’s Portrait to be Featured on Bank of England’s £50 note


Superhuman AI for multiplayer poker: Pluribus

Episode 2.34B

July 19, 2019

More research, from Berkeley and the University of Southern California, creates a method to “protect” world leaders against deep fakes by identifying, among other things, 17 Facial Action Units (such as subtle movements of the eyebrows, cheeks, and nose during speech). And research from MIT can take an audio clip and reconstruct a generic human face from it. A report from RAND looks at Ethics in Scientific Research. Deakin University and Harvard provide a survey of deep reinforcement learning in cyber security. Another survey, from Dublin City University and Intel Labs, looks at Generative Adversarial Networks and their taxonomy. Vishal Maini and Samer Sabri provide Machine Learning for Humans. Andy recommends Ludwig von Bertalanffy’s General System Theory from 1968. Matt Turek takes a look at the history of media forensics. The House Homeland Security Subcommittee on Intelligence and Counterterrorism holds a hearing on AI and Counterterrorism. And the Computer Vision and Pattern Recognition 2019 conference begins to post its tutorials, workshops, and its 80-page program guide.



Protecting World Leaders Against Deep Fakes
Speech2Face: Learning the Face Behind a Voice

Reports of the Week

Ethics in Scientific Research: An Examination of Ethical Principles and Emerging Topics

Survey Papers of the Week

Deep Reinforcement Learning for Cyber Security
Generative Adversarial Networks: A Survey and Taxonomy

Book of the Week

Machine Learning for Humans

Classic Book of the Week

General System Theory, by Ludwig von Bertalanffy

Videos of the Week

AI Colloquium: Media Forensics
House Homeland Security Subcommittee on Intelligence and Counterterrorism hearing on: AI and Counterterrorism - Possibilities and Limitations

Conference of the Week

Computer Vision and Pattern Recognition (CVPR) – 2019

Episode 2.34A

July 12, 2019

Andy and Dave discuss the update to the US National AI Research and Development Strategic Plan, which establishes 8 objectives for federally funded AI research. Meanwhile, the European Commission starts its pilot phase for ethics guidelines for trustworthy AI, with the first AI Alliance Assembly meeting in Brussels and the High-Level Expert Group on AI (AI HLEG). The Joint AI Center, in conjunction with CMU, CrowdAI, and DIU, plans to make available xBD (x-Building-Damage), an open-source labeled data set of satellite imagery of some of the largest natural disasters in the past decade; it will contain ~700k building annotations across over 5,000 km^2 of imagery from 15 countries. The JAIC also announced a partnership with Singapore’s Defence Science and Technology Agency to collaborate on AI in humanitarian assistance and disaster relief. A white paper by Pactera suggests that 85% of AI projects fail. A new DARPA program, Virtual Intelligence Processing (VIP), aims to explore “brain-inspired” methods for dealing with incomplete, sparse, and noisy data. Facebook releases AI Habitat, an open-source environment for training and testing AI agents. And NIST’s RFI on AI Standards receives nearly 100 responses. Researchers at Adobe Research and Berkeley use AI to detect facial image manipulations made by Photoshop’s “Face Aware Liquify” feature; while humans were able to identify an altered face 53% of the time, the Convolutional Neural Network tool achieved results as high as 99%.


Episode 2.33

July 5, 2019

Russia expert Sam Bendett joins Andy and Dave for a discussion and update on Russia’s latest developments and efforts in AI and autonomy. The group discusses a 30 May meeting in which Russian President Vladimir Putin outlined the national AI priorities; the Russian AI strategy, originally expected in June, is now expected in the June-to-October timeframe. They also discuss the growing AI infrastructure and the opening of AI centers across the country, with a mindset similar to a “startup culture,” and with Russian AI developers getting international recognition. The group touches on relations between Russia and China, particularly in the wake of the Huawei issues. The “Army-2019” military expo in June should also provide useful insights into Russian military development and employment of AI and related capabilities.


Defense One: Putin Drops Hints about Upcoming National AI Strategy

Episode 2.32B

June 28, 2019

Researchers at the University of Tübingen demonstrate that virtual neurons spontaneously develop a “number sense” when assessing the number of visual items (such as dots) in a set. The Allen Institute for AI creates Grover, a neural network that can generate fake news but can also detect NN-generated fake news; Grover uses the same architecture as GPT-2 (the previous “unreleasable for the safety of humanity” algorithm), but these researchers highlight the importance of making such generators available. In related news, Witness Media Lab releases a report on the current state of deepfake tech; a CNN report looks at how Finland is fighting fake news; and a NY Times article examines the “weaponization” of AI-generated disinformation. A Mashable article from Marcus Gilmer looks at the state of software that attempts to identify deepfakes. The International Committee of the Red Cross releases a report on a “human-centered approach” to AI and machine learning in armed conflict. A paper from Springer-Verlag provides a history and references for the “neural-symbolic debate.” Hiroki Sayama at SUNY Binghamton makes available “Introduction to the Modeling and Analysis of Complex Systems.” The US-China Commission releases testimony from a day-long session, with experts covering three topics, including the US-China competition in AI. The Allen Institute makes its brain atlases available for exploration online. The 36th International Conference on Machine Learning meets in Long Beach, CA, with over 6,000 participants. Meanwhile, CogX meets in King’s Cross, London. And former Secretary of Defense Ash Carter pens a “letter to a young Googler” on the morality of defending America.


Number detectors spontaneously emerge in a DNN designed for visual object recognition
Grover - A State-of-the-Art Defense against Neural Fake News
Automated Speech Generation from UN General Assembly Statements
Witness Media Lab - Prepare, Don’t Panic: Synthetic Media and Deepfakes
(CNN Report) Finland is winning the war on fake news. What it’s learned may be crucial to Western democracy
(NY Times, Cade Metz and Scott Blumenthal) How A.I. Could Be Weaponized to Spread Disinformation

Investigative Report of the Week

As concern over deepfakes shifts to politics, detection software tries to keep up

Report of the Week

AI and machine learning in armed conflict: A human-centered approach

Survey Paper of the Week

The neural-symbolic debate and beyond

Book of the Week

Introduction to the Modeling and Analysis of Complex Systems

Video of the Week

Expert testimony before the US-China Commission (USCC)


Interesting Site of the Week

Allen Institute’s Brain Atlases

Conferences of the Week

36th International Conference on Machine Learning (ICML)
CogX 2019

“Opinion” of the Week

The morality of defending America: A letter to a young Googler

Episode 2.32A

June 21, 2019

Andy and Dave discuss early thoughts from the House Intelligence Committee hearing on deep fakes, manipulated media, and AI; artists take a shot at Mark Zuckerberg to demonstrate the power of fake videos; the House Armed Services Committee doubles Joint AI funding; Google AI releases the Google Research Football Environment; a study examines the amount of CO2 released when training AI models; Microsoft provides an AI curriculum for government decision-makers; Microsoft also removes access to a database with 10 million “celebrity” images; and Rodney Brooks and Gary Marcus launch startup Robust.AI, which aims to build the first industrial-grade cognitive platform for robots. Research from CMU, Google AI, and Stanford “peeks into the future” by predicting the future activities and locations of people in videos.



House Intelligence Committee Holds Open Hearing on Deepfakes and AI
House Armed Services Committee (HASC) Doubles Joint AI Funding
Google AI releases Google Research Football Environment (GRFE)
Training a single AI model can emit as much carbon as five cars in their lifetimes
Microsoft unveils an AI curriculum for government decision-makers
Microsoft quietly deletes largest public face recognition data set
“AI mavericks” launch startup to build a better brain for industrial robots


Peeking into the Future: Predicting Future Person Activities and Locations in Videos

Episode 2.31

June 14, 2019

In news items, Andy and Dave discuss China’s call for international cooperation on a code of ethics for AI. The Organisation for Economic Co-operation and Development (OECD) unveils the first intergovernmental standards for AI policies, with support from 42 countries. The US Army has invited the design of prototypes for the Next-Generation Squad Weapon, which may include wind-sensing and even facial-recognition technology. DARPA’s Spectrum Collaboration Challenge (SC2) presents an essay at IEEE Spectrum, which describes the challenges of making the most of an increasingly crowded electromagnetic spectrum, including running contests for better spectrum management and using Colosseum as the test ground. Google announces the ‘AI Workshop,’ which offers early access to AI capabilities and experiments. In research, Google DeepMind announces an AI that has achieved human-level performance in Quake III Arena Capture the Flag mode; among other things, human players rated the AI as “more collaborative than other humans” (though they had mixed reactions to having the AI as a teammate). Google Research presents HOList, an environment for machine learning of higher-order theorem proving. Research from Oxford University creates a model for human-like machine thinking by mimicking the prefrontal cortex for language-guided imagination. A paper from Jeff Clune at Uber AI Labs suggests a different approach to artificial general intelligence, by means of AI-generating algorithms that learn how to produce AGI. MacroPolo produces a series of 6 charts on Chinese AI talent. CBInsights compiles the views of 52 “experts” on “How AI Will Go Out of Control.” Blum, Hopcroft, and Kannan (through Microsoft) release Foundations of Data Science; Hutter, Kotthoff, and Vanschoren (through Springer-Verlag) make Automated Machine Learning available. The Purdue Symposium on Ethics, Technology, and the Future of War and Security releases a video on the Ethical, Legal, and Social Implications of Autonomy and AI in Warfare.
The University of Colorado Boulder creates an Index of Complex Networks (ICON). And Alexander Reben creates a repository of 1 million fake AI-generated faces.



China releases a code of ethics for AI, Calls For International Cooperation
42 Countries Agree to International Principles for Artificial Intelligence
Army’s Next Infantry Weapon Could Have Facial-Recognition Technology
DARPA’s Spectrum Collaboration Challenge (SC2)
Google's New 'AI Workshop' Offers Early Access to The Frontier Of AI Research


AI achieves human levels of performance in a modified version of Quake III Arena in CTF mode
HOList: An Environment for Machine Learning of Higher-Order Theorem Proving
Human-like machine thinking: Language guided imagination

Speculative Paper of the Week

AI-generating algorithms, an alternate paradigm for producing general artificial intelligence

Reports / Surveys of the Week

Chinese AI Talent in Six Charts
How AI Will Go Out Of Control According To 52 Experts

Books of the Week

Foundations of Data Science
Automated Machine Learning

Video of the Week

The Ethical, Legal and Social Implications of Autonomy and AI in Warfare

Interesting Sites of the Week

Index of Complex Networks
1 million fake AI generated faces for anyone to download at 1024x1024 resolution

Video artwork (compiled into a 9+ hr video)

Episode 2.30B

June 7, 2019

Continuing in research topics, Andy and Dave discuss research from MIT that treats image-classification adversarial examples not as bugs but as features, using intentionally mislabeled pictures; the work provides evidence that adversarial vulnerability is caused by non-robust features and is not inherently tied to the standard training framework. The Bulletin of the Atomic Scientists releases The Global Competition for AI Dominance in its May 2019 issue. Isaac Godfrie provides a summary of “few-shot” learning papers that were presented at ICLR 2019. A research paper examines the interface between machine learning and the physical sciences. A new survey from Alegion and Dimensional Research examines the data issues impacting AI/ML research (for example, 96% of companies surveyed said they ran into problems with data quality). Georgios Mastorakis examines issues that arise from taking a human-like approach to training algorithms. Mohri, Rostamizadeh, and Talwalkar release a graduate-level book on Foundations of Machine Learning through MIT Press. CollegeHumor produces “A Computer Co-Wrote this Sketch,” in which the characters appear to become aware of their situation. And finally, the Genetic and Evolutionary Computation Conference is scheduled for 13-17 July 2019 in Prague, Czech Republic.


Related Links

Adversarial Examples Are Not Bugs, They Are Features

Magazine Issue of the Week

Bulletin of the Atomic Scientists – The Global Competition for AI Dominance

Reports / Surveys of the Week

AI/ML Research Survey - Artificial Intelligence and Machine Learning Obstructed by Data Issues

ICLR 2019: Overcoming limited data

Machine learning and the physical sciences

Human-like machine learning: limitations and suggestions

Book of the Week

Foundations of Machine Learning

Short Story & Video of the Week

A Computer Co-Wrote this Sketch

Upcoming Conferences

Genetic and Evolutionary Computation Conference (GECCO)

Episode 2.30A

May 31, 2019

Andy and Dave discuss a new IARPA program, Camera Network Research Data Collection, which intends to identify and track subjects across areas as large as six miles via security camera footage of varying type and quality. DARPA announces the recipients of its Next-Generation Non-Surgical Neurotechnology (N3) program, which includes efforts to read from and write to the brain. The Joint Artificial Intelligence Center adds two new areas of focus: cybersecurity and robotic process automation. Roborder, a European project developing autonomous swarms of heterogeneous robots for border surveillance, will be running three pilot programs in Europe. Ford announced a team-up with Agility Robotics to launch a self-driving vehicle service by 2021, using the Digit robot to deliver packages to doorsteps. The Computing Community Consortium and the Association for the Advancement of AI have made a request for comments on a draft of a “20-Year Community Roadmap for AI Research in the US.” In research items, Facebook AI, UT Austin, and UC Berkeley announced research that uses “active observation completion” to demonstrate the emergence of look-around behaviors. And other research from UC Berkeley explores the benefits of self-driving vehicles using “social perception” of nearby drivers in order to gain additional information.



New IARPA Program - Camera Network Research Data Collection (CNRDC)
DARPA Announces Recipients of Next-Generation Non-Surgical Neurotechnology (N3) Program
DOD’s New AI Center Ramps Up
Autonomous Swarm of Heterogeneous Robots for Border Surveillance (Roborder)
Ford Self-Driving Vans Will Use Legged Robots to Make Deliveries
A 20-Year Community Roadmap for AI Research in the US


Emergence of exploratory look-around behaviors through active observation completion
Behavior Planning of Autonomous Cars with Social Perception

Episode 2.29

May 24, 2019

Andy and Dave take a look at the reintroduction of the "AI in Government Act," a bill that intends to get more AI technical experts into the US Government. San Francisco bans facial recognition software (but leaves the door open for the future), while Moscow announces plans to weave AI facial recognition into its urban surveillance net. Facebook opens up its data to academic researchers for analysis. DARPA announces the Air Combat Evolution (ACE) program, to automate air-to-air combat; DARPA also announces Teaching AI to Leverage Overlooked Residuals (TAILOR), to make soldiers fitter, happier, and more productive. And IARPA announces Trojans in AI (TrojAI), an effort to inspect AI for malicious code. In research, Andy and Dave discuss research from Frankle at MIT that proposes a "Lottery Ticket" hypothesis, which suggests that only certain "winning combinations" are necessary for training a neural network, and that researchers have been training neural networks much larger than they need to be in order to increase the chances of including one of these winning combinations. Leon Bottou at Facebook AI proposes a method for using AI to identify causal relationships in data (which goes against the common modern practice of combining data sets into one giant dataset). And research from Cambridge, Georgia Tech, and the University of Pennsylvania demonstrates that Magic: The Gathering is officially the world’s most complicated game (and is Turing complete). In reports of the week, the Stockholm International Peace Research Institute releases the Impact of AI on Strategic Stability and Nuclear Risk. PAX (formerly IKV Pax Christi) releases The State of AI. Analytics Vidhya has compiled a list of 25 open datasets for deep learning. Benedek Rozemberczki has curated a list of decision-tree research papers. IEEE Spectrum releases a report on Accelerating Autonomous Vehicle Technology. The May 2019 issue of The Scientist contains 15 articles on how AI is tackling biology. 
David Kriesel provides A Brief Introduction to Neural Networks. COL Jasper Jeffers wins the 2019 Sci-Fi Writing Contest with AN41. ICLR 2019 provides videos of four talks, including Frankle’s Lottery Ticket hypothesis and Bottou’s Causal Invariance. Melanie Mitchell gives a TED Talk on the collapse of AI and the possibility of an AI winter. And the National Academies-Royal Society Public Symposium will be meeting in DC on 24 May for an International Dialogue on AI.



The "Artificial Intelligence in Government Act" (AIGA) is Reintroduced
San Francisco bans facial recognition software
Additional SF Ban Coverage – Wide Deployment
Moscow to Weave AI Face Recognition into Its Urban Surveillance Net
Facebook opens up its data to academics to see how it impacts elections
New DARPA AI Program - Air Combat Evolution (ACE)
New DARPA AI Program - Teaching AI to Leverage Overlooked Residuals (TAILOR)
New IARPA AI Program - Trojans in Artificial Intelligence (TrojAI)


The "Lottery Ticket" hypothesis
Learning Representations using Causal Invariance
"Magic: The Gathering" is officially the world’s most complex game

Reports of the Week

The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk
The State of AI – PAX Report

Useful Links of the Week

Open Datasets for Deep Learning
Decision tree research papers

Nontechnical Paper of the Week

Accelerating Autonomous Vehicle Technology

Magazine of the Week

The Scientist – AI Tackles Biology

Book of the Week

A Brief Introduction to Neural Networks

Sci-Fi Story of the Week

AN41 – Winning Entry in Army Mad Scientist 2019 Sci-Fi Writing Contest

Videos of the Week

ICLR 2019
The Collapse of Artificial Intelligence

(Upcoming) Conference of the Week

AI: An International Dialogue - A National Academies-Royal Society Public Symposium

Episode 2.28

May 17, 2019

“Bots” reign supreme in this week’s episode, though Andy and Dave start the discussion with NIST’s RFI on the development of technical standards for AI. A Harvard Medical School project demonstrates a catheter that can autonomously move inside a live, beating pig’s heart. Zipline uses medical delivery drones in Rwanda. University of Maryland researchers demonstrate drone delivery of a kidney for transplant. NASA tests a CICADA swarm, and is also investigating Marsbees. And Starship robo-couriers deliver food to students at GMU. In research from Berkeley, a robot learns to use improvised tools to complete tasks, including those with physical cause-and-effect relationships. Researchers at MIT, MIT-IBM Watson, and DeepMind create the Neuro-Symbolic Concept Learner (NSCL), which uses a hybrid connectionist/symbolic approach and seems to be a “true” AI implementation of Winograd’s SHRDLU system from the 60s. Research from Tsinghua University and Google demonstrates Neural Logic Machines, a neural-symbolic architecture for both inductive learning and logic reasoning. Two papers compare logistic regression with machine learning methods for clinical predictions; one shows no benefit of one method over the other, while the other claims better performance with neural network methods (although Andy and Dave wonder whether this claim holds, given the error bars in the results). Algorithm Watch publishes a Global Inventory of AI Ethics Guidelines. Times Higher Education (THE) and Microsoft release a survey of more than 100 AI experts and university leaders. The Department of Information Technology at Uppsala University in Sweden has made its lecture notes for a statistical machine learning course available. The Santa Fe Institute reprints a classic collection of essays from its Founding Workshops. Robert Hranekg pens a story about an angry engineer. And the OpenAI Robotics Symposium 2019 releases the full video proceedings online.


News – Mostly Bots, Bots, and more Bots

NIST Releases RFI on the Development of Technical Standards for AI
A robotic catheter has autonomously wound its way inside a live, beating pig’s heart
Zipline’s Medical Delivery Drones in Rwanda
First Use of Unmanned Aircraft to Successfully Deliver Kidney for Transplant
NASA tests a swarm of 100 US Navy Cicada drones
Bots deliver food to students at GMU


Robots that Learn to Use Improvised Tools
The Neuro-Symbolic Concept Learner (NSCL) - a Hybrid Connectionist/Symbolic Approach
Neural Logic Machines (NLMs)
Two Contradictory(?) Reviews of the Benefits of ML over Logistic Regression for Clinical Predictions
A systematic review shows no performance benefit of machine learning over logistic regression for clinical prediction models

Reviews / Surveys

The AI Ethics Guidelines Global Inventory
The THE-Microsoft survey on AI

Books of the Week

Lecture Notes on Machine Learning
Emerging Syntheses in Science: Proceedings of the Founding Workshops of the Santa Fe Institute
  • Kindle version (print replica) is only $2.99 (paperback for $13.99)

Sci-Fi Story of the Week

Angry Engineer - by Robert Hranekg

(March 2019) Videos of the Week

OpenAI Robotics Symposium 2019

(Upcoming) Conference of the Week

Seventh International Conference on Learning Representations (ICLR 2019)

Will be held May 6–9 in New Orleans.

Episode 2.27

May 10, 2019

Professor Jennifer McArdle, Assistant Professor of Cyber Defense at Salve Regina University, joins Andy and Dave for a discussion on AI and machine learning. Jenny is leading a group of graduate students who are working on creating a strategic-level primer on AI, particularly aimed at those who may be less familiar with the technical aspects, as well as a War on the Rocks article on AI in training and synthetic environments. Her students are studying in a variety of areas, including cyber defense and digital forensics, cyber and synthetic training, cyber intelligence, healthcare and healthcare administration, and administrative justice. Graduate students Mackenzie Mandile and Saurav Chatterjee also join for a discussion on their research topics. In the photo (from left to right): Maria Hendrickson, Gabrielle Cusano, Abigail Verille, Erin Rorke, (John Cleese), Saurav Chatterjee, Allegra Graziano, Santiago Durango, Eric Baucke, Mackenzie Mandile, Dave Broyles, Jennifer McArdle, Andy Ilachinski, John Crooks, (Getafix), and Lt. Col. David Lyle.


Episode 2.26

May 3, 2019

Andy and Dave welcome Dr. Anna Williams and Dr. Larry Lewis to discuss the recent UN Convention on Certain Conventional Weapons, and the latest developments in the global discussion on Lethal Autonomous Weapons Systems (LAWS).


Episode 2.25

April 26, 2019

Andy and Dave discuss the Department of Energy’s attempt to create the world’s longest acronym with DIFFERENTIATE (Design Intelligence for Formidable Energy Reduction Engendering Numerous Totally Impactful Advanced Technology Enhancements), a program to accelerate the incorporation of ML into energy technology and product design. Google cancels its AI ethics board after thousands of employees sign a petition calling for the removal of one member with anti-LGBTQ and anti-immigrant views. NASA unveils the Astrobees, one-foot cube robots that will work autonomously on the International Space Station to check inventory and monitor noise levels, among other things. And Microsoft partners with French online education platform OpenClassrooms to train and recruit promising students in AI. Research from Eindhoven University of Technology and the University of Trento takes a biologically “inspired” approach to neural net learning, through Neuron Elevation Traces (NATs), which allow additional data storage in each synapse; the result appears to increase the plasticity of the synapses. A mathematical reasoning model from DeepMind can solve some arithmetic, algebra, and probability problems, though it sometimes gets simple calculations incorrect (such as 1 + 1 + … + 1, for n >= 7). And other research creates a musculoskeletal system that can use muscle activation to simulate movement and control. A report from Element AI examines the global AI talent distribution in 2019, including (perhaps not surprisingly) the observation that the supply of top-tier AI talent does not meet the demand. A paper in Nature Reviews Physics surveys the physics of brain network structure, function, and control. A short sci-fi story from Jeffrey Ford describes The Seventh Expression of the Robot General. And Andy highlights a video from 1961 on The Thinking Machine.


Episode 2.24

April 19, 2019

Andy and Dave discuss the first image of a black hole, and its link to machine learning -- research from Katie Bouman, while she was at MIT, developed Continuous High-resolution Image Reconstruction using Patch priors (CHIRP) as a way to stitch together different sources to create a continuous whole. Next, Andy and Dave discuss research from the Sorbonne and IST Austria that tries to deduce the reward function of a recurrent neural network by assuming the neurons are agents. And research from Hopfield and Krotov examines a way to approach neural network learning in a more “plausible” biological fashion, with a more physically local method of plasticity. In reports, the European Commission releases its 41-page report on Ethics Guidelines for Trustworthy AI. Elizabeth Holm publishes a short paper in defense of the black box. A paper in IEEE Spectrum examines the actual health care products (as compared to the partnerships and promises) of IBM Watson. Sean Luke publishes the second edition of The Essentials of Metaheuristics. And the video of the week is a 2016 TED Talk by Katie Bouman on the development of the software that combines the data collected by individual telescopes.



Image of Black hole captured for first time in space breakthrough


Inferring the function performed by a recurrent neural network

Biologically “inspired” approach to NN learning #1

Report of the Week

Ethics Guidelines for Trustworthy AI

“Ethical washing made in Europe” – by Thomas Metzinger

(Short, Nontechnical) Papers of the Week

In defense of the black box

How IBM Watson Overpromised and Underdelivered on AI Health Care

Book of the Week

Essentials of Metaheuristics (2nd Edition)

Video of the Week

How to Take a Picture of a Black Hole – 13 minutes long

(From 2016) TED Talk by Katie Bouman.

  • Discusses the development of the software used to combine the “images” collected by individual telescopes.

Episode 2.23

April 12, 2019

Andy and Dave discuss Simulated Policy Learning (SimPLe), from Google Brain, which attempts to help reinforcement learning methods learn effective policies for complex tasks, such as Atari games (using the Arcade Learning Environment, ALE); the method trains a policy in a simulated environment so that it achieves good performance in the original environment. From Google and Princeton University, the TossingBot learns to throw arbitrary objects into bins; researchers use “residual physics” to provide a baseline knowledge of the world (e.g., ballistics) to further improve tossing accuracy. Researchers at Rutgers demonstrate a probabilistic approach for reasoning about the 3D shapes of unknown objects as a robot manipulates its environment. DeepMind publishes results that use the AI itself to figure out where the AI will fail. And research from Northwestern, the University of Chicago, and the Santa Fe Institute examines the dynamics of failure across science, startups, and security efforts. In clickbait-y news, scientists create an AI that can predict when a person will die (when in actuality, they used machine learning methods to examine the prediction of premature death, compared with standard epidemiological approaches). Researchers create a memristor-based hybrid analog-digital computing platform to demonstrate deep-Q reinforcement learning. Microsoft demonstrates end-to-end automation of DNA data storage (21 hours to encode the word “hello”). The US Air Force is exploring AI-powered autonomous drones in its Skyborg program. Keen Security Lab of Tencent reports vulnerabilities in Tesla Autopilot, including inducing the vehicle to switch lanes. A paper in the Springer AI Review Journal provides a survey of ML and DL frameworks and libraries for large-scale data mining. Los Alamos National Laboratory publishes a survey of quantum algorithm implementations. Scott Cunningham publishes Causal Inference. Yaneer Bar-Yam makes a 2003 work, Dynamics of Complex Systems, available. 
Easley and Kleinberg publish Networks, Crowds, and Markets: Reasoning About a Highly Connected World. Andy highlights a 2008 sci-fi story from Elizabeth Bear, Tideline. Paul Oh pens a fictional story of the Army’s C2 AI program, Project AlphaWare. The National Academies-Royal Society public symposium, AI: An International Dialogue, will hold a discussion on 24 May. More videos appear from DARPA’s AI Colloquium. A website compiles datasets for machine learning. And Stephen Jordan provides a comprehensive catalog of quantum algorithms.
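The model-based idea behind SimPLe – learn a model of the environment from real experience, then train a policy entirely inside that learned model – can be sketched on a toy chain world. Everything below (the five-state environment, the memorization-style “model”) is an illustrative stand-in, not SimPLe’s actual Atari setup:

```python
import random

# Toy stand-in for a real environment: a 5-state chain with a reward at the end.
N_STATES, ACTIONS = 5, (0, 1)  # action 0 = move left, 1 = move right

def step(s, a):
    """True environment dynamics (only sampled, never used for planning)."""
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    return s2, 1.0 if s2 == N_STATES - 1 else 0.0

# 1) Collect experience in the real environment with a random policy.
random.seed(0)
model = {}  # learned model: (state, action) -> (next_state, reward)
for _ in range(500):
    s = random.randrange(N_STATES)
    a = random.choice(ACTIONS)
    model[(s, a)] = step(s, a)  # "learning" here is just memorizing transitions

# 2) Plan entirely inside the learned model (Q-iteration), never calling step().
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
for _ in range(50):
    for (s, a), (s2, r) in model.items():
        Q[(s, a)] = r + 0.9 * max(Q[(s2, b)] for b in ACTIONS)

# 3) The policy trained in simulation transfers back to the real environment.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
print(policy)  # greedy policy: move right in every state
```

The payoff of the model-based approach is sample efficiency: the real environment is queried only during data collection, while all of the (cheap) planning happens in the learned model.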



Simulated Policy Learning (SimPLe): Model Based Reinforcement Learning for Atari

TossingBot: Learning to Throw Arbitrary Objects with Residual Physics

Inferring 3D Shapes of Unknown Rigid Objects in Clutter through Inverse Physics Reasoning

An Adversarial Approach to Uncover Catastrophic Failures

Quantifying dynamics of failure across science, startups, and security

Nontechnical summary

Technical paper

Recent Announcements

Scientists created an AI that can predict when a person will die

Reinforcement learning with memristor-based hybrid analog-digital computing platform

Microsoft’s latest breakthrough could make DNA-based data centers possible

The Air Force is exploring AI-powered autonomous drones with Skyborg program

Reports of the Week

Experimental Security Research of Tesla Autopilot

Survey Papers of the Week

Machine Learning and DL frameworks and libraries for large-scale data mining: a survey

Quantum Algorithm Implementations (for Beginners – NOT)

Book of the Week

Causal Inference

Dynamics of Complex Systems

Networks, Crowds, and Markets: Reasoning About a Highly Connected World

Short Stories of the Week

Tideline, by Elizabeth Bear

Classic paper: “Ten Challenges for Making Automation a ‘Team Player’ in Joint Human-Agent Activity”

Project AlphaWare, by Paul Oh

Conference of the Week

Artificial Intelligence: An International Dialogue

Video of the Week

Artificial Intelligence Colloquium: Explainable AI

Reference Sites of the Week

Datasets for machine learning

Quantum Algorithm Zoo

Episode 2.22

April 5, 2019

The Institute of Electrical and Electronics Engineers (IEEE) has released the first edition of Ethically Aligned Design (EAD1e), a nearly 300-page report involving thousands of global experts; the report covers 8 major principles, including transparency, accountability, and awareness of misuse. DARPA announces the Artificial Social Intelligence for Successful Teams program, which will attempt to help AI build shared mental models and understand the intentions, expectations, and emotions of its human counterparts. DARPA also announces a program to design chips for Real Time Machine Learning (RTML), which will generate optimized hardware design configurations and standard code, based on the objectives of the specific ML algorithms and systems. The U.S. Army awards a $152M contract to QinetiQ North America for producing “backpack-sized” robots; the Common Robotic System (Individual), or CRS(I), is a remotely operated, unmanned ground vehicle. The White House launches a site to highlight AI initiatives. Anduril Industries gets a Project Maven contract to support the Joint AI Center. And the 2019 Turing Award goes to neural network pioneers Hinton, LeCun, and Bengio. Researchers at Johns Hopkins demonstrate that humans can decipher adversarial images; that is, they can “think like machines” and anticipate how image classifiers will incorrectly identify unrecognizable images. A group of researchers at MIT, Columbia, Cornell, and Harvard demonstrates “particle robots” inspired by biological cells; individually these robots cannot move, only pulsate between about 6 in and 9 in in size, but as a collective they can demonstrate movement and other collective behavior (even with 20% of the components failing). Researchers at the Harbin Institute of Technology and Michigan State University control a swarm of “microbots” (here, single grains of hematite) through application of different magnetic fields.
And researchers use honey bees (in Austria) and zebrafish (in Switzerland) to influence each other’s collective behavior through robotic mediation. The Interregional Crime and Justice Research Institute releases a report on AI in law enforcement, from a recent meeting organized by INTERPOL. DefenseOne publishes a report from Tucker, Glass, and Bendett on how the U.S. military services are using AI. An e-book from Frontiers in Robotics and AI collects 13 papers on the topic of “Consciousness in Humanoid Robots.” Andy highlights a book from 2007, “Artificial General Intelligence,” which claims to be the first to codify the use of AGI as a term of art. MIT Tech Review has released the videos from its EmTech Digital 2019 event of 25-26 March. And DARPA has released more videos from its AI Colloquium. The U.N. Group of Governmental Experts is meeting in Geneva to discuss lethal autonomous weapons systems (LAWS). A short story from Husain and Cole describes a hypothetical future war in Europe between Russian and NATO forces. And Ian McDonald pens a story that captures the life of military drone pilots, Sanjeev and Robotwallah.


Episode 2.21

March 29, 2019

Andy and Dave begin with an AI-generated podcast, using the “dumbed down” GPT-2 with the repository of podcast notes; GPT-2 ends the faux podcast with a video called “The World Ends with Robots,” and Dave later discovers that a Google search on the title brings up zero hits. Ominous! Andy and Dave continue with a discussion of the Boeing 737 MAX crashes and the implications for autonomous systems. Stanford University launches the Stanford Institute for Human-Centered Artificial Intelligence (HAI), which seeks to advance AI research to improve the human condition. Ahead of the Convention on Certain Conventional Weapons in Geneva, Japan announces its intention to submit a plan for maintaining control over lethal autonomous weapons systems. A new report from Hal Hodson at the Economist reveals that, should DeepMind successfully create artificial general intelligence, its Ethics Board will have legal “control” of the entity. And Steve Walker and Vint Cerf discuss other US Department of Defense projects that Google is working on, including the identification of deep fakes and the exploration of new architectures to create more computing power. NVIDIA announces a $99 AI development kit, the AI Playground, and GauGAN. In research topics, Google explores whether neural networks show gestalt phenomena, looking specifically at the law of closure. Researchers with IBM Watson and Oxford examine supervised learning with quantum-enhanced feature spaces. Shashua and co-workers explore quantum entanglement in deep learning architectures. Dan Falk takes a look at how AI is changing science. And researchers at Facebook AI and Google AI examine the pitfalls of measuring emergent communication between agents. The World Intellectual Property Organization releases its 2019 trends in AI. A report surveys the European Union’s AI ecosystem, while another paper surveys the field of collective robotic construction. Kieran Healy releases a book on Data Visualization.
Allen Downey publishes Think Bayes: Bayesian Statistics Made Simple. The Defense Innovation Board releases a video from its public listening session on AI ethics at CMU from 14 March. The 2019 Human-Centered AI Institute Symposium releases a video. And Irina Raicu compiles a list of readings about AI ethics.


Announcements / Popular-Press Reports & Stories

Boeing 737 MAX Crashes Raise Public Distrust of Autonomous Systems
Stanford University launches the Institute for Human-Centered Artificial Intelligence
Japan to seek global rules on autonomous ‘killer robots’
DeepMind's Ethics Board Will Reportedly 'Control' AGI If It's Ever Created
A couple of DoD projects, other than ‘Maven,’ that Google is working on
NVIDIA Announcements at Its GPU Technology Conference (GTC)

$99 pocket-sized AI computer

AI Playground



Do Neural Networks Show Gestalt Phenomena?
Supervised learning with quantum-enhanced feature spaces
Quantum Entanglement in Deep Learning Architectures
How Artificial Intelligence Is Changing Science
On the Pitfalls of Measuring Emergent Communication

Reports of the Week

WIPO Technology Trends 2019 – Artificial Intelligence
A Survey of the European Union’s AI Ecosystem

Survey Paper of the Week

A review of collective robotic construction

Books of the Week

Data Visualization: A practical introduction
Think Bayes: Bayesian Statistics Made Simple

Video of the Week

Defense Innovation Board Public Listening Session on AI Ethics

Conference of the Week

2019 Human-Centered Artificial Intelligence Institute Symposium

Some Last Minute Items

Readings in AI Ethics


AST= Artistic Style Transfer

CAC = Convolutional Arithmetic Circuit

CI = Context Independence

CIC = Causal Influence of Communication

CRC = Collective robotic construction


HAI = Human-Centered Artificial Intelligence

MAML = Model Agnostic Meta-Learning

MCAS = Maneuvering Characteristics Augmentation System

MCG = Matrix Communication Game

PIS = Photorealistic Image Synthesis

RAC = Recurrent Arithmetic Circuit

RBM = Restricted Boltzmann machines

SC = Speaker consistency

WIPO = World Intellectual Property Organization

Episode 2.20

March 22, 2019

Andy and Dave discuss “activation atlases,” recent work from OpenAI and Google that offers a new technique for visualizing interactions between the neurons in an image-classifying deep neural network. The UCLA Center for Vision, Cognition, Learning, and Autonomy, together with the International Center for AI and Robot Autonomy, publishes work on RAVEN – a dataset for Relational and Analogical Visual rEasoNing, which uses John Raven’s Progressive Matrices for testing joint spatial-temporal reasoning; in combination with a dynamic residual tree method, they see improvement over other methods, but still short of human performance. Research from the University of New South Wales uses machine learning to predict which of two patterns a subject will choose, before the subject is aware which one they have chosen. And Google Brain publishes research demonstrating a BigGAN-based model capable of generating high-fidelity images with far less labeled data (10-20% of the labels). In announcements, DARPA holds its AI Colloquium on 6-7 March; the US Army is investing $72M into CMU for AI research; OpenAI launches OpenAI LP, a new company for funding safe artificial *general* intelligence; and the IEEE is set to release, on 29 March, the first edition of Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. In reports of the week, the Allen Institute for AI examines the quality of AI papers and predicts that China will soon overtake the US in quality AI research; MMC publishes an examination of the State of AI in Europe; a paper looks at predicting research trends from publications on arXiv; and another paper surveys deep learning advances on different 3D data representations. Dive into Deep Learning is the book of the week, available online. The University of Vermont uses an AI and Project Gutenberg stories to identify six main arcs of storytelling. Dear Machine, by Greg Kieser, is the AI sci-fi story of the week.
John Sunda Hsia’s website compiles the “ultimate guide” to all of the upcoming AI and ML conferences. And the Allen Institute releases a “dumbed down” version of OpenAI’s GPT-2, with some resulting humorous reflections.



Activation Atlases
RAVEN: A Dataset for Relational and Analogical Visual rEasoNing
Decoding the contents and strength of imagery before volitional engagement
High-Fidelity Image Generation with Fewer Labels

Announcements / Initiatives

DARPA's AI Colloquium, held 6-7 Mar at the Hilton (Mark Center, Alexandria, VA)
US Army is investing $72 million into CMU for AI research
OpenAI launches new company for funding safe artificial general intelligence
IEEE to Release Ethically Aligned Design, First Edition: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems

Reports of the Week

China to Overtake US in AI Research
State of AI: Divergence

Survey Papers of the Week

Predicting Research Trends From Arxiv
Deep Learning Advances on Different 3D Data Representations: A Survey

Book of the Week

Dive into Deep Learning

“AI Storytelling” of the Week

AI Identifies the 6 Main Arcs in Storytelling

AI Sci-Fi of the Week

Dear Machine: A Letter to a Super-Aware/Intelligent Machine

Upcoming Conferences

Ultimate Guide to 2019 Artificial Intelligence and Machine Learning Conferences

Some Last Minute Items

Allen Institute’s GPT-2 explorer

Episode 2.19

March 15, 2019

Andy and Dave discuss research from Neil Johnson, who looked to the movements of fly larvae to model financial systems, where a collection of agents share a common goal but have no way to communicate and coordinate their activities (a memory of five past events ends up being the ideal balance). Researchers at Carnegie Mellon demonstrate that random search with early-stopping is a competitive Neural Architecture Search (NAS) baseline, performing at least as well as “Efficient” NAS. Unrelated, but near-simultaneously published, research from AI Lab Swisscom shows that random search outperforms state-of-the-art NAS algorithms. Researchers at DeepMind investigate the possibility of creating an agent that can discover its world, and introduce NDIGO (Neural Differential Information Gain Optimization), designed to be “information seeking.” And the Electronics and Telecommunications Research Institute in South Korea creates SC-FEGAN, a face-editing GAN that builds off of a user’s sketches and other information. Georgetown University announces a $55M grant to create the Center for Security and Emerging Technology (CSET). Microsoft workers call on the company to cancel its military contract with the U.S. Army. DeepMind uses machine learning to predict wind turbine energy production. Australia’s Defence Department invests ~$5M to study how to make autonomous weapons behave ethically. And the U.K. government invests in its people, funding AI university courses with £115M. Reports suggest that U.S. police departments are using biased data to train crime-predicting algorithms. Danqi Chen’s thesis, Neural Reading Comprehension and Beyond, becomes highly read. A report looks at the evolution of citation graphs in AI research; and researchers provide a survey of deep learning for image super-resolution. Byron Reese blogs that we need new words to adjust to AI (to which Dave adds “AI-chemy” to the list).
In Point and Counterpoint, David Silver argues that AlphaZero exhibits the “essence of creativity,” while Sean Dorrance Kelly argues that AI can’t be an artist. Interpretable Machine Learning, by Christoph Molnar, hits version 1.0, and Andy highlights Asimov’s classic short story, The Machine that Won the War. And finally, a symposium at the Institute for Advanced Study in Princeton examines deep learning – alchemy or science?
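The random-search baseline idea from the CMU paper can be illustrated in miniature: sample configurations at random, score each with a cheap, early-stopped training run, and fully train only the most promising one. Everything below (the one-parameter model, the learning-rate-only search space) is an illustrative stand-in, not the paper’s code:

```python
import random

random.seed(1)
xs = [i / 10 for i in range(1, 21)]
ys = [3.0 * x for x in xs]  # target function: y = 3x

def train(lr, steps):
    """Gradient descent on mean-squared error for a 1-parameter model y = w*x."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    loss = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    return loss, w

# Random search: sample configurations (here, just a learning rate) at random...
candidates = [10 ** random.uniform(-4, 0) for _ in range(20)]
# ...and score each with a short, cheap run (the early-stopping part).
proxy = {lr: train(lr, steps=5)[0] for lr in candidates}

# Fully train only the most promising candidate.
best_lr = min(proxy, key=proxy.get)
final_loss, w = train(best_lr, steps=200)
print(best_lr, final_loss)
```

Diverging or too-small learning rates are weeded out by the cheap proxy runs, so only one configuration ever pays the full training cost – which is why such a simple baseline can be surprisingly competitive.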



Smarter Parts Make Collective Systems Too Stubborn
Random Search and Reproducibility for Neural Architecture Search
Evaluating the Search Phase of Neural Architecture Search
World Discovery Models
SC-FEGAN: Face Editing Generative Adversarial Network with User's Sketch and Color

Announcements / Initiatives

Largest US Center on AI & Policy Comes to Georgetown University
Microsoft Workers Call on Company to Cancel Military Contract
Microsoft Stands By Its $480 Million Pentagon Contract
Using Machine Learning to Predict Wind Turbine Energy Production
Australia’s Defense Department Takes Lead in Ethics Research
U.K. Government to Fund AI University Courses With £115m

Reports of the Week

Police across the US are training crime-predicting AIs on falsified data
Neural Reading Comprehension and Beyond

Papers of the Week

The evolution of citation graphs in artificial intelligence research
Deep Learning for Image Super-resolution: A Survey

Blog Essay of the Week

Our language needs to evolve alongside AI. Here's how

Discussion Topic of the Week

How AlphaZero has rewritten the rules of game play on its own
Counterpoint: A philosopher argues that an AI can’t be an artist
An AI “artist” got a solo show at a Chelsea gallery; Will it reinvent art, or destroy it?

Book of the Week

Interpretable Machine Learning

Story of the Week

The Machine that Won the War

Videos of the Week

Deep Learning: Alchemy or Science?

Individual talks

Episode 2.18

March 8, 2019

OpenAI has trained an unsupervised language model that can perform basic reading comprehension, summarize text, answer questions, and generate coherent paragraphs; as Andy and Dave discuss, the bigger news came from OpenAI's decision to release only a less-capable version of the GPT-2 model, "for the good of humanity," as one news site claimed. IBM's Project Debater lost a debate with champion debater Harish Natarajan, but more of the audience said Project Debater better enriched their knowledge of the topic. Princeton and Microsoft announce NAIL, an agent for playing general interactive fiction (such as the Zork series), consisting of multiple Decision Modules for performing various tasks. Columbia University takes a step toward reconstructing speech directly from the brain's auditory cortex, by temporarily placing electrodes in patients and having them listen to spoken numbers. DARPA announces SAIL-ON, the Science of Artificial Intelligence and Learning for Open-world Novelty, in an attempt to help AI adapt to constantly changing conditions. DARPA's Systematizing Confidence in Open Research and Evidence (SCORE) program promises $7.6M to the Center for Open Science, for leading the charge on reproducibility. The Animal-AI Olympics hopes to test AI systems against cognitive tasks drawn from the animal kingdom. Facebook releases ELF OpenGo, an open-source reimplementation of DeepMind's AlphaZero. Neuroscientists from Case Western Reserve discover an entirely new form of neural communication that works through electrical fields and can function over gaps in severed tissues. The Nuffield Foundation and the Leverhulme Centre for the Future of Intelligence release a report on the Ethical and Societal Implications of Algorithms, Data, and AI. Technology for Global Security and the Center for Global Security Research join forces to understand and manage risks to international security and warfare posed by AI-related tech.
A short review in Science looks at brain circuitry and learning, and Andy pulls DeepMind's Neuroscience-Inspired AI paper from 2017. Research examines an engineering-based design methodology for embedding ethics in autonomous robots, while another paper assesses the local interpretability of machine learning methods. Jeff Erickson releases a textbook on Algorithms; Daniel Shiffman publishes The Nature of Code; and Jason Brownlee offers up Clever Algorithms: Nature-Inspired Programming Recipes. A video from This Week in Machine Learning and AI dissects the controversy surrounding OpenAI's GPT-2 model. And finally, two websites offer up faces of fictional people.



Language Models Are Unsupervised Multitask Learners
IBM AI Loses Debate to Human Champion
NAIL: A General Interactive Fiction Agent
Towards Reconstructing Intelligible Speech from the Human Auditory Cortex

Announcements / Initiatives

New DARPA program: Teaching AI Systems to Adapt to Dynamic Environments
(Older) DARPA program: Systematizing Confidence in Open Research and Evidence (SCORE)
The Animal-AI Olympics
ELF OpenGo: An Analysis and Open Reimplementation of AlphaZero
Neuroscientists Say They've Found an Entirely New Form of Neural Communication

Reports of the Week

Ethical and Societal Implications of Algorithms, Data, and AI: A Roadmap for Research
T4GS and CGSR partnership

Papers of the Week

Books of the Week

Algorithms, by Jeff Erickson
The Nature of Code, by Daniel Shiffman
Clever Algorithms - Nature-Inspired Programming Recipes, by Jason Brownlee

Episode 2.17

February 22, 2019

Andy and Dave discuss a series of announcements: President Trump signs an Executive Order to prioritize and promote AI; the U.S. Department of Defense releases its 2019 AI Strategy; DARPA announces an Intelligent Neural Interface program focused on improving neurotechnology, and DARPA announces Guaranteeing AI Robustness against Deception (GARD), intended as an almost immune-system-like approach to increase the resistance of ML models to deception; Securities and Exchange Commission filings from both Google and Microsoft disclose in “risk factors” that products with AI and ML may not work as intended and may exacerbate a variety of problems, which could adversely affect the companies’ branding and reputation; and Uber AI releases Ludwig, an open-source deep learning toolbox that allows users to train and test deep learning models without writing code. In research topics, DeepMind sets its sights on using ML to conquer Hanabi, a cooperative game with imperfect information that requires a “theory of mind.” The Allen Institute for AI releases “Iconary,” a game of Pictionary with an AI partner. Research from Expedia Group uses an attentional convolutional network for facial expression recognition. IBM publishes research on a neuro-inspired “creativity” decoder. IBM Research AI and Arizona State University examine when AI bots might lie (in the context of “acceptable” social white lies). And research from Munich demonstrates that humans are less likely to hurt or sacrifice a robot if it is more human-like. In reports, the McKinsey Global Institute examines Europe’s gap in digital and AI. In papers, Johns Hopkins University publishes an opinion paper on the strengths and weaknesses of deep nets for vision, and the Centre for AI in Australia and the University of Illinois at Chicago publish a comprehensive survey on graph neural networks. John Brockman will be releasing a new book, Possible Minds: 25 Ways of Looking at AI.
A TED Talk from Hugh Herr looks at the ability of bionics to extend human potential. And registration is now open for the Sackler Colloquium on the science of deep learning at the National Academy of Sciences.



President Trump signs Executive Order to Prioritize and Promote AI

(Summary of) DoD 2018 AI Strategy

Two New DARPA Program Announcements

Google and Microsoft Warn that AI May Do Dumb Things

Uber AI Introduces New Open Source Deep Learning Toolbox, Ludwig


The Hanabi Challenge: A New Frontier for AI Research

Iconary: A Pictionary-playing AI designed to figure out how the world works

(When) Can AI Bots Lie?

AAAI AES Conference

Deep-Emotion: Facial Expression Recognition Using Attentional Convolutional Network

Toward A Neuro-inspired Creative Decoder

Saving the Robot or the Human? Robots Who Feel Deserve Moral Care

Things of the Week

Report of the Week

Papers of the Week

Deep Nets: What have they ever done for Vision?

A Comprehensive Survey on Graph Neural Networks

Book of the Week

Possible Minds: 25 Ways of Looking at AI

Video of the Week

How we’ll become cyborgs and extend human potential

Upcoming Conferences

The Science of Deep Learning

Episode 2.16

February 15, 2019

For research topics, Andy and Dave discuss the task-agnostic self-modeling machine from Columbia University, a robotic arm that learns to build an approximate model of itself and then interact with the world; they also discuss the over-hyped reporting of the research. Much less hyped, but possibly more groundbreaking, research from MIT results in a robot that can play the tower-block game Jenga, using multisensory fusion to do so. More research from MIT attempts to synthesize probabilistic programs for automatic data modeling. Research from the University of Tübingen shows that approximating convolutional neural nets with bag-of-local-features models yields decent results on ImageNet. And the University of Washington and the Allen Institute for AI announce the Atlas of Machine Commonsense (ATOMIC), a collection of 877k textual descriptions of inferential knowledge, which allows more accurate inference for previously unseen events. In announcements of the week, DARPA announces the Competency-Aware Machine Learning (CAML) program, for ML systems to assess their own performance, and Measuring Biological Aptitude (MBA), which attempts to link genotype to phenotype in order to improve recruiting, training, and other aspects. The U.S. Navy’s Sea Hunter drone ship completes an autonomous trip from San Diego to Hawaii and back. The “Papers with Code” archive attempts to collect and link ML-related papers, code, and evaluation tables. The U.S. Army activates its AI Task Force at Carnegie Mellon. And the International Conference on Learning Representations (ICLR) 2019 has been announced for 6-9 May 2019.
In media of the week, the World Intellectual Property Organization releases its report on the Technology Trends of 2019; the AMA Journal of Ethics publishes an entire (open-access) issue devoted to AI in health care; the Congressional Research Service updates its report on AI and National Security; Dan Simon provides a hefty tome on Evolutionary Optimization Algorithms; and Julian Togelius publishes a book on Playing Smart. Wake Word is the Game of the Week, and in videos, Super Bowl ads provide a variety of glimpses into life with robots.



Task-agnostic self-modeling machines
A ‘Self-Aware’ Fish Raises Doubts About a Cognitive Test
See, feel, act: Hierarchical learning for complex manipulation skills with multisensory fusion
Bayesian Synthesis of Probabilistic Programs for Automatic Data Modeling
Approximating CNNs with Bag-of-Local-Features models works surprisingly well on ImageNet
Atlas of Machine Commonsense (ATOMIC)

Announcements / Initiatives

Two New DARPA program announcements
Competency-Aware Machine Learning (CAML)

DARPA’s Broad Agency Announcement (BAA)

Measuring Biological Aptitude (MBA)


Navy's Sea Hunter Drone Ship Has Sailed Autonomously from San Diego To Hawaii And Back

Video (showing construction and walkthrough of Sea Hunter)

"Papers With Code" archive

Carnegie Mellon Hosts Activation of U.S. Army AI Task Force

International Conference on Learning Representations (ICLR) 2019

Report of the Week

WIPO Technology Trends 2019

Papers of the Week

AMA Journal of Ethics

CRS Report on AI and National Security

Understanding China's AI Strategy

Books of the Week

Evolutionary Optimization Algorithms
Playing Smart

Game of the Week

Wake World: An Algorithmic Nightmare

Videos of the Week

Roundup of All the Super Bowl Ads About Robots and AI

Episode 2.15

February 8, 2019

Description: In recent announcements, Andy and Dave discuss the National Endowment for Science, Technology, and the Arts (Nesta) launch of its ‘Mapping AI Governance’ project; MIT Tech Review’s survey of AI and ML research suggests that the era of deep learning may be coming to an end (or does it?); a December 2018 survey shows strong opposition to “killer robots;” China has (internally) released a report on its view of the “State of AI in China;” and DARPA wants to build conscious robots using insect brains, announcing its µBRAIN program. In research topics, Andy and Dave discuss the recent competition between DeepMind’s AlphaStar and professional human gamers in playing StarCraft II. MIT and Microsoft have created a model that can identify instances where autonomous systems have learned from training examples that don’t match what’s happening in the real world, thus creating blind spots. Boston University publishes research that allows an ordinary camera to “see” around corners using shadow projection, in essence turning a wall into a mirror – and doing so without any AI or ML techniques. In papers and reports, the Office of the Director of National Intelligence releases its AIM Initiative – a strategy for augmenting intelligence using machines; a report provides a survey of the state of self-driving cars; and another report surveys the state of AI/ML in medicine. Game Changer takes a look at AlphaZero’s chess strategies, while The Hundred-Page Machine Learning Book offers a condensed overview of ML. The Association for the Advancement of AI conference (27 Jan – 1 Feb) begins to release videos of the conference, including an Oxford-style debate on the Future of AI. And finally, Andy and Dave conclude with a “hype teaser” for next week – with SELF-AWARE robots!



Nesta launches the ‘Mapping AI Governance’ project

‘MIT Technology Review’ Survey of AI and ML research papers

Survey Shows Strong Opposition to ‘Killer Robots’

‘The State of Artificial Intelligence in China’ according to China

DARPA Wants to Build Conscious Robots Using Insect Brains


DeepMind’s AlphaStar defeats professional human gamers at StarCraft II for the 1st time

Discovering Blind Spots in Reinforcement Learning

Shadows used to 'see' around corners

Things of the Week

Report of the Week

Papers of the Week

Books of the Week

Videos of the Week

AAAI-2019 Videos

Episode 2.14

February 1, 2019

Description: CNA’s Center for Autonomy and Artificial Intelligence kicks off its first panel for 2019 with a live recording of AI with AI! Andy and Dave take a step back and look at the broader trends of research and announcements involving AI and machine learning, including: a summary of historical events and issues; the myths and hype, looking at expectations, buzzwords, and reality; hits and misses (and more hype!); and some of the many challenges that show why AI is far from a panacea.


Episode 2.13

January 25, 2019

Andy and Dave discuss Microsoft’s $1.76B five-year service deal with the Department of Defense, US Coast Guard, and the intelligence community; the US Defense Innovation Board announces its first "public listening session" on AI principles; Finland announces an AI experiment to teach 1% of its population the basics of AI; a report from the Center for the Governance of AI and the Future of Humanity Institute reports on American attitudes and trends toward AI; and the Reuters Institute for the Study of Journalism examines UK media coverage of AI. In research news, MIT and the IBM Watson AI Lab dissect a GAN to visualize and understand its inner workings, identifying clusters of neurons that represent concepts; they also created GAN Paint, which lets a user add or subtract elements from a photo. Research from NYU and Columbia trains a single network model to perform 20 cognitive tasks, and discovers that this learning gives rise to compositionality of task representations, where one task can be performed by recombining representations from other tasks. Researchers at the University of Waterloo, Princeton University, and Tel Aviv University demonstrate that a type of machine learning can be undecidable – that is, whether the machine can learn the task can neither be proved nor disproved within standard mathematics. Jeff Huang at Brown University has compiled a list of the best papers at computer science conferences since 1996; McGill and Google Brain offer a condensed Introduction to Deep Reinforcement Learning; Nature launches the inaugural issue of Nature Machine Intelligence; and a paper explores designing neural networks through neuroevolution. Major General Mick Ryan debuts a sci-fi story, “AugoStrat Awakenings”; NeurIPS 2018 makes all videos and slides available; and USNI’s Proceedings publishes an essay from CAPT Sharif Calfee, The Navy Needs an Autonomy Project Office.


Episode 2.12

January 18, 2019

Anna Williams joins Andy and Dave as CNA’s Russia AI and autonomy expert Sam Bendett returns to discuss the latest news and developments from Russia. Sam describes the progress that the Russian Ministry of Defense has made in implementing AI since its announcement of an AI Roadmap in March 2018, including some of the organizations involved and their advances. The group also discusses developments in the Russian civilian AI sector, as well as Russia’s intent to publish a civilian AI Roadmap by mid-year. Sam also describes some of the recent AI research and announcements (which, Andy and Dave note, have less visibility in English-language venues), and the group wraps up with a discussion of the latest developments in Russian military unmanned systems.

for related materials.

Episode 2.11

January 11, 2019

Andy and Dave discuss Rodney Brooks' predictions on AI from early 2018, and his (ongoing) review of those predictions. The European Commission releases a report on AI and Ethics, a framework for "Trustworthy AI." DARPA announces the Knowledge-directed AI Reasoning over Schemas (KAIROS) program, aimed at understanding "complex events." The Standardized Project Gutenberg Corpus attempts to provide researchers broader access to data across the project's complete holdings. And MORS announces a special meeting on AI and Autonomy at JHU/APL in February. In research, Andy and Dave discuss work from Keio University, which shows that slime mold can approximate solutions to NP-hard problems in linear time (and differently from other known approximations). Researchers in Spain, the UK, and the Netherlands demonstrate that kilobots (small 3 cm robots) with basic communication rule-sets will self-organize. Research from UCLA and Stanford creates an AI system that mimics how humans visualize and identify objects by feeding the system many pieces of an object, called "viewlets." NVIDIA shows off its latest GAN, which can generate fictional human faces that are essentially indistinguishable from real ones; further, they structure their generator to provide more control over various properties of the latent space (such as pose, hair, and face shape). Other research attempts to judge a paper on how good it looks. And in the "click-bait" of the week, Andy and Dave discuss an article from TechCrunch, which misrepresented bona fide (and dated) AI research from Google and Stanford. Two surveys provide overviews on different topics: one on the safety and trustworthiness of deep neural networks, and the other on mini-UAV-based remote sensing. A report from CIFAR summarizes national and regional AI strategies (minus the US and Russia).
In books of the week, Miguel Hernán and James Robins are working on a Causal Inference Book, and Michael Nielsen has provided a book on Neural Networks and Deep Learning. CW3 Jesse R. Crifasi provides a fictional peek into a combat scenario involving AI. And Samim Winiger has started a mini documentary series, "LIFE," on the intersection of humans and machines.

for related materials.


‘Best of’ Top-of-the-Year AI Predictions

European Commission Releases Report on AI and Ethics | (37 page) Report | About the High-Level Expert Group on AI

DARPA announces KAIROS program

Standardized Project Gutenberg Corpus (SPGC) Announced

Special Meeting on ‘Artificial Intelligence and Autonomy’ at Johns Hopkins APL | Draft agenda


Amoeba finds approximate solutions to NP-hard problem in linear time

Morphogenesis in robot swarms

New AI system mimics how humans visualize and identify objects | Technical paper (behind paywall)

A Style-Based Generator Architecture for Generative Adversarial Networks

Things of the Week

Silly Research of the Week – Deep Paper Gestalt | Video samples (3 min)

‘Clickbait’ Research of the Week – This clever AI hid data from its creators to cheat at its appointed task | Technical paper: “CycleGAN, a Master of Steganography”

Surveys of the Week – Safety and Trustworthiness of Deep Neural Networks: A Survey (90 page paper) | Mini-UAV-based Remote Sensing: Techniques, Applications and Prospectives (51 page paper)

Report of the Week – CIFAR Report on National and Regional AI Strategies | Insights into what Russia is working towards outlined by Sam Bendett in DefenseOne (back in July 2018)

Books of the Week – Causal Inference Book | Neural Networks and Deep Learning

Story of the Week – The Human Targeting Solution: An AI Story

Videos of the Week – LIFE – Episode 1: Artificial Life | LIFE – Episode 2: Neurorobotics | About Samim Winiger

Episode 2.10

January 4, 2019

In shorter news items, Andy and Dave discuss the announcement that the Allen Institute for Artificial Intelligence is partnering with Microsoft Research to connect AI2’s Semantic Scholar academic search engine with Microsoft’s Academic Graph. The University of Pavia in Italy demonstrates an artificial neuron (a perceptron) on an actual quantum processor. Another Tesla on Autopilot has an accident; and Waymo demonstrates that pure imitation learning (with 30 million examples) is not sufficient for teaching a model to drive a car. And Tumblr implements a porn-detecting AI. In research topics, researchers with Facebook AI, MIT, and UC Berkeley demonstrate “dataset distillation,” compressing 60,000 MNIST images into 10 synthetic images. Researchers at the University of Maryland demonstrate the ability to hide adversarial attacks from network interpretation; for networks that visually locate the identified item, the network would highlight the “original” item instead of the adversarial one. Adobe and Auburn show that neural networks fail miserably on “out-of-distribution” inputs (or “strange poses of familiar objects”), and they probe deeper into the parameters that cause the misbehavior. In other news, the AI Narratives Report explores how AI is portrayed and perceived. The AI Index releases its 2018 edition. AI researchers have a spirited debate on Twitter about deep learning and symbol manipulation. Quantum Computing: Progress and Prospects provides a deeper look at this nascent technology. And Juergen Schmidhuber gives a TEDx talk on how “true AI” will change everything.

for related materials.


AI2 joins forces with Microsoft Research to upgrade search tools for scientific studies: Semantic Scholar | Microsoft Academic Graph

An Artificial Neuron Implemented on an Actual Quantum Processor

Tesla On Autopilot Slams into Police Car

Tumblr's Porn-Detecting AI Has One Job—and It's Bad at It


Dataset Distillation [Note: results generalize to other interpretation algorithms (different from grad-CAM) as well]

Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects: Technical paper | Code and data

Things of the Week

Reports of the Week

Foundational “Problem” and Paper of the Week

Book of the Week

Videos of the Week

Episode 2.9

December 21, 2018

The Joint Artificial Intelligence Center is up and running, and Andy and Dave discuss some of the newly revealed details. And the rebranded NeurIPS (originally NIPS), the largest machine learning conference of the year, holds its 32nd annual conference in Montreal, Canada, with a keynote discussion on “What Bodies Think About” by Michael Levin. And a group of graduate students has created a community-driven database to provide links to tasks, data, metrics, and results on the “state of the art” for AI. In other news, one of the “best paper” awards at NeurIPS goes to Neural Ordinary Differential Equations, research from the University of Toronto that replaces the nodes and connections of typical neural networks with one continuous computation of differential equations. DeepMind publishes its paper on AlphaZero, which details the announcements made last year on the ability of the neural network to play chess, shogi, and go “from scratch.” And AlphaFold from DeepMind brings machine learning methods to a protein folding competition. In reports of the week, the AI Now Institute at New York University releases its third annual report on understanding the social implications of AI. With a blend of technology and philosophy, Arsiwalla and co-workers break up the complex “morphospace” of consciousness into three categories: computational, autonomy, and social; and they map various examples to this space. For interactive fun generating images with a GAN, check out the “Ganbreeder,” though maybe not right before going to sleep. In videos of the week, “Earworm” tells the tale of an AI that deleted a century; and CIMON, the ISS robot, interacts with the space crew. And finally, Russia24 joins a long history of people dressing up and pretending to be robots.
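The Neural ODE work mentioned above treats a network's hidden state as a continuous trajectory, dh/dt = f(h, t), evaluated by an ODE solver rather than a fixed stack of layers (a residual network corresponds to one Euler step per layer). A minimal, illustrative sketch using a fixed-step Euler solver and a toy dynamics function (all names and parameter values here are hypothetical, not from the paper):

```python
import numpy as np

def f(h, t, W):
    # Learned dynamics: in a real Neural ODE this is a small neural net;
    # a single tanh layer stands in for it here.
    return np.tanh(W @ h)

def odeint_euler(f, h0, t0, t1, steps, W):
    # Fixed-step Euler integration of dh/dt = f(h, t).
    # A ResNet is the special case of one Euler step per "layer".
    h, t = h0.copy(), t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * f(h, t, W)
        t += dt
    return h

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))   # toy "learned" parameters
h0 = rng.normal(size=4)                  # input state
h1 = odeint_euler(f, h0, 0.0, 1.0, steps=100, W=W)
```

The published work goes further, using adaptive solvers and backpropagating through the solve via the adjoint method; this sketch shows only the forward evaluation.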

for related materials.

Episode 2.8

December 14, 2018

This week, Andy and Dave discuss the US Department of Commerce’s announcement that it will consider regulating AI as an export; counter to that idea, Amazon makes freely available 45+ hours of training materials on machine learning, with tailored learning paths; Oren Etzioni proposes ideas for broader regulation of AI research that attempt to balance the benefits with the potential harms; DARPA tests its CODE program for autonomous drone operations in the presence of GPS and communications jamming; a Chinese researcher announces the use of CRISPR to produce the first gene-edited babies; and the 2018 ACM Gordon Bell Prize goes to Lawrence Berkeley National Lab for achieving the first exa-scale (10^18) application, running on over 27,000 NVIDIA GPUs. Uber and OpenAI announce advances in the exploration and curiosity of algorithms that help them “win” Montezuma’s Revenge. Research from Facebook AI suggests that pre-training convolutional neural nets may provide fewer benefits over random initialization than previously thought. Google Brain examines how well ImageNet architectures transfer to other tasks. A paper from INDOPACOM describes the exploitation of big data for special operations forces. Yuxi Li publishes a technical paper on deep reinforcement learning. And a recent paper explores self-organized criticality as a fundamental property of neural systems. Christopher Bishop’s Pattern Recognition and Machine Learning is available online, and Architects of Intelligence provides one-on-one conversations with 23 AI researchers. Maxim Pozdorovkin releases “The Truth about Killer Robots” on HBO, and finally, a Financial Times article over-hypes (anti-hypes?) a questionable graph on Chinese AI investments.

for related materials.

Episode 2.7

December 7, 2018

In the latest news, Andy and Dave discuss OpenAI releasing “Spinning Up in Deep RL,” an online educational resource; Google AI and the New York Times team up to digitize over 5 million photos and find “untold stories;” China is recruiting its brightest children to develop AI “killer bots;” China unveils the world’s first AI news anchor; and Douglas Rain, the voice of HAL 9000, has died at age 90. In research topics, Andy and Dave discuss research from MIT’s Tegmark and Wu, which attempts to improve unsupervised machine learning by using a framework that more closely mirrors scientific thought and process. Albrecht and Stone examine the issue of autonomous agents modeling other agents, which leads to an interesting list of open problems for future research. Research from Stanford makes an empirical examination of bias and generalization in deep generative models, and Andy notes striking similarities to previously reported experiments in cognitive psychology. Other research surveys data collection for machine learning, from the perspective of the data. In blog posts of the week, the Mad Scientist Initiative reveals the results from a recent competition, which suggest themes for the impacts of AI on the future battlefield; and Piekniewski follows up his May 2018 post “Is an AI Winter On Its Way?” with a review of cracks appearing in the AI façade, with particular focus on self-driving vehicles. And Melanie Mitchell provides some insight about AI hitting the barrier of meaning. CSIS publishes a report on the importance of the AI ecosystem. And another paper draws insights from the social sciences to examine explanation in AI.
Finally, MIT Press has updated one of the major sources on reinforcement learning with a second edition; AI Superpowers examines the global push toward AI; The Eye of War examines how perceptual technologies have shaped the history of war; SparkCognition publishes HyperWar, a collection of essays from leaders in defense and emerging technology; Major Voke’s entire presentation on AI for C2 of Airpower is now available; and the Bionic Bug Podcast has an interview with CNA’s own Sam Bendett to talk AI and robotics.

for related materials.


OpenAI Releases Spinning Up in Deep RL

NY Times Using Google AI to Digitize 5M+ Photos and Find ‘Untold Stories’

China’s brightest children are being recruited to develop AI ‘killer bots’

World's first AI news anchor unveiled in China | Here’s Why China’s AI Newscaster Is A Bad Idea For The US | Video sample

R.I.P. HAL: Douglas Rain, Voice of Computer In '2001,' Dies At 90


Toward an AI Physicist for Unsupervised Learning | Overview

Autonomous Agents Modelling Other Agents: A Comprehensive Survey and Open Problems

Bias and Generalization in Deep Generative Models: An Empirical Study

A Survey on Data Collection for Machine Learning: Big Data - AI Integration Perspective

Things of the Week

Blog Posts of the Week

Report of the Week - Artificial Intelligence and National Security: The Importance of the AI Ecosystem

Technical Paper of the Week - Explanation in AI: Insights from the social sciences | Preprint (66 pages)

Technical Book of the Week - Reinforcement Learning: Introduction | Open access (HTML, source code) | PDF (5 Nov 2017 book draft provided by Sutton, 445 pages) | Hardcopy (for purchase)

Nontechnical Books of the Week

Videos of the Week

Episode 2.6

November 30, 2018

Andy and Dave discuss research from Hasani and colleagues that uses a natural method for growing a neural network, which they use to demonstrate that a 12-neuron network can be trained to steer and park a rover robot to a given spot. Jeff Hawkins and co-workers describe a new theory of intelligence, positing that every part of the human neocortex learns complete models of objects and concepts, resulting in a "thousand brains theory of intelligence." The UK publishes a 2000+ page report on the state of the AI industry in the UK. A technical paper asks whether multiagent deep reinforcement learning is the answer or the question. The books of the week include Sejnowski’s The Deep Learning Revolution and Gerrish’s How Smart Machines Think. And the videos of the week include the Deep Learning Summer School series and the Reinforcement Learning Summer School series.

for related materials.

Episode 2.5

November 23, 2018

In the latest news, Andy and Dave discuss Microsoft’s announcement that it will sell artificial intelligence and other advanced technology to the Pentagon; Google is giving $25M to projects that use artificial intelligence for humanitarian purposes; Stanford announces the Human-Centered AI initiative; AdaNet offers fast and flexible AutoML with “learning guarantees;” and a “human brain” supercomputer (using neuromorphic computing) with 1 million processors is switched on for the first time. In other stories, Andy and Dave discuss the AI-generated portrait that sold at a Christie’s auction for $432,500. MIT Media Lab announces the results of its “Moral Machine” experiment, which asked people around the globe to choose how a self-driving vehicle should behave in different moral dilemmas. And Google AI describes its “fluid annotation” method, an exploratory machine learning-powered interface for faster image annotation.

for related materials.

Episode 2.4

November 16, 2018

Deep generative models can generate “spurious” samples (i.e., errors). Researchers from Université Paris-Saclay and PSL Research University explore a basic question: “Is it possible to get rid of all spurious samples [in deep generative models] without sacrificing coverage of a model?” Their research suggests a “Heisenberg Uncertainty”-like tradeoff between full coverage and spurious objects. DeepMind announces large-scale GAN training for natural image synthesis with high fidelity. And Andy discusses Topaz’s “AI Gigapixel,” an AI-driven software capability that intelligently adds information to photos to increase their resolution/size. In the paper of the week, researchers flip the Turing Test and ask humans what one word they would use to convince a human judge that they’re alive; the results are underwhelming. On a related note, Andy recalls Brian Christian’s achievement of being The Most Human Human. For books of the week, the UK’s Development, Concepts, and Doctrine Centre publishes the 6th edition of Global Strategic Trends; papers from the 3rd conference on the Philosophy and Theory of AI are available in a single publication; and Minsky’s Society of Mind gets a free hyperlinked online version (with the classic illustrations). In the video of the week, the Center for Technology Innovation asks “Who should answer the ethical questions surrounding AI?” And in the “silliness of the week,” a robot appears at a UK parliamentary meeting and “talks” to MPs about the future of AI in the classroom.

for related materials.


Spurious samples in deep generative models: bug or feature?

Large Scale GAN Training for High Fidelity Natural Image Synthesis | ***Kudos (rarely, if ever, seen in research papers): Appendix G: Negative Results

“Deep Fake” Images and Videos | Technical paper: Deep video portraits

Things of the Week

Paper of the Week

Books of the Week

Video of the Week

Silliness of the Week

Episode 2.3

November 9, 2018

Andy and Dave discuss the latest corporate buzz on the Department of Defense’s JEDI contract, in which Microsoft employees publish an open letter and accuse the company of straying from its AI principles; a new DARPA program seeks to codify humans’ basic common sense through computational models and repositories; MIT establishes the Stephen A. Schwarzman College of Computing, a $1B initiative and the single largest by an American academic institution; MIT also announces an Autonomous Vehicle Technology study, a data-driven effort for “safe and enjoyable” human-AI interaction in driving; Wired takes a look at initial data on accidents involving self-driving vehicles; and researchers (at least 23!) publish a complete electron microscopy volume of the brain of the fruit fly. In deeper topics, Andy and Dave discuss research from the University of Louisville that shows the failure of neural networks to understand optical illusions. Researchers from UPenn, ARL, and NYU demonstrate a drone that can be controlled by your eyes. Stocco and colleagues demonstrate BrainNet, a “social network” that allows three people to transmit “thoughts” to each other. And researchers at Ecole Centrale de Lyon have created a new framework that may allow robots to autonomously optimize their own hyper-parameters – about which Dave tries to look on the bright side.

for related materials.

Episode 2.2

November 2, 2018

Andy and Dave focus on a variety of big news items, including: Google bows out of the bidding for the Pentagon’s “JEDI” cloud contract, valued at $10 billion; the Government Accountability Office releases a 50-page report on the poor state of the cybersecurity of U.S. weapons systems; “The Big Hack” makes big news, with Bloomberg reporting that China inserted a tiny chip on hardware in order to infiltrate U.S. networks; the U.S. Department of Transportation looks to rewrite safety rules in order to accommodate fully driverless vehicles on public roads; two leaders in collaborative robots (Rethink and Jibo) close their doors; and DeepMind announces efforts to discuss “technical AI safety,” including the areas of specification (true intentions), robustness (safety upon perturbation), and assurance (understanding and control). The latter topic launches further discussion into ethics-related efforts for AI, including the UK Machine Intelligence Garage Ethics Committee; a paper on the motivations and risks of machine ethics; and research from North Carolina State University showing that the Association for Computing Machinery’s code of ethics does not appear to affect the decisions made by software developers. All the excitement somehow causes Dave to invoke Jean Valjean when he means to say Javert. C’est la vie! Finally, Andy describes a couple of motherlodes of papers; Biostorm by Anthony DeCapite makes the story of the week; ZDNet ranks 36 of the best movies on AI; AutoML is prepping an open access book on AutoML; and Dave goes fan-boy over the Automata web series from Penny Arcade.

for related materials.

Episode 2.1

October 26, 2018

Welcome to Version 2.0 of AI with AI! Dave starts off by trying to explain the weird podcast titles, and he plugs Andy’s (@ai_ilachinski) and his (@crypticnarwhal) Twitter accounts. Andy and Dave then get down to business discussing Britain’s “successful” trials of using AI (“SAPIENT”) in urban battlefield scanning to identify enemy movements; the IEEE launches an ethics certification program for autonomous and intelligent systems; the U.S. Department of Energy invests $218M in Quantum Information Science; and DARPA announces the Subterranean Challenge, for technologies to augment underground operations, wherein Dave makes a dire prediction of Tolkien proportions! Andy and Dave then delve greedily and deeply into a series of topics on counter-AI. They start by discussing Dedrone, which has developed a capability to detect and track swarms (of robots/drones). Researchers in Korea use an AI-enabled drone to herd flocks of birds (diverting them from designated airspace). Researchers at the University at Albany, with GE, demonstrate the ability to attack object detectors (Faster Regional Convolutional Neural Networks) using imperceptible patches on the background; and researchers at the Georgia Institute of Technology, with Intel, announce ShapeShifter, a targeted physical attack on the Faster R-CNN object detectors found in “state-of-the-art” systems (such as the current generation of self-driving vehicles).
On the other side, Luca de Alfaro at the University of California, Santa Cruz, published research into creating neural networks with built-in resistance to adversarial attacks, by reducing the neural networks’ “local linearity.” After a quick touch on research from Google Research on simplifying and compacting neural networks (for resource-constrained devices) without floating point operations or multiplications, Andy recommends a paper on Learning Causality; August Cole’s Angry Trident makes the story of the week; Interpretable Machine Learning (by Molnar) is the book of the week, along with Pattern Classification by Duda, Hart, and Stork; and Christopher Moore explores the Limits of Computation in a two-part video series.

for related materials.

Season 1

Episode 50

October 19, 2018

Andy and Dave discuss the “Transparency by Design Network” (TbD-net), research from MIT Lincoln Lab that uses a collection of modular neural nets to perform specific image identification subtasks. The resulting output places heat-map blobs over objects in an image, which allows a human analyst to see how a module is interpreting the image (and to use that information to further improve the model’s accuracy). In research from DeepMind and the University of Oxford, researchers attempt to solve the problem that neural nets have in not manipulating numerical information well outside of the range of values encountered during training. Researchers created a Neural Accumulator and a Neural Arithmetic Logic Unit (in essence, representing numerical quantities as individual neurons without a nonlinearity) to allow a system to learn to represent and manipulate numbers in a systematic way. Georgia Tech has developed a machine learning-based method to automate the generation of novel video games, using Super Mario Bros, Mega Man, and Kirby’s Adventure as inputs. And Kate Crawford and Vladan Joler have created a massive visualization of the many processes that make an Amazon Echo work, in the “Anatomy of an AI system.” DARPA celebrates its 60th anniversary with a 184-page paper that highlights its research over the last 60 years; Google launches a “What-If Tool” for probing datasets at a non-coding level; Neural Networks and Learning Machines (3rd Edition) by Simon Haykin is available for free. Robin R. Murphy curates information on “Robotics Through Science Fiction” (and more); all of the keynotes and presentations from the Joint Multi-Conference on Human-Level Artificial Intelligence are available online, likely requiring a week of vacation to view them all; and the 11th International Conference on Swarm Intelligence will be in Rome at the end of October 2018.
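The Neural Accumulator (NAC) described above keeps its output a plain linear combination of inputs (no squashing nonlinearity), with weights biased toward −1, 0, or +1 so that learned addition and subtraction extrapolate beyond the training range. A minimal forward-pass sketch of that weight parameterization; the variable names and toy values here are illustrative, not from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nac_forward(x, W_hat, M_hat):
    # NAC weight matrix: the elementwise product tanh(W_hat) * sigmoid(M_hat)
    # saturates toward -1, 0, or +1, nudging each weight toward
    # exact signed addition.
    W = np.tanh(W_hat) * sigmoid(M_hat)
    # Output is a plain linear map -- no output nonlinearity,
    # which is what lets the learned arithmetic extrapolate.
    return W @ x

# With saturated parameters, W approximates [1, 1] and the unit adds.
W_hat = np.array([[10.0, 10.0]])   # tanh -> ~1
M_hat = np.array([[10.0, 10.0]])   # sigmoid -> ~1
x = np.array([3.0, 4.0])
y = nac_forward(x, W_hat, M_hat)   # ~7.0
```

The full NALU extends this with a gated log-space path so the same idea covers multiplication and division; this sketch shows only the additive cell.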

for related materials.


(MIT Lincoln Lab - Intelligence and Decision Technologies Group) Transparency by Design Network (TbD-net): AI system uses transparent, human-like reasoning to solve problems

(DeepMind/Univ Oxford/Univ College London) NALU - Neural Arithmetic Logic Units

(Georgia Tech) This AI mashes up existing games to create new ones

Tour-de-Force Visualization: Anatomy of an AI system

Things of the Week

Paper of the Week - DARPA’s history from 1958-2018  | 60th Anniversary Symposium homepage

App of the Week - Google’s “What If?” Tool

Technical Book of the Week - Neural Networks and Learning Machines, by Simon Haykin

Science-Fiction & AI Things of the Week -

Video of the Week - All keynotes and presentations from the Joint Multi-Conference on Human-Level Artificial Intelligence (HLAI), held 22-25 August, 2018 in Prague, Czech Republic

Episode 49

October 12, 2018

Andy and Dave discuss an online essay by Tim Dutton, which summarizes the AI strategies that nations have published in the last year and a half. Sentient Investment Management announces plans to liquidate its hedge fund that used AI to forecast investment strategies. IBM spearheads an effort to create standards for AI developers to demonstrate the fairness of their AI algorithms, through a Supplier’s Declaration of Conformity. Google announces an Unrestricted Adversarial Examples Challenge, with “birds versus bicycles,” where applicants can either submit a defender (an image classifier that will resist adversarial attacks) or submit an attacker (an adversarial attack that attempts to make the defender declare a confident, incorrect answer). The Drone Racing League announces a new competition for teams developing AI pilots for drone racing. And DARPA announces research that has allowed a paralyzed man to send (and receive) signals to three drones simultaneously, through a surgically implanted microchip in the brain.

for related materials.

Episode 48

September 28, 2018

Dr. Larry Lewis, the Director of CNA’s Center for Autonomy and Artificial Intelligence, joins Andy and Dave to provide a summary of the recent United Nations Convention on Certain Conventional Weapons meeting in Geneva on Lethal Autonomous Weapon Systems. Larry discusses the different viewpoints of the attendees, and walks through the draft document that the group published on “Emerging Commonalities, Conclusions, and Recommendations.” The topics include: Possible Guiding Principles; characterization to promote a common understanding; human elements and human-machine interactions in LAWS; review of related technologies; possible options; and recommendations (SPOILER ALERT: the group recommends 10 days of discussion for 2019).

for related materials.


GGE on LAWs - Emerging Commonalities, Conclusions and Recommendation

Center for Autonomy and AI blog: CNA Statement to UN Group of Government Experts on Lethal Autonomous Weapon Systems, August 29 2018, Larry Lewis

CNA report: AI and Autonomy in War: Understanding and Mitigating Risks, by Larry Lewis

Episode 47

September 14, 2018

Andy and Dave briefly discuss the results from the Group of Governmental Experts meetings on Lethal Autonomous Weapons Systems in Geneva; the Pentagon releases its Unmanned Systems Integrated Roadmap 2017-2042; Google announces DataSet Search, a curated pool of datasets available on the internet; California endorses a set of 23 AI Principles in conjunction with the Future of Life Institute; and registration for the Neural Information Processing Systems (NIPS) 2018 conference sells out in just under 12 minutes. Researchers at DeepMind announce a Symbol-Concept Association Network (SCAN) for learning abstractions in the visual domain in a way that mimics human vision and word acquisition. DeepMind also presents an approach to "catastrophic forgetting," using a Variational Autoencoder with Shared Embeddings (VASE) method to learn new information while protecting previously learned representations. Researchers from the University of Maryland and Cornell demonstrate the ability to poison the training data set of a neural net image classifier with innocuous poison images. Research from the University of South Australia and Flinders University attempts to link personality with eye movements. Research from OpenAI, Berkeley, and Edinburgh looks at curiosity-driven learning across 54 benchmark environments (including video games and physics-engine simulations), showing that agents learn to play many Atari games without using any rewards, that rally-making behavior emerges in two-player Pong, and more. Finally, Andy shares an interactive app that allows users to “play” with a Generative Adversarial Network (GAN) in a browser; “Franken-algorithms” by Andrew Smith is the paper of the week; “Autonomy: The Quest to Build the Driverless Car” by Burns and Shulgan is the book of the week; and for the videos of the week, Major Voke offers thoughts on AI in the Command and Control of Airpower, and Jonathan Nolan releases “Do You Trust This Computer?”

for related materials.


GGE on LAWs - Emerging Commonalities, Conclusions and Recommendation

Pentagon Unmanned Systems Integrated Roadmap 2017-2042

DataSet Search: Google launches new search engine to help researchers locate online data

State of California Endorses Asilomar AI Principles

The Neural Information Processing Systems (NIPS) 2018 conference sold out in 11 min 38 secs!


Creating new visual concepts by recombining familiar ones

An assault on “catastrophic forgetting” - toward an AI that remembers

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks | Technical paper

AI can track 200 eye movements to determine personality traits

(OpenAI, UC Berkeley, Univ. of Edinburgh) Large-Scale Study of Curiosity-Driven Learning

Things of the Week

App of the Week - GAN Lab | GAN Lab: Understanding Complex Deep Generative Models using Interactive Visual Experimentation

Paper of the Week - Franken-algorithms: the deadly consequences of unpredictable code

Book of the Week - Autonomy: The Quest to Build the Driverless Car—And How It Will Reshape Our World, by Lawrence Burns and Christopher Shulgan

Videos of the Week

Episode 46

September 7, 2018

Andy and Dave discuss the latest developments in OpenAI’s AI team that competed against human players in Dota 2, a team-based tower defense game. Researchers published a method for probing Atari agents to understand where the agents were focusing when learning to play games (and to understand why they are good at games like Space Invaders, but not at Ms. Pac-Man). A DeepMind AI can match health experts when spotting eye diseases from optical coherence tomography (OCT) scans; it uses two networks to segment the problem, which also gives the AI a way to indicate which portions of the scans prompted the diagnosis. Research from Germany and the UK showed that children may be especially vulnerable to peer pressure from robots; the experiments replicated Asch’s social experiments from the 1950s, but interestingly, adults did not show the same vulnerability to robot peer pressure. Research from Rosenfeld, Zemel, and Tsotsos showed that “minor” perturbations in images (such as shifting the location of an elephant) can cause misclassifications to occur, again highlighting the potential for failures in image classifiers. Andy recommends “The Seven Tools of Causal Inference with Reflections on Machine Learning” by Pearl; Algorithms for Reinforcement Learning by Szepesvari is available online; Robin Sloan has a novel, Sourdough, that makes much use of AI and robots; Wolfram has an interview on the computational universe; a new documentary on AI looks at the life and role of Geoffrey Hinton; and Josh Tenenbaum examines the issues of “Growing a Mind in a Machine.”

for related materials.


(From episode #43) OpenAI is “Beating Humans at ‘Dota 2’”? – Ahh, not so fast! – Team of pro gamers from Brazil defeated OpenAI’s Dota-2 AI:

Saliency Maps - (Aug 17) Science Mag: Why does AI stink at certain video games? Researchers made one play Ms. Pac-Man and Space Invaders to find out
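The saliency-map probing mentioned above asks which input pixels most influence an agent's decision. A perturbation-based version can be sketched on a toy linear "policy" (purely illustrative; the actual work probes trained deep Atari agents):

```python
import numpy as np

def policy_score(frame, weights):
    # Toy stand-in for an agent's action score: a linear readout of the frame.
    return float((frame * weights).sum())

def perturbation_saliency(frame, weights, eps=1.0):
    """Perturb each pixel and record how much the score changes;
    large changes mark the pixels the 'agent' is attending to."""
    base = policy_score(frame, weights)
    saliency = np.zeros_like(frame)
    for idx in np.ndindex(frame.shape):
        perturbed = frame.copy()
        perturbed[idx] += eps
        saliency[idx] = abs(policy_score(perturbed, weights) - base)
    return saliency

rng = np.random.default_rng(0)
frame = rng.random((4, 4))
weights = np.zeros((4, 4))
weights[1, 2] = 3.0  # this toy policy only "looks at" pixel (1, 2)
sal = perturbation_saliency(frame, weights)
r, c = np.unravel_index(sal.argmax(), sal.shape)
print(int(r), int(c))  # → 1 2: the probe recovers the attended pixel
```

The same occlude-and-measure idea scales to real agents by blurring image patches instead of single pixels.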

(Aug 13) DeepMind AI matches health experts at spotting eye diseases | Details | Technical paper (open access)

(Aug 15) Children may be especially vulnerable to peer pressure from robots

(Aug 9) The Elephant in the Room – “ML vision” is far from “solved” (non-adversarial “adversarial” problems!)
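A toy illustration of the point above: a classifier whose features are tied to absolute pixel positions, and so has no translation invariance, flips its prediction when the "object" merely moves (a hypothetical two-class model, not the paper's setup):

```python
import numpy as np

# Toy "classifier" whose weights are tied to absolute pixel positions,
# so it has no translation invariance (illustrative only).
W = np.zeros((2, 5, 5))
W[0, 1, 1] = 1.0   # class 0 detector looks at pixel (1, 1)
W[1, 1, 3] = 1.0   # class 1 detector looks at pixel (1, 3)

def predict(img):
    scores = (W * img).sum(axis=(1, 2))
    return int(scores.argmax())

img = np.zeros((5, 5))
img[1, 1] = 1.0                    # "object" at (1, 1)
shifted = np.roll(img, 2, axis=1)  # the same object, shifted two pixels right

print(predict(img), predict(shifted))  # → 0 1: the label flips with position
```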

Things of the Week

Paper of the Week: The Seven Tools of Causal Inference with Reflections on Machine Learning, by Judea Pearl, UCLA CS Dept

Technical Book of the Week: Algorithms for Reinforcement Learning, by Csaba Szepesvári

Audio of the Week: Interview with Stephen Wolfram, “A Journey Of Computational Complexity” | Short introduction | Full podcast interview

Videos of the Week:

Episode 45

August 31, 2018

In breaking news, Andy and Dave discuss the Convention on Conventional Weapons meeting on lethal autonomous weapons systems (LAWs) at the United Nations, where more than 70 countries are participating in the sixth meeting since 2014. Highlights include the priorities for discussion, as well as the UK delegation's role and position. The Pentagon’s AI programs get a boost in the defense budget. DARPA announces the Automating Scientific Knowledge Extraction (ASKE) project, with the lofty goal of building an AI tool that can automatically generate, test, and refine its own scientific hypotheses. Google employees react to and protest the company’s secret, censored search engine (Dragonfly) for China. The Electronic Frontier Foundation releases a white paper on Mitigating the Risks of Military AI, which includes applications outside of the “kill chain.” And Brookings releases the results of a survey that asks people whether AI technologies should be developed for warfare.

for related materials.

Episode 44

August 24, 2018

The Director for CNA’s Center for Autonomy and AI, Dr. Larry Lewis, joins Dave for a discussion on understanding and mitigating the risks of using autonomy and AI in war. They discuss some of the commonly voiced risks of autonomy and AI, in application for war, but also in general application, which include: AI will destroy the world; AI and lethal autonomy are unethical; lack of accountability; and lack of discrimination. Having examined the underpinnings of these commonly voiced risks, Larry and Dave move on to practical descriptions and identifications of risks for use of AI and autonomy in war, including the context of military operations, the supporting institutional development (including materiel, training, and test & evaluation), as well as the law and policy that govern their use. They wrap up with a discussion about the current status of organizations and thought leaders in the Department of Defense and the Department of the Navy.

for related materials.


Episode 43

August 17, 2018

In breaking news, Andy and Dave discuss the Dota 2 competition between the OpenAI Five team of AIs and a top (99.95th percentile) human team, where the humans won one game in a series of three; the Pentagon signs an $885M AI contract with Booz Allen; MIT builds Cheetah 3, a “blind” robot that has no visual sensors but can climb stairs and maneuver in a space with obstacles; Tencent Machine Learning trains AlexNet in just 4 minutes on ImageNet (breaking the previous record of 11 minutes); researchers at MIT Media Lab have developed a machine-learning model to perceive human emotions; and the 2018 Conference on Uncertainty in AI (UAI) may have been held 7-10 August in Monterey, CA – we’re not certain (but what is certain is that Dave will never tire of these jokes). In other news, IBM Watson reportedly recommended cancer treatments that were “unsafe and incorrect,” and Amazon’s Rekognition software incorrectly identifies 28 lawmakers as crime suspects, about which Andy and Dave yet again highlight the dangerous gap in AI between expectations and reality. Lipton (CMU) and Steinhardt (Stanford) identify “troubling trends” in machine learning research and scientific scholarship. The Institute for Theoretical Physics in Zurich describes SciNet, a neural network that can discover physical concepts (such as the motion of a damped pendulum). A paper by Kott and Perconti makes an empirical assessment of forecasting military technology on the 20-30 year horizon, and finds the forecasts are surprisingly accurate (65-87%). “The Elements of Statistical Learning: Data Mining, Inference, and Prediction” is available online. Andy recommends the Ellison classic story, I Have No Mouth, and I Must Scream, and finally, a video by Percy Liang at Stanford discusses ways of evaluating machine learning for AI.

for related materials.


A team ranked in the 99.95th percentile faced off against OpenAI Five, and won just one match in a series of three.

(July 30) Pentagon Signs $885 Million Artificial Intelligence Contract with Booz Allen

(MIT) “Blind” Cheetah 3 robot can climb stairs littered with obstacles | Video

(Chinese Tencent Machine Learning) Training ImageNet in Four Minutes

(MIT Media Lab) Helping computers perceive human emotions

2018 Conference on Uncertainty in AI (UAI): August 7-10, Monterey, CA


IBM Watson Reportedly Recommended Cancer Treatments That Were 'Unsafe and Incorrect'

Facial Recognition Software Wrongly Identifies 28 Lawmakers As Crime Suspects

Troubling Trends in Machine Learning Scholarship


Technical paper (presented at ICML 2018)

(July 26) SciNet: Discovering physical concepts with neural networks

Things of the Week

Paper of the Week – Long-Term Forecasts of Military Technologies for a 20-30 Year Horizon: An Empirical Assessment of Accuracy

Technical Book of the Week – The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition, by Trevor Hastie, Robert Tibshirani, and Jerome Friedman, Springer-Verlag, 2009

Science-Fiction Story of the Week - Harlan Ellison's I Have No Mouth, and I Must Scream

  • Audio – read by Harlan Ellison himself! (who passed away June 27, 2018)
  • Hardcopy

Video of the Week - How Should We Evaluate Machine Learning for AI? by Percy Liang

Episode 42

August 10, 2018

Continuing in a discussion of recent topics, Andy and Dave discuss research from Johns Hopkins University, which used supervised machine learning to predict the toxicity of chemicals (the results of which beat animal tests). DeepMind probes toward general AI by exploring AI’s abstract reasoning capability; in their tests, they found that systems did OK (75% correct) when problems used the same abstract factors, but fared very poorly if the testing differed from the training set (even in minor variations, such as using dark-colored objects instead of light-colored objects) – in a sense, suggesting that deep neural nets cannot “understand” problems they have not been explicitly trained to solve. Research from Spyros Makridakis demonstrated that traditional statistical methods outperform a variety of popular machine-learning methods (better accuracy and lower computational requirements), suggesting the need for better benchmarks and standards when discussing the performance of machine learning methods. Andy and Dave then discuss two reports from the Center for a New American Security, on Technology Roulette and Strategic Competition in an Era of AI, the latter of which highlights that the U.S. has not yet experienced a true “Sputnik moment.” Research from MIT, McGill, and Masdar IST defines and visualizes the skill sets required for various occupations, and how these contribute to a growing disparity between high- and low-wage occupations. The conference proceedings of Alife2018 (nearly 700 pages) are available for the 23-27 July event. The Art of Future Warfare project features a collection of “war stories from the future,” and over 50 videos are available from the 2018 International Joint Conference on AI.

for related materials.



(June 11, Johns Hopkins Univ) Software beats animal tests at predicting toxicity of chemicals

(July 19, Nature) Controlling an organic synthesis robot with machine learning to search for new reactivity

(July 19) Robot chemist discovers new molecules and reactions

(July 13, DeepMind) Measuring abstract reasoning in neural networks ; Technical paper

(March 27) Statistical and Machine Learning forecasting methods: Concerns and ways forward
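The Makridakis comparison above pits classical statistical baselines against ML forecasters. One such classical baseline, simple exponential smoothing, fits in a few lines (an illustrative sketch, not the paper's code):

```python
def ses_forecast(series, alpha=0.3):
    """Simple exponential smoothing: the next-step forecast is a running
    weighted average of past observations, weighting recent points more."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

history = [10.0, 12.0, 11.0, 13.0, 12.5]
print(round(ses_forecast(history), 4))  # → 11.7328
```

Baselines like this are cheap to compute and hard to beat on many univariate series, which is exactly the paper's point about benchmarking ML forecasters properly.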

(July 13) Unpacking the polarization of workplace skills

Things of the Week

Papers of the Week - Reports released by the Center for a New American Security (CNAS)

Book of the Week – Conference proceedings (MIT Press) of Alife2018, held July 23-27 in Tokyo

Science-Fiction Story Collection of the Week - War Stories from the Future

Videos of the Week - Talks/presentations at the 2018 International Joint Conference on Artificial Intelligence (IJCAI)

Episode 41

August 3, 2018

In breaking news, Andy and Dave discuss the "Future of Life" pledge that various AI tech leaders have signed, promising not to develop lethal autonomous weapons; DARPA announces its Artificial Intelligence Exploration (AIE) program, to provide "unique funding opportunities;" DARPA also announces a Short-Range Independent Microrobotic Platform (SHRIMP) program, which seeks to develop multi-functional tiny robots for use in natural and critical disaster scenarios; GoodAI announces the finalists in the "General AI Challenge," which produced a series of conceptual papers; and a report from the UK's Parliament examines the issues surrounding the government’s use of drones. Then in deeper topics, Andy and Dave discuss various attempts to use AI to predict the FIFA World Cup 2018 champion (all of which failed), which includes a discussion on the appropriate types of questions to which AI is amenable, and also includes an obligatory Star Trek reference. Baidu announced ClariNet, which performs text-to-speech synthesis within one neural network (as opposed to multiple networks).

for related materials.


(July 18, Future of Life) Tech leaders have signed a pledge promising to not develop LAWs

(July 20) DARPA announces its Artificial Intelligence Exploration (AIE) program

  • Program announcement
  • Video - Intro to Third Wave, John Launchbury, Director of DARPA’s Information Innovation Office, (I2O) from 2014-2017 (20 min)

(July 21) DARPA announces new SHRIMP program

(July 18) Finalists of the Solving the AI Race round of the General AI Challenge announced

(July) The UK’s Use of Armed Drones: Working with Partners


(July 15) Why Did AI Fail in the FIFA World Cup 2018? ; Paper: Effective injury forecasting in soccer with GPS training data and machine learning

(July 23) Baidu announces ClariNet, a neural network for text-to-speech synthesis (ClariNet Demos)


Episode 40

July 27, 2018

CNA’s expert on Russian AI and autonomous systems, Samuel Bendett, joins temporary host Larry Lewis (again filling in for Dave and Andy) to discuss Russia’s pursuits with the militarization of AI and autonomy. Russian Ministry of Defense (MOD) has made no secret of its desire to achieve technological breakthroughs in IT and especially artificial intelligence, marshalling extensive resources for a more organized and streamlined approach to information technology R&D. MOD is overseeing a significant public-private partnership effort, calling for its military and civilian sectors to work together on information technologies, while hosting high-profile events aiming to foster dialogue between its uniformed and civilian technologists. For example, Russian state corporation Russian Technologies (Rostec), with extensive ties to the nation’s military-industrial complex, has overseen the creation of a company with the ominous name – Kryptonite. The company’s name – the one vulnerability of a super-hero – was unlikely to be picked by accident. Russia’s government is working hard to see that the Russian technology sector can compete with American, Western and Asian hi-tech leaders. This technology race is only expected to accelerate - and Russian achievements merit close attention.

Episode 39

July 20, 2018

This week Andy and Dave take a respite from the world of AI. In the meantime, Larry Lewis hosts Shawn Steene from the Office of the Secretary of Defense. Shawn manages DOD Directive 3000.09 – US military policy on autonomous weapons – and is a member of the US delegation to the UN’s CCW meetings on Lethal Autonomous Weapon Systems (LAWS). Shawn and Larry discuss U.S. policy, what DOD Directive 3000.09 actually means, and how the future of AI could more closely resemble the android Data than Skynet from the Terminator movies. That leads to a discussion of some common misconceptions about artificial intelligence and autonomy in military applications, and how these misconceptions can manifest themselves in the UN talks. With Data having single-handedly saved the day in the eighth and tenth Star Trek movies (First Contact and Nemesis, respectively), perhaps Star Trek should be required viewing for the next UN meeting in Geneva.

Larry Lewis is the Director of the Center for Autonomy and Artificial Intelligence at CNA. His areas of expertise include lethal autonomy, reducing civilian casualties, identifying lessons from current operations, security assistance, and counterterrorism.

Episode 38

July 13, 2018

In the second part of this epic podcast, Andy and Dave continue their discussion with research from MIT, Vienna University of Technology, and Boston University, which uses human brainwaves and hand gestures to instantly correct robot mistakes. The research uses a combination of electroencephalogram (EEG, brain signals) and electromyogram (EMG, muscle signals) to allow a human (without training) to provide corrective input to a robot while it performs tasks. On a related topic, MIT’s Picower Institute for Learning and Memory demonstrated the rules for human brain plasticity, showing that when one synapse connection strengthens, the immediately neighboring synapses weaken; while suspected for some time, this research showed for the first time how this balance works. Then, research from Stanford and Berkeley introduces Taskonomy, a system for disentangling task transfer learning. This structured approach maps out 25 different visual tasks to identify the conditions under which transfer learning works from one task to another; such a structure would allow data in some dimensions to compensate for the lack of data in other dimensions. Next up, Adobe has developed an AI tool for spotting photoshopped photos, by examining three types of manipulation techniques (splicing, copy-move, and removal), and by also examining local noise features. Researchers at Stanford have used machine learning to recreate the periodic table of elements after providing the system with a database of chemical formulae. And finally, Andy and Dave wrap up with a selection of papers and other media, including CNAS’s AI: What Every Policymaker Needs to Know; a beautifully-done tutorial on machine learning; The Quest for Artificial Intelligence by Nilsson; Non Serviam by Lem; IPI’s Governing AI; the US Congressional Hearing on the Power of AI; and Twitch Plays Robotics.

for related materials.



(MIT) Supervising a robot with one’s brain and hand gestures 

(Stanford/Univ of CA at Berkeley) Taskonomy: Disentangling Task Transfer Learning

Technical paper


Project homepage/data

Additional presentation awards at CVPR18

(Adobe/June 25) Adobe Using AI to Spot Photoshopped Photos

(June 25/Stanford) Atom2Vec: ML Recreates Periodic Table of Elements in Hours

Things of the Week

Papers of the Week –

Book of the Week - The Quest for Artificial Intelligence: A History of Ideas and Achievements, by Nils J. Nilsson (Hard copy)

Science-Fiction Story of the Week - Non Serviam, in A Perfect Vacuum, by Stanislaw Lem

Videos of the Week - 

Just Fun - Twitch Plays Robotics (some interesting crowdsourced/evolved robots)

Episode 37

July 6, 2018

In breaking news, Andy and Dave discuss a potentially groundbreaking paper on the scalable training of artificial neural nets with adaptive sparse connectivity; MIT researchers unveil the Navion chip, which measures only 20 square millimeters, consumes 24 milliwatts of power, can process real-time camera images at up to 171 frames per second, and can be integrated into drones the size of a fingernail; the Chair of the Armed Services Subcommittee on Emerging Threats and Capabilities convened a roundtable on AI with subject matter experts and industry leaders; the IEEE Standards Association and MIT Media Lab launched the Council on Extended Intelligence (CXI) to build a “new narrative” on autonomous technologies, including three pilot programs, one of which seeks to help individuals “reclaim their digital identity;” and the Foundation for Responsible Robotics, which wants to shape the responsible design and use of robotics, releases a report on Drones in the Service of Society. Then, Andy and Dave discuss IBM’s Project Debater, the follow-on to Watson that engaged in a live, public debate with humans on 18 June. IBM spent six years developing Project Debater’s capabilities, producing over 30 technical papers and benchmark datasets along the way; Debater can debate nearly 100 topics. It uses three pioneering capabilities: data-driven speech writing and delivery, listening comprehension, and the ability to model human dilemmas. Next up, OpenAI announces OpenAI Five, a team of 5 AI algorithms trained to take on a human team in the tower defense game Dota 2; Andy and Dave discuss the reasons for the impressive achievement, including that the 5 AI networks do not communicate with each other, and that coordination and collaboration naturally emerge from their incentive structures. The system uses 256 Nvidia graphics cards and 128,000 processor cores; it has taken on (and won against) a variety of human teams, and OpenAI plans to stream a match against a top Dota 2 team in late July.

for related materials.


(June 19) Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science
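The adaptive-sparse-connectivity training above (often called Sparse Evolutionary Training, SET) alternates normal training with a prune-and-regrow step on the weight matrix. A minimal sketch of that step, assuming magnitude-based pruning and random regrowth (illustrative values, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve_sparse_weights(W, frac=0.3):
    """One prune-and-regrow step: drop the smallest-magnitude connections,
    regrow the same number at random empty positions, so the total number
    of connections (the sparsity level) stays constant."""
    alive = np.flatnonzero(W)
    k = int(len(alive) * frac)
    # prune the k weakest existing connections
    weakest = alive[np.argsort(np.abs(W.ravel()[alive]))[:k]]
    W.ravel()[weakest] = 0.0
    # regrow k new connections at random currently-empty positions
    empty = np.flatnonzero(W.ravel() == 0.0)
    grow = rng.choice(empty, size=k, replace=False)
    W.ravel()[grow] = rng.standard_normal(k) * 0.01
    return W

W = np.zeros((8, 8))
idx = rng.choice(64, size=16, replace=False)
W.ravel()[idx] = rng.standard_normal(16)   # start with 16 random connections
W = evolve_sparse_weights(W)
print(np.count_nonzero(W))  # → 16: sparsity preserved after prune-and-regrow
```

In the real algorithm this step runs after every training epoch, letting the sparse topology adapt to the data instead of staying fixed.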

(MIT/June 20) Navion chip: Upgrade helps miniature drones navigate

(June 22) Enlisting Industry Leaders to Help Government Make Sense of AI

(June 22) IEEE and the MIT Media Lab Launch Global Council on Extended Intelligence (CXI)

(June 8, Foundation for Responsible Robotics) Report: Drones in the Service of Society


(June 18) IBM’s Project Debater (more ambitious follow-on to Watson)

(OpenAI/June 25) OpenAI Five: “Algorithmic ‘A team’ crushes humans in complex computer game” - the biggest (breakthrough?!) news of the week/but not without many questions!


Episode 36

June 29, 2018

In breaking news, Andy and Dave discuss the recently unveiled Wolfram Neural Net Repository with 70 neural net models (as of the podcast recording) accessible in the Wolfram Language; Carnegie Mellon and STRUDEL announce the Code/Natural Language (CoNaLa) Challenge with a focus on Python; Amazon releases its Deep Lens video camera that enables deep learning tools; and the Computer Vision and Pattern Recognition 2018 conference in Salt Lake City. Then, Andy and Dave discuss DeepMind’s Generative Query Network, a framework where machines learn to turn 2D scenes into 3D views, using only their own sensors. MIT’s RF-Pose trains a deep neural net to “see” people through walls by measuring radio frequencies from WiFi devices. Research at the University of Bonn is attempting to train an AI to predict future results based on current observations (with the goal of “seeing” 5 minutes into the future), and a healthcare group at Google Brain has been developing an AI to predict when a patient will die, based on a swath of historical and current medical data. Researchers at UC Irvine announced DeepCube, an “autodidactic iteration” method from McAleer et al. that allows solving a Rubik’s Cube without human knowledge. And finally, Andy and Dave discuss a variety of books and videos, including The Next Step: Exponential Life, The Machine Stops, and a TED Talk from Max Tegmark on getting empowered, not overpowered, by AI.

for related materials.


(Wolfram, June 14) Wolfram Neural Net Repository (WNNR)

CoNaLa: The Code/Natural Language Challenge announced

(Amazon, June 18) Deep Lens: Deep learning-enabled video camera launched by Amazon

Computer Vision and Pattern Recognition (CVPR) 2018 – Salt Lake City, June 18-22


(Google/DeepMind) Generative Query Network (GQN) - Neural scene representation and rendering

(MIT) RF-Pose: Seeing Through Walls with Wi-Fi Signals

“Scientists Have Invented a Software That Can 'See' Several Minutes Into The Future”

(Google/Univ of Chicago Medicine/Univ of California in San Francisco/Stanford Univ) Developing an “AI” to Predict When a Patient Will Die

(UC Irvine) DeepCube - Solving the Rubik's Cube Without Human Knowledge
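DeepCube's "autodidactic iteration" learns a value function by working outward from the solved state. The same work-back-from-solved intuition can be shown exactly on a toy puzzle, with breadth-first search standing in for the learned value network (illustrative only, not the paper's method):

```python
from collections import deque

# Toy puzzle: a tuple that must be sorted; a "move" swaps adjacent elements.
def neighbors(state):
    s = list(state)
    for i in range(len(s) - 1):
        t = s[:]
        t[i], t[i + 1] = t[i + 1], t[i]
        yield tuple(t)

def distances_from_solved(solved):
    """BFS outward from the solved state: exact cost-to-solve for every
    reachable state (the table a value network would approximate)."""
    dist = {solved: 0}
    queue = deque([solved])
    while queue:
        s = queue.popleft()
        for n in neighbors(s):
            if n not in dist:
                dist[n] = dist[s] + 1
                queue.append(n)
    return dist

dist = distances_from_solved((1, 2, 3))
print(dist[(3, 2, 1)])  # → 3: moves needed to solve the fully reversed state
```

For a Rubik's Cube the state space is far too large to tabulate, which is why DeepCube trains a neural network on states sampled near the solved state instead.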

Things of the Week

Book of the Week - The Next Step: Exponential Life (BBVA / OpenMind)

Science-Fiction Book of the Week - The Machine Stops, by E.M. Forster

Video of the Week -  Max Tegmark @TED2018 - How to get empowered, not overpowered, by AI

Episode 35

June 22, 2018

In recent news, Andy and Dave discuss a recent Brookings report on the view of AI and robots based on internet search data; a Chatham House report on AI anticipates disruption; Microsoft computes the future with its vision and principles on AI; the first major AI patent filings from DeepMind are revealed; biomimicry returns, with IBM using "analog" synapses to improve neural net implementation, and Stanford researchers develop an artificial sensory nervous system; and Berkeley DeepDrive provides the largest self-driving car dataset for free public download. Next, the topic of "hard exploration games with sparse rewards" returns, with a Deep Curiosity Search approach from the University of Wyoming, where the AI gets more freedom and reward from exploring ("curiosity") than from performing tasks as dictated by the researchers. From CogX 2018, work from Martinez-Plumed attempts to "Forecast AI," but largely highlights the challenges in making comparisons due to the neglected, or un-reported, aspects of developments, such as the data, human oversight, computing cycles, and much more. From the Google AI Blog, researchers improve deep learning performance by finding and describing the transformation policies of the data, and using that information to increase the amount and diversity of the training dataset. Then, Andy and Dave discuss attempts to use drone surveillance to identify violent individuals (for good reasons only, not for bad ones). And in a more sporty application, "AI enthusiast" Chintan Trivedi describes his efforts to train a bot to play a soccer video game by observing his own play. Finally, Andy recommends an NSF workshop report, a book on AI: Foundations of Computational Agents, Permutation City, and over 100 hours of video from the CogX 2018 conference.

for related materials.


(June 7, Brookings) Report on Views of AI, Robots, and Automation based on Internet Search Data

(June 14, Chatham House Report) AI and International Affairs: Disruption Anticipated

(Microsoft) The Future Computed: AI and its role in society

(DeepMind) First major AI patent filings revealed

(June 6) From the “Biomimicry Department”: Synapses and a Sense of Touch

(June 8) Berkeley DeepDrive, the largest-ever self-driving car dataset, has been released by the BDD Industry Consortium for free public download


(University of Wyoming) Deep Curiosity Search: Intra-Life Exploration Improves Performance on Challenging Deep Reinforcement Learning Problems

(June 2/CogX18) Forecasting AI: Accounting for the Neglected Dimensions of AI Progress

(June 4, Google/AI Blog) Improving Deep Learning Performance with AutoAugment
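AutoAugment's core idea above, applying a learned policy of image transformations to enlarge and diversify the training set, can be mimicked with a toy hand-written policy (the ops and probabilities here are hypothetical stand-ins, not Google's searched policies):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy augmentation "policy": each op fires with its own probability.
def flip(img):
    return np.fliplr(img)

def rot90(img):
    return np.rot90(img)

def invert(img):
    return 1.0 - img

POLICY = [(flip, 0.5), (rot90, 0.3), (invert, 0.2)]

def augment(img):
    """Apply each policy op stochastically, yielding a varied copy."""
    for op, p in POLICY:
        if rng.random() < p:
            img = op(img)
    return img

img = rng.random((4, 4))
batch = [augment(img) for _ in range(8)]  # many diverse samples from one image
print(len(batch), batch[0].shape)
```

AutoAugment's contribution is searching for the policy (which ops, probabilities, and magnitudes) automatically per dataset, rather than hand-picking it as above.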

Eye in the Sky: Real-Time Drone Surveillance System (DSS) for Violent Individuals Identification

(June 14) Building a Deep Neural Network to play FIFA18

Paper of the week (28 pages) – Report from an NSF workshop in May 2017

Technical Book of the Week – AI: Foundations of Computational Agents (Second Edition)

Science-Fiction Book of the Week – Permutation City by Greg Egan


(June 11/12) CognitionX 2018 Conference in London

Day 1

Day 2

Episode 34

June 15, 2018

In breaking news, Andy and Dave discuss Google’s decision not to renew the contract for Project Maven, as well as their AI Principles; the Royal Australian Air Force holds a biennial Air Power Conference with a theme of AI and cyber; the Defense Innovation Unit Experimental (DIUx) releases its 2017 annual report; China holds a Defense Conference on AI in cybersecurity, and Nvidia’s new Xavier chip packs $10k worth of power into a $1299 box. Next, Andy and Dave discuss a benevolent application of adversarial attack methods, with a “privacy filter” for photos designed to stop AI face detection (reducing detection from nearly 100 percent to 0.5 percent). MIT used AI in the development of nanoparticles, training neural nets to “learn” how a nanoparticle’s structure affects its behavior. Then the remaining topics dip deep into the philosophical realm, starting with a discussion on empiricism and the limits of gradient descent, and how philosophical concepts of empiricist induction compare with critical rationalism. Next, the topic of a potential AI Winter continues to percolate with a viral blog post from Piekniewski, leading into a paper from Berkeley/MIT that finds a 4-15% reduction in accuracy for CIFAR-10 classifiers on a new set of similar test images (bringing into doubt the robustness of these systems). Andy shares a possibly groundbreaking paper on “graph networks” that provides a new conceptual framework for thinking about machine learning. And finally, Andy and Dave close with some media selections, including Blood Music by Greg Bear and Swarm by Frank Schatzing.

for related materials.


(June 2) Google Plans Not to Renew Its Contract for Project Maven, a Controversial Pentagon Drone AI Imaging Program

(March 20-21) Royal Australian Air Force’s biennial Air Power Conference

Defense Innovation Unit Experimental (DIUx) - Annual Report 2017

(May 12, China) DefCon2018 conference dedicated to AI in cybersecurity: Highlights, presentations

(June 3) Nvidia's Jetson Xavier AI chip boasts $10,000-worth of power


(May 31, Univ of Toronto) AI researchers design 'privacy filter' for your photos

(June 3, MIT) AI-infused development of specialized nanoparticles

Philosophical Ruminations #1 – Empiricism and the limits of gradient descent, from Julian Togelius’ blog
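For readers new to the term, the gradient descent under discussion is just iterated downhill stepping on a loss surface. A minimal sketch of why it is "empiricist" in Togelius' sense: it only ever consults the local slope, so it settles into whatever minimum is nearby:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Plain gradient descent: repeatedly step against the local gradient.
    It uses only local slope information, never a global view of the surface."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# f(x) = (x - 3)^2 has gradient 2(x - 3); descent converges to the minimum at 3.
print(round(gradient_descent(lambda x: 2 * (x - 3), x0=0.0), 6))  # → 3.0
```

On a surface with many local minima, the same procedure started elsewhere would converge somewhere else, which is the crux of the blog's argument about its limits.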

AI Things of the Week

Paper of the Week - Relational inductive biases, deep learning, and graph networks

Cartoon of the Week – Abstruse Goose

Magazine of the Week – Wilson Quarterly Spring 2018 Issue – Living with AI

Technical Book of the Week – Elements of Robotics by Mordechai Ben-Ari and Francesco Mondada

Science-Fiction Books of the Week (all dealing with intelligent swarms in one way or another) –

Video of the Week (1 hr) - Gary Marcus, Deep Learning: A Critical Appraisal

Episode 33

June 8, 2018

Andy and Dave didn’t have time to do a short podcast this week, so they did a long one instead. In breaking news, they discuss the establishment of the Joint Artificial Intelligence Center (JAIC), yet-another-Tesla autopilot crash, Geurts defending the decision to dissolve the Navy’s Unmanned Systems Office, and Germany publishing a paper that describes its stance on autonomy in weapon systems. Then, Andy and Dave discuss DeepMind’s approach to using YouTube videos to train an AI to learn “hard exploration games” (with sparse rewards). In another “centaur” example, facial recognition experts perform best when combined with an AI. University of Manchester researchers announce a new footstep-recognition AI system, but Dave pulls a Linus and has a fit of “footstep awareness.” In other recent reports, Andy and Dave discuss another example of biomimicry, where researchers at ETH Zurich have modeled the schooling behavior of fish. And in brain-computer interface research, a noninvasive BCI system co-trained with tetraplegics to control avatars in a racing game. Finally, they round out the discussion with a mention of ZAC Inc and its purported general AI, a book on How People and Machines are Smarter Together, and a video on deep reinforcement learning.

for related materials.


(May 29) Joint Artificial Intelligence Center (JAIC) established

(May 29) Tesla that crashed into police car was in 'autopilot' mode, California official says

(May 24) Follow-up to last episode:

(May 23) Autonomy in Weapon Systems: The Military Application of Artificial Intelligence as a Litmus Test for Germany’s New Foreign and Security Policy


(May 29, DeepMind) Playing hard exploration games (with sparse rewards) by watching YouTube

(May 29) Facial Recognition Experts Perform Best With An AI Sidekick (“Centaur” example)

(SfootBD) Powerful new footstep-recognition AI system

Brain-computer-interface training helps tetraplegics win avatar race

Research into fish schooling energy dynamics could boost autonomous swarming drones

Beautiful analysis of fine-scale collective behavior of wild white stork migration with equally elegant figures

Candidate for “Hype of the Week” – ZAC (Z Advanced Computing Inc.) Announcement: Maryland researchers say they discovered 'Holy Grail' of machine learning


Book of the Week - AIQ: How People and Machines Are Smarter Together


Video of the week (30 min) - Reproducibility, Reusability, & Robustness in Deep Reinforcement Learning

Episode 32

June 1, 2018

In breaking news, Andy and Dave discuss a few cracks that seem to be appearing in Google's Duplex demonstration; more examples of the breaking of Moore's Law; a Princeton effort to advance the dialogue on AI and ethics; India joining the global AI saber-rattling; the UK Ministry of Defence launching an AI hub/lab; and the U.S. Navy dissolving its secretary-level unmanned systems office. Andy and Dave then discuss a demonstration of "zero-shot" learning, by which a robot learns to do a task by watching a human perform it once. The work reminds Andy of the early natural language "virtual block world" SHRDLU, from the 1970s. In other news, the research team that designed Libratus (a world-class poker-playing AI) announced they had developed a better AI that, more importantly, is also computationally orders of magnitude less expensive (using a 4-core CPU with 16 GB of memory). Next, researchers at Intel and the University of Illinois Urbana-Champaign have developed a convolutional neural net to significantly improve low-ISO image quality while shooting at faster shutter speeds; Andy and Dave both found the results for improving low-light images to be quite stunning. Finally, after yet another round of generative adversarial examples (in which Dave predicts the creation of a new field), Andy closes with some recommendations on papers, books, and videos, including Galatea 2.2 and The Space of Possible Minds.



(Issue raised by Axios) Follow-on to Google’s May 8 demo of its new Duplex technology

(OpenAI) AI and Compute blog

Princeton Center for Information Technology Policy (CITP) announces publication of four original case studies from "Princeton Dialogues on AI and Ethics" project

India now wants AI-based weapon systems

UK launches a new AI hub

(May 16) Navy dissolves unmanned systems office


(21–25 May 2018) IEEE International Conference on Robotics and Automation (ICRA) in Brisbane, Australia

Interactive visualization of ICRA-2018 papers

Joint Concept Note 1/18 – Human-Machine Teaming

(May 21, Carnegie Mellon Univ) Depth-Limited Solving for Imperfect-Information Games

(Intel and University of Illinois Urbana-Champaign) AI is learning to see in the dark

(May 21, Microsoft Research, Stanford Univ) Generative Adversarial Examples

Paper of the week - Using Artificial Intelligence to Augment Human Intelligence

Book of the week - Galatea 2.2 by Richard Powers (published in 1995)


Video of the week (30 min) - The Space of Possible Minds: A Conversation With Murray Shanahan

Episode 31

May 25, 2018

In a review of the latest news, Andy and Dave discuss: the White House’s “plan” for AI, the departure of employees from Google due to Project Maven, another Tesla crash, the first AI degree for undergraduates at CMU, and Boston Dynamics’ jumping and climbing robots. Next, two AI research topics have implications for neuroscience. First, Andy and Dave discuss AI research at DeepMind, which showed that an AI trained to navigate between two points developed “grid cells,” very similar to those found in the mammalian brain. And second, another finding from DeepMind on “meta-learning” suggests that dopamine in the human brain may have a more integral role in meta-learning than previously thought. In another example of “AI-chemy,” Andy and Dave discuss the looming problem of (lack of) explainability in health care (with implications for many other areas, such as DoD), and they also discuss some recent research on adding an option for an AI to defer a decision with “I Don’t Know” (IDK). After a quick romp through the halls of AI-generated DOOM, the two discuss a recent proof that reveals the fundamental limits of scientific knowledge (so much for super-AIs). And finally, they close with a few media recommendations, including “The Book of Why: The New Science of Cause and Effect.”



(May 14) Follow up on earlier news about Google employees being “upset” with work on Project Maven

(May 14) Tesla Model S crashed into a fire department truck in Utah: Police probe whether Autopilot feature was on in Tesla crash

(May 10) “The White House’s plan for AI is to not have a plan for AI”

(May 10) 1st AI degree for undergraduates at CMU

Boston Dynamics' robots can now run, jump and climb


Google, DeepMind and University College London, UK: Navigating with grid-like representations in artificial agents

Google, DeepMind: Using a NN to help explain ‘meta-learning’ in human brains

Lack of Explainability in Health Care Becoming an Issue?

PhD student, David Madras, CS, University of Toronto: Learning to Defer

Video game maps made by AI: More DOOM!

David Wolpert, Santa Fe Institute: New proof reveals fundamental limits of scientific knowledge

Paper of the week - AGI Safety Literature Review

Book of the Week - Judea Pearl’s The Book of Why: The New Science of Cause and Effect


Video of the week - 2018 Isaac Asimov Memorial Debate: Artificial Intelligence

Episode 30

May 18, 2018

In a review of the most recent news, Andy and Dave discuss the latest information on the fatal self-driving Uber accident; the AI community's (poor) reaction to Nature's announcement of a new closed-access machine learning journal; on-demand self-driving cars coming soon to north Dallas; and the Chinese government adding AI to the high school curriculum with a mandated textbook. For more in-depth topics, Andy and Dave discuss the latest information from DARPA's Lifelong Learning Machines (L2M) project, which has announced its initial teams and topics, which seek to generate "paradigm-changing approaches" as opposed to incremental improvements. Next, they discuss an experiment from OpenAI that provides visibility into a dialogue between two AIs on a topic, one of which is lying. This discussion segues into recent comparisons of the field of machine learning to the ancient art of alchemy. Dave avoids using the word "alcheneering," but thinks that "AI-chemy" might be worth considering. Finally, after a discussion of a couple of photography-related developments, they close with a discussion of some papers and videos of interest, including the splash of Google's new "Turing-test-beating" Duplex assistant for conducting natural conversations over the phone.



Uber sets safety review; media report says software cited in fatal crash

Thousands of AI researchers will boycott a new science journal

Self-driving cars are here: on-demand robotic cars to be offered in Frisco, a suburb north of Dallas

China brings AI to high school curriculum, with mandated textbook

Facebook Adds A.I. Labs in Seattle and Pittsburgh, Pressuring Local Universities


(DARPA) Lifelong Learning Machines (L2M) project

(OpenAI) How can we be sure AI will behave? Perhaps by watching it argue with itself

AI researchers allege that machine learning is alchemy

Facebook Training Image Recognition AI with Billions of Instagram Photos

(NVIDIA) Inpainting for Irregular Holes Using Partial Convolutions

Google Photos to Use AI to Colorize Black-and-White Photos: Keynote (Google I/O '18) (at 1:33:15)

Google’s new Duplex "AI Assistant" technology

Paper of the week: Exploration of Swarm Dynamics Emerging from Asymmetry

Short story of the week: Automated Valor by August Cole (author of Ghost Fleet)


What is a complex system? | Karoline Wiesner & James Ladyman (TED Talk, 15 min) – complex systems, beehives and the human brain, and the merging of CAS, AI, and cybernetics

DeepMind - From Generative Models to Generative Agents - Koray Kavukcuoglu (2 May, at ICLR2018, 45min)

Episode 29

May 11, 2018

Andy and Dave discuss a couple of recent reports and events on AI, including the Sixth International Conference on Learning Representations (ICLR). Next, Edward Ott and fellow researchers have applied machine learning to replicate chaotic attractors, using "reservoir computing." Andy describes the reasons for his excitement in seeing how far out this technique can predict the evolution of a fourth-order nonlinear partial differential equation. Next, Andy and Dave discuss a few adversarial-attack-related topics: a single-pixel attack for fooling deep neural network (DNN) image classifiers; the Adversarial Robustness Toolbox from IBM Research Ireland, an open-source software library to help researchers defend DNNs against adversarial attacks; and the susceptibility of the medical field to fraudulent attacks. The BAYOU project takes another step toward giving AI the ability to program new methods for implementing tasks. And Uber AI Labs releases source code that can train a DNN to play Atari games in about 4 hours on a *single* 48-core modern desktop! Finally, after a review of a few books and videos, including Paul Scharre's new book "Army of None," Andy and Dave conclude with a discussion on potatoes.



How Might Artificial Intelligence Affect the Risk of Nuclear War? - RAND Corp

(30 April - 3 May) Sixth International Conference on Learning Representations

(April 26) Congressional Research Service (CRS) report: Artificial Intelligence and National Security

(April 23) Uber AI Labs, Accelerating Deep Neuroevolution: Train Atari in Hours on a Single Personal Computer

Bulletin of Atomic Scientists, special issue on Military Applications of AI

(April 24) Book of the week: Paul Scharre, Army of None: Autonomous Weapons and the Future of War   


Using Machine Learning to Replicate Chaotic Attractors and Calculate Lyapunov Exponents from Data

One pixel attack for fooling deep neural networks

Neural Sketch Learning: Rice University turns deep-learning AI loose on software development


(1.2 hrs) MIT AGI: Life 3.0, discussion w/Max Tegmark, Physics Professor at MIT, co-founder of Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence.

(1.5 hrs) The Rise of AI Deep Learning - documentary 2018

“This is getting too silly,” as Graham Chapman of Monty Python might say: AI Will Give Us Better French Fries

Episode 28

May 4, 2018

This week, Andy, Larry, and Dave welcome Major General Mick Ryan, Commander of the Australian Defence College. Mick has recently published a report on Human-Machine Teaming for Future Ground Forces, in which he identifies key areas for human-machine teams, as well as challenges that military forces will face in incorporating these new capabilities. The group discusses some of these issues, and some of the broader challenges in both the near and far term.


Episode 27

April 27, 2018

Andy and Dave start this week's podcast with a review of some of the latest announcements: the latest meeting of the UN Convention on Certain Conventional Weapons, SecDef Mattis's announcement of a new joint program office for AI, a declaration of cooperation on AI by 25 European countries, and a UK Parliament report on AI. They then discuss the latest Center for the Study of the Drone report, which compares U.S. Department of Defense drone spending for FY19 with FY18. The MIT-IBM Watson AI Lab has launched a "Moments in Time" dataset, the first step toward building a large and robust set of short videos for action-classification purposes. Google has improved the quality of its AI at picking voices out of a noisy room by making use of additional information (here, video). And Google has introduced a way to "talk to books"; Andy and Dave were a bit underwhelmed, but check it out and judge for yourself. Finally, Andy and Dave close with a selection of whimsical comments from the news, and a selection of videos.



(April 9-13) Convention on Certain Conventional Weapons (CCW) - Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS)

(April 9) DoD Official Highlights Value of Artificial Intelligence to Future Warfare

(April 10) 25 EU Member States sign up to cooperate on Artificial Intelligence

(April 16) UK Parliament Report on AI: AI in the UK: ready, willing and able? (PDF)


(April 9) Center for the Study of the Drone (Bard College): FY19 drone budget request

MIT-IBM Watson AI Lab launches Moments in Time dataset

Google trains its AI to pick out voices in a noisy crowd to SPY on your secret conversations

(April 13) Google introduces new AI experience called 'Talk to Books' semantic-search feature


Elon Musk drafts in humans after robots slow down Tesla Model 3 production: "Yes, excessive automation at Tesla was a mistake. To be precise, my mistake. Humans are underrated."

Move Over, Moore’s Law: Make Way for Huang’s Law


(April 11, 25 min) Al Jazeera - Do You Trust This Computer?: "Will killer robots save us or destroy humanity?"

TED Talk: General Artificial Intelligence: Making sci-fi a reality | Darya Hvizdalova

Episode 26

April 20, 2018

Anna Williams joins Dave in welcoming CAPT Sharif Calfee for a two-part discussion on unmanned systems and artificial intelligence. As part of his fellowship research, CAPT Calfee has been speaking with organizations and subject matter experts across the U.S. Navy, the U.S. Government, Federally Funded Research and Development Centers, University Affiliated Research Centers, and Industry, in order to understand the broader efforts involving unmanned systems, autonomy, and artificial intelligence. In the first part of their discussion, the group discusses the progress and the challenges that the CAPT has observed in his engagements. In the second part, the group discusses various steps that the U.S. Navy can take to move forward more deliberately, to include the consideration for a new Naval Reactors-like office to oversee AI.

Episode 25

April 13, 2018

Andy and Dave cover a wide variety of topics this week, starting with two prominent examples of employees and researchers objecting to certain uses of AI technology. Andy and Dave then discuss a recent GAO report on AI, as well as France’s announcement to invest in AI. They also discuss AI in designing chemical synthesis pathways, AI in reading echocardiograms, meta-learning (learning how to learn in unsupervised learning), helping robots express themselves when they fail, and a collection of papers, graphic novels, and videos. By the end, Dave’s arms are flailing wildly!



NY Times: ‘The Business of War’: Google Employees Protest Work for the Pentagon

(March 29) France announces investment in AI – wants to become AI hub


(GAO Report, March 28) Artificial Intelligence: Emerging Opportunities, Challenges, and Implications

(Nature volume 555, pages 604–610, March 29) Chemical Syntheses with DNN

Meta-Learning: Learning Unsupervised Learning Rules (Google Brain)

Helping Robots Express Themselves When They Fail

Stanford's DAWNBench is a new benchmark suite measuring a variety of deep learning training and inference tasks

Adversarial Attacks and Defences Competition

Graphic Novel

Silent Ruin, by Army Cyber Institute, West Point

NATO Vs. Killer Russian Robots: Graphic Novel Envisions Cyberwar In Moldova


How we can teach computers to make sense of our emotions (TED Talk, 11 min)

The Threat of AI Weapons

Will AI make us immortal? Or will it wipe us out? Elon Musk, Ray Kurzweil and Nick Bostrom.

Vicious Cycle- a group of little autonomous robots performing a range of repetitive functions (3 min)


Episode 24

April 6, 2018

Dave starts with a shocking revelation! Can you pass the test?? Andy and Dave then discuss MIT Tech Review’s EmTech Digital Conference, which highlighted the latest in AI research. Next, Andy and Dave discuss the rapid expansion of newly reported AI models, including the “GAN Zoo.” Venture capital funding in the U.S. suggests that the AI market may be cooling. Andy describes new insight into brain function that will likely lead to further AI breakthroughs. And after a discussion of an AI playing Battlefield 1, Andy and Dave close with a look at AIs learning in electric dreams, and a GAN that can lip sync a face to an audio-video clip.



MIT Technology Review’s EmTech Digital conference in San Francisco - March 26-27


(MIT Media Lab) Closing the AI Knowledge Gap - towards a “Science of AI”

The brain may learn completely differently than we've assumed since the 20th century

EA Teaches AI to Play 'Battlefield 1' Multiplayer

(Google Brain) World Models: Can agents learn inside their own dreams?

Speech-Driven Facial Reenactment Using Conditional Generative Adversarial Networks


Listen: First Music Album Composed By Artificial Intelligence



We Are Here To Create (40 min) A Conversation with Kai-Fu Lee, author of forthcoming book AI Superpowers: China, Silicon Valley, and the New World Order

Episode 23

March 30, 2018

With the news of the first death at the digital hands of a driverless vehicle, Andy and Dave discuss some of the broader issues surrounding the understanding and implementation of AI technology. In other news, they discuss the creation of a digital version of yeast (DCell) as a way to provide insight into the otherwise “black box” of AI. Then, after describing DeepMind’s efforts into using evolutionary Auto Machine Learning to discover neural network architectures, Andy and Dave discuss an example of how background knowledge (“priors”) transfers to the world of games, and how that compares with AI.



First known pedestrian death involving a self-driving vehicle


(Univ. California, San Diego, School of Medicine) DCell, a digital version of yeast

(Google, DeepMind) Using Evolutionary AutoML to Discover Neural Network Architectures

(University of California, Berkeley) Investigating Human Priors for Playing Video Games

Three DARPA program announcements:


The Cinematic Control Room since the early 1970s

Episode 22

March 23, 2018

Larry Lewis, Director of CNA’s Center for Autonomy and AI, again sits in for Dave this week. He and Andy discuss: the recent passing of physicist Stephen Hawking (along with his "cautionary" views on AI); CNAS’s recent launch of a new Task Force on AI and National Security; Microsoft’s AI breakthrough in matching human performance translating news from Chinese to English; a report that looks at China’s "AI Dream" (and introduces an "AI Potential Index" to assess China’s AI capabilities compared to other nations); a second index, from a separate report, called the "Government AI Readiness Index," which inexplicably excludes China from the top 35 ranked nations; and the issue of legal liability of AI systems. They conclude with call-outs to a fun-to-read crowd-sourced paper, written by researchers in artificial life, evolutionary computation, and AI, that tells stories about the surprising creativity of digital evolution, and three videos: a free BBC-produced documentary on Stephen Hawking, a technical talk on deep learning, and a Q&A session with Elon Musk (that includes an exchange on AI).



Stephen Hawking passed away

CNAS (Center for New American Security) launches Task Force on Artificial Intelligence and National Security

AI matches human performance translating news from Chinese to English


Deciphering China’s AI Dream - Future of Humanity Institute, University of Oxford

2018 Emerging Tech Trends Report (248 pages) – Future Today Institute, launched March 11, 2018

Artificial Intelligence and Legal Liability - John Kingston, University of Brighton, UK

Interesting Paper (30 pages) - The Surprising Creativity of Digital Evolution


Yann LeCun and Christopher Manning Discuss Deep Learning

Elon Musk (CEO of SpaceX and Tesla)

Episode 21

March 16, 2018

Larry Lewis, Director of CNA’s Center for Autonomy and AI, sits in for Dave this week, as he and Andy discuss: a recent report that not all Google employees are happy with Google’s partnership with DoD (in developing a drone-footage-analyzing AI); research efforts designed to lift the lid (just a bit) on the so-called “black box” reasoning of neural-net-based AIs; some novel ways of getting robots/AIs to teach themselves; and an arcade-playing AI that has essentially “discovered” that if you can’t win at the game, it is best to either kill yourself or cheat. The podcast ends with a nod to a new free online AI resource offered by Google, another open-access book (this time on the subject of robotics), and a fascinating video of Stephen Wolfram, of Mathematica fame, lecturing about artificial general intelligence and the “computational universe” to a computer science class at MIT.


Episode 20

March 9, 2018

Andy and Dave discuss a recently released report on the Malicious Use of AI: Forecasting, Prevention, and Mitigation, which describes scenarios where AI might have devious applications (hint: there’s a lot). They also discuss a recent report that describes the extent of missing data in AI studies, which makes it difficult to reproduce published results. Andy then describes research that looks into ways to alter information (in this case, classification of an image) to fool both AI and humans. Dave has to repeat the research in order to understand the sheer depth of the terror that could be lurking below. Then Andy and Dave quickly discuss a new algorithm that can mimic any voice with just a few snippets of audio. The only non-terrifying topic they discuss involves an attempt to make Alexa more chatty. Even then, Dave decides that this effort will only result in a more-empty wallet.


Episode 19

March 2, 2018

Andy and Dave welcome Sam Bendett, a research analyst for CNA's Center for Strategic Studies, where he is a member of the Russia Studies Program. His work involves Russian defense and security technology and developments, Russian geopolitical influence in the former Soviet states, as well as Russian unmanned systems development, Russian naval capabilities and Russian decision-making calculus during military crises. Sam is in our studio to discuss recent Russian developments in AI and unmanned systems, and to preview an upcoming Defense One summit called "Genius Machines," which he will be speaking at on March 7.


Episode 18

Feb 23, 2018

In another smattering of topics, Andy and Dave discuss the latest insight into the dispersion of global AI start-ups, as well as AI talent. They also describe a commercially available drone that can navigate landscapes and obstacles as it tracks a target. And they discuss an AI algorithm with “social skills” that can teach humans how to collaborate. After chat bots and Deep TAMER, Andy and Dave discuss a few recent videos, including one about door-opening dogs; and Dave has a meltdown as he fails to recall The Day the Earth Stood Still, instead substituting a different celestial body. Klaatu barada nikto.


Breaking News

Artificial Intelligence Trends To Watch In 2018

Follow-up to podcast #17 – DroNet: Learning to Fly by Driving


Tencent says there are only 300,000 AI engineers worldwide, but millions are needed

AI algorithm with ‘social skills’ teaches humans how to collaborate: Cooperating with machines

Human-machine collaborative chatbot, Evorus

Deep TAMER: Interactive Agent Shaping in High-Dimensional State Spaces

(Google/DeepMind) IMPALA: Scalable Distributed DeepRL in DMLab-30


Stanislaw Lem short story: “The Upside-Down Evolution”

Artificial Intelligence and Games, by Georgios N. Yannakakis and Julian Togelius, 2018 (hardcopy)


Intel's Winter Olympics 1,218-Drone Light Show

Boston Dynamics crosses new threshold with door-opening dog (SpotMini)

Episode 17

Feb 16, 2018

Andy and Dave start this week’s episode with a superconducting ‘synapse’ that could enable powerful future neuromorphic supercomputers. They discuss an attempt to use AI to decode the mysterious Voynich manuscript, and then move on to Hofstadter’s take on the shallowness of Google Translate (with mention of the ELIZA effect). After discussing DroNet’s drones that can learn to fly by watching a driving video, and updating the Domain-Adaptive Meta-Learning discussion where a robot can learn a task by watching a video, they close with some recommendations of videos and books, including Lem’s ‘Golem XIV.’


Episode 16a & 16b

Feb 9, 2018

Andy and Dave welcome back Larry Lewis, the Director of CNA's Center for Autonomy and Artificial Intelligence, and welcome Merel Ekelhof, a Ph.D. candidate at VU University Amsterdam and visiting scholar at Harvard Law School. Over the course of this two-part series, the group discusses the idea of "meaningful human control" in the context of the military targeting process, the increasing role of autonomous technologies (and that autonomy is not simply an issue "at the boom"), and the potential directions for future meetings of the U.N. Convention on Certain Conventional Weapons.


Episode 15

Feb 2, 2018

Andy and Dave discuss two recent AI announcements that employ generative adversarial networks: an AI algorithm that can crack classic encryption ciphers (without prior knowledge of English), and an AI algorithm that can "draw" (generate) an image based on simple text instructions. They start, however, with a discussion on the recent rash of autonomous (and semi-autonomous) vehicle incidents, and they also discuss "brain-on-a-chip" hardware, as well as a robot that can learn to do tasks by watching video.


Breaking News

Tesla ‘on Autopilot’ slams into parked fire truck on California freeway

People Keep Confusing Their Teslas for Self-Driving Cars

Waze unable to explain how car ended up in Lake Champlain

Tesla Bears Some Blame for Self-Driving Crash Death, Feds Say

Tesla Autopilot crash caught on dashcam shows how not to use the system


(Google/University of Toronto) AI code decryption

(MIT) Artificial synapse created for "brain-on-a-chip" hardware

(Microsoft) Text to Image Generation – AI that draws what it is instructed to draw

(Google/University of Southern California) Robot learning from video

Miscellaneous Links

Point / Counterpoint "debate" on Slaughterbots discussed in podcast #5 – recall that Slaughterbots is Future of Life Institute’s "mini movie" on why autonomous weapons ought to be banned

Episode 14

Jan 26, 2018

Andy and Dave cover a series of topics that connect with broader "meta" questions about the role and nature of AI. They begin with Google's Cloud AutoML announcement, which offers ways to more easily build your own AI. They discuss the announcement of AIs that "defeated" humans on a Stanford University reading comprehension test, and the misrepresentation of that achievement. They discuss deep image reconstruction, with a neural net that "read minds" by piecing together images from a human's visual cortex. And they close with discussions about Gary Marcus's recent article, which offers a critical appraisal of deep learning, and a recent paper that suggests that convolutional neural nets may not be as good at "grasping" higher-level abstract concepts as is typically believed.


Breaking News

Google announces Cloud AutoML


AI has "defeated" humans on a Stanford University reading comprehension test

Deep image reconstruction: Japanese-designed NN can "read minds"

Gary Marcus (NYU Professor and Founder of Uber-owned ML startup Geometric Intelligence) publishes Deep Learning: A Critical Appraisal


Artificial intelligence debate at New York University between Yann LeCun and Gary Marcus: Does AI Need More Innate Machinery? (2 hrs)

Episode 13

Jan 19, 2018

Andy and Dave discuss a newly announced method of attack on the speech-to-text capability DeepSpeech, which introduces noise to an audio waveform so that the AI does not hear the original message, but instead hears a message that the attacker intends. They also discuss the introduction of probabilistic models to AI as a way for AI to "embrace uncertainty" and make better decisions (or perhaps doubt whether or not humans should remain alive). And finally, Andy and Dave discuss some recent applications of AI to different areas of scientific study, particularly in the examination of very large data sets.



From images to voice

AI systems that doubt themselves: AI will make better decisions by embracing uncertainty

AI for science


Paul Scharre’s testimony before the House Armed Services Subcommittee on Emerging Threats and Capabilities (9 Jan 2018): China’s Pursuit of Emerging and Exponential Technologies. Watch clip. Transcript.

The documentary about Google DeepMind's 'AlphaGo' algorithm is now available on Netflix

Episode 12

Jan 12, 2018

Andy and Dave discuss “Tacotron 2,” the latest text-to-speech capability from Google that produces results nearly indistinguishable from human speech. They also discuss efforts at Google to create a Neural Image Assessment (NIMA), that not only can evaluate the quality of an image, but can also be trained to rate the aesthetics (as defined by the user) of an image. And after a look at some of the AI predictions for 2018, they play a musical game with two pieces of music – can Andy guess which piece Dave wrote, and which the AI composer AIVA, the Artificial Intelligence Virtual Artist, wrote?


Episode 11

Jan 5, 2018

It’s a smorgasbord of topics, as Andy and Dave discuss: the “AI 100” top companies report; the implications of Google’s new AI Research Center in Beijing; a workshop from the National Academy of Science and the Intelligence Community Studies Board on the challenges of machine generation of analytic products from multi-source data; Ethically Aligned Design and the IEEE; Quantum Computing; and finally, some Kasparov-related materials.



CB Insights (market analysis firm): AI 100: The Artificial Intelligence Startups Redefining Industries

Google Opens an AI Research Center In Beijing

(Workshop) National Academy of Science / Intelligence Community Studies Board - Challenges in Machine Generation of Analytic Products from Multi-Source Data

Ethically Aligned Design (EAD) – IEEE – toward a global, multilingual collaboration

Quantum Computing + machine learning: A Startup Uses Quantum Computing to Boost Machine Learning

  • Related: IBM announces 50-qubit quantum computer on 10 Nov; caveat (as for all state-of-the-art quantum computers): the quantum state is preserved for 90 microseconds, a record for the industry, but still an extremely short period of time. IBM Raises the Bar with a 50-Qubit Quantum Computer

Microsoft releases a (preview of a) “Quantum Development Kit” (~ Visual Studio) (Video)


Kasparov on Deep Learning in chess:

Episode 10

Dec 29, 2017

Andy and Dave continue their discussion on the 31st Annual Conference on Neural Information Processing Systems (NIPS), covering Sokoban, chemical reactions, and a variety of video disentanglement and recognition capabilities. They also discuss a number of breakthroughs in medicine that involve artificial intelligence: a robot passing a medical licensing exam, an algorithm that can diagnose pneumonia better than expert radiologists, a venture between GE Healthcare and NVIDIA to tap into volumes of unrealized medical data, and deep-brain stimulation. Finally, for reading material and reference, Andy recommends a technical lecture on reinforcement learning, as well as two books on robot ethics.



NASA announcement on Dec. 14

Follow-up on AlphaGo (by DeepMind): AlphaGo Teach

31st Annual Conference on Neural Information Processing Systems (NIPS)

Imagination-Augmented Agents for Deep Reinforcement Learning

(IBM) Predicting outcomes of chemical reactions (Video)

DrNET: Unsupervised Learning of Disentangled Representations from Video


Several Milestones in Artificial Intelligence Were Just Reached in Medicine


Rich Sutton ("father" of reinforcement learning, Department of Computing Science, University of Alberta) – a 1.5-hour technical lecture on a reinforcement-learning technique called temporal-difference learning (TDL)
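For listeners new to the topic, the core idea of temporal-difference learning can be sketched in a few lines. The following is a minimal, illustrative TD(0) value update on a toy 5-state random walk (not an example from Sutton's lecture itself; the task setup and parameters are chosen here for illustration):

```python
import random

# Toy episodic task: states 0..4 on a line; episodes start in the middle.
# Stepping into state 4 ends the episode with reward 1; stepping into
# state 0 ends it with reward 0. All other transitions give reward 0.
N_STATES = 5
ALPHA = 0.1   # learning rate (step size)
GAMMA = 1.0   # no discounting in this episodic task

values = [0.5] * N_STATES  # initial value estimates for every state

def run_episode(values):
    state = N_STATES // 2
    while True:
        next_state = state + random.choice([-1, 1])
        if next_state == N_STATES - 1:
            reward, done = 1.0, True
        elif next_state == 0:
            reward, done = 0.0, True
        else:
            reward, done = 0.0, False
        # TD(0): move V(s) toward the bootstrapped target r + gamma * V(s')
        target = reward if done else reward + GAMMA * values[next_state]
        values[state] += ALPHA * (target - values[state])
        if done:
            return
        state = next_state

random.seed(0)
for _ in range(2000):
    run_episode(values)
```

After enough episodes, the estimates for the three non-terminal states approach their true values of 0.25, 0.5, and 0.75; the key point of TD learning is that each update uses the agent's own next-state estimate as the target, rather than waiting for the episode's final outcome.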


Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, edited by Patrick Lin, Keith Abney, and Ryan Jenkins, Oxford University Press

Towards a Code of Ethics for Artificial Intelligence, Paula Boddington, Springer-Verlag

Episode 9

Dec 22, 2017

After some brief speculation on the announcement from NASA (which was being held at the same time as this podcast was recorded), and a quick review of AlphaGo Teach, Andy and Dave discuss the 31st Annual Conference on Neural Information Processing Systems (NIPS). With over 8,000 attendees, 7 invited speakers, and seminar and poster sessions, NIPS provides insight into the latest and greatest developments in deep learning, neural nets, and related fields.


Episode 8

Dec 15, 2017

Andy and Dave discuss how DeepMind's AI continues to bust through the record books while AlphaZero takes one step closer to world domination (of all board games). After a brief discussion on protein folding, they discuss the "AI Index," which seeks to measure the evolution and advances in AI over time.


Episode 7

Dec 8, 2017

Andy and Dave discuss a market analysis report that identifies where the Department of Defense is spending money in artificial intelligence, big data, and the cloud. They also elaborate on the challenge of "catastrophic forgetting," and a 4-year program at DARPA that seeks to develop "Lifelong Learning Machines," which can continuously apply the results of past experiences. After a conversation about SquishedNets, they cover a Harvard research paper that asserts the need for AI to have explanatory capabilities and accountability.


Episode 6a & 6b

Nov 24, 2017

Dr. Larry Lewis joins Andy and Dave to discuss the U.N. Convention on Conventional Weapons, which met in mid-November with a "mandate to discuss" the topic of lethal autonomous weapons. Larry provides an overview of the group's purpose, schedule, and discussions; the mood and reactions of various parts of the group; and what the next steps might be.



November 13-17 meeting of the Convention on Conventional Weapons (CCW) Group of Governmental Experts (GGE) on lethal autonomous weapons systems (86 countries)

22 countries now support a prohibition, with Brazil, Iraq, and Uganda joining the list of ban endorsers during the GGE meeting. Cuba, Egypt, Pakistan, and other states that support the call to ban fully autonomous weapons also forcefully reiterated the urgent need for a prohibition.

States will take a final decision on the CCW's future on this challenge, including the duration and dates of 2018 meetings, at the CCW's annual meeting on Friday, 24 November.

2017 Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS)/links

Group of Governmental Experts on Lethal Autonomous Weapons Systems (links to docs)

Recaps of the UN CCW meetings Nov 13 – 17 (by Autonomous Weapons):

  • “The vast majority of CCW high contracting parties participating in this meeting do want concrete action. The majority of those want a legally binding instrument, while others prefer—at least for now—a political declaration or other voluntary arrangements. However, China, Japan, Latvia, Republic of Korea, Russia, and the United States made it clear that they do not want to consider tangible outcomes at this time.”
  • Autonomous Weapons recap
  • Stop Killer Robots recap


Slaughterbots – Future of Life Institute “mini movie” on why autonomous weapons ought to be banned (postscript by Stuart Russell, AI researcher)

Related: In Aug 2017, Elon Musk led 116 AI experts in an open letter calling for a ban on killer robots. Read.

Episode 5

Nov 17, 2017

Andy and Dave discuss the recent Geneva Convention on Conventional Weapons, which met to lay the groundwork for discussing the role of lethal autonomous weapons. They also discuss a new technique, called Capsule Networks, that aims to improve recognition of objects despite changes in spatial orientation. Andy and Dave conclude with a discussion of why fruit flies are so awesome.


Episode 4

Nov 10, 2017

Andy and Dave discuss MIT efforts to create a tool to train AIs, in this case, using another AI to provide the training. They discuss efforts to crack the "cocktail party" dilemma of picking out individual voices in a noisy room, as well as an AI that can "upres" photographs with remarkable use of texture (that is, taking a lower resolution photo and making it larger in a realistic way). Finally, they discuss the latest MIT Tech Review magazine, which focused on AI.


Episode 3

Nov 10, 2017

Andy and Dave follow up on the discussion of AlphaGo Zero and the never-before-seen patterns of play that the AI discovered, and the implications of such discoveries (which seem to be the "norm" for AI). They also discuss Google's AutoML project, which applies machine learning to help improve machine learning.


Episode 2

Nov 3, 2017

Andy and Dave discuss the late-breaking news of AlphaGo Zero, a new iteration of the Go-playing AI, which surpassed its predecessor in about 3 days of learning, using only the basic rules of Go (as opposed to the original, which trained for 6+ months using thousands of human games as examples).



AlphaGo Zero beats AlphaGo 100-0 after 3 days of training (compared to several months for the original AlphaGo), without any human intervention or human game-play data. Read: Technology Review and Nature


AlphaGo Documentary - Local screening in Reston, VA

Episode 1

Nov 3, 2017

In the inaugural podcast for AI with AI, Andy provides an overview of his recent report on AI, Robots, and Swarms, and discusses the bigger picture of the development and breakthroughs in artificial intelligence and autonomy. Andy also discusses some of his recommended books and movies.



Movies & TV


  • When Will AI Exceed Human Performance? - Survey of 352 experts who had published at recent AI conferences (Oxford, Yale, and the Future of Life Institute)
  • AI Progress Measurement - Measuring the Progress of AI Research
  • New Theory Cracks Open the Black Box of Deep Learning - Tishby / information bottleneck
  • "The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts." - analogy to renormalization (as used in statistical physics), may lead to better understanding and new architectures
  • Forget Killer Robots—Bias Is the Real AI Danger - Technology Review (MIT)
  • An AI developed at Vanderbilt University in Tennessee to identify cases of colon cancer from patients’ electronic records performed well at first, but it was discovered that the AI "learned" to associate confirmed cases with a specific clinic to which those patients were sent.
  • Counterargument (by Peter Norvig, Google's AI research director and co-author of the standard text Artificial Intelligence: A Modern Approach)
  • "Since humans are not very good at explaining their decision-making either...the performance of an AI system could be gauged simply by observing its outputs over time"
  • If these AI bots can master the world of StarCraft, they might be able to master the world of humans (Artificial Intelligence and Interactive Digital Entertainment (AIIDE) StarCraft AI Competition at Memorial University in Newfoundland)
  • "StarCraft [is] complex enough to be a good simulation of real life...It's like playing soccer while playing chess."