AI with AI

AI with AI explores the latest breakthroughs in artificial intelligence and autonomy, as well as their military implications. Join experts Andy Ilachinski and David Broyles as they explain the latest developments in this rapidly evolving field.

The views expressed here are those of the commentators and do not necessarily reflect the views of CNA or any of its sponsors. Episodes are recorded a week prior to release, and new episodes are released every Friday. Recording and engineering provided by Jonathan Harris.

Episode 38

July 13, 2018

In the second part of this epic podcast, Andy and Dave continue their discussion with research from MIT, the Vienna University of Technology, and Boston University, which uses human brainwaves and hand gestures to instantly correct robot mistakes. The research combines electroencephalogram (EEG, brain signals) and electromyogram (EMG, muscle signals) to allow a human, without training, to provide corrective input to a robot while it performs tasks. On a related topic, MIT’s Picower Institute for Learning and Memory demonstrated rules for brain plasticity, by showing that when one synapse connection strengthens, the immediately neighboring synapses weaken; while suspected for some time, this research showed for the first time how this balance works. Then, research from Stanford and Berkeley introduces Taskonomy, a system for disentangling task transfer learning. This structured approach maps out 25 different visual tasks to identify the conditions under which transfer learning works from one task to another; such a structure would allow data in some dimensions to compensate for the lack of data in other dimensions. Next up, Adobe has developed an AI tool for spotting photoshopped photos, by examining three types of manipulation techniques (splicing, copy-move, and removal), and by also examining local noise features. Researchers at Stanford have used machine learning to recreate the periodic table of elements after providing the system with a database of chemical formulae. And finally, Andy and Dave wrap up with a selection of papers and other media, including CNAS’s AI: What Every Policymaker Needs to Know; a beautifully done tutorial on machine learning; The Quest for Artificial Intelligence by Nilsson; Non Serviam by Lem; IPI’s Governing AI; the US Congressional hearing on the power of AI; and Twitch Plays Robotics.
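To make the transfer-learning idea behind Taskonomy concrete, here is a minimal sketch. It is ours, not the Taskonomy code, and uses entirely synthetic data with a stand-in encoder: features built for a data-rich source task are frozen and reused so that a related target task can be learned from far fewer labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear(X, y, lr=0.1, steps=500):
    """Logistic regression fit by plain gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))        # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)    # cross-entropy gradient step
    return w

# Source task: plentiful labels. In a real pipeline this training is what
# shapes the encoder; here a fixed random projection stands in for it.
Xs = rng.normal(size=(1000, 20))
ys = (Xs[:, :5].sum(axis=1) > 0).astype(float)
W_enc = rng.normal(size=(20, 8))            # stand-in "learned" encoder
w_src = train_linear(np.tanh(Xs @ W_enc), ys)

# Target task: only 50 labels. Freeze the encoder, train just a readout.
Xt = rng.normal(size=(50, 20))
yt = (Xt[:, 3:8].sum(axis=1) > 0).astype(float)
w_tgt = train_linear(np.tanh(Xt @ W_enc), yt)
print("readout learned on frozen source features:", w_tgt.round(2))
```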

Visit the episode page for related materials.

Breaking

(June 19) Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science

(MIT/June 20) Navion chip: Upgrade helps miniature drones navigate

(June 22) Enlisting Industry Leaders to Help Government Make Sense of AI

(June 22) IEEE and the MIT Media Lab Launch Global Council on Extended Intelligence (CXI)

(June 8, Foundation for Responsible Robotics) Report: Drones in the Service of Society

Topics

(June 18) IBM’s Project Debater (more ambitious follow-on to Watson)

(OpenAI/June 25) OpenAI Five: “Algorithmic ‘A team’ crushes humans in complex computer game” - the biggest (breakthrough?) news of the week, but not without many questions!

(MIT) Supervising a robot with one’s brain and hand gestures 

(Stanford/Univ of CA at Berkeley) Taskonomy: Disentangling Task Transfer Learning

Technical paper

Video/demo

Project homepage/data

Additional presentation awards at CVPR18

(Adobe/June 25) Adobe Using AI to Spot Photoshopped Photos

(June 25/Stanford) Atom2Vec: ML Recreates Periodic Table of Elements in Hours

Things of the Week

Papers of the Week –

Book of the Week - The Quest for Artificial Intelligence: A History of Ideas and Achievements, by Nils J. Nilsson (Hard copy)

Science-Fiction Story of the Week - Non Serviam, in A Perfect Vacuum, by Stanislaw Lem

Videos of the Week - 

Just Fun - Twitch Plays Robotics (some interesting crowdsourced/evolved robots)


Episode 37

July 6, 2018

In breaking news, Andy and Dave discuss a potentially groundbreaking paper on the scalable training of artificial neural nets with adaptive sparse connectivity. MIT researchers unveil the Navion chip, which is only 20 square millimeters in size, consumes 24 milliwatts of power, can process real-time camera images at up to 171 frames per second, and can be integrated into drones the size of a fingernail. The Chair of the Armed Services Subcommittee on Emerging Threats and Capabilities convened a roundtable on AI with subject matter experts and industry leaders; the IEEE Standards Association and MIT Media Lab launched the Council on Extended Intelligence (CXI) to build a “new narrative” on autonomous technologies, including three pilot programs, one of which seeks to help individuals “reclaim their digital identity”; and the Foundation for Responsible Robotics, which wants to shape the responsible design and use of robotics, releases a report on Drones in the Service of Society. Then, Andy and Dave discuss IBM’s Project Debater, the follow-on to Watson that engaged in a live, public debate with humans on 18 June. IBM spent 6 years developing Project Debater’s capabilities, producing over 30 technical papers and benchmark datasets along the way, and Debater can debate nearly 100 topics. It uses three pioneering capabilities: data-driven speech writing and delivery, listening comprehension, and the ability to model human dilemmas. Next up, OpenAI announces OpenAI Five, a team of five AI algorithms trained to take on a human team in the multiplayer battle arena game Dota 2. Andy and Dave discuss the reasons for the impressive achievement, including that the five AI networks do not communicate with each other, and that coordination and collaboration naturally emerge from their incentive structures. The system uses 256 Nvidia graphics cards and 128,000 processor cores; it has taken on (and won against) a variety of human teams, and OpenAI plans to stream a match against a top Dota 2 team in late July.
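The sparse-connectivity paper’s core training loop (Sparse Evolutionary Training) is simple enough to sketch. The toy below is our illustration, not the authors’ code: after each epoch, the smallest-magnitude weights are pruned and an equal number of connections are regrown at random, so the network stays sparse throughout training.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(100, 100))
mask = rng.random(W.shape) < 0.1      # start roughly 10% dense
W *= mask

def evolve_connectivity(W, mask, zeta=0.3):
    """Prune the weakest zeta fraction of live weights, regrow at random."""
    live = np.flatnonzero(mask)
    k = int(zeta * live.size)
    weakest = live[np.argsort(np.abs(W.flat[live]))[:k]]   # smallest |w|
    mask.flat[weakest] = False
    W.flat[weakest] = 0.0
    dead = np.flatnonzero(~mask)
    reborn = rng.choice(dead, size=k, replace=False)       # new random links
    mask.flat[reborn] = True
    W.flat[reborn] = rng.normal(scale=0.1, size=k)
    return W, mask

for epoch in range(5):
    # ... one epoch of ordinary SGD on the masked weights would go here ...
    W, mask = evolve_connectivity(W, mask)
print("live connections:", int(mask.sum()))
```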

Visit the episode page for related materials.

Breaking

(June 19) Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science

(MIT/June 20) Navion chip: Upgrade helps miniature drones navigate

(June 22) Enlisting Industry Leaders to Help Government Make Sense of AI

(June 22) IEEE and the MIT Media Lab Launch Global Council on Extended Intelligence (CXI)

(June 8, Foundation for Responsible Robotics) Report: Drones in the Service of Society

Topics

(June 18) IBM’s Project Debater (more ambitious follow-on to Watson)

(OpenAI/June 25) OpenAI Five: “Algorithmic ‘A team’ crushes humans in complex computer game” - the biggest (breakthrough?) news of the week, but not without many questions!

(MIT) Supervising a robot with one’s brain and hand gestures 

(Stanford/Univ of CA at Berkeley) Taskonomy: Disentangling Task Transfer Learning

Technical paper

Video/demo

Project homepage/data

Additional presentation awards at CVPR18

(Adobe/June 25) Adobe Using AI to Spot Photoshopped Photos

(June 25/Stanford) Atom2Vec: ML Recreates Periodic Table of Elements in Hours

Things of the Week

Papers of the Week –

Book of the Week - The Quest for Artificial Intelligence: A History of Ideas and Achievements, by Nils J. Nilsson (Hard copy)

Science-Fiction Story of the Week - Non Serviam, in A Perfect Vacuum, by Stanislaw Lem

Videos of the Week - 

Just Fun - Twitch Plays Robotics (some interesting crowdsourced/evolved robots)


Episode 36

June 29, 2018

In breaking news, Andy and Dave discuss the recently unveiled Wolfram Neural Net Repository, with 70 neural net models (as of the podcast recording) accessible in the Wolfram Language; the Code/Natural Language (CoNaLa) Challenge from Carnegie Mellon and STRUDEL, with a focus on Python; Amazon’s DeepLens video camera, which enables deep learning tools; and the Computer Vision and Pattern Recognition 2018 conference in Salt Lake City. Then, Andy and Dave discuss DeepMind’s Generative Query Network, a framework where machines learn to turn 2D scenes into 3D views, using only their own sensors. MIT’s RF-Pose trains a deep neural net to “see” people through walls by measuring radio frequencies from WiFi devices. Research at the University of Bonn is attempting to train an AI to predict future results based on current observations (with the goal of “seeing” 5 minutes into the future), and a healthcare group at Google Brain has been developing an AI to predict when a patient will die, based on a swath of historical and current medical data. Researchers at the University of California, Irvine announced DeepCube, an “autodidactic iteration” method from McAleer et al. that allows solving a Rubik’s Cube without human knowledge. And finally, Andy and Dave discuss a variety of books and videos, including The Next Step: Exponential Life, The Machine Stops, and a TED Talk from Max Tegmark on getting empowered, not overpowered, by AI.
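The “autodidactic iteration” trick behind DeepCube can be sketched in a few lines. In the toy below, ours only, a one-dimensional puzzle stands in for the cube and a lookup table stands in for the value network: training states are generated by scrambling backward from the solved state, and each state’s value target comes from a one-step lookahead over its children.

```python
import numpy as np

SOLVED = 0
MOVES = (+1, -1)                      # the toy puzzle's two moves
V = {}                                # stand-in for the value network

def value(s):
    return 1.0 if s == SOLVED else V.get(s, 0.0)

rng = np.random.default_rng(2)
for it in range(2000):
    # generate a training state by scrambling k moves from the solved state
    s, k = SOLVED, rng.integers(1, 10)
    for _ in range(k):
        s += rng.choice(MOVES)
    # one-step lookahead target: best child value minus a step cost
    target = max(value(s + m) for m in MOVES) - 0.05
    V[s] = target if s != SOLVED else 1.0

print("learned values near solved:",
      {s: round(value(s), 2) for s in range(-3, 4)})
```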

Visit the episode page for related materials.

Breaking

(Wolfram, June 14) Wolfram Neural Net Repository (WNNR)

CoNaLa: The Code/Natural Language Challenge announced

(Amazon, June 18) DeepLens: Deep learning-enabled video camera launched by Amazon

Computer Vision and Pattern Recognition (CVPR) 2018 – Salt Lake City, June 18-22

Topics

(Google/DeepMind) Generative Query Network (GQN) - Neural scene representation and rendering

(MIT) RF-Pose: Seeing Through Walls with Wi-Fi Signals

“Scientists Have Invented a Software That Can 'See' Several Minutes Into The Future”

(Google/Univ of Chicago Medicine/Univ of California, San Francisco/Stanford Univ) Developing an “AI” to Predict When a Patient Will Die

(University of California, Irvine) DeepCube: Solving the Rubik's Cube Without Human Knowledge

Things of the Week

Book of the Week - The Next Step: Exponential Life (BBVA / OpenMind)

Science-Fiction Book of the Week - The Machine Stops, by E.M. Forster

Video of the Week -  Max Tegmark @TED2018 - How to get empowered, not overpowered, by AI


Episode 35

June 22, 2018

In recent news, Andy and Dave discuss a recent Brookings report on views of AI and robots based on internet search data; a Chatham House report on AI that anticipates disruption; Microsoft computing the future with its vision and principles on AI; the first major AI patent filings from DeepMind; the return of biomimicry, with IBM using "analog" synapses to improve neural net implementation and Stanford researchers developing an artificial sensory nervous system; and Berkeley Deep Drive providing the largest self-driving car dataset for free public download. Next, the topic of "hard exploration games with sparse rewards" returns, with a Deep Curiosity Search approach from the University of Wyoming, where the AI gets more freedom and reward from exploring ("curiosity") than from performing tasks as dictated by the researchers. From CogX 2018, work from Martinez-Plumed et al. attempts to "Forecast AI," but largely highlights the challenges in making comparisons due to the neglected, or unreported, aspects of developments, such as the data, human oversight, computing cycles, and much more. From the Google AI Blog, researchers improve deep learning performance by finding and describing the transformation policies of the data, and using that information to increase the amount and diversity of the training dataset. Then, Andy and Dave discuss attempts to use drone surveillance to identify violent individuals (for good reasons only, not for bad ones). And in a more sporty application, "AI enthusiast" Chintan Trivedi describes his efforts to train a bot to play a soccer video game by observing his play. Finally, Andy recommends an NSF workshop report; the book AI: Foundations of Computational Agents; Permutation City; and over 100 hours of video from the CogX 2018 conference.
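The AutoAugment idea is easy to sketch. In the illustration below, ours rather than Google’s code, with two stand-in operations and a hand-written rather than learned policy, a policy is a list of (operation, probability, magnitude) triples used to expand each training batch:

```python
import numpy as np

rng = np.random.default_rng(3)

def rotate90(img, mag):                 # crude stand-in operation
    return np.rot90(img, k=int(mag))

def brightness(img, mag):               # crude stand-in operation
    return np.clip(img * (1.0 + 0.1 * mag), 0, 255)

POLICY = [(rotate90, 0.5, 1), (brightness, 0.8, 3)]

def augment(img, policy):
    """Apply each op with its (learned, in the real system) prob/magnitude."""
    for op, prob, mag in policy:
        if rng.random() < prob:
            img = op(img, mag)
    return img

batch = [rng.integers(0, 255, size=(8, 8)) for _ in range(4)]
augmented = [augment(img, POLICY) for img in batch]
print("augmented batch shapes:", [a.shape for a in augmented])
```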

Visit the episode page for related materials.

Breaking

(June 7, Brookings) Report on Views of AI, Robots, and Automation based on Internet Search Data

(June 14, Chatham House Report) AI and International Affairs: Disruption Anticipated

(Microsoft) The Future Computed: AI and its role in society

(DeepMind) First major AI patent filings revealed

(June 6) From the “Biomimicry Department”: Synapses and a Sense of Touch

(June 8) Berkeley Deep Drive, the largest-ever self-driving car dataset, has been released by BDD Industry Consortium for free public download:

Topics

(University of Wyoming) Deep Curiosity Search: Intra-Life Exploration Improves Performance on Challenging Deep Reinforcement Learning Problems

(June 2/CogX18) Forecasting AI: Accounting for the Neglected Dimensions of AI Progress

(June 4, Google/AI Blog) Improving Deep Learning Performance with AutoAugment

Eye in the Sky: Real-Time Drone Surveillance System (DSS) for Violent Individuals Identification

(June 14) Building a Deep Neural Network to play FIFA18

Paper of the week (28 pages) – Report from an NSF workshop in May 2017

Technical Book of the Week – AI: Foundations of Computational Agents (Second Edition)

Science-Fiction Book of the Week – Permutation City by Greg Egan

Videos

(June 11/12) CognitionX 2018 Conference in London

Day 1

Day 2


Episode 34

June 15, 2018

In breaking news, Andy and Dave discuss Google’s decision not to renew the contract for Project Maven, as well as its AI Principles; the Royal Australian Air Force holds its biennial Air Power Conference, with a theme of AI and cyber; the Defense Innovation Unit Experimental (DIUx) releases its 2017 annual report; China’s DefCon 2018 conference focuses on AI in cybersecurity; and Nvidia’s new Xavier chip packs $10k worth of power into a $1,299 box. Next, Andy and Dave discuss a benevolent application of adversarial attack methods, with a “privacy filter” for photos that is designed to stop AI face detection (reducing detection from nearly 100 percent to 0.5 percent). MIT used AI in the development of nanoparticles, training neural nets to “learn” how a nanoparticle’s structure affects its behavior. Then the remaining topics dip deep into the philosophical realm, starting with a discussion on empiricism and the limits of gradient descent, and how philosophical concepts of empiricist induction compare with critical rationalism. Next, the topic of a potential AI winter continues to percolate with a viral blog post from Piekniewski, leading into a paper from Berkeley/MIT that discovers a 4-15% reduction in accuracy for CIFAR-10 classifiers on a new set of similar training images (casting doubt on the robustness of these systems). Andy shares a possibly groundbreaking paper on “graph networks,” which provides a new conceptual framework for thinking about machine learning. And finally, Andy and Dave close with some media selections, including Blood Music by Greg Bear and The Swarm by Frank Schätzing.
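As a rough illustration of how a privacy filter of this kind can work, here is a sketch of a generic fast-gradient-sign perturbation against a toy logistic “detector.” This is our stand-in, not the Toronto team’s actual method: a small nudge along the loss gradient collapses the detector’s confidence.

```python
import numpy as np

rng = np.random.default_rng(4)
w = rng.normal(size=64)            # toy face-detector weights
x = 0.05 * w                       # an "image" the detector flags strongly

def detect(x):
    return 1 / (1 + np.exp(-x @ w))   # probability of "face"

# Gradient of the cross-entropy loss toward label 0 w.r.t. the input is
# p * w; stepping against its sign suppresses the detection.
eps = 0.1
grad = detect(x) * w
x_adv = x - eps * np.sign(grad)

print(f"detector before: {detect(x):.3f}   after filter: {detect(x_adv):.3f}")
```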

Visit the episode page for related materials.

Breaking

(June 2) Google Plans Not to Renew Its Contract for Project Maven, a Controversial Pentagon Drone AI Imaging Program

(March 20-21) Royal Australian Air Force’s biennial Air Power Conference

Defense Innovation Unit Experimental (DIUx) - Annual Report 2017

(May 12, China) DefCon2018 conference dedicated to AI in cybersecurity: Highlights, presentations

(June 3) Nvidia's Jetson Xavier AI chip boasts $10,000-worth of power

Topics

(May 31, Univ of Toronto) AI researchers design 'privacy filter' for your photos

(June 3, MIT) AI-infused development of specialized nanoparticles

Philosophical Ruminations #1 – Empiricism and the limits of gradient descent, from Julian Togelius’ blog

AI Things of the Week

Paper of the Week - Relational inductive biases, deep learning, and graph networks

Cartoon of the Week – Abstruse Goose

Magazine of the Week – Wilson Quarterly Spring 2018 Issue – Living with AI

Technical Book of the Week – Elements of Robotics by Mordechai Ben-Ari and Francesco Mondada

Science-Fiction Books of the Week (all dealing with intelligent swarms in one way or another) –

Video of the Week (1 hr) - Gary Marcus, Deep Learning: A Critical Appraisal


Episode 33

June 8, 2018

Andy and Dave didn’t have time to do a short podcast this week, so they did a long one instead. In breaking news, they discuss the establishment of the Joint Artificial Intelligence Center (JAIC), yet another Tesla Autopilot crash, Geurts’s defense of the decision to dissolve the Navy’s Unmanned Systems Office, and a paper from Germany that describes its stance on autonomy in weapon systems. Then, Andy and Dave discuss DeepMind’s approach to using YouTube videos to train an AI to learn “hard exploration games” (with sparse rewards). In another “centaur” example, facial recognition experts perform best when combined with an AI. University of Manchester researchers announce a new footstep-recognition AI system, but Dave pulls a Linus and has a fit of “footstep awareness.” In other recent reports, Andy and Dave discuss another example of biomimicry, where researchers at ETH Zurich have modeled the schooling behavior of fish. And in brain-computer interface research, a noninvasive BCI system co-trained with tetraplegics to control avatars in a racing game. Finally, they round out the discussion with a mention of ZAC Inc. and its purported general AI, a book on how people and machines are smarter together, and a video on deep reinforcement learning.

Visit the episode page for related materials.

Breaking

(May 29) Joint Artificial Intelligence Center (JAIC) established

(May 29) Tesla that crashed into police car was in 'autopilot' mode, California official says

(May 24) Follow-up to last episode:

(May 23) Autonomy in Weapon Systems: The Military Application of Artificial Intelligence as a Litmus Test for Germany’s New Foreign and Security Policy

Topics

(May 29, DeepMind) Playing hard exploration games (with sparse rewards) by watching YouTube

(May 29) Facial Recognition Experts Perform Best With An AI Sidekick (“Centaur” example)

(SfootBD) Powerful new footstep-recognition AI system

Brain-computer-interface training helps tetraplegics win avatar race

Research into fish schooling energy dynamics could boost autonomous swarming drones

Beautiful analysis of fine-scale collective behavior of wild white stork migration with equally elegant figures

Candidate for “Hype of the Week” – ZAC (Z Advanced Computing Inc.) announcement: Maryland researchers say they discovered 'Holy Grail' of machine learning

Books

Book of the Week - AIQ: How People and Machines Are Smarter Together

Videos

Video of the week (30 min) - Reproducibility, Reusability, & Robustness in Deep Reinforcement Learning


Episode 32

June 1, 2018

In breaking news, Andy and Dave discuss: a few cracks that seem to be appearing in Google's Duplex demonstration; more examples of the breaking of Moore's Law; a Princeton effort to advance the dialogue on AI and ethics; India joining the global AI sabre-rattling; the UK Ministry of Defence's launch of an AI hub/lab; and the U.S. Navy's dissolution of its secretary-level unmanned systems office. Andy and Dave then discuss a demonstration of "zero-shot" learning, by which a robot learns to do a task by watching a human perform it once. The work reminds Andy of the early natural-language "virtual block world" SHRDLU, from the 1970s. In other news, the research team that designed Libratus (a world-class poker-playing AI) announced they had developed a better AI that, more importantly, is also computationally orders of magnitude less expensive (using a 4-core CPU with 16 GB of memory). Next, researchers at Intel and the University of Illinois Urbana-Champaign have developed a convolutional neural net that significantly improves low-light image quality while shooting at faster shutter speeds; Andy and Dave both found the results for improving low-light images to be quite stunning. Finally, after yet another round of generative adversarial examples (in which Dave predicts the creation of a new field), Andy closes with some recommendations on papers, books, and videos, including Galatea 2.2 and The Space of Possible Minds.
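A minimal sketch of that low-light training setup, ours, whereas the actual work trains a full convolutional network on raw sensor data: a short-exposure frame is amplified by the exposure ratio, and the model, here a single learnable 3x3 kernel fit with an L1 loss as in the paper, is regressed toward the long-exposure reference.

```python
import numpy as np

rng = np.random.default_rng(5)

def conv3x3(img, k):
    """3x3 convolution; border pixels left at zero for brevity."""
    out = np.zeros_like(img)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.sum(img[i-1:i+2, j-1:j+2] * k)
    return out

bright = rng.random((16, 16))                     # long-exposure reference
dark = bright / 100 + rng.normal(scale=0.001, size=bright.shape)
x = dark * 100                                    # amplify by exposure ratio

k = rng.normal(scale=0.1, size=(3, 3))            # the "network": one kernel
for step in range(300):
    resid = np.sign(conv3x3(x, k) - bright)       # subgradient of L1 loss
    grad = np.zeros((3, 3))
    for i in range(1, 15):
        for j in range(1, 15):
            grad += resid[i, j] * x[i-1:i+2, j-1:j+2]
    k -= 0.01 * grad / 196                        # 196 interior pixels
print("learned kernel (should approach an identity kernel):")
print(k.round(2))
```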

Visit the episode page for related materials.

Breaking

(Issue raised by Axios) Follow-on to Google’s May 8 demo of its new Duplex technology

(OpenAI) AI and Compute blog

Princeton Center for Information Technology Policy (CITP) announces publication of four original case studies from "Princeton Dialogues on AI and Ethics" project

India now wants AI-based weapon systems

UK launches a new AI hub

(May 16) Navy dissolves unmanned systems office

Topics

(21-25 May 2018) IEEE International Conference on Robotics and Automation (ICRA) in Brisbane, Australia

Interactive visualization of ICRA-2018 papers

Joint Concept Note 1/18 – Human-Machine Teaming

(May 21, Carnegie Mellon Univ) Depth-Limited Solving for Imperfect-Information Games

(Intel and University of Illinois Urbana-Champaign) AI is learning to see in the dark

(May 21, Microsoft Research, Stanford Univ) Generative Adversarial Examples

Paper of the week - Using Artificial Intelligence to Augment Human Intelligence

Book of the week - Galatea 2.2 by Richard Powers (published in 1995)

Videos

Video of the week (30 min) - The Space of Possible Minds: A Conversation With Murray Shanahan


Episode 31

May 25, 2018

In a review of the latest news, Andy and Dave discuss: the White House’s “plan” for AI, the departure of employees from Google due to Project Maven, another Tesla crash, the first AI degree for undergraduates at CMU, and Boston Dynamics’ jumping and climbing robots. Next, two AI research topics have implications for neuroscience. First, Andy and Dave discuss AI research at DeepMind, which showed that an AI trained to navigate between two points developed “grid cells,” very similar to those found in the mammalian brain. And second, another finding from DeepMind on “meta-learning” suggests that dopamine in the human brain may have a more integral role in meta-learning than previously thought. In another example of “AI-chemy,” Andy and Dave discuss the looming problem of (lack of) explainability in health care (with implications for many other areas, such as DoD), and they also discuss some recent research on adding an option for an AI to defer a decision with “I Don’t Know” (IDK). After a quick romp through the halls of AI-generated DOOM, the two discuss a recent proof that reveals the fundamental limits of scientific knowledge (so much for super-AIs). And finally, they close with a few media recommendations, including “The Book of Why: The New Science of Cause and Effect.”
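The “I Don’t Know” option can be illustrated with the simplest possible version, a confidence threshold. This is our sketch; Madras’s learning-to-defer work actually trains the deferral decision jointly with a downstream decision-maker:

```python
import numpy as np

rng = np.random.default_rng(6)
probs = rng.dirichlet(alpha=[1, 1, 1], size=10)   # toy softmax outputs

THRESHOLD = 0.75
for p in probs:
    if p.max() >= THRESHOLD:
        print(f"predict class {p.argmax()} (confidence {p.max():.2f})")
    else:
        print(f"IDK - defer to human (best guess only {p.max():.2f})")
```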

Visit the episode page for related materials.

Breaking

(May 14) Follow up on earlier news about Google employees being “upset” with work on Project Maven

(May 14) Tesla Model S crashed into a fire department truck in Utah: Police probe whether Autopilot feature was on in Tesla crash

(May 10) “The White House’s plan for AI is to not have a plan for AI”

(May 10) 1st AI degree for undergraduates at CMU

Boston Dynamics' robots can now run, jump and climb

Topics

Google, DeepMind and University College London, UK: Navigating with grid-like representations in artificial agents

Google, DeepMind: Using an NN to help explain ‘meta-learning’ in human brains

Lack of Explainability in Health Care Becoming an Issue?

PhD student David Madras, CS, University of Toronto: Learning to Defer

Video game maps made by AI: More DOOM!

David Wolpert, Santa Fe Institute: New proof reveals fundamental limits of scientific knowledge

Paper of the week - AGI Safety Literature Review

Book of the Week - Judea Pearl’s The Book of Why: The New Science of Cause and Effect

Videos

Video of the week - 2018 Isaac Asimov Memorial Debate: Artificial Intelligence


Episode 30

May 18, 2018

In a review of the most recent news, Andy and Dave discuss the latest information on the fatal self-driving Uber accident; the AI community reacts (poorly) to Nature's announcement of a new closed-access section on machine learning; on-demand self-driving cars will be coming soon to north Dallas; and the Chinese government is adding AI to the high school curriculum with a mandated textbook. For more in-depth topics, Andy and Dave discuss the latest information from DARPA's Lifelong Learning Machines (L2M) project, which has announced its initial teams and topics, seeking "paradigm-changing approaches" as opposed to incremental improvements. Next, they discuss an experiment from OpenAI that provides visibility into dialogue between two AIs on a topic, one of which is lying. This discussion segues into recent comparisons of the field of machine learning to the ancient art of alchemy. Dave avoids using the word "alcheneering," but thinks that "AI-chemy" might be worth considering. Finally, after a discussion on a couple of photography-related developments, they close with a discussion on some papers and videos of interest, including the splash of Google's new "Turing-test-beating" Duplex assistant for conducting natural conversations over the phone.

Visit the episode page for related materials.

Breaking

Uber sets safety review; media report says software cited in fatal crash

Thousands of AI researchers will boycott a new science journal

Self-driving cars are here: Drive.ai will offer on-demand robotic cars in Frisco, a suburb north of Dallas

China brings AI to high school curriculum, with mandated textbook

Facebook Adds A.I. Labs in Seattle and Pittsburgh, Pressuring Local Universities

Topics

(DARPA) Lifelong Learning Machines (L2M) project

(OpenAI) How can we be sure AI will behave? Perhaps by watching it argue with itself

AI researchers allege that machine learning is alchemy

Facebook Training Image Recognition AI with Billions of Instagram Photos

(NVIDIA) Inpainting for Irregular Holes Using Partial Convolutions

Google Photos to Use AI to Colorize Black-and-White Photos: Keynote (Google I/O '18) (at 1:33:15)

Google’s new Duplex "AI Assistant" technology

Paper of the week: Exploration of Swarm Dynamics Emerging from Asymmetry

Short story of the week: Automated Valor by August Cole (author of Ghost Fleet)

Video

What is a complex system? | Karoline Wiesner & James Ladyman (TED Talk, 15 min) Complex systems - Beehives and human brain - merging of CAS, AI, and Cybernetics

DeepMind - From Generative Models to Generative Agents - Koray Kavukcuoglu (2 May, at ICLR2018, 45min)


Episode 29

May 11, 2018

Andy and Dave discuss a couple of recent reports and events on AI, including the Sixth International Conference on Learning Representations (ICLR). Next, Edward Ott and fellow researchers have applied machine learning to replicate chaotic attractors, using "reservoir computing." Andy describes the reasons for his excitement in seeing how far out this technique is able to predict a 4th-order nonlinear partial differential equation. Next, Andy and Dave discuss a few adversarial-attack-related topics: a single-pixel attack for fooling deep neural network (DNN) image classifiers; an Adversarial Robustness Toolbox from IBM Research Ireland, which provides an open-source software library to help researchers defend DNNs against adversarial attacks; and the susceptibility of the medical field to fraudulent attacks. The BAYOU project takes another step toward giving AI the ability to program new methods for implementing tasks. And Uber AI Labs releases source code that can train a DNN to play Atari games in about 4 hours on a *single* 48-core modern desktop! Finally, after a review of a few books and videos, including Paul Scharre's new book "Army of None," Andy and Dave conclude with a discussion on potatoes.
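Andy’s excitement is easier to share with a toy version in hand. Below is a minimal echo-state-network sketch of reservoir computing, ours; Ott’s group used far larger reservoirs on spatiotemporal data such as the Kuramoto-Sivashinsky equation. The recurrent “reservoir” weights stay fixed and random, and only a linear readout is trained:

```python
import numpy as np

rng = np.random.default_rng(7)
# chaotic driver: the logistic map x' = 3.9 x (1 - x)
x = np.empty(1200); x[0] = 0.5
for t in range(1199):
    x[t+1] = 3.9 * x[t] * (1 - x[t])

N = 200                                    # reservoir size
Win = rng.uniform(-0.5, 0.5, N)            # fixed input weights
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

states = np.zeros((1200, N))
for t in range(1199):                      # drive the fixed reservoir
    states[t+1] = np.tanh(W @ states[t] + Win * x[t])

# train only the linear readout (ridge regression) on steps 200..999
S, y = states[200:1000], x[201:1001]
Wout = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)

pred = states[1000:1199] @ Wout            # one-step-ahead test predictions
print("test RMSE:", np.sqrt(np.mean((pred - x[1001:1200]) ** 2)))
```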

Visit the episode page for related materials.

Breaking

How Might Artificial Intelligence Affect the Risk of Nuclear War? - RAND Corp

(30 April - 3 May) Sixth International Conference on Learning Representations

(April 26) Congressional Research Service (CRS) report: Artificial Intelligence and National Security

(April 23) Uber AI Labs, Accelerating Deep Neuroevolution: Train Atari in Hours on a Single Personal Computer

Bulletin of Atomic Scientists, special issue on Military Applications of AI

(April 24) Book of the week: Paul Scharre, Army of None: Autonomous Weapons and the Future of War   

Topics

Using Machine Learning to Replicate Chaotic Attractors and Calculate Lyapunov Exponents from Data

One pixel attack for fooling deep neural networks

Neural Sketch Learning: Rice University turns deep-learning AI loose on software development

Video

(1.2 hrs) MIT AGI: Life 3.0, discussion w/Max Tegmark, Physics Professor at MIT, co-founder of Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence.

(1.5 hrs) The Rise of AI Deep Learning - documentary 2018

“This is getting too silly,” as Graham Chapman, from Monty Python, might say: AI Will Give Us Better French Fries


Episode 28

May 4, 2018

This week, Andy, Larry, and Dave welcome Major General Mick Ryan, Commander of the Australian Defence College. Mick has recently published a report on Human-Machine Teaming for Future Ground Forces, in which he identifies key areas for human-machine teams, as well as challenges that military forces will have in incorporating these new capabilities. The group discusses some of these issues, and some of the broader challenges in both the near and far term.

Visit the episode page for related materials.


Episode 27

April 27, 2018

Andy and Dave start this week's podcast with a review of some of the latest announcements: the latest meeting of the UN Convention on Certain Conventional Weapons, SecDef Mattis's announcement of a new joint program office for AI, a declaration of cooperation on AI by 25 European countries, and a UK Parliament report on AI. They then discuss the latest Center for the Study of the Drone report, which compares U.S. Department of Defense drone spending for FY19 with FY18. The MIT-IBM Watson AI Lab has launched a "Moments in Time" dataset, the first steps toward building a large and robust set of short videos for action classification purposes. Google has increased the quality of its AI in picking voices out of a noisy room by making use of additional information (here, video). And Google has introduced a way to "talk to books"; Andy and Dave were a bit underwhelmed, but check it out and judge for yourself. Finally, Andy and Dave close with a selection of whimsical comments from the news, and a selection of videos.

Visit the episode page for related materials.

Breaking

(April 9-13) Convention on Certain Conventional Weapons (CCW) - Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS)

(April 9) DoD Official Highlights Value of Artificial Intelligence to Future Warfare

(April 10) 25 EU Member States sign up to cooperate on Artificial Intelligence

(April 16) UK Parliament Report on AI: AI in the UK: ready, willing and able? (PDF)

Topics

(April 9) Center for the Study of the Drone (Bard College): FY19 drone budget request

MIT-IBM Watson AI Lab launches Moments in Time dataset

Google trains its AI to pick out voices in a noisy crowd to SPY on your secret conversations

(April 13) Google introduces new AI experience called 'Talk to Books' semantic-search feature

Whimsical

Elon Musk drafts in humans after robots slow down Tesla Model 3 production: "Yes, excessive automation at Tesla was a mistake. To be precise, my mistake. Humans are underrated."

Move Over, Moore’s Law: Make Way for Huang’s Law

Video

(April 11, 25 min) Al Jazeera - Do You Trust This Computer?: "Will killer robots save us or destroy humanity?"

TED Talk: General Artificial Intelligence: Making sci-fi a reality | Darya Hvizdalova


Episode 26

April 20, 2018

Anna Williams joins Dave in welcoming CAPT Sharif Calfee for a two-part discussion on unmanned systems and artificial intelligence. As part of his fellowship research, CAPT Calfee has been speaking with organizations and subject matter experts across the U.S. Navy, the U.S. Government, Federally Funded Research and Development Centers, University Affiliated Research Centers, and Industry, in order to understand the broader efforts involving unmanned systems, autonomy, and artificial intelligence. In the first part of their discussion, the group discusses the progress and the challenges that the CAPT has observed in his engagements. In the second part, the group discusses various steps that the U.S. Navy can take to move forward more deliberately, to include the consideration for a new Naval Reactors-like office to oversee AI.


Episode 25

April 13, 2018

Andy and Dave cover a wide variety of topics this week, starting with two prominent examples of employees and researchers objecting to certain uses of AI technology. Andy and Dave then discuss a recent GAO report on AI, as well as France’s announcement to invest in AI. They also discuss AI in designing chemical synthesis pathways, AI in reading echocardiograms, meta-learning (learning how to learn in unsupervised learning), helping robots express themselves when they fail, and a collection of papers, graphic novels, and videos. By the end, Dave’s arms are flailing wildly!

Visit the episode page for related materials.

Breaking

NY Times: ‘The Business of War’: Google Employees Protest Work for the Pentagon

(March 29) France announces investment in AI – wants to become AI hub

Topics

(GAO Report, March 28) Artificial Intelligence: Emerging Opportunities, Challenges, and Implications

(Nature volume 555, pages 604–610, March 29) Chemical Syntheses with DNN

Meta-Learning: Learning Unsupervised Learning Rules (Google Brain)

Helping Robots Express Themselves When They Fail

Stanford's DAWNBench is a new benchmark suite measuring a variety of deep learning training and inference tasks

Adversarial Attacks and Defences Competition

Graphic Novel

Silent Ruin, by Army Cyber Institute, West Point

NATO Vs. Killer Russian Robots: Graphic Novel Envisions Cyberwar In Moldova

Video

How we can teach computers to make sense of our emotions (TED Talk, 11 min)

The Threat of AI Weapons

Will AI make us immortal? Or will it wipe us out? Elon Musk, Ray Kurzweil and Nick Bostrom.

Vicious Cycle- a group of little autonomous robots performing a range of repetitive functions (3 min)

Marvin-the-robot


Episode 24

April 6, 2018

Dave starts with a shocking revelation! Can you pass the test?? Andy and Dave then discuss MIT Tech Review’s EmTech Digital Conference, which highlighted the latest in AI research. Next, Andy and Dave discuss the rapid expansion of newly reported AI models, including the “GAN Zoo.” Venture capital funding in the U.S. suggests that the AI market may be cooling. Andy describes new insight into brain function that will likely lead to further AI breakthroughs. And after a discussion of an AI playing Battlefield 1, Andy and Dave close with a look at AIs learning in electric dreams, and a GAN that can lip sync a face to an audio-video clip.

Visit the episode page for related materials.

Breaking

MIT Technology Review’s EmTech Digital conference in San Francisco - March 26-27

Topics

(MIT Media Lab) Closing the AI Knowledge Gap - towards a “Science of AI”

The brain may learn completely differently than we've assumed since the 20th century

EA Teaches AI to Play 'Battlefield 1' Multiplayer

(Google Brain) World Models: Can agents learn inside their own dreams?

Speech-Driven Facial Reenactment Using Conditional Generative Adversarial Networks

Audio

Listen: First Music Album Composed By Artificial Intelligence

AI/FlowMachines

Videos

We Are Here To Create (40 min) A Conversation with Kai-Fu Lee, author of forthcoming book AI Superpowers: China, Silicon Valley, and the New World Order


Episode 23

March 30, 2018

With the news of the first death at the digital hands of a driverless vehicle, Andy and Dave discuss some of the broader issues surrounding the understanding and implementation of AI technology. In other news, they discuss the creation of a digital version of yeast (DCell) as a way to provide insight into the otherwise “black box” of AI. Then, after describing efforts at Google and DeepMind to use evolutionary AutoML to discover neural network architectures, Andy and Dave discuss an example of how background knowledge (“priors”) transfers to the world of games, and how that compares with AI.

Visit the episode page for related materials.

Breaking

First known pedestrian death involving a self-driving vehicle

Topics

(Univ. California, San Diego, School of Medicine) DCell: a digital model of a yeast cell

(Google, DeepMind) Using Evolutionary AutoML to Discover Neural Network Architectures

(University of California, Berkeley) Investigating Human Priors for Playing Video Games

Three DARPA program announcements:

Videos

The Cinematic Control Room since the early 1970s


Episode 22

March 23, 2018

Larry Lewis, Director of CNA’s Center for Autonomy and AI, again sits in for Dave this week. He and Andy discuss: the recent passing of physicist Stephen Hawking (along with his "cautionary" views on AI); CNAS’s recent launch of a new Task Force on AI and National Security; Microsoft’s AI breakthrough in matching human performance translating news from Chinese to English; a report that looks at China’s "AI Dream" (and introduces an "AI Potential Index" to assess China’s AI capabilities compared to other nations); a second index, from a separate report, called the "Government AI Readiness Index," which inexplicably excludes China from the top 35 ranked nations; and the issue of legal liability of AI systems. They conclude with call-outs to a fun-to-read crowd-sourced paper, written by researchers in artificial life, evolutionary computation, and AI, that tells stories about the surprising creativity of digital evolution, and three videos: a free BBC-produced documentary on Stephen Hawking, a technical talk on deep learning, and a Q&A session with Elon Musk (that includes an exchange on AI).

Visit the episode page for related materials.

Breaking

Stephen Hawking passed away

CNAS (Center for New American Security) launches Task Force on Artificial Intelligence and National Security

AI matches human performance translating news from Chinese to English

Topics

Deciphering China’s AI Dream - Future of Humanity Institute, University of Oxford

2018 Emerging Tech Trends Report (248 pages) – Future Today Institute, launched March 11, 2018

Artificial Intelligence and Legal Liability - John Kingston, University of Brighton, UK

Interesting Paper (30 pages) - The Surprising Creativity of Digital Evolution

Videos

Yann LeCun and Christopher Manning Discuss Deep Learning

Elon Musk (CEO of SpaceX and Tesla)


Episode 21

March 16, 2018

Larry Lewis, Director of CNA’s Center for Autonomy and AI, sits in for Dave this week, as he and Andy discuss: a recent report that not all Google employees are happy with Google’s partnership with DoD (in developing a drone-footage-analyzing AI); research efforts designed to lift the lid, just a bit, on the so-called “black box” reasoning of neural-net-based AIs; some novel ways of getting robots/AIs to teach themselves; and an arcade-playing AI that has essentially “discovered” that if you can’t win at the game, it is best to either kill yourself or cheat. The podcast ends with a nod to a new free online AI resource offered by Google, another open-access book (this time on the subject of robotics), and a fascinating video of Stephen Wolfram, of Mathematica fame, lecturing about artificial general intelligence and the “computational universe” to a computer science class at MIT.

Visit the episode page for related materials.


Episode 20

March 9, 2018

Andy and Dave discuss a recently released report on the Malicious Use of AI: Forecasting, Prevention, and Mitigation, which describes scenarios where AI might have devious applications (hint: there’s a lot). They also discuss a recent report that describes the extent of missing data in AI studies, which makes it difficult to reproduce published results. Andy then describes research that looks into ways to alter information (in this case, classification of an image) to fool both AI and humans. Dave has to repeat the research in order to understand the sheer depth of the terror that could be lurking below. Then Andy and Dave quickly discuss a new algorithm that can mimic any voice with just a few snippets of audio. The only non-terrifying topic they discuss involves an attempt to make Alexa more chatty. Even then, Dave decides that this effort will only result in a more-empty wallet.

Visit the episode page for related materials.


Episode 19

March 2, 2018

Andy and Dave welcome Sam Bendett, a research analyst for CNA's Center for Strategic Studies, where he is a member of the Russia Studies Program. His work involves Russian defense and security technology and developments, Russian geopolitical influence in the former Soviet states, as well as Russian unmanned systems development, Russian naval capabilities and Russian decision-making calculus during military crises. Sam is in our studio to discuss recent Russian developments in AI and unmanned systems, and to preview an upcoming Defense One summit called "Genius Machines," which he will be speaking at on March 7.

Visit the episode page for related materials.


Episode 18

Feb 23, 2018

In another smattering of topics, Andy and Dave discuss the latest insight into the dispersion of global AI start-ups, as well as AI talent. They also describe a commercially available drone that can navigate landscapes and obstacles as it tracks a target. And they discuss an AI algorithm with “social skills” that can teach humans how to collaborate. After chatbots and Deep TAMER, Andy and Dave discuss a few recent videos, including one about door-opening dogs; and Dave has a meltdown as he fails to recall The Day the Earth Stood Still, but instead substitutes a different celestial body. Klaatu barada nikto.

Visit the episode page for related materials.

Breaking News

Artificial Intelligence Trends To Watch In 2018

Follow-up to podcast #17 – DroNet: Learning to Fly by Driving

Topics

Tencent says there are only 300,000 AI engineers worldwide, but millions are needed

AI algorithm with ‘social skills’ teaches humans how to collaborate: Cooperating with machines

Human-machine collaborative chatbot, Evorus

Deep TAMER: Interactive Agent Shaping in High-Dimensional State Spaces

(Google/DeepMind) IMPALA: Scalable Distributed DeepRL in DMLab-30

Books

Stanislaw Lem short story: “The Upside-Down Evolution”

Artificial Intelligence and Games, by Georgios N. Yannakakis and Julian Togelius, 2018 (hardcopy)

Video

Intel's Winter Olympics 1,218-Drone Light Show

Boston Dynamics crosses new threshold with door-opening dog (SpotMini)


Episode 17

Feb 16, 2018

Andy and Dave start this week’s episode with a superconducting ‘synapse’ that could enable powerful future neuromorphic supercomputers. They discuss an attempt to use AI to decode the mysterious Voynich manuscript, and then move on to Hofstadter’s take on the shallowness of Google Translate (with mention of the ELIZA effect). After discussing DroNet’s drones that can learn to fly by watching a driving video, and updating the Domain-Adaptive Meta-Learning discussion where a robot can learn a task by watching a video, they close with some recommendations of videos and books, including Lem’s ‘Golem XIV.’

Visit the episode page for related materials.


Episode 16a & 16b

Feb 9, 2018

Andy and Dave welcome back Larry Lewis, the Director of CNA's Center for Autonomy and Artificial Intelligence, and welcome Merel Ekelhof, a Ph.D. candidate at VU University Amsterdam and visiting scholar at Harvard Law School. Over the course of this two-part series, the group discusses the idea of "meaningful human control" in the context of the military targeting process, the increasing role of autonomous technologies (and the point that autonomy is not simply an issue "at the boom"), and the potential directions for future meetings of the U.N. Convention on Certain Conventional Weapons.

Visit the episode page for related materials.


Episode 15

Feb 2, 2018

Andy and Dave discuss two recent AI announcements that employ generative adversarial networks: an AI algorithm that can crack classic encryption ciphers (without prior knowledge of English), and an AI algorithm that can "draw" (generate) an image based on simple text instructions. They start, however, with a discussion on the recent rash of autonomous (and semi-autonomous) vehicle incidents, and they also discuss "brain-on-a-chip" hardware, as well as a robot that can learn to do tasks by watching video.
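Both results ride on the same adversarial mechanic, which fits in a few lines. Here is a bare-bones sketch, ours; the cipher-cracking and text-to-image systems are vastly larger. A generator and a discriminator are updated in opposition until the generator’s output distribution matches the real one:

```python
import numpy as np

rng = np.random.default_rng(8)
a, b = 0.1, 0.0            # generator G(z) = a*z + b; real data is N(3, 0.5)
w, c = rng.normal(), 0.0   # discriminator D(x) = sigmoid(w*x + c)
sig = lambda t: 1 / (1 + np.exp(-t))
lr = 0.05

for step in range(5000):
    z = rng.normal()
    x_real = 3.0 + 0.5 * rng.normal()
    x_fake = a * z + b
    p_r, p_f = sig(w * x_real + c), sig(w * x_fake + c)
    # discriminator ascent: raise D on real samples, lower it on fakes
    w += lr * ((1 - p_r) * x_real - p_f * x_fake)
    c += lr * ((1 - p_r) - p_f)
    # generator ascent on log D(G(z)), the non-saturating GAN loss
    a += lr * (1 - p_f) * w * z
    b += lr * (1 - p_f) * w

# the sign of the learned scale is arbitrary, so report its magnitude
print(f"generator: mean={b:.2f}, scale={abs(a):.2f} (target: mean 3, scale 0.5)")
```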

Visit the episode page for related materials.

Breaking News

Tesla ‘on Autopilot’ slams into parked fire truck on California freeway

People Keep Confusing Their Teslas for Self-Driving Cars

Waze unable to explain how car ended up in Lake Champlain

Tesla Bears Some Blame for Self-Driving Crash Death, Feds Say

Tesla Autopilot crash caught on dashcam shows how not to use the system

Topics

(Google/University of Toronto) AI code decryption

(MIT) Artificial synapse created for "brain-on-a-chip" hardware

(Microsoft) Text to Image Generation with - AI that draws what it is instructed to draw

(Google/University of Southern California) Robot learning from video

Miscellaneous Links

Point / Counterpoint "debate" on Slaughterbots discussed in podcast #5 – recall that Slaughterbots is Future of Life Institute’s "mini movie" on why autonomous weapons ought to be banned


Episode 14

Jan 26, 2018

Andy and Dave cover a series of topics that connect with broader "meta" questions about the role and nature of AI. They begin with Google's Cloud AutoML announcement, which offers ways to more easily build your own AI. They discuss the announcement of AIs that "defeated" humans on a Stanford University reading comprehension test, and the misrepresentation of that achievement. They discuss deep image reconstruction, with a neural net that "reads minds" by piecing together images from a human's visual cortex. And they close with discussions about Gary Marcus's recent article, which offers a critical appraisal of deep learning, and a recent paper that suggests that convolutional neural nets may not be as good at "grasping" higher-level abstract concepts as is typically believed.

Visit the episode page for related materials.

Breaking News

Google announces Cloud AutoML

Topics

AI has "defeated" humans on a Stanford University reading comprehension test

Deep image reconstruction: Japanese-designed NN can "read minds"

Gary Marcus (NYU Professor and Founder of Uber-owned ML startup Geometric Intelligence) publishes Deep Learning: A Critical Appraisal

Video

Artificial intelligence debate at New York University between Yann LeCun and Gary Marcus: Does AI Need More Innate Machinery? (2 hrs)


Episode 13

Jan 19, 2018

Andy and Dave discuss a newly announced method of attack on the speech-to-text capability DeepSpeech, which introduces noise to an audio waveform so that the AI does not hear the original message, but instead hears a message that the attacker intends. They also discuss the introduction of probabilistic models to AI as a way for AI to "embrace uncertainty" and make better decisions (or perhaps doubt whether or not humans should remain alive). And finally, Andy and Dave discuss some recent applications of AI to different areas of scientific study, particularly in the examination of very large data sets.
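One standard way to give a model that kind of self-doubt is to sample it. The sketch below is ours, using Monte Carlo dropout on an untrained toy network, while the discussed work covers probabilistic approaches more broadly; the spread across stochastic forward passes is read as uncertainty:

```python
import numpy as np

rng = np.random.default_rng(9)
W1 = rng.normal(size=(10, 32))
W2 = rng.normal(size=32)

def forward(x, drop=0.5):
    h = np.maximum(0, x @ W1)               # ReLU hidden layer
    h *= rng.random(32) > drop              # dropout stays ON at test time
    return h @ W2

x = rng.normal(size=10)
samples = np.array([forward(x) for _ in range(200)])
print(f"prediction {samples.mean():.2f} +/- {samples.std():.2f}")
```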

Visit the episode page for related materials.

Topics

From images to voice

AI systems that doubt themselves: AI will make better decisions by embracing uncertainty

AI for science

Video

Paul Scharre’s testimony before the House Armed Services Subcommittee on Emerging Threats and Capabilities (9 Jan 2018): China’s Pursuit of Emerging and Exponential Technologies. Watch clip. Transcript.

The documentary about Google DeepMind's 'AlphaGo' algorithm is now available on Netflix


Episode 12

Jan 12, 2018

Andy and Dave discuss “Tacotron 2,” the latest text-to-speech capability from Google that produces results nearly indistinguishable from human speech. They also discuss efforts at Google to create a Neural Image Assessment (NIMA), that not only can evaluate the quality of an image, but can also be trained to rate the aesthetics (as defined by the user) of an image. And after a look at some of the AI predictions for 2018, they play a musical game with two pieces of music – can Andy guess which piece Dave wrote, and which the AI composer AIVA, the Artificial Intelligence Virtual Artist, wrote?

Visit the episode page for related materials.


Episode 11

Jan 5, 2018

It’s a smorgasbord of topics, as Andy and Dave discuss: the “AI 100” top companies report; the implications of Google’s new AI Research Center in Beijing; a workshop from the National Academy of Science and the Intelligence Community Studies Board on the challenges of machine generation of analytic products from multi-source data; Ethically Aligned Design and the IEEE; Quantum Computing; and finally, some Kasparov-related materials.

Visit the episode page for related materials.

Topics

CB Insights (market analysis firm): AI 100: The Artificial Intelligence Startups Redefining Industries

Google Opens an AI Research Center In Beijing

(Workshop) National Academy of Science / Intelligence Community Studies Board - Challenges in Machine Generation of Analytic Products from Multi-Source Data

Ethically Aligned Design (EAD) – IEEE – toward a global, multilingual collaboration

Quantum Computing + machine learning: A Startup Uses Quantum Computing to Boost Machine Learning

  • Related: IBM announces 50-qubit quantum computer on 10 Nov; caveat (as for all state-of-the-art quantum computers): the quantum state is preserved for 90 microseconds, a record for the industry, but still an extremely short period of time. IBM Raises the Bar with a 50-Qubit Quantum Computer

Microsoft releases a (preview of a) “Quantum Development Kit” (integrated with Visual Studio) (Video)

Book/Video

Kasparov on Deep Learning in chess:


Episode 10

Dec 29, 2017

Andy and Dave continue their discussion on the 31st Annual Conference on Neural Information Processing Systems (NIPS), covering Sokoban, chemical reactions, and a variety of video disentanglement and recognition capabilities. They also discuss a number of breakthroughs in medicine that involve artificial intelligence: a robot passing a medical licensing exam, an algorithm that can diagnose pneumonia better than expert radiologists, a venture between GE Healthcare and NVIDIA to tap into volumes of unrealized medical data, and deep-brain stimulation. Finally, for reading material and reference, Andy recommends a technical lecture on reinforcement learning, as well as two books on robot ethics.

Visit the episode page for related materials.

Topics

NASA announcement on Dec. 14

Follow-up on AlphaGo (by DeepMind): AlphaGo Teach

31st Annual Conference on Neural Information Processing Systems (NIPS)

Imagination-Augmented Agents for Deep Reinforcement Learning

(IBM) Predicting outcomes of chemical reactions (Video)

DrNET: Unsupervised Learning of Disentangled Representations from Video

NASNet:

Several Milestones in Artificial Intelligence Were Just Reached in Medicine

Video

Rich Sutton ("Father" of reinforcement learning, Department of Computing Science, University of Alberta) – 1.5hr technical lecture on a reinforcement-learning technique called temporal-difference learning (TDL)
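For readers who want the flavor before committing 1.5 hours, here is TD(0) on the classic five-state random walk, a standard textbook example rather than one taken from the lecture:

```python
import numpy as np

rng = np.random.default_rng(10)
V = np.zeros(7)                  # states 0..6; 0 and 6 are terminal
alpha, gamma = 0.1, 1.0

for episode in range(2000):
    s = 3                        # start in the middle
    while s not in (0, 6):
        s2 = s + rng.choice([-1, 1])
        r = 1.0 if s2 == 6 else 0.0
        # TD(0) update: move V(s) toward r + gamma * V(s')
        V[s] += alpha * (r + gamma * V[s2] - V[s])
        s = s2

print("state values:", V[1:6].round(2))   # true values: 1/6 .. 5/6
```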

Books

Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, edited by Patrick Lin, Keith Abney, and Ryan Jenkins, Oxford University Press

Towards a Code of Ethics for Artificial Intelligence, Paula Boddington, Springer-Verlag


Episode 9

Dec 22, 2017

After some brief speculation on the announcement from NASA (which was being held at the same time as this podcast was recorded), and a quick review of AlphaGo Teach, Andy and Dave discuss the 31st Annual Conference on Neural Information Processing Systems (NIPS). With over 8,000 attendees, 7 invited speakers, and seminar and poster sessions, NIPS provides insight into the latest and greatest developments in deep learning, neural nets, and related fields.

Visit the episode page for related materials.


Episode 8

Dec 15, 2017

Andy and Dave discuss how DeepMind's AI continues to bust through the record books while AlphaZero takes one step closer to world domination (of all board games). After a brief discussion on protein folding, they discuss the "AI Index," which seeks to measure the evolution and advances in AI over time.

Visit the episode page for related materials.


Episode 7

Dec 8, 2017

Andy and Dave discuss a market analysis report that identifies where the Department of Defense is spending money in artificial intelligence, big data, and the cloud. They also elaborate on the challenge of "catastrophic forgetting," and a 4-year program at DARPA that seeks to develop "Lifelong Learning Machines," which can continuously apply the results of past experiences. After a conversation about SquishedNets, they cover a Harvard research paper that asserts the need for AI to have explanatory capabilities and accountability.

Visit the episode page for related materials.


Episode 6a & 6b

Nov 24, 2017

Dr. Larry Lewis joins Andy and Dave to discuss the U.N. Convention on Conventional Weapons, which met in mid-November with a "mandate to discuss" the topic of lethal autonomous weapons. Larry provides an overview of the group's purpose, the group’s schedule and discussions, the mood and reaction of various parts of the group, and what the next steps might be.

Visit the episode page for related materials.

Topics

November 13-17 meeting of the Convention on Conventional Weapons (CCW) Group of Governmental Experts (GGE) on lethal autonomous weapons systems (86 countries)

22 countries now support a prohibition, with Brazil, Iraq, and Uganda joining the list of ban endorsers during the GGE meeting. Cuba, Egypt, Pakistan, and other states that support the call to ban fully autonomous weapons also forcefully reiterated the urgent need for a prohibition.

States will take a final decision on the CCW’s future on this challenge, including 2018 meeting duration/dates, at the CCW’s annual meeting on Friday, 24 November.

2017 Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS)/links

Group of Governmental Experts on Lethal Autonomous Weapons Systems (links to docs)

Recaps of the UN CCW meetings Nov 13 – 17 (by Autonomous Weapons):

  • “The vast majority of CCW high contracting parties participating in this meeting do want concrete action. The majority of those want a legally binding instrument, while others prefer—at least for now—a political declaration or other voluntary arrangements. However, China, Japan, Latvia, Republic of Korea, Russia, and the United States made it clear that they do not want to consider tangible outcomes at this time.”
  • Autonomous Weapons recap
  • Stop Killer Robots recap

Video

Slaughterbots – Future of Life Institute “mini movie” on why autonomous weapons ought to be banned (postscript by Stuart Russell, AI researcher):

Related: In Aug 2017, Elon Musk led 116 AI experts in an open letter calling for a ban on killer robots. Read.


Episode 5

Nov 17, 2017

Andy and Dave discuss the recent Geneva meeting of the UN Convention on Certain Conventional Weapons, which laid the groundwork for discussing the role of lethal autonomous weapons. They also discuss a new technique, called capsule networks, that aims to improve recognition of an object despite a change in spatial orientation. Andy and Dave conclude with a discussion of why fruit flies are so awesome.

Visit the episode page for related materials.


Episode 4

Nov 10, 2017

Andy and Dave discuss MIT efforts to create a tool to train AIs, in this case, using another AI to provide the training. They discuss efforts to crack the "cocktail party" dilemma of picking out individual voices in a noisy room, as well as an AI that can "upres" photographs with remarkable use of texture (that is, taking a lower resolution photo and making it larger in a realistic way). Finally, they discuss the latest MIT Tech Review magazine, which focused on AI.

Visit the episode page for related materials.


Episode 3

Nov 10, 2017

Andy and Dave follow up on the discussion of AlphaGo Zero and the never-before-seen patterns of play that the AI discovered, and the implications of such discoveries (which seem to be the "norm" for AI). They also discuss Google's AutoML project, which applies machine learning to help improve machine learning.

Visit the episode page for related materials.


Episode 2

Nov 3, 2017

Andy and Dave discuss the late-breaking news of AlphaGo Zero, a new iteration of the Go-playing AI, which surpassed its predecessor AI in about 3 days of learning, using only the basic rules of Go (as opposed to the 6+ months the original needed, using thousands of human games as examples).

Visit the episode page for related materials.

Topics

AlphaGo Zero beats AlphaGo 100-0 after 3 days of training (compared to several months for the original AlphaGo), without any human intervention or human game-play data! Read: Technology Review and Nature
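The headline ingredient, learning from self-play given only the rules, can be shown at toy scale. The sketch below is ours; AlphaGo Zero itself pairs a deep network with Monte Carlo tree search. Here a tabular learner teaches itself the game of Nim purely by playing against itself:

```python
import numpy as np

rng = np.random.default_rng(11)
Q = np.zeros((11, 2))                 # Q[stones, action]; action 0/1 = take 1/2

def pick(s, eps=0.2):
    n_legal = 2 if s >= 2 else 1
    if rng.random() < eps:
        return int(rng.integers(0, n_legal))
    return int(np.argmax(Q[s, :n_legal]))

for game in range(20000):
    s, history = 10, []
    while s > 0:                      # both "players" share Q: pure self-play
        a = pick(s)
        history.append((s, a))
        s -= a + 1
    reward = 1.0                      # whoever took the last stone won
    for st, at in reversed(history):  # alternate the sign back through time
        Q[st, at] += 0.1 * (reward - Q[st, at])
        reward = -reward

# With 4 stones left, taking 1 leaves the opponent a losing position of 3.
print("best move at 4 stones:", "take 2" if Q[4].argmax() else "take 1")
```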

Video

AlphaGo Documentary - Local screening in Reston, VA


Episode 1

Nov 3, 2017

In the inaugural podcast for AI with AI, Andy provides an overview of his recent report on AI, Robots, and Swarms, and discusses the bigger picture of the development and breakthroughs in artificial intelligence and autonomy. Andy also discusses some of his recommended books and movies.

Visit the episode page for related materials.

Books

Movies & TV

AI/general

  • When Will AI Exceed Human Performance? - Survey of 352 experts who had published at recent AI conferences (Oxford, Yale, and the Future of Life Institute)
  • AI Progress Measurement - Measuring the Progress of AI Research
  • New Theory Cracks Open the Black Box of Deep Learning - Tishby / information bottleneck
  • "The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts." - analogy to renormalization (as used in statistical physics), may lead to better understanding and new architectures
  • Forget Killer Robots—Bias Is the Real AI Danger - Technology Review (MIT)
  • An AI developed at Vanderbilt University in Tennessee to identify cases of colon cancer from patients’ electronic records performed well - at first - but it was discovered that the AI "learned" to associate confirmed cases with the specific clinic to which those patients had been sent.
  • Counterargument (by Peter Norvig, Google's AI research director and co-author of the standard text Artificial Intelligence: A Modern Approach)
  • "Since humans are not very good at explaining their decision-making either...the performance of an AI system could be gauged simply by observing its outputs over time"
  • If these AI bots can master the world of StarCraft, they might be able to master the world of humans (Artificial Intelligence and Interactive Digital Entertainment (AIIDE) StarCraft AI Competition at Memorial University in Newfoundland)
  • "StarCraft [is] complex enough to be a good simulation of real life...It's like playing soccer while playing chess."

AI/military