
AI with AI

Episode 1.42: People for the Ethical Tasking of AIs

Continuing their discussion of recent topics, Andy and Dave discuss research from Johns Hopkins University, which used supervised machine learning to predict the toxicity of chemicals (with results that beat animal tests). DeepMind probes toward general AI by exploring AI's abstract reasoning capability; in their tests, systems did reasonably well (75% correct) when problems used the same abstract factors, but fared very poorly when the testing differed from the training set (even with minor variations, such as using dark-colored objects instead of light-colored ones) – in a sense, suggesting that deep neural nets cannot "understand" problems they have not been explicitly trained to solve. Research from Spyros Makridakis demonstrated that existing traditional statistical methods outperform (with better accuracy and lower computation requirements) a variety of popular machine-learning methods, suggesting the need for better benchmarks and standards when discussing the performance of machine learning methods. Andy and Dave then turn to two reports from the Center for a New American Security, on Technology Roulette and on Strategic Competition in an Era of AI, the latter of which highlights that the U.S. has not yet experienced a true "Sputnik moment." Research from MIT, McGill, and Masdar IST defines and visualizes the skill sets required for various occupations, and shows how these contribute to a growing disparity between high- and low-wage occupations. Finally, the conference proceedings of ALIFE 2018 (nearly 700 pages) are available for the 23-27 July event, the Art of the Future Warfare Project features a collection of "war stories from the future," and over 50 videos are available from the 2018 International Joint Conference on AI.

CNA Office of Communications

John Stimpson, Communications Associate