AI with AI
Episode 2.20: Reflecting on Huginn and Muninn
Andy and Dave discuss “activation atlases,” recent work from OpenAI and Google that offers a new technique for visualizing interactions between the neurons in an image-classifying deep neural network. The UCLA Center for Vision, Cognition, Learning, and Autonomy, together with the International Center for AI and Robot Autonomy, publishes work on RAVEN, a dataset for Relational and Analogical Visual rEasoNing, which uses John Raven’s Progressive Matrices to test joint spatial-temporal reasoning; combined with a dynamic residual tree method, it shows improvement over other methods, though still short of human performance. Research from the University of New South Wales uses machine learning to predict which of two patterns a subject will choose, before the subject is aware of having chosen. And Google Brain publishes research demonstrating BigGAN, capable of generating high-fidelity images with far fewer (10-20%) labeled examples.

In announcements, DARPA holds its AI Colloquium on 6-7 March; the US Army is investing $72M in CMU for AI research; OpenAI launches OpenAI LP, a new company for funding safe artificial *general* intelligence; and the IEEE is set to release on 29 March the first edition of its Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems.

In reports of the week, the Allen Institute for AI examines the quality of AI papers and predicts that China will soon overtake the US in quality AI research; MMC publishes an examination of the State of AI in Europe; one paper looks at predicting research trends from publications on arXiv, and another surveys deep learning advances across different 3D data representations.

Dive into Deep Learning is the book of the week, available online. The University of Vermont uses AI and Project Gutenberg stories to identify six main arcs of storytelling. Dear Machine, by Greg Kieser, is the AI sci-fi story of the week.
John Sunda Hsia’s website compiles the “ultimate guide” to all of the upcoming AI and ML conferences. And the Allen Institute releases a “dumbed down” version of OpenAI’s GPT-2, prompting some humorous reflections.