
AI with AI

Episode 2.8: AI with AI: Montezuma’s Regulation

This week, Andy and Dave discuss the US Department of Commerce's announcement that it will consider regulating AI as an export; counter to that idea, Amazon makes freely available more than 45 hours of training materials on machine learning, with tailored learning paths; Oren Etzioni proposes ideas for broader regulation of AI research that attempt to balance its benefits against its potential harms; DARPA tests its CODE program for autonomous drone operations in the presence of GPS and communications jamming; a Chinese researcher announces the use of CRISPR to produce the first gene-edited babies; and the 2018 ACM Gordon Bell Prize goes to Lawrence Berkeley National Lab for achieving the first exascale (10^18) application, running on over 27,000 NVIDIA GPUs.

In research, Uber and OpenAI announce advances in exploration and curiosity that help algorithms "win" Montezuma's Revenge. Research from Facebook AI suggests that pre-training convolutional neural nets may provide less benefit over random initialization than previously thought. Google Brain examines how well ImageNet architectures transfer to other tasks. A paper from INDOPACOM describes the exploitation of big data for special operations forces. Yuxi Li publishes a technical overview of deep reinforcement learning, and a recent paper explores self-organized criticality as a fundamental property of neural systems.

In books and media, Christopher Bishop's Pattern Recognition and Machine Learning is available online, and Architects of Intelligence offers one-on-one conversations with 23 AI researchers. Maxim Pozdorovkin releases "The Truth About Killer Robots" on HBO. And finally, a Financial Times article over-hypes (anti-hypes?) a questionable graph on Chinese AI investments.

CNA Office of Communications

John Stimpson, Communications Associate