
Search Results

Your search found 2,049 results.

ai with ai: The Whirly Bird Gets the Drone
/our-media/podcasts/ai-with-ai/season-1/1-49
Andy and Dave discuss an online essay by Tim Dutton, which summarizes the AI strategies that nations have published in the last year and a half. Sentient Investment Management announces plans to liquidate its hedge fund that used AI to forecast investment strategies. IBM spearheads an effort to create standards for AI developers to demonstrate the fairness of their AI algorithms, through a Supplier’s Declaration of Conformity. Google announces an Unrestricted Adversarial Examples Challenge, with “birds versus bicycles,” where applicants can either submit a defender (an image classifier that resists adversarial attacks) or an attacker (an adversarial attack that attempts to make the defender declare a confident, incorrect answer). The Drone Racing League announces a new competition for teams developing AI pilots for drone racing. And DARPA announces research that has allowed a paralyzed man to send (and receive) signals for three drones simultaneously, through a surgically implanted microchip in the brain.
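The challenge's actual submission format isn't described in the episode; as a rough sketch of what an "attacker" entry tries to do, the following fast-gradient-sign-method (FGSM) example perturbs an image to push a classifier toward a confident wrong answer. The model, tensor shapes, and epsilon are illustrative assumptions, and FGSM is just one classic attack; the challenge explicitly allowed unrestricted attacks.

```python
# Hypothetical sketch of an adversarial "attacker": perturb an input image so a
# classifier gives a confident wrong answer. FGSM is one classic approach.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`.

    image: 1xCxHxW float tensor in [0, 1]; true_label: shape-(1,) long tensor.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that *increases* the loss on the true label.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```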
ai with ai: Keep Talking and No Robot Explodes, Part II
/our-media/podcasts/ai-with-ai/season-1/1-48b
Dr. Larry Lewis, the Director of CNA’s Center for Autonomy and Artificial Intelligence, joins Andy and Dave to provide a summary of the recent United Nations Convention on Certain Conventional Weapons meeting in Geneva on Lethal Autonomous Weapon Systems. Larry discusses the different viewpoints of the attendees, and walks through the draft document that the group published on “Emerging Commonalities, Conclusions, and Recommendations.” The topics include: Possible Guiding Principles; characterization to promote a common understanding; human elements and human-machine interactions in LAWS; review of related technologies; possible options; and recommendations (SPOILER ALERT: the group recommends 10 days of discussion for 2019).
ai with ai: Keep Talking and No Robot Explodes, Part I
/our-media/podcasts/ai-with-ai/season-1/1-48
Dr. Larry Lewis, the Director of CNA’s Center for Autonomy and Artificial Intelligence, joins Andy and Dave to provide a summary of the recent United Nations Convention on Certain Conventional Weapons meeting in Geneva on Lethal Autonomous Weapon Systems. Larry discusses the different viewpoints of the attendees, and walks through the draft document that the group published on “Emerging Commonalities, Conclusions, and Recommendations.” The topics include: Possible Guiding Principles; characterization to promote a common understanding; human elements and human-machine interactions in LAWS; review of related technologies; possible options; and recommendations (SPOILER ALERT: the group recommends 10 days of discussion for 2019).
ai with ai: Curiosity Killed the Poison Frog, Part II
/our-media/podcasts/ai-with-ai/season-1/1-47b
Andy and Dave briefly discuss the results from the Group of Governmental Experts meetings on Lethal Autonomous Weapons Systems in Geneva; the Pentagon releases its Unmanned Systems Integrated Roadmap 2017-2042; Google announces Dataset Search, a curated pool of datasets available on the internet; California endorses a set of 23 AI Principles in conjunction with the Future of Life Institute; and registration for the Neural Information Processing Systems (NIPS) 2018 conference sells out in just under 12 minutes. Researchers at DeepMind announce a Symbol-Concept Association Network (SCAN), for learning abstractions in the visual domain in a way that mimics human vision and word acquisition. DeepMind also presents an approach to “catastrophic forgetting,” using a Variational Autoencoder with Shared Embeddings (VASE) method to learn new information while protecting previously learned representations. Researchers from the University of Maryland and Cornell demonstrate the ability to poison the training data set of a neural net image classifier with innocuous-looking poison images. Research from the University of South Australia and Flinders University attempts to link personality with eye movements. Research from OpenAI, Berkeley, and Edinburgh looks at curiosity-driven learning across 54 benchmark environments (including video games and physics-engine simulations), showing that agents learn to play many Atari games without using any rewards, that rally-making behavior emerges in two-player Pong, and more. Finally, Andy shares an interactive app that allows users to “play” with a Generative Adversarial Network (GAN) in a browser; “Franken-algorithms” by Andrew Smith is the paper of the week; “Autonomy: The Quest to Build the Driverless Car” by Burns and Shulgan is the book of the week; and for the videos of the week, Major Voke offers thoughts on AI in the Command and Control of Airpower, and Jonathan Nolan releases “Do You Trust This Computer?”
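The core idea of the curiosity work discussed above is that the agent's only reward is its own prediction error about what happens next. Below is a minimal sketch of that intrinsic-reward idea, assuming a learned feature space; the ForwardModel class, dimensions, and function names are illustrative, not the paper's code (the full method also trains the forward model and the feature encoder).

```python
# Illustrative sketch of curiosity as prediction error: the agent is rewarded
# when its forward model fails to predict the next state's features.
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    def __init__(self, feat_dim=32, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + n_actions, 64), nn.ReLU(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, feats, action_onehot):
        return self.net(torch.cat([feats, action_onehot], dim=-1))

def intrinsic_reward(model, feats, action_onehot, next_feats):
    """Curiosity reward = squared error of the forward model's prediction."""
    with torch.no_grad():
        pred = model(feats, action_onehot)
    return (pred - next_feats).pow(2).mean(dim=-1)  # one reward per transition
```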
ai with ai: Curiosity Killed the Poison Frog, Part I
/our-media/podcasts/ai-with-ai/season-1/1-47
Andy and Dave briefly discuss the results from the Group of Governmental Experts meetings on Lethal Autonomous Weapons Systems in Geneva; the Pentagon releases its Unmanned Systems Integrated Roadmap 2017-2042; Google announces Dataset Search, a curated pool of datasets available on the internet; California endorses a set of 23 AI Principles in conjunction with the Future of Life Institute; and registration for the Neural Information Processing Systems (NIPS) 2018 conference sells out in just under 12 minutes. Researchers at DeepMind announce a Symbol-Concept Association Network (SCAN), for learning abstractions in the visual domain in a way that mimics human vision and word acquisition. DeepMind also presents an approach to “catastrophic forgetting,” using a Variational Autoencoder with Shared Embeddings (VASE) method to learn new information while protecting previously learned representations. Researchers from the University of Maryland and Cornell demonstrate the ability to poison the training data set of a neural net image classifier with innocuous-looking poison images. Research from the University of South Australia and Flinders University attempts to link personality with eye movements. Research from OpenAI, Berkeley, and Edinburgh looks at curiosity-driven learning across 54 benchmark environments (including video games and physics-engine simulations), showing that agents learn to play many Atari games without using any rewards, that rally-making behavior emerges in two-player Pong, and more. Finally, Andy shares an interactive app that allows users to “play” with a Generative Adversarial Network (GAN) in a browser; “Franken-algorithms” by Andrew Smith is the paper of the week; “Autonomy: The Quest to Build the Driverless Car” by Burns and Shulgan is the book of the week; and for the videos of the week, Major Voke offers thoughts on AI in the Command and Control of Airpower, and Jonathan Nolan releases “Do You Trust This Computer?”
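The Maryland/Cornell poisoning result mentioned above (the episode title's "poison frog") crafts innocuous-looking training images whose internal features collide with a chosen target, so a model retrained on them misclassifies that target. A hedged sketch of that feature-collision idea follows; the feature_extractor input, optimizer settings, and loss weighting are assumptions for illustration, not the paper's code.

```python
# Sketch of a feature-collision ("clean-label") poison: make an image that
# stays visually close to a harmless base image while matching a target's
# features, so a classifier fine-tuned on it mislabels the target.
import torch

def craft_poison(feature_extractor, base_img, target_img,
                 steps=200, lr=0.01, beta=0.1):
    target_feats = feature_extractor(target_img).detach()
    poison = base_img.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([poison], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Match the target in feature space while staying near the base image.
        loss = ((feature_extractor(poison) - target_feats) ** 2).sum() \
               + beta * ((poison - base_img) ** 2).sum()
        loss.backward()
        opt.step()
    return poison.detach().clamp(0.0, 1.0)
```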
ai with ai: There are FOUR ELEPHANTS!
/our-media/podcasts/ai-with-ai/season-1/1-46
Andy and Dave discuss the latest developments in OpenAI’s AI team that competed against human players in Dota 2, a team-based tower defense game. Researchers published a method for probing Atari agents to understand where the agents were focusing when learning to play games (and to understand why they are good at games like Space Invaders, but not at Ms. Pac-Man). A DeepMind AI can match health experts when spotting eye diseases from optical coherence tomography (OCT) scans; it splits the problem across two networks (one segments the scan, the other makes the diagnosis), which also gives the AI a way to indicate which portion of the scan prompted the diagnosis. Research from Germany and the UK showed that children may be especially vulnerable to peer pressure from robots; the experiments replicated Asch’s social-conformity experiments from the 1950s, but interestingly, adults did not show the same vulnerability to robot peer pressure. Research from Rosenfeld, Zemel, and Tsotsos showed that “minor” perturbations in images (such as shifting the location of an elephant) can cause misclassifications, again highlighting the potential for failures in image classifiers. Andy recommends “The Seven Tools of Causal Inference with Reflections on Machine Learning” by Pearl; Algorithms for Reinforcement Learning by Szepesvari is available online; Robin Sloan has a novel, Sourdough, with much use of AI and robots; Wolfram has an interview on the computational universe; a new documentary on AI looks at the life and role of Geoffrey Hinton; and Josh Tenenbaum examines the issues of “Growing a Mind in a Machine.”
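The Atari-probing work mentioned above asks where an agent is "looking" by perturbing parts of the input frame and measuring how much the policy's output changes. A minimal sketch of that perturbation-based saliency idea follows; policy_fn, the patch size, and the blur strength are illustrative assumptions.

```python
# Sketch of perturbation-based saliency: blur one region of the frame at a
# time and record how much the agent's action logits shift in response.
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(policy_fn, frame, patch=8, sigma=3):
    """policy_fn maps a (H, W) float array to an array of action logits."""
    base = policy_fn(frame)
    blurred = gaussian_filter(frame, sigma=sigma)
    h, w = frame.shape
    sal = np.zeros(((h + patch - 1) // patch, (w + patch - 1) // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            perturbed = frame.copy()
            perturbed[i:i + patch, j:j + patch] = blurred[i:i + patch, j:j + patch]
            sal[i // patch, j // patch] = np.abs(policy_fn(perturbed) - base).sum()
    return sal  # larger values = regions the agent's decision depends on more
```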
ai with ai: Enter the Dragonfly
/our-media/podcasts/ai-with-ai/season-1/1-45
In breaking news, Andy and Dave discuss the Convention on Certain Conventional Weapons meeting on lethal autonomous weapon systems (LAWS) at the United Nations, where more than 70 countries are participating in the sixth meeting since 2014. Highlights include the priorities for discussion, as well as the UK delegation's role and position. The Pentagon’s AI programs get a boost in the defense budget. DARPA announces the Automating Scientific Knowledge Extraction (ASKE) project, with the lofty goal of building an AI tool that can automatically generate, test, and refine its own scientific hypotheses. Google employees react to and protest the company’s secret, censored search engine (Dragonfly) for China. The Electronic Frontier Foundation releases a white paper on Mitigating the Risks of Military AI, which includes applications outside of the “kill chain.” And Brookings releases the results of a survey that asks people whether AI technologies should be developed for warfare.
ai with ai: How I Learned to Stop Worrying and Love AI
/our-media/podcasts/ai-with-ai/season-1/1-44
Dr. Larry Lewis, the Director of CNA’s Center for Autonomy and AI, joins Dave for a discussion on understanding and mitigating the risks of using autonomy and AI in war. They discuss some of the commonly voiced risks of autonomy and AI, in application to war but also in general: that AI will destroy the world; that AI and lethal autonomy are unethical; a lack of accountability; and a lack of discrimination. Having examined the underpinnings of these commonly voiced risks, Larry and Dave move on to practical descriptions and identifications of the risks of using AI and autonomy in war, including the context of military operations; the supporting institutional development (including materiel, training, and test & evaluation); and the law and policy that govern their use. They wrap up with a discussion of the current status of organizations and thought leaders in the Department of Defense and the Department of the Navy.
ai with ai: I Have No Eyes and I Must Meme
/our-media/podcasts/ai-with-ai/season-1/1-43
In breaking news, Andy and Dave discuss the Dota 2 competition between the OpenAI Five team of AIs and a top (99.95th percentile) human team, where the humans won one game in a series of three; the Pentagon signs an $885M AI contract with Booz Allen; MIT builds Cheetah 3, a “blind” robot that has no visual sensors but can climb stairs and maneuver in a space with obstacles; Tencent Machine Learning trains AlexNet on ImageNet in just 4 minutes (breaking the previous record of 11 minutes); researchers at the MIT Media Lab have developed a machine-learning model to perceive human emotions; and the 2018 Conference on Uncertainty in AI (UAI) may have been held 7-10 August in Monterey, CA – we’re not certain (but what is certain is that Dave will never tire of these jokes). In other news, IBM Watson reportedly recommended cancer treatments that were “unsafe and incorrect,” and Amazon’s Rekognition software incorrectly identifies 28 lawmakers as crime suspects, about which Andy and Dave yet again highlight the dangerous gap in AI between expectations and reality. Lipton (CMU) and Steinhardt (Stanford) identify “troubling trends” in machine learning research and scientific scholarship. The Institute for Theoretical Physics in Zurich describes SciNet, a neural network that can discover physical concepts (such as the motion of a damped pendulum). A paper by Kott and Perconti makes an empirical assessment of forecasting military technology on the 20-30 year horizon and finds the forecasts are surprisingly accurate (65-87%). “The Elements of Statistical Learning: Data Mining, Inference, and Prediction” is available online. Andy recommends the Ellison classic story, “I Have No Mouth, and I Must Scream,” and finally, a video by Percy Liang at Stanford discusses ways of evaluating machine learning for AI.
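For the damped pendulum, a SciNet-style network is trained on trajectories of the small-angle form θ(t) = θ₀·e^(−bt)·cos(ωt) and must rediscover the hidden parameters b and ω in its latent representation. A sketch of generating that kind of training data follows; the parameter ranges, sizes, and function name are illustrative assumptions, not the paper's setup.

```python
# Sketch of damped-pendulum training data of the kind SciNet learns from:
# each trajectory hides two physical parameters (damping b, frequency omega)
# that the network must rediscover on its own.
import numpy as np

def damped_pendulum_dataset(n_trajectories=1000, n_points=50, seed=0):
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 10.0, n_points)
    b = rng.uniform(0.1, 1.0, size=(n_trajectories, 1))      # damping
    omega = rng.uniform(1.0, 3.0, size=(n_trajectories, 1))  # angular frequency
    theta = np.exp(-b * t) * np.cos(omega * t)  # small-angle solution, theta0 = 1
    return t, theta, np.hstack([b, omega])  # times, trajectories, hidden params
```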
ai with ai: People for the Ethical Tasking of AIs
/our-media/podcasts/ai-with-ai/season-1/1-42
Continuing in a discussion of recent topics, Andy and Dave discuss research from Johns Hopkins University, which used supervised machine learning to predict the toxicity of chemicals (with results that beat animal tests). DeepMind probes toward general AI by exploring AI’s abstract reasoning capability; in their tests, they found that systems did OK (75% correct) when problems used the same abstract factors as the training set, but fared very poorly when the testing differed from the training set (even with minor variations, such as using dark-colored objects instead of light-colored ones) – in a sense, suggesting that deep neural nets cannot “understand” problems they have not been explicitly trained to solve. Research from Spyros Makridakis demonstrated that traditional statistical methods outperform a variety of popular machine-learning methods (with better accuracy and lower computation requirements), suggesting the need for better benchmarks and standards when discussing the performance of machine-learning methods. Andy and Dave then discuss two reports from the Center for a New American Security, on Technology Roulette and on Strategic Competition in an Era of AI, the latter of which highlights that the U.S. has not yet experienced a true “Sputnik moment.” Research from MIT, McGill, and Masdar IST defines and visualizes the skill sets required for various occupations, and shows how these contribute to a growing disparity between high- and low-wage occupations. The conference proceedings of Alife2018 (nearly 700 pages) are available for the 23-27 July event. The Art of the Future Warfare Project features a collection of “war stories from the future,” and over 50 videos are available from the 2018 International Joint Conference on AI.
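Makridakis's comparison pits cheap statistical baselines against far heavier machine-learning models. As a sense of scale, here is one such baseline, simple exponential smoothing, in a few lines; the sample series and smoothing constant are illustrative assumptions.

```python
# Sketch of the kind of "traditional" statistical baseline Makridakis compared
# against ML methods: simple exponential smoothing, almost no compute required.
import numpy as np

def ses_forecast(series, alpha=0.3):
    """One-step-ahead simple exponential smoothing forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

series = np.array([112., 118., 132., 129., 121., 135., 148., 148., 136., 119.])
print(ses_forecast(series))  # forecast for the next observation
```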