
Search Results

Your search found 2,049 results.

ai with ai: Mission SHRIMPossible
/our-media/podcasts/ai-with-ai/season-1/1-41
In breaking news, Andy and Dave discuss the “Future of Life” pledge that various AI tech leaders have signed, promising not to develop lethal autonomous weapons; DARPA announces its Artificial Intelligence Exploration (AIE) program, to provide “unique funding opportunities;” DARPA also announces a Short-Range Independent Microrobotic Platform (SHRIMP) program, which seeks to develop multi-functional tiny robots for use in natural and critical disaster scenarios; GoodAI announces the finalists in the “General AI Challenge,” which produced a series of conceptual papers; and a report from the UK’s Parliament examines the issues surrounding the government’s use of drones. Then in deeper topics, Andy and Dave discuss various attempts to use AI to predict the FIFA World Cup 2018 champion (all of which failed), which includes a discussion on the types of questions to which AI is amenable, along with an obligatory Star Trek reference. Finally, Baidu announces ClariNet, which performs text-to-speech synthesis within a single neural network (as opposed to multiple networks).
ai with ai: Russian AI Kryptonite
/our-media/podcasts/ai-with-ai/season-1/1-40
CNA’s expert on Russian AI and autonomous systems, Samuel Bendett, joins temporary host Larry Lewis (again filling in for Dave and Andy) to discuss Russia’s pursuit of the militarization of AI and autonomy. The Russian Ministry of Defense (MOD) has made no secret of its desire to achieve technological breakthroughs in IT and especially artificial intelligence, marshalling extensive resources for a more organized and streamlined approach to information technology R&D. The MOD is overseeing a significant public-private partnership effort, calling for its military and civilian sectors to work together on information technologies, while hosting high-profile events aimed at fostering dialogue between its uniformed and civilian technologists. For example, Russian state corporation Russian Technologies (Rostec), with extensive ties to the nation’s military-industrial complex, has overseen the creation of a company with the ominous name Kryptonite. The company’s name, the one vulnerability of a superhero, was unlikely to be picked by accident. Russia’s government is working hard to see that the Russian technology sector can compete with American, Western, and Asian high-tech leaders. This technology race is only expected to accelerate, and Russian achievements merit close attention.
ai with ai: Terminator or Data? Policy and Safety for Autonomous Weapons
/our-media/podcasts/ai-with-ai/season-1/1-39
This week Andy and Dave take a respite from the world of AI. In the meantime, Larry Lewis hosts Shawn Steene from the Office of the Secretary of Defense. Shawn manages DOD Directive 3000.09 – US military policy on autonomous weapons – and is a member of the US delegation to the UN’s CCW meetings on Lethal Autonomous Weapon Systems (LAWS). Shawn and Larry discuss U.S. policy, what DOD Directive 3000.09 actually means, and how the future of AI could more closely resemble the android Data than Skynet from the Terminator movies. That leads to a discussion of some common misconceptions about artificial intelligence and autonomy in military applications, and how these misconceptions can manifest themselves in the UN talks. With Data having single-handedly saved the day in the eighth and tenth Star Trek movies (First Contact and Nemesis, respectively), perhaps Star Trek should be required viewing for the next UN meeting in Geneva. Larry Lewis is the Director of the Center for Autonomy and Artificial Intelligence at CNA. His areas of expertise include lethal autonomy, reducing civilian casualties, identifying lessons from current operations, security assistance, and counterterrorism.
ai with ai: Debater of the AI-ncients, Part 2 (Dota 2)
/our-media/podcasts/ai-with-ai/season-1/1-38
In the second part of this epic podcast, Andy and Dave continue their discussion with research from MIT, Vienna University of Technology, and Boston University, which uses human brainwaves and hand gestures to instantly correct robot mistakes. The research combines electroencephalogram (EEG, brain signals) and electromyogram (EMG, muscle signals) measurements to allow a human (without training) to provide corrective input to a robot while it performs tasks. On a related topic, MIT’s Picower Institute for Learning and Memory demonstrated the rules for human brain plasticity, showing that when one synapse connection strengthens, the immediately neighboring synapses weaken; while suspected for some time, this research showed for the first time how this balance works. Then, research from Stanford and Berkeley introduces Taskonomy, a system for disentangling task transfer learning. This structured approach maps out 25 different visual tasks to identify the conditions under which transfer learning works from one task to another; such a structure would allow data in some dimensions to compensate for the lack of data in other dimensions. Next up, an AI tool for spotting photoshopped photos examines three types of manipulation techniques (splicing, copy-move, and removal), as well as local noise features. Researchers at Stanford have used machine learning to recreate the periodic table of elements after providing the system with a database of chemical formulae. And finally, Andy and Dave wrap up with a selection of papers and other media, including CNAS’s AI: What Every Policymaker Needs to Know; a beautifully done tutorial on machine learning; The Quest for AI by Nilsson; “Non Serviam” by Lem; IPI’s Governing AI; the US Congressional Hearing on the Power of AI; and Twitch Plays Robotics.
ai with ai: Debater of the AI-ncients, Part 1 (Dota)
/our-media/podcasts/ai-with-ai/season-1/1-37
In breaking news, Andy and Dave discuss a potentially groundbreaking paper on the scalable training of artificial neural nets with adaptive sparse connectivity; MIT researchers unveil the Navion chip, which is only 20 square millimeters in size, consumes 24 milliwatts of power, can process real-time camera images at up to 171 frames per second, and can be integrated into drones the size of a fingernail; the Chair of the Armed Services Subcommittee on Emerging Threats and Capabilities convenes a roundtable on AI with subject matter experts and industry leaders; the IEEE Standards Association and MIT Media Lab launch the Council on Extended Intelligence (CXI) to build a “new narrative” on autonomous technologies, including three pilot programs, one of which seeks to help individuals “reclaim their digital identity;” and the Foundation for Responsible Robotics, which wants to shape the responsible design and use of robotics, releases a report on Drones in the Service of Society. Then, Andy and Dave discuss IBM’s Project Debater, the follow-on to Watson, which engaged in a live, public debate with humans on 18 June. IBM spent six years developing Project Debater’s capabilities, producing over 30 technical papers and benchmark datasets; Debater can debate nearly 100 topics. It uses three pioneering capabilities: data-driven speech writing and delivery, listening comprehension, and the ability to model human dilemmas. Next up, OpenAI announces OpenAI Five, a team of five AI algorithms trained to take on a human team in the multiplayer battle arena game Dota 2; Andy and Dave discuss the reasons for the impressive achievement, including that the five AI networks do not communicate with each other, and that coordination and collaboration naturally emerge from their incentive structures. The system uses 256 Nvidia graphics cards and 128,000 processor cores; it has taken on (and won against) a variety of human teams, and OpenAI plans to stream a match against a top Dota 2 team in late July.
ai with ai: The AIth Sense - I See WiFi People
/our-media/podcasts/ai-with-ai/season-1/1-36
In breaking news, Andy and Dave discuss the recently unveiled Wolfram Neural Net Repository, with 70 neural net models (as of the podcast recording) accessible in the Wolfram Language; Carnegie Mellon and STRUDEL announce the Code/Natural Language (CoNaLa) Challenge, with a focus on Python; Amazon releases its DeepLens video camera, which enables deep learning tools; and the Computer Vision and Pattern Recognition 2018 conference takes place in Salt Lake City. Then, Andy and Dave discuss DeepMind’s Generative Query Network, a framework where machines learn to turn 2D scenes into 3D views, using only their own sensors. MIT’s RF-Pose trains a deep neural net to “see” people through walls by measuring radio frequencies from WiFi devices. Research at the University of Bonn is attempting to train an AI to predict future results based on current observations (with the goal of “seeing” 5 minutes into the future), and a healthcare group at Google Brain has been developing an AI to predict when a patient will die, based on a swath of historical and current medical data. The University of Wyoming announced DeepCube, an “autodidactic iteration” method from McAleer that allows solving a Rubik’s Cube without human knowledge. And finally, Andy and Dave discuss a variety of books and videos, including The Next Step: Exponential Life, The Machine Stops, and a TED Talk from Max Tegmark on getting empowered, not overpowered, by AI.
ai with ai: How to Train Your DrAIgon (for good, not for bad)
/our-media/podcasts/ai-with-ai/season-1/1-35
In recent news, Andy and Dave discuss a recent Brookings report on the view of AI and robots based on internet search data; a Chatham House report on AI anticipates disruption; Microsoft computes the future with its vision and principles on AI; the first major AI patent filings from DeepMind are revealed; biomimicry returns, with IBM using "analog" synapses to improve neural net implementation, and Stanford University researchers developing an artificial sensory nervous system; and Berkeley DeepDrive provides the largest self-driving car dataset for free public download. Next, the topic of "hard exploration games with sparse rewards" returns, with a Deep Curiosity Search approach from the University of Wyoming, where the AI gets more freedom and reward from exploring ("curiosity") than from performing tasks as dictated by the researchers. From Cognition Expo 18, work from Martinez-Plumed attempts to "Forecast AI," but largely highlights the challenges in making comparisons due to the neglected, or un-reported, aspects of developments, such as the data, human oversight, computing cycles, and much more. From the Google AI Blog, researchers improve deep learning performance by finding and describing the transformation policies of the data, and using that information to increase the amount and diversity of the training dataset. Then, Andy and Dave discuss attempts to use drone surveillance to identify violent individuals (for good reasons only, not for bad ones). And in a sportier application, "AI enthusiast" Chintan Trivedi describes his efforts to train a bot to play a soccer video game by observing his playing. Finally, Andy recommends an NSF workshop report; a book on AI, Foundations of Computational Agents; Permutation City; and over 100 video hours of the CogX 2018 conference.
ai with ai: Game of Drones - AI Winter Is Coming
/our-media/podcasts/ai-with-ai/season-1/1-34
In breaking news, Andy and Dave discuss Google’s decision not to renew the contract for Project Maven, as well as its AI Principles; the Royal Australian Air Force holds a biennial Air Power Conference with a theme of AI and cyber; the Defense Innovation Unit Experimental (DIUx) releases its 2017 annual report; China holds a Defense Conference on AI in cybersecurity; and Nvidia’s new Xavier chip packs $10k worth of power into a $1,299 box. Next, Andy and Dave discuss a benevolent application of adversarial attack methods: a “privacy filter” for photos designed to stop AI face detection (reducing detection from nearly 100 percent to 0.5 percent). MIT used AI in the development of nanoparticles, training neural nets to “learn” how a nanoparticle’s structure affects its behavior. Then the remaining topics dip deep into the philosophical realm, starting with a discussion on empiricism and the limits of gradient descent, and how philosophical concepts of empiricist induction compare with critical rationalism. Next, the topic of a potential AI Winter continues to percolate with a viral blog from Piekniewski, leading into a paper from Berkeley/MIT that discovers a 4-15% reduction in accuracy for CIFAR-10 classifiers on a new set of similar training images (bringing into doubt the robustness of these systems). Andy shares a possibly groundbreaking paper on “graph networks” that provides a new conceptual framework for thinking about machine learning. And finally, Andy and Dave close with some media selections, including Blood Music by Greg Bear and The Swarm by Frank Schätzing.
ai with ai: Detective Centaur and the Curse of Footstep Awareness
/our-media/podcasts/ai-with-ai/season-1/1-33
Andy and Dave didn’t have time to do a short podcast this week, so they did a long one instead. In breaking news, they discuss the establishment of the Joint Artificial Intelligence Center (JAIC), yet another Tesla Autopilot crash, Geurts defending the decision to dissolve the Navy’s Unmanned Systems Office, and Germany’s publication of a paper that describes its stance on autonomy in weapon systems. Then, Andy and Dave discuss DeepMind’s approach to using YouTube videos to train an AI to learn “hard exploration games” (with sparse rewards). In another “centaur” example, facial recognition experts perform best when combined with an AI. University of Manchester researchers announce a new footstep-recognition AI system, but Dave pulls a Linus and has a fit of “footstep awareness.” In other recent reports, Andy and Dave discuss another example of biomimicry, where researchers at ETH Zurich have modeled the schooling behavior of fish. And in brain-computer interface research, a noninvasive BCI system co-trained with tetraplegics to control avatars in a racing game. Finally, they round out the discussion with a mention of ZAC Inc and its purported general AI, a book on How People and Machines are Smarter Together, and a video on deep reinforcement learning.
ai with ai: Shiny Heart Reflecting in the Dark Lights Up (SHRDLU)
/our-media/podcasts/ai-with-ai/season-1/1-32
In breaking news, Andy and Dave discuss a few cracks that seem to be appearing in Google's Duplex demonstration; more examples of the breaking of Moore's Law; a Princeton effort to advance the dialogue on AI and ethics; India joining the global AI sabre-rattling; the UK Ministry of Defence launching an AI hub/lab; and the U.S. Navy dissolving its secretary-level unmanned systems office. Andy and Dave then discuss a demonstration of "zero-shot" learning, by which a robot learns to do a task by watching a human perform it once. The work reminds Andy of the early natural language "virtual block world" SHRDLU, from the 1970s. In other news, the research team that designed Libratus (a world-class poker-playing AI) announced they had developed a better AI that, more importantly, is also computationally orders of magnitude less expensive (using a 4-core CPU with 16 GB of memory). Next, research from Intel and the University of Illinois at Urbana-Champaign has developed a convolutional neural net to significantly improve low-ISO image quality while shooting at faster shutter speeds; Andy and Dave both found the results for improving low-light images to be quite stunning. Finally, after yet another round of a generative adversarial example (in which Dave predicts the creation of a new field), Andy closes with some recommendations on papers, books, and videos, including Galatea 2.2 and The Space of Possible Minds.