
Search Results

Your search found 2049 results.

ai with ai: The One about ‘Bots…
/our-media/podcasts/ai-with-ai/season-2/2-28
“Bots” reign supreme in this week’s episode, though Andy and Dave start the discussion with NIST’s RFI on the development of technical standards for AI. A Harvard Medical School project demonstrates a catheter that can autonomously move inside a live, beating pig’s heart. Zipline uses medical delivery drones in Rwanda. University of Maryland researchers demonstrate drone delivery of a kidney for transplant. NASA tests a CICADA swarm and is also investigating Marsbees. And Starship robo-couriers deliver food to students at GMU. In research from Berkeley, a robot learns to use improvised tools to complete tasks, including those with physical cause-and-effect relationships. Researchers at MIT, MIT-IBM Watson, and DeepMind create the Neuro-Symbolic Concept Learner (NSCL), which uses a hybrid connectionist/symbolic approach and seems to be a “true” AI implementation of Winograd’s SHRDLU system from the 1960s. Research from Tsinghua University and Google demonstrates Neural Logic Machines, a neural-symbolic architecture for both inductive learning and logic reasoning. Two papers compare logistic regression with machine learning methods for clinical predictions; one shows no benefit of one method over the other, while the other claims better performance with neural network methods (although Andy and Dave wonder whether this claim holds up, given the error bars in the results). AlgorithmWatch publishes a Global Inventory of AI Ethics Guidelines. Times Higher Education (THE) and Microsoft release a survey of more than 100 AI experts and university leaders. The Department of Information Technology at Uppsala University in Sweden has made its lecture notes for a statistical machine learning course available. The Santa Fe Institute reprints a classic collection of essays from its Founding Workshops. Robert Kranekg pens a story about an Angry Engineer. And the OpenAI Robotics Symposium 2019 releases the full video proceedings online.
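The logistic-regression-versus-machine-learning comparison above (and the question about error bars) can be illustrated with a minimal, self-contained sketch. The synthetic data, the 1-nearest-neighbour stand-in for “machine learning methods,” and all parameters are assumptions for illustration, not the papers’ actual setups:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Synthetic "clinical" data: two risk factors, binary outcome whose
# log-odds are linear in the features (an illustrative stand-in only).
def make_data(n):
    rows = []
    for _ in range(n):
        x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
        y = 1 if random.random() < sigmoid(1.5 * x1 - 1.0 * x2) else 0
        rows.append(((x1, x2), y))
    return rows

train, test = make_data(400), make_data(400)

# Model 1: logistic regression, fit by batch gradient descent.
w = [0.0, 0.0, 0.0]  # bias, weight on x1, weight on x2
for _ in range(500):
    grad = [0.0, 0.0, 0.0]
    for (x1, x2), y in train:
        err = sigmoid(w[0] + w[1] * x1 + w[2] * x2) - y
        for i, xi in enumerate((1.0, x1, x2)):
            grad[i] += err * xi
    w = [wi - 0.5 * gi / len(train) for wi, gi in zip(w, grad)]

def predict_lr(x):
    return 1 if sigmoid(w[0] + w[1] * x[0] + w[2] * x[1]) >= 0.5 else 0

# Model 2: a simple "machine learning" alternative, 1-nearest-neighbour.
def predict_nn(x):
    nearest = min(train, key=lambda r: (r[0][0] - x[0]) ** 2 +
                                       (r[0][1] - x[1]) ** 2)
    return nearest[1]

def accuracy(predict):
    return sum(predict(x) == y for x, y in test) / len(test)

# Bootstrap the test set to put an error bar on each accuracy estimate:
# if the two intervals overlap, a claimed difference is not convincing.
def bootstrap_std(predict, reps=200):
    accs = []
    for _ in range(reps):
        sample = [random.choice(test) for _ in test]
        accs.append(sum(predict(x) == y for x, y in sample) / len(sample))
    mean = sum(accs) / reps
    return math.sqrt(sum((a - mean) ** 2 for a in accs) / reps)

acc_lr, acc_nn = accuracy(predict_lr), accuracy(predict_nn)
```

With a test set of only a few hundred patients, the bootstrap standard error of an accuracy estimate is on the order of ±0.02, which is why overlapping error bars can undercut a claim that one method is better.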
ai with ai: Salvere Rex or Salve Getafix?
/our-media/podcasts/ai-with-ai/season-2/2-27
Jennifer McArdle, Assistant Professor of Cyber Defense at Salve Regina University, joins Andy and Dave for a discussion on AI and machine learning. Jenny is leading a group of graduate students who are working on a strategic-level primer on AI, aimed particularly at those less familiar with the technical aspects, as well as a War on the Rocks article on AI in training and synthetic environments. Her students are studying in a variety of areas, including cyber defense and digital forensics, cyber and synthetic training, cyber intelligence, healthcare and healthcare administration, and administration of justice. Graduate students Mackenzie Mandile and Saurav Chatterjee also join for a discussion of their research topics. In the photo (from left to right): Maria Hendrickson, Gabrielle Cusano, Abigail Verille, Erin Rorke, (John Cleese), Saurav Chatterjee, Allegra Graziano, Santiago Durango, Eric Baucke, Mackenzie Mandile, Dave Broyles, Jennifer McArdle, Andy Ilachinski, John Crooks, (Getafix), and Lt. Col. David Lyle.
ai with ai: LAWS & DOTAr: Synthetic Voice Unit
/our-media/podcasts/ai-with-ai/season-2/2-26
Andy and Dave welcome Dr. Anna Williams and Dr. Larry Lewis to discuss the recent UN Convention on Certain Conventional Weapons, and the latest developments in the global discussion on Lethal Autonomous Weapons Systems (LAWS).
ai with ai: AstroBees in Your BonNAT
/our-media/podcasts/ai-with-ai/season-2/2-25
Andy and Dave discuss the Department of Energy’s attempt to create the world’s longest acronym, with DIFFERENTIATE (Design Intelligence for Formidable Energy Reduction Engendering Numerous Totally Impactful Advanced Technology Enhancements), and to accelerate the incorporation of ML into energy technology and product design. Google cancels its AI ethics board after thousands of employees sign a petition calling for the removal of one member with anti-LGBTQ and anti-immigrant views. NASA unveils the Astrobees, one-foot cube robots that will work autonomously on the International Space Station to check inventory and monitor noise levels, among other things. And Microsoft partners with the French online education platform OpenClassrooms to train and recruit promising students in AI. Research from the Eindhoven University of Technology and the University of Trento takes a biologically “inspired” approach to neural net learning, through Neuron Elevation Traces (NATs), which allow additional data storage in each synapse; the result appears to increase the plasticity of the synapses. A mathematical reasoning model from DeepMind can solve some arithmetic, algebra, and probability problems, though it sometimes gets simple calculations incorrect (such as 1 + 1 + … + 1, for n ≥ 7). And other research creates a musculoskeletal model that uses muscle activation to simulate movement and control. A report from Element AI examines the global distribution of AI talent in 2019, including (perhaps not surprisingly) the observation that the supply of top-tier AI talent does not meet the demand. A paper in Nature Reviews Physics surveys the physics of brain network structure, function, and control. A short sci-fi story from Jeffrey Ford describes The Seventh Expression of the Robot General. And Andy highlights a video from 1961 on The Thinking Machine.
ai with ai: Black Hole Watson
/our-media/podcasts/ai-with-ai/season-2/2-24
Andy and Dave discuss the first image of a black hole and its link to machine learning, with research from Katie Bouman while she was at MIT, developing Continuous High-resolution Image Reconstruction using Patch priors (CHIRP) as a way to stitch together different data sources into a continuous whole. Next, Andy and Dave discuss research from the Sorbonne and IST Austria that tries to deduce the reward function of a recurrent neural network by assuming the neurons are agents. And research from Hopfield and Krotov examines a way to approach neural network learning in a more biologically “plausible” fashion, with a more physically local method of plasticity. In reports, the European Commission releases its 41-page report on Ethics Guidelines for Trustworthy AI. Elizabeth Holm publishes a short paper in defense of the black box. A paper in IEEE Spectrum examines IBM Watson’s actual health care products, as compared with its partnerships and promises. Sean Luke publishes the second edition of The Essentials of Metaheuristics. And the video of the week is a 2016 TED Talk by Katie Bouman on the development of the software that combines the data collected by individual telescopes.
ai with ai: TossBot’s Physics Residu-ALE, with SimPLe syrup
/our-media/podcasts/ai-with-ai/season-2/2-23
Andy and Dave discuss Simulated Policy Learning (SimPLe), from Google Brain, which attempts to help reinforcement learning methods learn effective policies for complex tasks, such as Atari games (using the Arcade Learning Environment, ALE); the method trains a policy in a simulated environment so that it achieves good performance in the original environment. From Google and Princeton University, the TossingBot learns to throw arbitrary objects into bins; the researchers use “residual physics” to provide baseline knowledge of the world (e.g., ballistics) and further improve tossing accuracy. Researchers at Rutgers demonstrate a probabilistic approach for reasoning about the 3D shapes of unknown objects as a robot manipulates its environment. DeepMind publishes results that use the AI itself to figure out where the AI will fail. And research from Northwestern, the University of Chicago, and the Santa Fe Institute examines the dynamics of failure across science, startups, and security efforts. In clickbait-y news, scientists create an AI that can predict when a person will die (when in actuality, they used machine learning methods to examine the prediction of premature death and compared it with standard epidemiological approaches). Researchers create a memristor-based hybrid analog-digital computing platform to demonstrate deep-Q reinforcement learning. Microsoft demonstrates end-to-end automation of DNA data storage (21 hours to encode the word “hello”). The US Air Force is exploring AI-powered autonomous drones in its Skyborg program. Keen Security Lab of Tencent reports vulnerabilities in Tesla Autopilot, including inducing the vehicle to switch lanes. A paper in the Springer journal Artificial Intelligence Review provides a survey of ML and DL frameworks and libraries for large-scale data mining. Los Alamos Labs publishes a survey of quantum algorithm implementations. Scott Cunningham publishes Causal Inference. Yaneer Bar-Yam makes a 2003 work, Dynamics of Complex Systems, available.
Easley and Kleinberg publish Networks, Crowds, and Markets: Reasoning About a Highly Connected World. Andy highlights a sci-fi story from 2008 from Elizabeth Bear, Tideline. Paul Oh pens a fictional story of the Army’s C2 AI program, Project AlphaWare. The National Academies-Royal Society Public Symposium will hold a discussion on 24 May, AI: An International Dialogue. More videos appear from DARPA’s AI Colloquium. A website compiles datasets for machine learning. And Stephen Jordan provides a comprehensive catalog of quantum algorithms.
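TossingBot’s “residual physics” idea, an analytic ballistics baseline plus a learned correction on top of it, can be illustrated with a minimal sketch. The toy “real world,” the linear residual model, and all numbers below are illustrative assumptions, not the paper’s method:

```python
import math
import random

def ballistic_velocity(distance, angle=math.pi / 4, g=9.81):
    """Analytic baseline: release speed needed to land a projectile at
    `distance` when thrown at `angle`, under ideal drag-free physics."""
    return math.sqrt(distance * g / math.sin(2 * angle))

# Toy "real world": the actually required speed deviates from the ideal
# model (e.g., drag, gripper slip), simulated here as a perturbation.
def true_required_velocity(distance):
    return 1.08 * ballistic_velocity(distance) + 0.2

# Collect (distance, observed correction) pairs from simulated throws,
# then fit the residual with a least-squares line (the "learned" part).
random.seed(0)
samples = [(d, true_required_velocity(d) - ballistic_velocity(d))
           for d in [random.uniform(0.5, 3.0) for _ in range(50)]]
n = len(samples)
sx = sum(d for d, _ in samples); sy = sum(r for _, r in samples)
sxx = sum(d * d for d, _ in samples); sxy = sum(d * r for d, r in samples)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def predict_velocity(distance):
    """Physics baseline plus the learned residual correction."""
    return ballistic_velocity(distance) + slope * distance + intercept

# The corrected prediction tracks the true requirement far more closely
# than the raw physics baseline does.
err = abs(predict_velocity(2.0) - true_required_velocity(2.0))
```

The design point is that the learner only has to capture the small gap between idealized physics and reality, which needs far less data than learning the whole throwing model from scratch.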
ai with ai: Doctor Omega and the Tsar’s Particle Bots
/our-media/podcasts/ai-with-ai/season-2/2-22
The Institute of Electrical and Electronics Engineers (IEEE) has released the first edition of Ethically Aligned Design (EAD1e), a nearly 300-page report involving thousands of global experts; the report covers 8 major principles, including transparency, accountability, and awareness of misuse. DARPA announces the Artificial Social Intelligence for Successful Teams program, which will attempt to help AI build shared mental models and understand the intentions, expectations, and emotions of its human counterparts. DARPA also announces a program to design chips for Real Time Machine Learning (RTML), which will generate optimized hardware design configurations and standard code based on the objectives of the specific ML algorithms and systems. The U.S. Army awards a $152M contract to QinetiQ North America to produce “backpack-sized” robots; the Common Robotic System-Individual (CRS(I)) is a remotely operated, unmanned ground vehicle. The White House launches a site to highlight AI initiatives. Anduril Industries gets a Project Maven contract to support the Joint AI Center. And the 2018 Turing Award, announced in 2019, goes to neural network pioneers Hinton, LeCun, and Bengio. Researchers at Johns Hopkins demonstrate that humans can decipher adversarial images; that is, they can “think like machines” and anticipate how image classifiers will incorrectly identify unrecognizable images. A group of researchers at MIT, Columbia, Cornell, and Harvard demonstrate “particle robots” inspired by biological cells; the individual robots cannot move, but can pulsate between about 6in and 9in in size, and as a collective they can demonstrate movement and other collective behavior (even with 20% of the components failing). Researchers at the Harbin Institute of Technology and Michigan State University control a swarm of “microbots” (here, single grains of hematite) through the application of different magnetic fields.
And researchers use honey bees (in Austria) and zebrafish (in Switzerland) to influence each other’s collective behavior through robotic mediation. The United Nations Interregional Crime and Justice Research Institute releases a report on AI in law enforcement, from a recent meeting organized by INTERPOL. Defense One publishes a report from Tucker, Glass, and Bendett on how the U.S. military services are using AI. An e-book from Frontiers in Robotics and AI collects 13 papers on the topic of “Consciousness in Humanoid Robots.” Andy highlights a book from 2007, “Artificial General Intelligence,” which claims to be the first to codify the use of AGI as a term of art. MIT Tech Review’s EmTech Digital 2019 has released the videos from its 25-26 March event. And DARPA has released more videos from its AI Colloquium. The U.N. Group of Governmental Experts is meeting in Geneva to discuss lethal autonomous weapons systems (LAWS). A short story from Husain and Cole describes a hypothetical future war in Europe between Russian and NATO forces. And Ian McDonald pens a story that captures the life of military drone pilots in Sanjeev and Robotwallah.
ai with ai: The World Ends with Robots
/our-media/podcasts/ai-with-ai/season-2/2-21
Andy and Dave begin with an AI-generated podcast, using the “dumbed down” GPT-2 with the repository of podcast notes; GPT-2 ends the faux podcast with a video called “The World Ends with Robots,” and Dave later discovers that a Google search on the title brings up zero hits. Ominous! Andy and Dave continue with a discussion of the Boeing 737 MAX crashes and the implications for autonomous systems. Stanford University launches the Stanford Institute for Human-centered Artificial Intelligence (HAI), which seeks to advance AI research to improve the human condition. Ahead of the Convention on Certain Conventional Weapons in Geneva, Japan announces its intention to submit a plan for maintaining control over lethal autonomous weapons systems. A new report from Hal Hodson at the Economist reveals that should DeepMind successfully create artificial general intelligence, its Ethics Board will have legal “control” of the entity. And Steve Walker and Vint Cerf discuss other US Department of Defense projects that Google is working on, including the identification of deep fakes, and exploring new architectures to create more computing power. NVIDIA announces a $99 AI development kit, the AI Playground, and GauGAN. In research topics, Google explores whether neural networks show gestalt phenomena, looking specifically at the law of closure. Researchers with IBM Watson and Oxford examine supervised learning with quantum-enhanced feature spaces. Shashua and co-workers explore quantum entanglement in deep learning architectures. Dan Falk takes a look at how AI is changing science. And researchers at Facebook AI and Google AI examine the pitfalls of measuring emergent communication between agents. The World Intellectual Property Organization releases its 2019 trends in AI. A report takes a survey of the European Union’s AI ecosystem, while another paper surveys the field of robotic construction. Kieran Healy releases a book on Data Visualization.
Allen Downey publishes Think Bayes: Bayesian Statistics Made Simple. The Defense Innovation Board releases a video from its public listening session on AI ethics at CMU from 14 March. The 2019 Human-Centered AI Institute Symposium releases a video. And Irina Raicu compiles a list of readings about AI ethics.
ai with ai: Reflecting on Huginn and Muninn
/our-media/podcasts/ai-with-ai/season-2/2-20
Andy and Dave discuss “activation atlases,” recent work from OpenAI and Google that offers a new technique for visualizing interactions between the neurons in an image-classifying deep neural network. The UCLA Center for Vision, Cognition, Learning, and Autonomy together with the International Center for AI and Robot Autonomy publish work on RAVEN – a dataset for Relational and Analogical Visual rEasoNing, which uses John Raven’s Progressive Matrices for testing joint spatial-temporal reasoning; in combination with a dynamic residual tree method, they see improvement over other methods, but still short of human performance. Research from the University of New South Wales uses machine learning to predict which of two patterns a subject will choose before the subject is aware of which one they have chosen. And Google Brain publishes research that builds on BigGAN, generating high-fidelity images with only 10-20% of the labeled data. In announcements, DARPA holds its AI Colloquium on 6-7 March; the US Army is investing $72M in CMU for AI research; OpenAI launches OpenAI LP, a new company for funding safe artificial *general* intelligence; and the IEEE is set to release on 29 March the first edition of its Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. In reports of the week, the Allen Institute for AI examines the quality of AI papers and predicts that China will soon overtake the US in quality AI research; MMC publishes an examination of the State of AI in Europe; a paper looks at predicting research trends in the publications on Arxiv, and another paper surveys deep learning advances on different 3D data representations. Dive into Deep Learning is the book of the week, available online. The University of Vermont uses AI and Project Gutenberg stories to identify six main arcs of storytelling. Dear Machine, by Greg Kieser, is the AI sci-fi story of the week.
John Sunda Hsia’s website compiles the “ultimate guide” to all of the upcoming AI and ML conferences. And the Allen Institute releases a “dumbed down” version of OpenAI’s GPT-2, with some resulting humorous reflections.
ai with ai: A Neural Reading rAInbow
/our-media/podcasts/ai-with-ai/season-2/2-19
Andy and Dave discuss research from Neil Johnson, who looked to the movements of fly larvae to model financial systems, where a collection of agents share a common goal but have no way to communicate and coordinate their activities (a memory of five past events ends up being the ideal balance). Researchers at Carnegie Mellon demonstrate that random search with early stopping is a competitive Neural Architecture Search baseline, performing at least as well as “Efficient” NAS. Unrelated research, but near-simultaneously published, from AI Lab Swisscom, shows that random search outperforms state-of-the-art NAS algorithms. Researchers at DeepMind investigate the possibility of creating an agent that can discover its world, and introduce NDIGO (Neural Differential Information Gain Optimization), designed to be “information seeking.” And the Electronics and Telecommunications Research Institute (ETRI) in South Korea creates SC-FEGAN, a face-editing GAN that builds off of a user’s sketches and other information. Georgetown University announces a $55M grant to create the Center for Security and Emerging Technology (CSET). Microsoft workers call on the company to cancel its military contract with the U.S. Army. DeepMind uses machine learning to predict wind turbine energy production. Australia’s Defence Department invests ~$5M to study how to make autonomous weapons behave ethically. And the U.K. government invests in its people, funding AI university courses with £115M. Reports suggest that U.S. police departments are using biased data to train crime-predicting algorithms. A thesis on Neural Reading Comprehension and Beyond by Danqi Chen becomes widely read. A report looks at the evaluation of citation graphs in AI research, and researchers provide a survey of deep learning for image super-resolution. Byron Reese blogs that we need new words to adjust to AI (to which Dave adds “AI-chemy” to the list).
In Point and Counterpoint, David Silver argues that AlphaZero exhibits the “essence of creativity,” while Sean Dorrance Kelly argues that AI can’t be an artist. Interpretable Machine Learning by Christoph Molnar hits version 1.0, and Andy highlights Asimov’s classic short story, The Machine that Won the War. And finally, a symposium at the Institute for Advanced Study in Princeton examines deep learning: alchemy or science?
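The random-search-with-early-stopping baseline mentioned above can be sketched in a few lines. The toy “architecture” space and simulated validation curve below are stand-ins for a real search space and training run, chosen purely for illustration, not the CMU authors’ code:

```python
import random

random.seed(1)

# Toy stand-in for a neural architecture search space: an "architecture"
# is a (depth, width) pair, and its simulated validation accuracy rises
# with training epochs toward a ceiling set by the configuration.
def val_accuracy(arch, epoch):
    depth, width = arch
    ceiling = 0.70 + 0.02 * depth + 0.001 * width
    return ceiling * (1 - 0.5 ** (epoch + 1))

def random_search(n_trials=20, max_epochs=10, patience=2, min_delta=0.005):
    """Random search over architectures, early-stopping any trial whose
    validation accuracy stops improving by at least min_delta."""
    best_arch, best_acc = None, 0.0
    for _ in range(n_trials):
        arch = (random.randint(1, 8), random.choice([16, 32, 64, 128]))
        prev, acc, stale = 0.0, 0.0, 0
        for epoch in range(max_epochs):
            acc = val_accuracy(arch, epoch)
            if acc - prev < min_delta:
                stale += 1
                if stale >= patience:
                    break  # early stop: training has plateaued
            else:
                stale = 0
            prev = acc
        if acc > best_acc:
            best_arch, best_acc = arch, acc
    return best_arch, best_acc

best_arch, best_acc = random_search()
```

The point of the baseline is that early stopping lets many random configurations be evaluated cheaply, which is often enough to match far more elaborate search strategies.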