
Search Results

Your search for "7" found 135 results.

ai with ai: Thunderbots
/our-media/podcasts/ai-with-ai/season-4/4-6
Sam Bendett joins Andy and Dave to discuss the latest developments in Russia's research into artificial intelligence and autonomy capabilities. They discuss Russia's national AI strategy and the programmatic implementation challenges caused by COVID impacts. They also discuss the status of higher education in Russia and the standing of various institutions, as well as their relationship and interaction with the global community of researchers. They cover a variety of other trends and topics, including the Army 2020 convention and some of the announcements made during that event; and they discuss CNA's Russia Program and its ongoing series of newsletters dedicated to summarizing the latest Russian advances and research in AI.
Issue 8: August 14, 2020. Issue 7: July 31, 2020. Issue 6: July 17, 2020. Issue 5: July 1, 2020.
ai with ai: Private AIs, They’re Watching You
/our-media/podcasts/ai-with-ai/season-3/3-13
In a string of related news items on facial recognition, Andy and Dave discuss San Diego's reported experiences with facial recognition over the last 7 years (coming to an end on 1 January 2020 with the enactment of California's ban on facial recognition for law enforcement). Across the Atlantic, the European Union is considering a ban on facial recognition in public spaces for 5 years while it determines the broader implications. And the New York Times puts the spotlight on Clearview AI, a company that claims to have billions of photos of people scraped from the web and that can identify people (and the sources of the photos, including profiles and other information about the individuals) within seconds. In other news, the JAIC is looking for public input on an upcoming AI study, and it is also looking for help in applying machine learning to humanitarian assistance and disaster relief efforts. In research, Google announces that it has developed a "physics-free" model for short-term local precipitation forecasting. And researchers at DeepMind and Harvard find experimental evidence that dopamine neurons in the brain may predict rewards in a distributional way, with insight gained from efforts in optimizing reinforcement-learning algorithms (a toy sketch of the distributional idea follows this entry). Nature Communications examines the role of AI, whether positive or negative, in achieving the United Nations' Sustainable Development Goals. The U.S. National Science Board releases its biennial report on Science and Engineering Indicators. The MIT Deep Learning Series has Lex Fridman speaking on Deep Learning State of the Art (and as a bonus, Andy recommends a video of Fridman interviewing Daniel Kahneman, author of "Thinking, Fast and Slow"). GPT-2 wields its sword and dashes bravely into the realm of Dungeons and Dragons. And GPT-2 tries its hand at chess, knowing nothing about the rules, with surprising results.
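The dopamine result builds on "distributional" reinforcement learning, in which an agent learns a whole distribution over rewards rather than a single expected value. A minimal sketch of that idea follows; the quantile-update rule is standard, but the toy reward stream and every name here are our own illustrative assumptions, not the paper's code.

import random

NUM_QUANTILES = 5
taus = [(i + 0.5) / NUM_QUANTILES for i in range(NUM_QUANTILES)]  # 0.1 .. 0.9
quantiles = [0.0] * NUM_QUANTILES  # one learned value per quantile
LR = 0.01

def update(reward):
    # Quantile-regression step: each estimate moves toward the sampled
    # reward, with asymmetric step sizes so estimate i settles at the
    # tau_i-th quantile of the reward distribution.
    for i, tau in enumerate(taus):
        if reward > quantiles[i]:
            quantiles[i] += LR * tau
        else:
            quantiles[i] -= LR * (1 - tau)

# Toy bimodal reward stream: usually 1.0, occasionally 8.0.
random.seed(0)
for _ in range(50_000):
    update(8.0 if random.random() < 0.2 else 1.0)

# The learned quantiles spread to cover both reward modes, which a single
# mean estimate (about 2.4 here) cannot represent.
print([round(q, 2) for q in quantiles])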
ai with ai: 20,000 Layers Under the CNN
/our-media/podcasts/ai-with-ai/season-3/3-7
Andy and Dave discuss the full release of the algorithm that originally had to be locked up for the safety of humanity (GPT-2). NATO releases its final reports on the implications of AI for NATO's Armed Forces. The US Army Research Lab wraps up a series of events on its efforts in robotics collaborative technology. The UAE announces the world's first graduate-level AI university, opening in September 2020. And John Carmack announces he will step down as CTO of Oculus to tackle the challenge of artificial general intelligence, as a Victorian Gentleman Scientist. In research, two independent research groups introduce adversarial T-shirts. A report examines the taxonomy of real faults in deep learning systems. Krohn, Beyleveld, and Bassens publish Deep Learning Illustrated. The Nov/Dec issue of MIT Technology Review features a variety of AI and related stories. And Manuel Blum of CMU discusses Towards a Conscious AI: A Computer Architecture Inspired by Neuroscience.
ai with ai: Some Superintelligent Assembly Required
/our-media/podcasts/ai-with-ai/season-3/3-6
In news, the Defense Innovation Board releases AI Principles: Recommendations on the Ethical Use of AI by the Department of Defense. The National Institute of Standards and Technology's National Cybersecurity Center of Excellence releases a draft for public comment on adversarial machine learning, which includes an in-depth taxonomy of the possibilities. Google adds BERT to its search algorithm, with its capability for bidirectional representations, in an attempt to "let go of some of your keyword-ese." In research, Stanford University and Google demonstrate a method for explaining how image classifiers make their decisions, with Automatic Concept-based Explanations (ACE), which extracts visual concepts such as colors and textures, or objects and parts. And GoogleAI, Stanford, and Columbia researchers teach a robot arm the concept of assembling objects, with Form2Fit, which is also capable of generalizing its learning to new objects and tasks. Danielle Tarraf pens the latest response to the National Security Commission on AI's call for ideas, with Our Future Lies in Making AI Robust and Verifiable. Jure Leskovec, Anand Rajaraman, and Jeff Ullman make the second edition of Mining of Massive Datasets available. The Defense Innovation Board posts a video of its public meeting from 31 October at Georgetown University. Maciej Ceglowski's "Superintelligence: The Idea That Eats Smart People" takes a look at the arguments against superintelligence as a risk to humanity.
Research: Towards Automatic Concept-based Explanations (ACE); Learning to Assemble and to Generalize from Self-Supervised Disassembly (nontechnical summary and technical paper).
ai with ai: Darcraft Shadows
/our-media/podcasts/ai-with-ai/season-2/2-15
In recent announcements, Andy and Dave discuss the National Endowment for Science, Technology, and the Arts (Nesta) launch of a project that is 'Mapping AI Governance'; MIT Tech Review's survey of AI and ML research, which suggests that the era of deep learning may be coming to an end (or does it?); a December 2018 survey showing strong opposition to "killer robots"; China's (internally) released report on its view of the "State of AI in China"; and DARPA's plan to build conscious robots using insect brains, announcing its (mu)BRAIN Program. In research topics, Andy and Dave discuss the recent competition between DeepMind's AlphaStar and human professional gamers in playing StarCraft II. MIT and Microsoft have created a model that can identify instances where autonomous systems have learned from training examples that don't match what's happening in the real world, thus creating blind spots. Boston University publishes research that allows an ordinary camera to "see" around corners using shadow projection, in essence turning a wall into a mirror, and doing so without any AI or ML techniques (a toy sketch of the underlying inverse problem follows this entry). In papers and reports, the Office of the Director of National Intelligence releases its AIM Initiative, a strategy for augmenting intelligence using machines; a report provides a survey of the state of self-driving cars, and another report surveys the state of AI/ML in medicine. Game Changer takes a look at AlphaZero's chess strategies, while The Hundred-Page Machine Learning Book offers a condensed overview of ML. The Association for the Advancement of AI conference (27 Jan – 1 Feb) begins to release videos of the conference, including an Oxford-style debate on the Future of AI. And finally, Andy and Dave conclude with a "hype teaser" for next week: SELF AWARE robots!
Overview: Announcement and Details (7 pages). Research: DeepMind's AlphaStar defeats professional human gamers at StarCraft II for the first time (nontechnical overview).
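The Boston University result merits a sketch precisely because "no AI or ML" is the point: the wall's brightness pattern is approximately a linear function of the hidden scene, so recovering the scene is a regularized least-squares inversion. The toy below uses a random stand-in for the light-transport matrix, which the real work derives from the scene's geometry; every name here is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)
n_scene, n_wall = 64, 256
A = rng.random((n_wall, n_scene))        # stand-in light-transport matrix
x_true = rng.random(n_scene)             # hidden scene (unknown in practice)
b = A @ x_true + 0.01 * rng.normal(size=n_wall)  # noisy wall observation

# Tikhonov-regularized least squares: x = argmin ||Ax - b||^2 + lam ||x||^2
lam = 1e-3
x_est = np.linalg.solve(A.T @ A + lam * np.eye(n_scene), A.T @ b)

# Relative reconstruction error stays small despite the noise.
print(float(np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true)))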
ai with ai: Eleventh Voyage into Morphospace
/our-media/podcasts/ai-with-ai/season-2/2-9
The Joint Artificial Intelligence Center is up and running, and Andy and Dave discuss some of the newly revealed details. And the rebranded NeurIPS (originally NIPS), the largest machine learning conference of the year, holds its 32nd annual conference in Montreal, Canada, with a keynote discussion on "What Bodies Think About" by Michael Levin. And a group of graduate students have created a community-driven database to provide links to tasks, data, metrics, and results on the "state of the art" for AI. In other news, one of the "best paper" awards at NeurIPS goes to Neural Ordinary Differential Equations, research from the University of Toronto that replaces the discrete nodes and connections of typical neural networks with one continuous computation of differential equations (a minimal sketch of the idea follows this entry). DeepMind publishes its paper on AlphaZero, which details last year's announcements on the ability of the neural network to play chess, shogi, and go "from scratch." And AlphaFold from DeepMind brings machine learning methods to a protein folding competition. In reports of the week, the AI Now Institute at New York University releases its third annual report on understanding the social implications of AI. With a blend of technology and philosophy, Arsiwalla and co-workers break up the complex "morphospace" of consciousness into three categories (computational, autonomy, and social) and map various examples to this space. For interactive fun generating images with a GAN, check out the "Ganbreeder," though maybe not before going to sleep. In videos of the week, "Earworm" tells the tale of an AI that deleted a century; and CIMON, the ISS robot, interacts with the space crew. And finally, Russia24 joins a long history of people dressing up and pretending to be robots.
CIMON, the ISS Robot, Throws a Tantrum (7-minute video). Hype of the Week: "Russia's Most Advanced Robot" Turns Out to Be Man in Robot Suit.
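The Neural ODE idea fits in a few lines: a residual block's update h <- h + f(h) is one Euler step of the differential equation dh/dt = f(h, t), so a deep stack of discrete layers can be replaced by continuous-time integration of one learned dynamics function. A minimal Euler-integration sketch, with an untrained toy dynamics function standing in for the learned one:

import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.5, size=(4, 4))  # weights of the dynamics function

def f(h, t):
    # The "learned" dynamics; in a real Neural ODE, W is trained (e.g.,
    # via the adjoint method) rather than fixed at random.
    return np.tanh(W @ h)

def odeint_euler(h0, t0=0.0, t1=1.0, steps=100):
    # Fixed-step Euler integration of dh/dt = f(h, t) from t0 to t1.
    # Each step is one "infinitesimal layer"; more steps mean finer depth.
    h, dt = h0.copy(), (t1 - t0) / steps
    for i in range(steps):
        h += dt * f(h, t0 + i * dt)
    return h

h0 = rng.normal(size=4)   # input features
print(odeint_euler(h0))   # output "activations" after continuous depth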
ai with ai: It Can Only Be Attributable to Human Error
/our-media/podcasts/ai-with-ai/season-2/2-7
In the latest news, Andy and Dave discuss OpenAI releasing "Spinning Up in Deep RL," an online educational resource; Google AI and the New York Times teaming up to digitize over 5 million photos and find "untold stories"; China recruiting its brightest children to develop AI "killer bots"; China unveiling the world's first AI news anchor; and the death of Douglas Rain, the voice of HAL 9000, at age 90. In research topics, Andy and Dave discuss research from MIT (Tegmark and Wu) that attempts to improve unsupervised machine learning by using a framework that more closely mirrors scientific thought and process. Albrecht and Stone examine the issue of autonomous agents modeling other agents, which leads to an interesting list of open problems for future research. Research from Stanford makes an empirical examination of bias and generalization in deep generative models, and Andy notes striking similarities to previously reported experiments in cognitive psychology. Other research surveys data collection for machine learning, from the perspective of the data. In blog posts of the week, the Mad Scientist Initiative reveals the results from a recent competition, which suggest themes for the impacts of AI on the future battlefield; and Piekniewski follows up his May 2018 "Is an AI Winter On Its Way?" with a review of cracks appearing in the AI façade, with particular focus on the arena of self-driving vehicles. And Melanie Mitchell provides some insight about AI hitting the barrier of meaning. CSIS publishes a report on the importance of the AI ecosystem. And another paper takes insights from the social sciences to provide insight into AI. Finally, MIT Press has updated one of the major sources on reinforcement learning with a second edition; AI Superpowers examines the global push toward AI; The Eye of War examines how perceptual technologies have shaped the history of war; SparkCognition publishes HyperWar, a collection of essays from leaders in defense and emerging technology; Major Voke's entire presentation on AI for C2 of Airpower is now available; and the Bionic Bug Podcast has an interview with CNA's own Sam Bendett to talk AI and robotics.
ai with ai: How to Train Your DrAIgon (for good, not for bad)
/our-media/podcasts/ai-with-ai/season-1/1-35
In recent news, Andy and Dave discuss a recent Brookings report on views of AI and robots based on internet search data; a Chatham House report on AI that anticipates disruption; Microsoft computing the future with its vision and principles on AI; the first major AI patent filings from DeepMind being revealed; the return of biomimicry, with IBM using "analog" synapses to improve neural net implementation and Stanford University researchers developing an artificial sensory nervous system; and Berkeley DeepDrive providing the largest self-driving car dataset for free public download. Next, the topic of "hard exploration games with sparse rewards" returns, with a Deep Curiosity Search approach from the University of Wyoming, where the AI gets more freedom and reward from exploring ("curiosity") than from performing tasks as dictated by the researchers. From Cognition Expo 18, work from Martinez-Plumed attempts to "forecast AI," but largely highlights the challenges in making comparisons due to neglected, or unreported, aspects of developments, such as the data, human oversight, computing cycles, and much more. From the Google AI Blog, researchers improve deep learning performance by finding and describing transformation policies for the data and using that information to increase the amount and diversity of the training dataset (a toy sketch of applying such a policy follows this entry). Then, Andy and Dave discuss attempts to use drone surveillance to identify violent individuals (for good reasons only, not for bad ones). And in a sportier application, "AI enthusiast" Chintan Trivedi describes his efforts to train a bot to play a soccer video game by observing his playing. Finally, Andy recommends an NSF workshop report, a book on AI (Foundations of Computational Agents), Permutation City, and over 100 video hours of the CogX 2018 conference.
BREAKING (June 7, Brookings): Report on Views of AI, Robots, and Automation Based on Internet Search Data.
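The Google AI Blog item describes learning transformation policies and applying them to enlarge and diversify a training set. A toy sketch of applying such a policy with Pillow follows; the specific policy entries are invented for illustration (real policies are discovered by search, not hand-written):

import random
from PIL import Image, ImageEnhance, ImageOps

# Each entry: (operation, probability, magnitude), a common policy format.
POLICY = [
    ("rotate",   0.7, 15),    # rotate by up to +/- 15 degrees
    ("contrast", 0.5, 1.4),   # boost contrast by a factor of 1.4
    ("flip",     0.5, None),  # horizontal mirror
]

def augment(img: Image.Image) -> Image.Image:
    # Apply each policy op with its probability, producing a new variant.
    for op, prob, mag in POLICY:
        if random.random() > prob:
            continue
        if op == "rotate":
            img = img.rotate(random.uniform(-mag, mag))
        elif op == "contrast":
            img = ImageEnhance.Contrast(img).enhance(mag)
        elif op == "flip":
            img = ImageOps.mirror(img)
    return img

# Usage: expand a dataset by generating k augmented copies of each image:
# dataset += [augment(img) for img in dataset for _ in range(k)]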
ai with ai: Game of Drones - AI Winter Is Coming
/our-media/podcasts/ai-with-ai/season-1/1-34
In breaking news, Andy and Dave discuss Google's decision not to renew the contract for Project Maven, as well as its AI Principles; the Royal Australian Air Force holds a biennial Air Power Conference with a theme of AI and cyber; the Defense Innovation Unit Experimental (DIUx) releases its 2017 annual report; China holds a defense conference on AI in cybersecurity; and NVIDIA's new Xavier chip packs $10k worth of power into a $1,299 box. Next, Andy and Dave discuss a benevolent application of adversarial attack methods, with a "privacy filter" for photos designed to stop AI face detection, reducing detection from nearly 100 percent to 0.5 percent (a toy sketch of the adversarial idea follows this entry). MIT used AI in the development of nanoparticles, training neural nets to "learn" how a nanoparticle's structure affects its behavior. Then the remaining topics dip deep into the philosophical realm, starting with a discussion on empiricism and the limits of gradient descent, and how philosophical concepts of empiricist induction compare with critical rationalism. Next, the topic of a potential AI winter continues to percolate with a viral blog post from Piekniewski, leading into a paper from Berkeley/MIT that discovers a 4-15% reduction in accuracy for CIFAR-10 classifiers on a new set of similar training images (bringing into doubt the robustness of these systems). Andy shares a possibly groundbreaking paper on "graph networks" that provides a new conceptual framework for thinking about machine learning. And finally, Andy and Dave close with some media selections, including Blood Music by Greg Bear and The Swarm by Frank Schätzing.
AI researchers should help with some military work (June 7). Google's AI Ethics Principles. Royal Australian Air Force's biennial Air Power Conference (March 20-21).
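The "privacy filter" item inverts the usual adversarial attack: perturb the photo slightly so a face detector's confidence collapses. A toy sketch of the gradient-sign idea (in the style of FGSM) follows; the linear stand-in "detector" keeps the gradient closed-form, whereas the actual work attacks deep face-detection networks:

import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=1024)     # stand-in detector weights
image = rng.random(1024)      # flattened photo, pixel values in [0, 1]

def face_score(x):
    # Higher score means "face detected"; gradient w.r.t. x is simply w.
    return float(w @ x)

EPSILON = 0.03  # max per-pixel change, small enough to be imperceptible

# FGSM-style step: move each pixel EPSILON *against* the score's gradient,
# then clip back to the valid pixel range.
adversarial = np.clip(image - EPSILON * np.sign(w), 0.0, 1.0)

print(face_score(image), face_score(adversarial))  # the score drops sharply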
ai with ai: Can Anything Stop the Malicious AI??
/our-media/podcasts/ai-with-ai/season-1/1-20
Andy and Dave discuss a recently released report on the Malicious Use of AI: Forecasting, Prevention, and Mitigation, which describes scenarios where AI might have devious applications (hint: there are a lot). They also discuss a recent report that describes the extent of missing data in AI studies, which makes it difficult to reproduce published results. Andy then describes research that looks into ways to alter information (in this case, the classification of an image) to fool both AI and humans. Dave has to repeat the research in order to understand the sheer depth of the terror that could be lurking below. Then Andy and Dave quickly discuss a new algorithm that can mimic any voice with just a few snippets of audio. The only non-terrifying topic they discuss involves an attempt to make Alexa more chatty. Even then, Dave decides that this effort will only result in an emptier wallet.
Cloning Experiment I. Videos: X-Files, Season 11, Episode 7 (28 Feb 2018), "Rm9sbG93ZXJz" ("Followers" in base64). Full video (with commercials) from Fox-TV (available for a limited time).
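The base64 gag in the episode title checks out; a two-line verification in Python:

import base64

# "Rm9sbG93ZXJz" is the X-Files episode title, encoded in base64.
print(base64.b64decode("Rm9sbG93ZXJz").decode())  # prints: Followers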