
Search Results

Your search for Autonomy and Artificial Intelligence found 126 results.

ai with ai: Curiosity Killed the Poison Frog, Part II
/our-media/podcasts/ai-with-ai/season-1/1-47b
Andy and Dave briefly discuss the results from the Group of Governmental Experts meetings on Lethal Autonomous Weapons Systems in Geneva; the Pentagon releases its Unmanned Systems Integrated Roadmap 2017-2042; Google announces Dataset Search, a curated pool of datasets available on the internet; California endorses a set of 23 AI Principles in conjunction with the Future of Life Institute; and registration for the Neural Information Processing Systems (NIPS) 2018 conference sells out in just under 12 minutes. Researchers at DeepMind announce a Symbol-Concept Association Network (SCAN) for learning abstractions in the visual domain in a way that mimics human vision and word acquisition. DeepMind also presents an approach to "catastrophic forgetting," using a Variational Autoencoder with Shared Embeddings (VASE) method to learn new information while protecting previously learned representations. Researchers from the University of Maryland and Cornell demonstrate the ability to poison the training data set of a neural net image classifier with innocuous poison images. Research from the University of South Australia and Flinders University attempts to link personality with eye movements. Research from OpenAI, Berkeley, and Edinburgh looks at curiosity-driven learning across 54 benchmark environments (including video games and physics engine simulations), showing that agents learn to play many Atari games without using any rewards, rally-making behavior emerging in two-player Pong, and more. Finally, Andy shares an interactive app that allows users to “play” with a Generative Adversarial Network (GAN) in a browser; “Franken-algorithms” by Andrew Smith is the paper of the week; “Autonomy: The Quest to Build the Driverless Car” by Burns and Shulgan is the book of the week; and for the videos of the week, Major Voke offers thoughts on AI in the Command and Control of Airpower, and Jonathan Nolan releases “Do You Trust This Computer?”
ai with ai: Curiosity Killed the Poison Frog, Part I
/our-media/podcasts/ai-with-ai/season-1/1-47
Andy and Dave briefly discuss the results from the Group of Governmental Experts meetings on Lethal Autonomous Weapons Systems in Geneva; the Pentagon releases its Unmanned Systems Integrated Roadmap 2017-2042; Google announces Dataset Search, a curated pool of datasets available on the internet; California endorses a set of 23 AI Principles in conjunction with the Future of Life Institute; and registration for the Neural Information Processing Systems (NIPS) 2018 conference sells out in just under 12 minutes. Researchers at DeepMind announce a Symbol-Concept Association Network (SCAN) for learning abstractions in the visual domain in a way that mimics human vision and word acquisition. DeepMind also presents an approach to "catastrophic forgetting," using a Variational Autoencoder with Shared Embeddings (VASE) method to learn new information while protecting previously learned representations. Researchers from the University of Maryland and Cornell demonstrate the ability to poison the training data set of a neural net image classifier with innocuous poison images. Research from the University of South Australia and Flinders University attempts to link personality with eye movements. Research from OpenAI, Berkeley, and Edinburgh looks at curiosity-driven learning across 54 benchmark environments (including video games and physics engine simulations), showing that agents learn to play many Atari games without using any rewards, rally-making behavior emerging in two-player Pong, and more. Finally, Andy shares an interactive app that allows users to “play” with a Generative Adversarial Network (GAN) in a browser; “Franken-algorithms” by Andrew Smith is the paper of the week; “Autonomy: The Quest to Build the Driverless Car” by Burns and Shulgan is the book of the week; and for the videos of the week, Major Voke offers thoughts on AI in the Command and Control of Airpower, and Jonathan Nolan releases “Do You Trust This Computer?”
ai with ai: Someday My ‘Nets Will Code
/our-media/podcasts/ai-with-ai/season-4/4-32
Andy and Dave discuss the latest in AI news, including a report on Libya from the UN Security Council’s Panel of Experts, which notes the March 2020 use of the “fully autonomous” Kargu-2 to engage retreating forces; it’s unclear whether any person died in the conflict, and many other important details are missing from the incident. The Biden Administration releases its FY22 DoD Budget, which increases the RDT&E request, including $874M in AI research. NIST proposes an evaluation model for user trust in AI and seeks feedback; the model includes definitions for terms such as reliability and explainability. EleutherAI has provided an open-source version of GPT-3, called GPT-Neo, which trains on an 825GB data “Pile” and comes in 1.3B and 2.7B parameter versions. CSET takes a hands-on look at how transformer models such as GPT-3 can aid disinformation, with their findings published in Truth, Lies, and Automation: How Language Models Could Change Disinformation. IBM introduces a project aimed at teaching AI to code, with CodeNet, a large dataset containing 500 million lines of code across 55 legacy and active programming languages. In a separate effort, researchers at Berkeley, Chicago, and Cornell publish results on using transformer models as “code generators,” creating a benchmark (the Automated Programming Progress Standard) to measure progress; they find that GPT-Neo could pass approximately 15% of introductory problems, with GPT-3’s 175B parameter model performing much worse (presumably due to the inability to fine-tune the larger model). The CNA Russia Studies Program releases an extensive report on AI and Autonomy in Russia, capping off their biweekly newsletters on the topic. Arthur Holland Michel publishes Known Unknowns: Data Issues and Military Autonomous Systems, which clearly identifies the known issues in autonomous systems that cause problems.
The short story of the week comes from Asimov in 1956, with “Someday.” And the Naval Institute Press publishes a collection of essays in AI at War: How Big Data, AI, and Machine Learning Are Changing Naval Warfare. Finally, Diana Gehlhaus, from Georgetown’s Center for Security and Emerging Technology (CSET), joins Andy and Dave to preview an upcoming event, “Requirements for Leveraging AI.” The interview with Diana Gehlhaus begins at 33:32.
ai with ai: All Good Things
/our-media/podcasts/ai-with-ai/season-6/6-8
For the final (for now?) episode of AI with AI, Andy and Dave discuss the latest in AI news and research, including a political declaration from the US Department of State on the responsible military use of AI and autonomy. NATO begins work on an AI certification standard. The IEEE introduces a new program that provides free access to AI ethics and governance standards. Reported in February, but performed in December, a joint Department of Defense team performed 12 flight tests (over 17 hours) in which AI agents piloted Lockheed Martin’s X-62A VISTA, an F-16 variant. Andy provides a run-down of a large number of recent ChatGPT-related stories. Wolfram “explains” how ChatGPT works. Paul Scharre publishes Four Battlegrounds: Power in the Age of AI. And to come full circle, we began this podcast 6 years ago with the story of AlphaGo beating the world champion, so we close the podcast with news that a non-professional Go player, Kellin Pelrine, beat a top AI system 14 games to one, having discovered a “not super-difficult” method for humans to beat the machines. A heartfelt thanks to you all for listening over the years!
ai with ai: No Time to AI
/our-media/podcasts/ai-with-ai/season-4/4-33
Andy and Dave discuss the latest in AI news, starting with the US Consumer Product Safety Commission report on AI and ML. The Deputy Secretary of Defense outlines Responsible AI Tenets, along with mandating the JAIC to start work on four activities for developing a responsible AI ecosystem. The Director of the US Chamber of Commerce’s Center for Global Regulatory Cooperation outlines concerns with the European Commission’s newly drafted rules on regulating AI. Amnesty International crowd-sources an effort to identify surveillance cameras that the New York City Police Department has in use, resulting in a map of over 15,000 camera locations. The Royal Navy uses AI for the first time at sea against live supersonic missiles. And the Ghost Fleet Overlord unmanned surface vessel program completes its second autonomous transit from the Gulf Coast, through the Panama Canal, and to the West Coast. Finally, CNA Russia Program team members Sam Bendett and Jeff Edmonds join Andy and Dave for a discussion on their latest report, which takes a comprehensive look at the ecosystem of AI in Russia, including its policies, resourcing, infrastructure, and activities.
ai with ai: Just the Tip of the Skyborg
/our-media/podcasts/ai-with-ai/season-4/4-31
Andy and Dave discuss the latest in AI news, including the first flight of a drone equipped with the Air Force’s Skyborg autonomy core system. The UK Office for AI publishes a new set of guidance on automated decision-making in government, with Ethics, Transparency and Accountability Framework for Automated Decision-Making. The International Red Cross calls for new international rules on how governments use autonomous weapons. Senators introduce two AI bills to improve the US’s AI readiness, with the AI Capabilities and Transparency Act and the AI for the Military Act. Defense Secretary Lloyd Austin lays out his vision for the Department of Defense in his first major speech, stressing the importance of emerging technology and rapid increases in computing power. A report from the Allen Institute for AI shows that China is closing in on the US in AI research, expecting to become the leader in the top 1% of most-cited papers in 2023. In research, Ziming Liu and Max Tegmark introduce AI Poincaré, an algorithm that auto-discovers conserved quantities using trajectory data from unknown dynamical systems. Researchers enable a paralyzed man to “text with his thoughts,” reaching 16 words per minute. The Stimson Center publishes A New Agenda for US Drone Policy and the Use of Lethal Force. The Onlife Manifesto: Being Human in a Hyperconnected Era, first published in 2015, is available for open access. And Cade Metz publishes Genius Makers, with stories of the pioneers behind AI.
ai with ai: When This Savvy Slime Mold Encountered a Morphogenic Robotic Swarm, You Won't Believe What Happened Next...!
/our-media/podcasts/ai-with-ai/season-2/2-11
Andy and Dave discuss Rodney Brooks' predictions on AI from early 2018 and his ongoing review of those predictions. The European Commission releases a report on AI and Ethics, a framework for "Trustworthy AI." DARPA announces the Knowledge-directed AI Reasoning over Schemas (KAIROS) program, aimed at understanding "complex events." The Standardized Project Gutenberg Corpus attempts to provide researchers with broader data across the project's complete data holdings. And MORS announces a special meeting on AI and Autonomy at JHU/APL in February. In research, Andy and Dave discuss work from Keio University, which shows that slime mold can approximate solutions to NP-hard problems in linear time (and differently from other known approximations). Researchers in Spain, the UK, and the Netherlands demonstrate that kilobots (small 3 cm robots) with basic communication rule-sets will self-organize. Research from UCLA and Stanford creates an AI system that mimics how humans visualize and identify objects by feeding the system many pieces of an object, called "viewlets." NVIDIA shows off its latest GAN, which can generate fictional human faces that are essentially indistinguishable from real ones; further, they structure their generator to provide more control over various properties of the latent space (such as pose, hair, face shape, etc.). Other research attempts to judge a paper on how good it looks. And in the "click-bait" of the week, Andy and Dave discuss an article from TechCrunch, which misrepresented bona fide (and dated) AI research from Google and Stanford. Two surveys provide overviews on different topics: one on the safety and trustworthiness of deep neural networks, and the other on mini-UAV-based remote sensing. A report from CIFAR summarizes national and regional AI strategies (minus the US and Russia).
In books of the week, Miguel Hernán and James Robins are working on a Causal Inference Book, and Michael Nielsen has provided a book on Neural Networks and Deep Learning. CW3 Jesse R. Crifasi provides a fictional peek into a combat scenario involving AI. And Samim Winiger has started a mini-documentary series, "LIFE," on the intersection of humans and machines.
ai with ai: Unmanned Systems, AI, and the U.S. Navy, with CAPT Sharif Calfee, Part II
/our-media/podcasts/ai-with-ai/season-1/1-26b
Anna Williams   joins Dave in welcoming CAPT Sharif Calfee for a two-part discussion on unmanned systems and artificial intelligence. As part of his fellowship research, CAPT Calfee has been speaking with organizations and subject matter experts across the U.S. Navy, the U.S. Government, Federally Funded Research and Development Centers, University Affiliated Research Centers, and Industry, in order to understand the broader efforts involving unmanned systems, autonomy, and artificial intelligence. In the first part of their discussion, the group discusses the progress and the challenges that the CAPT has observed in his engagements. In the second part, the group discusses various steps that the U.S. Navy can take to move forward more deliberately, including the consideration for a new Naval Reactors-like office to oversee AI.
ai with ai: Unmanned Systems, AI, and the U.S. Navy, with CAPT Sharif Calfee, Part I
/our-media/podcasts/ai-with-ai/season-1/1-26
Anna Williams   joins Dave in welcoming CAPT Sharif Calfee for a two-part discussion on unmanned systems and artificial intelligence. As part of his fellowship research, CAPT Calfee has been speaking with organizations and subject matter experts across the U.S. Navy, the U.S. Government, Federally Funded Research and Development Centers, University Affiliated Research Centers, and Industry, in order to understand the broader efforts involving unmanned systems, autonomy, and artificial intelligence. In the first part of their discussion, the group discusses the progress and the challenges that the CAPT has observed in his engagements. In the second part, the group discusses various steps that the U.S. Navy can take to move forward more deliberately, including the consideration for a new Naval Reactors-like office to oversee AI.
cna talks: Red Robots
/our-media/podcasts/cna-talks/2017/red-robots
Artificial Intelligence and unmanned systems are rapidly changing the dynamics of warfighting and military power. With Russia, China, and other nations investing in and testing these capabilities, the United States has to understand what capabilities other countries have and how their governments could seek to use them. On this episode of CNA Talks, experts Larry Lewis, Samuel Bendett, and Kevin Pollpeter discuss unmanned military systems, how Russia and China are developing and employing their capabilities, and what the United States should do in response.
Larry Lewis is the Director of the Center for Autonomy and Artificial Intelligence at CNA. His areas of expertise include lethal autonomy, reducing civilian casualties, and identifying lessons from current ...