
Search Results

Your search for “Autonomy and Artificial Intelligence” found 129 results.

AI with AI: The GPT Blob
/our-media/podcasts/ai-with-ai/season-3/3-32
In this week's COVID-related AI news, Andy and Dave discuss “SciFact” from the Allen Institute for AI, which builds on the VeriSci neural network and can link claims about COVID-19 to supporting or refuting materials. Berkeley Labs releases COVIDScholar, which uses natural language processing and text mining to search over 60,000 papers and draw insights and connections. Berkeley Labs also announces plans to use machine learning to estimate COVID-19's seasonal cycle. In non-COVID AI news, Google publishes a response to the European Commission's white paper on AI, cautioning that its definition of AI is far too broad and risks stifling innovation. CSET maps where AI talent is produced in the U.S., where it becomes concentrated, and where AI equity funding goes. In research, OpenAI releases GPT-3, a 175B-parameter NLP model, and shows that massively scaling up the language model greatly improves task-agnostic few-shot performance. A report from the European Parliament's Panel for the Future of Science and Technology surveys the ethics initiatives of nations around the globe. A review paper in Science suggests that progress in AI has stalled (perhaps by as much as 10 years) in some fields. Abbass, Scholz, and Reid publish Foundations of Trusted Autonomy, a collection of essays and reports on trustworthiness and autonomy. And in the video of the week, CSIS sponsored a conversation with (now retired) JAIC Director Lt. Gen. Shanahan.
AI with AI: What is AI?
/our-media/podcasts/ai-with-ai/season-2/2-14
CNA’s Center for Autonomy and Artificial Intelligence kicks off its first panel for 2019 with a live recording of AI with AI! Andy and Dave take a step back and look at the broader trends of research and announcements involving AI and machine learning, including: a summary of historical events and issues; the myths and hype, looking at expectations, buzzwords, and reality; hits and misses (and more hype!); and some of the many reasons why AI is far from a panacea.
AI with AI: Eleventh Voyage into Morphospace
/our-media/podcasts/ai-with-ai/season-2/2-9
The Joint Artificial Intelligence Center is up and running, and Andy and Dave discuss some of the newly revealed details. And the rebranded NeurIPS (originally NIPS), the largest machine learning conference of the year, holds its 32nd annual conference in Montreal, Canada, with a keynote discussion on “What Bodies Think About” by Michael Levin. And a group of graduate students has created a community-driven database that provides links to tasks, data, metrics, and results on the “state of the art” for AI. In other news, one of the “best paper” awards at NeurIPS goes to Neural Ordinary Differential Equations, research from the University of Toronto that replaces the nodes and connections of typical neural networks with one continuous computation of differential equations. DeepMind publishes its paper on AlphaZero, which details the announcements made last year on the ability of the neural network to play chess, shogi, and Go “from scratch.” And AlphaFold from DeepMind brings machine learning methods to a protein folding competition. In reports of the week, the AI Now Institute at New York University releases its third annual report on understanding the social implications of AI. With a blend of technology and philosophy, Arsiwalla and co-workers break the complex “morphospace” of consciousness into three categories: computational, autonomy, and social; and they map various examples to this space. For interactive fun generating images with a GAN, check out “Ganbreeder,” though maybe not before going to sleep. In videos of the week, “Earworm” tells the tale of an AI that deleted a century; and CIMON, the ISS robot, interacts with the space crew. And finally, Russia24 joins a long history of people dressing up and pretending to be robots.
AI with AI: Keep Talking and No Robot Explodes, Part II
/our-media/podcasts/ai-with-ai/season-1/1-48b
Dr. Larry Lewis, the Director of CNA’s Center for Autonomy and Artificial Intelligence, joins Andy and Dave to provide a summary of the recent United Nations Convention on Certain Conventional Weapons meeting in Geneva on Lethal Autonomous Weapon Systems. Larry discusses the different viewpoints of the attendees, and walks through the draft document that the group published on “Emerging Commonalities, Conclusions, and Recommendations.” The topics include: Possible Guiding Principles; characterization to promote a common understanding; human elements and human-machine interactions in LAWS; review of related technologies; possible options; and recommendations (SPOILER ALERT: the group recommends 10 days of discussion for 2019).
AI with AI: Keep Talking and No Robot Explodes, Part I
/our-media/podcasts/ai-with-ai/season-1/1-48
Dr. Larry Lewis, the Director of CNA’s Center for Autonomy and Artificial Intelligence, joins Andy and Dave to provide a summary of the recent United Nations Convention on Certain Conventional Weapons meeting in Geneva on Lethal Autonomous Weapon Systems. Larry discusses the different viewpoints of the attendees, and walks through the draft document that the group published on “Emerging Commonalities, Conclusions, and Recommendations.” The topics include: Possible Guiding Principles; characterization to promote a common understanding; human elements and human-machine interactions in LAWS; review of related technologies; possible options; and recommendations (SPOILER ALERT: the group recommends 10 days of discussion for 2019).
AI with AI: Russian AI Kryptonite
/our-media/podcasts/ai-with-ai/season-1/1-40
CNA’s expert on Russian AI and autonomous systems, Samuel Bendett, joins temporary host Larry Lewis (again filling in for Dave and Andy) to discuss Russia’s pursuit of militarized AI and autonomy. The Russian Ministry of Defense (MOD) has made no secret of its desire to achieve technological breakthroughs in IT, and especially in artificial intelligence, marshalling extensive resources for a more organized and streamlined approach to information technology R&D. The MOD is overseeing a significant public-private partnership effort, calling for its military and civilian sectors to work together on information technologies, while hosting high-profile events aimed at fostering dialogue between its uniformed and civilian technologists. For example, Russian state corporation Russian Technologies (Rostec), with extensive ties to the nation’s military-industrial complex, has overseen the creation of a company with the ominous name Kryptonite. The name, the one vulnerability of a superhero, was unlikely to be picked by accident. Russia’s government is working hard to see that the Russian technology sector can compete with American, Western, and Asian high-tech leaders. This technology race is only expected to accelerate, and Russian achievements merit close attention.
AI with AI: Common Sense, Black Boxes, and Getting Robots to Teach Themselves
/our-media/podcasts/ai-with-ai/season-1/1-21
Larry Lewis, Director of CNA’s Center for Autonomy and AI, sits in for Dave this week, as he and Andy discuss: a recent report that not all Google employees are happy with Google’s partnership with DoD (in developing a drone-footage-analyzing AI); research efforts designed to lift the lid, just a bit, on the so-called “black box” reasoning of neural-net-based AIs; some novel ways of getting robots/AIs to teach themselves; and an arcade-playing AI that has essentially “discovered” that if you can’t win at the game, it is best to either kill yourself or cheat. The podcast ends with a nod to a new free online AI resource offered by Google, another open access book (this time on the subject of robotics), and a fascinating video of Stephen Wolfram, of Mathematica fame, lecturing about artificial general intelligence and the “computational universe” to a computer science class at MIT.
AI with AI: Welcome to AI with AI
/our-media/podcasts/ai-with-ai/season-1/1-1
In the inaugural podcast for AI with AI, Andy provides an overview of his recent report on AI, Robots, and Swarms, and discusses the bigger picture of the development and breakthroughs in artificial intelligence and autonomy. Andy also discusses some of his recommended books and movies.
CNA Talks: The Race for Autonomy: David Broyles on Getting Autonomy to Work for Defense
/our-media/podcasts/cna-talks/2022/08/the-race-for-autonomy-david-broyles-on-getting-autonomy-to-work-for-defense
This content was originally published on BMNT's YouTube channel; you can find the original video here. In this follow-up conversation to BMNT’s June panel "The Race for Autonomy: Navigating a New Battlefield," A'ndre Gonawela talks to Dr. David Broyles, Research Program Director at the Center for Naval Analyses and co-host of "AI with AI," about the challenges facing the Department of Defense in developing and leveraging autonomous systems and capabilities. Dr. Broyles digs into why he (like our prior panelists) believes the state of autonomy today is ‘brittle’, and why the end goal for many is ‘general AI’, the ability for artificial intelligence to behave and adapt as human intelligence can. We discuss Dr. Broyles’ belief that an ‘AI Winter’ may be approaching, in which momentum in the development of systems slows or even halts. We then dig into where the Department of Defense is on the racetrack, dissecting the lingering confusion about the differences between unmanned systems and autonomous systems, and how we can better equip DoD leaders to understand how autonomous systems can operate. Dr. Broyles highlights opportunities to build warfighter trust in autonomous systems, in addition to addressing the edge cases and ‘fat tails’ that can impede the success of autonomous vehicles. You can read about our first panel here: https://www.bmnt.com/post/the-race-for-autonomy-is-here