
Search Results

Your search for Emerging Technologies found 101 results.

Five Key Challenges in US-Russian Relations
/our-media/indepth/2021/06/five-key-challenges-in-us-russian-relations
Both US President Joe Biden and Russian President Vladimir Putin aim to halt the downward spiral of the bilateral relationship when they meet on June 16 in Geneva, Switzerland.
… stability concerns include emerging military technologies: artificial intelligence, hypersonic weapons, and space capabilities that target satellite-based nuclear command, control, and communications.
Exploring a Cyber Nirvana
/our-media/indepth/2019/07/exploring-a-cyber-nirvana
Throughout 2018, CNA collaborated with University of California Berkeley’s Center for Long-Term Cybersecurity and the World Economic Forum to conduct a series of workshops around the world.
… approach to developing and emerging technologies will manifest themselves more broadly than the topic alone might suggest. And, if so, shouldn’t we be thinking about how this might play out? How might we …
ai with ai: The Ode to Decoy
/our-media/podcasts/ai-with-ai/season-5/5-2
Andy and Dave discuss the latest in AI news and research, including: NATO releases its first AI strategy, which includes the announcement of a one billion euro “NATO innovation fund.” [0:52] Military research labs in the US and UK collaborate on autonomy and AI in a combined demonstration, integrating algorithms and automated workflows into military operations. [2:58] A report from CSET and MITRE identifies that the Department of Defense already has a number of AI and related experts, but that the current system hides this talent. [6:45] The National AI Research Resource Task Force partners with Stanford’s Human-Centered AI and the Stanford Law School to publish Building a National AI Research Resource: A Blueprint for the National Research Cloud. [6:45] And in a trio of “AI fails,” a traffic camera in the UK mistakes a woman for a car and issues a fine to the vehicle’s owner; [9:10] the Allen Institute for AI introduces Delphi as a step toward developing AI systems that behave ethically (though it sometimes thinks that it’s OK to murder everybody if it creates jobs); [10:07] and a WSJ report reveals that Facebook’s automated moderation tools were falling far short on accurate identification of hate speech and videos of violence and incitement. [12:22] Ahmed Elgammal from Rutgers teams up with Playform to compose two movements for Beethoven’s Tenth Symphony, for which the composer left only sketches before he died. And finally, Andy and Dave welcome Dr. Heather Wolters and Dr. Megan McBride to discuss their latest research on the Psychology of (Dis)Information, with a pair of publications, one providing a primer on key psychological mechanisms, and another examining case studies and their implications.
Related: Emerging Military Technologies: Background and Issues for Congress (report) · The DOD’s Hidden Artificial Intelligence Workforce · Building a National AI Research Resource: A Blueprint for the National Research Cloud
ai with ai: Horrorscope
/our-media/podcasts/ai-with-ai/season-4/4-43
Andy and Dave discuss the latest in AI news and research, including: 0:57: The Allen Institute for AI and others come together to create a publicly available “COVID-19 Challenges and Directions” search engine, building off of the corpus of COVID-related research. 5:06: Researchers with the University of Warwick perform a systematic review of test accuracy for the use of AI in image analysis of breast cancer screening and find that most (34 of 36) AI systems were less accurate than a single radiologist, and all were less accurate than a consensus of two or more radiologists (among other concerning findings). 10:19: A US judge rejects an appeal for the AI system DABUS to own a patent, noting that US federal law requires an “individual” to be an owner, and the legal definition of an “individual” is a natural person. 17:01: The US Patent and Trademark Office uses machine learning to analyze the history of AI in patents. 19:42: BCS publishes Priorities for the National AI Strategy, as the UK seeks to set global AI standards. 20:42: In research, MIT, Northeastern, and U Penn explore the challenges of discerning emotion from a person’s facial movements (which largely relates to context), and highlight the reasons why facial recognition algorithms will struggle with this task. 28:02: Google AI uses diffusion models to generate high-fidelity images; the approach slowly adds noise to corrupt the training data and then uses a neural network to reverse that corruption. 35:07: Springer-Verlag makes AI for a Better Future, by Bernd Carsten Stahl, available for open access. 36:19: Thomas Smith, the co-founder of Gado Images, chats with GPT-3 about the COVID-19 pandemic and finds that it provides some interesting responses to his questions.
Related: AI for a Better Future: An Ecosystem Perspective on the Ethics of AI and Emerging Digital Technologies (book) · Interesting Link of the Week: I Asked GPT-3 About Covid-19: Its Responses Shocked Me
ai with ai: It Can Only Be Attributable to Human Error
/our-media/podcasts/ai-with-ai/season-2/2-7
In the latest news, Andy and Dave discuss OpenAI releasing “Spinning Up in Deep RL,” an online educational resource; Google AI and the New York Times team up to digitize over 5 million photos and find “untold stories;” China is recruiting its brightest children to develop AI “killer bots;” China unveils the world’s first AI news anchor; and Douglas Rain, the voice of HAL 9000, has died at age 90. In research topics, Andy and Dave discuss research from MIT, Tegmark, and Wu, that attempts to improve unsupervised machine learning by using a framework that more closely mirrors scientific thought and process. Albrecht and Stone examine the issue of autonomous agents modeling other agents, which leads to an interesting list of open problems for future research. Research from Stanford makes an empirical examination of bias and generalization in deep generative models, and Andy notes striking similarities to previously reported experiments in cognitive psychology. Other research surveys data collection for machine learning, from the perspective of the data. In blog posts of the week, the Mad Scientist Initiative reveals the results from a recent competition, which suggests themes of the impacts of AI on the future battlefield; and Piekniewski follows up his May 2018 post “Is an AI Winter On Its Way?”, reviewing cracks appearing in the AI façade, with particular focus on the arena of self-driving vehicles. And Melanie Mitchell provides some insight about AI hitting the barrier of meaning. CSIS publishes a report on the importance of the AI ecosystem. And another paper takes insights from the social sciences to provide insight into AI.
Finally, MIT press has updated one of the major sources on Reinforcement Learning with a second edition; AI Superpowers examines the global push toward AI; the Eye of War examines how perceptual technologies have shaped the history of war; SparkCognition publishes HyperWar, a collection of essays from leaders in defense and emerging technology; Major Voke’s entire presentation on AI for C2 of Airpower is now available, and the Bionic Bug Podcast has an interview with CNA’s own Sam Bendett to talk AI and robotics.
ai with ai: Keep Talking and No Robot Explodes, Part II
/our-media/podcasts/ai-with-ai/season-1/1-48b
Dr. Larry Lewis, the Director of CNA’s Center for Autonomy and Artificial Intelligence, joins Andy and Dave to provide a summary of the recent United Nations Convention on Certain Conventional Weapons meeting in Geneva on Lethal Autonomous Weapon Systems. Larry discusses the different viewpoints of the attendees, and walks through the draft document that the group published on “Emerging Commonalities, Conclusions, and Recommendations.” The topics include: Possible Guiding Principles; characterization to promote a common understanding; human elements and human-machine interactions in LAWS; review of related technologies; possible options; and recommendations (SPOILER ALERT: the group recommends 10 days of discussion for 2019).
ai with ai: Keep Talking and No Robot Explodes, Part I
/our-media/podcasts/ai-with-ai/season-1/1-48
Dr. Larry Lewis, the Director of CNA’s Center for Autonomy and Artificial Intelligence, joins Andy and Dave to provide a summary of the recent United Nations Convention on Certain Conventional Weapons meeting in Geneva on Lethal Autonomous Weapon Systems. Larry discusses the different viewpoints of the attendees, and walks through the draft document that the group published on “Emerging Commonalities, Conclusions, and Recommendations.” The topics include: Possible Guiding Principles; characterization to promote a common understanding; human elements and human-machine interactions in LAWS; review of related technologies; possible options; and recommendations (SPOILER ALERT: the group recommends 10 days of discussion for 2019).
ai with ai: Debater of the AI-ncients, Part 1 (Dota)
/our-media/podcasts/ai-with-ai/season-1/1-37
In breaking news, Andy and Dave discuss a potentially groundbreaking paper on the scalable training of artificial neural nets with adaptive sparse connectivity; MIT researchers unveil the Navion chip, which measures only 20 square millimeters, consumes 24 milliwatts of power, can process real-time camera images at up to 171 frames per second, and can be integrated into drones the size of a fingernail; the Chair of the Armed Services Subcommittee on Emerging Threats and Capabilities convened a roundtable on AI with subject matter experts and industry leaders; the IEEE Standards Association and MIT Media Lab launched the Council on Extended Intelligence (CXI) to build a “new narrative” on autonomous technologies, including three pilot programs, one of which seeks to help individuals “reclaim their digital identity;” and the Foundation for Responsible Robotics, which wants to shape the responsible design and use of robotics, releases a report on Drones in the Service of Society. Then, Andy and Dave discuss IBM’s Project Debater, the follow-on to Watson that engaged in a live, public debate with humans on 18 June. IBM spent six years developing Project Debater’s capabilities, producing over 30 technical papers and benchmark datasets; Debater can debate nearly 100 topics. It uses three pioneering capabilities: data-driven speech writing and delivery, listening comprehension, and the ability to model human dilemmas. Next up, OpenAI announces OpenAI Five, a team of five AI algorithms trained to take on a human team in the tower defense game Dota 2; Andy and Dave discuss the reasons for the impressive achievement, including that the five AI networks do not communicate with each other, and that coordination and collaboration naturally emerge from their incentive structures. The system uses 256 Nvidia graphics cards and 128,000 processor cores; it has taken on (and beaten) a variety of human teams, and OpenAI plans to stream a match against a top Dota 2 team in late July.
Autonomous Systems
/centers-and-divisions/ipr/esm/aviation/aviation-areas/autonomous-systems
… transforms emerging technologies into mission-aligned solutions.
cna talks: Tomorrow's Technology in the Ukraine War
/our-media/podcasts/cna-talks/2024/01/tomorrows-technology-in-the-ukraine-war
The role of AI and autonomous systems in the war in Ukraine has attracted much attention in the media and from analysts tracking the use of new technologies in warfare. But what impact has it had on the battlefield? In this episode, Margarita Konaev and Samuel Bendett join the show to discuss how these technologies impact the situation on the ground, the private sector’s role in the conflict, and what this means for the future of warfare. 
Dr. Margarita Konaev is Deputy Director of Analysis and a Research Fellow at Georgetown’s Center for Security and Emerging Technology (CSET), with research interests in military applications of AI.