
Search Results

Your search for Emerging Technologies found 93 results.

ai with ai: Countering AI Classifiers, and Introducing Doubt to AI
/our-media/podcasts/ai-with-ai/season-1/1-13
Andy and Dave discuss a newly announced method of attack on the speech-to-text capability DeepSpeech, which introduces noise to an audio waveform so that the AI does not hear the original message, but instead hears a message that the attacker intends. They also discuss the introduction of probabilistic models to AI as a way for AI to "embrace uncertainty" and make better decisions (or perhaps doubt whether or not humans should remain alive). And finally, Andy and Dave discuss some recent applications of AI to different areas of scientific study, particularly in the examination of very large data sets.
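The attack described here optimizes a perturbation so a transcriber outputs the attacker's message instead of the original. As a minimal sketch of that idea (not the actual DeepSpeech attack, which backpropagates through the network itself), the toy below uses a hypothetical linear stand-in "transcriber" and gradient descent on a loss that trades off flipping the output against keeping the added noise small; all names and constants are illustrative.

```python
import numpy as np

# Toy illustration of an adversarial audio perturbation. A linear
# "transcriber" scores a waveform against two message templates; we
# optimize a perturbation delta so the score flips to the attacker's
# target message while penalizing the size of delta.

rng = np.random.default_rng(0)
w_original = rng.standard_normal(100)            # template for the true message
w_target = rng.standard_normal(100)              # template for attacker's message
x = w_original + 0.1 * rng.standard_normal(100)  # the "recorded" waveform

def transcribe(audio):
    """Return whichever message scores higher (dot-product similarity)."""
    return "original" if audio @ w_original > audio @ w_target else "target"

# Gradient descent on loss = (score_orig - score_target) + c * ||delta||^2.
# The score terms are linear in delta, so the gradient has a simple form.
delta, c, lr = np.zeros(100), 0.1, 0.05
for _ in range(200):
    grad = (w_original - w_target) + 2 * c * delta
    delta -= lr * grad

adversarial = x + delta  # transcribes as the attacker's target message
```

The real attack plays the same trick at scale: the perturbation is optimized through the speech model's own loss so the noise is nearly inaudible yet changes the transcription.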
Video: Paul Scharre’s testimony before the House Armed Services Subcommittee on Emerging Threats and Capabilities (9 Jan 2018): China’s Pursuit of Emerging and Exponential Technologies.
Dr. Adam Monsalve on Keeping Drone Traffic Communications Secure
/our-media/indepth/2024/10/meet-the-innovator-adam-monsalve-on-drone-traffic-security
CNA systems engineer Adam Monsalve innovates to ensure that government can operate and regulate uncrewed aircraft systems safely and securely.
projects that explore new tools and approaches for addressing emerging national safety and security challenges. These projects are showcased in the CNA Innovation Incubator (CNAi²). From analyzing ... had to be low cost, something a public safety officer could actually implement, because they've got limited budgets for these types of technologies. We developed the concept for a little widget
Crime Analysts: Using Data to Make Communities Safer
/our-media/indepth/2023/09/crime-analysts-as-profession
Crime analysts identify patterns, trends, and connections in vast datasets to help law enforcement. The crime analysis profession expands along with police data.
in enabling law enforcement to respond to emerging threats more efficiently. According to the Bureau of Justice Assistance in the Department of Justice, “Modern crime analysts utilize complex computer ... in the past 20 years. As technologies grow more sophisticated and information sharing increases rapidly, the role of crime analysis continues to grow with them. The federal government is also helping crime
Five Key Challenges in US-Russian Relations
/our-media/indepth/2021/06/five-key-challenges-in-us-russian-relations
Both US President Joe Biden and Russian President Vladimir Putin aim to halt the downward spiral of the bilateral relationship when they meet on June 16 in Geneva, Switzerland.
stability concerns include emerging military technologies: artificial intelligence, hypersonic weapons, and space capabilities that target satellite-based nuclear command, control and communications
Exploring a Cyber Nirvana
/our-media/indepth/2019/07/exploring-a-cyber-nirvana
Throughout 2018, CNA collaborated with University of California Berkeley’s Center for Long-Term Cybersecurity and the World Economic Forum to conduct a series of workshops around the world.
... approach to developing and emerging technologies will manifest themselves more broadly than the topic alone might suggest. And, if so, shouldn’t we be thinking about how this might play out? How might we
ai with ai: The Ode to Decoy
/our-media/podcasts/ai-with-ai/season-5/5-2
Andy and Dave discuss the latest in AI news and research, including: NATO releases its first AI strategy, which included the announcement of a one billion euro “NATO innovation fund.” [0:52] Military research labs in the US and UK collaborate on autonomy and AI in a combined demonstration, integrating algorithms and automated workflows into military operations. [2:58] A report from CSET and MITRE identifies that the Department of Defense already has a number of AI and related experts, but that the current system hides this talent. [6:45] The National AI Research Resource Task Force partners with Stanford’s Human-Centered AI and the Stanford Law School to publish Building a National AI Research Resource: A Blueprint for the National Research Cloud. [6:45] And in a trio of “AI fails,” a traffic camera in the UK mistakes a woman for a car and issues a fine to the vehicle’s owner; [9:10] the Allen Institute for AI introduces Delphi as a step toward developing AI systems that behave ethically (though it sometimes thinks that it’s OK to murder everybody if it creates jobs); [10:07] and a WSJ report reveals that Facebook’s automated moderation tools were falling far short on accurate identification of hate speech and videos of violence and incitement. [12:22] Ahmed Elgammal from Rutgers teams up with Playform to compose two movements for Beethoven’s Tenth Symphony, for which the composer left only sketches before he died. And finally, Andy and Dave welcome Dr. Heather Wolters and Dr. Megan McBride to discuss their latest research on the Psychology of (Dis)Information, with a pair of publications, one providing a primer on key psychological mechanisms, and another examining case studies and their implications.
Emerging Military Technologies: Background and Issues for Congress (report) · The DOD’s Hidden Artificial Intelligence Workforce · Building a National AI Research Resource
ai with ai: Horrorscope
/our-media/podcasts/ai-with-ai/season-4/4-43
Andy and Dave discuss the latest in AI news and research, including: 0:57: The Allen Institute for AI and others come together to create a publicly available “COVID-19 Challenges and Directions” search engine, building off of the corpus of COVID-related research. 5:06: Researchers with the University of Warwick perform a systematic review of test accuracy for the use of AI in image analysis of breast cancer screening and find most (34 of 36) AI systems were less accurate than a single radiologist, and all were less accurate than a consensus of two or more radiologists (among other concerning findings). 10:19: A US judge rejects an appeal for the AI system DABUS to own a patent, noting that US federal law requires an “individual” to be an owner, and the legal definition of an “individual” is a natural person. 17:01: The US Patent and Trademark Office uses machine learning to analyze the history of AI in patents. 19:42: BCS publishes Priorities for the National AI Strategy, as the UK seeks to set global AI standards. 20:42: In research, MIT, Northeastern, and U Penn explore the challenges of discerning emotion from a person’s facial movements (which largely relates to context), and highlight the reasons why facial recognition algorithms will struggle with this task. 28:02: Google AI uses diffusion models to generate high-fidelity images; the approach slowly adds noise to corrupt the training data and then uses a neural network to reverse that corruption. 35:07: Springer-Verlag makes AI for a Better Future, by Bernd Carsten Stahl, available for open access. 36:19: Thomas Smith, the co-founder of Gado Images, chats with GPT-3 about the COVID-19 pandemic and finds that it provides some interesting responses to his questions.
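The diffusion-model item lends itself to a small sketch of the mechanism described: corrupt data by repeatedly adding Gaussian noise, then generate samples by repeatedly reversing that corruption. In the toy below the data distribution is a 1-D Gaussian, so the optimal denoiser has a closed form and stands in for the trained neural network; the constants and function names are illustrative, not from the paper.

```python
import numpy as np

# Toy diffusion sketch: a forward process gradually noises data; sampling
# runs the corruption in reverse, starting from pure noise. Real diffusion
# models learn the denoiser with a neural network; here the data is
# N(mu, sigma^2), so the exact posterior mean plays the network's role.

rng = np.random.default_rng(1)
mu, sigma = 3.0, 0.5                   # the "data distribution" N(mu, sigma^2)
T, beta = 50, 0.05                     # diffusion steps and per-step noise rate
abar = (1.0 - beta) ** np.arange(1, T + 1)  # cumulative signal retention

def forward(x0, t):
    """Corrupt clean data to step t: x_t = sqrt(abar_t)*x0 + sqrt(1-abar_t)*eps."""
    return np.sqrt(abar[t]) * x0 + np.sqrt(1 - abar[t]) * rng.standard_normal(x0.shape)

def denoise_mean(xt, t):
    """Posterior mean E[x0 | x_t] for Gaussian data -- the 'oracle network'."""
    s, n = abar[t] * sigma**2, 1 - abar[t]
    return (s * xt / np.sqrt(abar[t]) + n * mu) / (s + n)

# Sampling: start from noise, alternately estimate x0 and re-noise to the
# previous step, so structure gradually replaces noise.
x = rng.standard_normal(10_000)        # x_T ~ N(0, 1)
for t in range(T - 1, -1, -1):
    x0_hat = denoise_mean(x, t)
    x = x0_hat if t == 0 else forward(x0_hat, t - 1)

# x.mean() approaches mu; x.std() lands a bit under sigma because this
# crude sampler returns the posterior mean at the final step.
```

This is the "slowly add noise, then learn to reverse it" loop in miniature; the image-generation version does the same over pixel arrays with a large denoising network.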
... on the Ethics of AI and Emerging Digital Technologies (book) · Interesting Link of the Week: I Asked GPT-3 About Covid-19: Its Responses Shocked Me
ai with ai: It Can Only Be Attributable to Human Error
/our-media/podcasts/ai-with-ai/season-2/2-7
In the latest news, Andy and Dave discuss OpenAI releasing “Spinning Up in Deep RL,” an online educational resource; Google AI and the New York Times team up to digitize over 5 million photos and find “untold stories;” China is recruiting its brightest children to develop AI “killer bots;” China unveils the world’s first AI news anchor; and Douglas Rain, the voice of HAL 9000, has died at age 90. In research topics, Andy and Dave discuss research from MIT’s Tegmark and Wu that attempts to improve unsupervised machine learning by using a framework that more closely mirrors scientific thought and process. Albrecht and Stone examine the issue of autonomous agents modeling other agents, which leads to an interesting list of open problems for future research. Research from Stanford makes an empirical examination of bias and generalization in deep generative models, and Andy notes striking similarities to previously reported experiments in cognitive psychology. Other research surveys data collection for machine learning, from the perspective of the data. In blog posts of the week, the Mad Scientist Initiative reveals the results from a recent competition, which suggest themes of the impacts of AI on the future battlefield; and Piekniewski follows up his May 2018 post “Is an AI Winter On Its Way?”, reviewing cracks appearing in the AI façade, with particular focus on self-driving vehicles. And Melanie Mitchell provides some insight about AI hitting the barrier of meaning. CSIS publishes a report on the Importance of the AI Ecosystem. And another paper takes insights from the social sciences to provide insight into AI.
Finally, MIT Press has updated one of the major sources on Reinforcement Learning with a second edition; AI Superpowers examines the global push toward AI; The Eye of War examines how perceptual technologies have shaped the history of war; SparkCognition publishes HyperWar, a collection of essays from leaders in defense and emerging technology; Major Voke’s entire presentation on AI for C2 of Airpower is now available; and the Bionic Bug Podcast has an interview with CNA’s own Sam Bendett to talk AI and robotics.
ai with ai: Keep Talking and No Robot Explodes, Part II
/our-media/podcasts/ai-with-ai/season-1/1-48b
Dr. Larry Lewis, the Director of CNA’s Center for Autonomy and Artificial Intelligence, joins Andy and Dave to provide a summary of the recent United Nations Convention on Certain Conventional Weapons meeting in Geneva on Lethal Autonomous Weapon Systems. Larry discusses the different viewpoints of the attendees, and walks through the draft document that the group published on “Emerging Commonalities, Conclusions, and Recommendations.” The topics include Possible Guiding Principles; characterization to promote a common understanding; human elements and human-machine interactions in LAWS; review of related technologies; possible options; and recommendations (SPOILER ALERT: the group recommends 10 days of discussion for 2019).
TOPICS: GGE on LAWs - Emerging Commonalities, Conclusions and Recommendations · UK position paper · US position paper · Main GGE/LAWs 2018 homepage · Center for Autonomy and Artificial Intelligence
ai with ai: Keep Talking and No Robot Explodes, Part I
/our-media/podcasts/ai-with-ai/season-1/1-48
Dr. Larry Lewis, the Director of CNA’s Center for Autonomy and Artificial Intelligence, joins Andy and Dave to provide a summary of the recent United Nations Convention on Certain Conventional Weapons meeting in Geneva on Lethal Autonomous Weapon Systems. Larry discusses the different viewpoints of the attendees, and walks through the draft document that the group published on “Emerging Commonalities, Conclusions, and Recommendations.” The topics include: Possible Guiding Principles; characterization to promote a common understanding; human elements and human-machine interactions in LAWS; review of related technologies; possible options; and recommendations (SPOILER ALERT: the group recommends 10 days of discussion for 2019).
TOPICS: GGE on LAWs - Emerging Commonalities, Conclusions and Recommendations · UK position paper · US position paper · Main GGE/LAWs 2018 homepage · Center for Autonomy and Artificial Intelligence