
Search Results

Your search for cna found 1474 results.

ai with ai: Bridge on the River NukkAI
/our-media/podcasts/ai-with-ai/season-5/5-12
Andy and Dave discuss the latest in AI news and research, including DoD’s 2023 budget for research, development, test, and evaluation at $130B, around 9.5% higher than the previous year [0:59]. DARPA announces the “In the Moment” (ITM) program, which aims to create rigorous and quantifiable algorithms for evaluating situations where objective ground truth is not available [2:58]. The European Parliament’s Special Committee on AI in a Digital Age (AIDA) adopts its final recommendations; though the report is still in draft, it recommends that the EU should not regulate AI as a technology but rather focus on risk [6:22]. Other EP committees debated the proposal for an “AI Act” on 21 March, with speakers including Tegmark, Russell, and many others [8:19]. The OECD AI Policy Observatory provides an interactive visual database of national AI policies, initiatives, and strategies [10:46]. In research, a brain implant allows a fully paralyzed patient to communicate solely by “thought,” using neurofeedback [11:51]. Researchers from Collaborations Pharmaceuticals and King’s College London discover that they could repurpose their AI drug-discovery system to instead generate 40,000 possible chemical weapons [14:26]. And NukkAI holds a bridge competition and claims its NooK AI “beats eight world champions,” though others take exception to the methods [18:16]. And Kevin Pollpeter, from CNA’s China Studies Program, joins to discuss the role (or lack thereof) of Chinese technology in the Ukraine-Russia conflict, and other topics [21:52].
ai with ai: A PIG GR_PH
/our-media/podcasts/ai-with-ai/season-5/5-11
Andy and Dave discuss the latest in AI news and research, including an announcement that Ukraine’s defense ministry has begun to use Clearview AI’s facial recognition technology and that Clearview AI has not offered the technology to Russia [1:10]. In similar news, WIRED provides an overview of a topic mentioned in the previous podcast – using open-source information and facial recognition technology to identify Russian soldiers [2:46]. The Department of Defense announces its classified Joint All-Domain Command and Control (JADC2) implementation plan, and also provides an unclassified strategy [3:24]. Stanford University Human-Centered AI (HAI) releases its 2022 AI Index Report, with over 200 pages of information and trends related to AI [5:03]. In research, DeepMind, Oxford, and Athens University present Ithaca, a deep neural network for restoring ancient Greek texts that also provides geographic and chronological attribution; they designed the system to work *with* ancient historians, and the combination achieves a lower error rate (18.3%) than either alone [10:24]. NIST continues refining its taxonomy for identifying and managing bias in AI, to include systemic bias, human bias, and statistical/computational bias [13:51]. Springer-Verlag makes Metalearning, by Pavel Brazdil, Jan N. van Rijn, Carlos Soares, and Joaquin Vanschoren, available for download; the book provides a comprehensive introduction to metalearning and automated machine learning [15:28]. And finally, CNA’s Dr. Anya Fink joins Andy and Dave for a discussion about the uses of disinformation in the Ukraine-Russia conflict [17:15].
ai with ai: Slightly Unconscionable
/our-media/podcasts/ai-with-ai/season-5/5-10
Andy and Dave discuss the latest in AI news and research, including a GAO report on AI – Status of Developing and Acquiring Capabilities for Weapon Systems [1:01]. The U.S. Army has awarded a contract for the demonstration of an offensive drone swarm capability (the HIVE small Unmanned Aircraft System), seemingly similar to but distinct from DARPA’s OFFSET demo [4:11]. A ‘pitch deck’ from Clearview AI reveals its intent to expand beyond law enforcement, aiming to have 100B facial photos in its database within a year [5:51]. Tortoise Media releases a global AI index that benchmarks nations based on their level of investment, innovation, and implementation of AI [7:57]. Research from UC Berkeley and Lancaster University shows that humans can no longer distinguish between real and fake (GAN-generated) faces [10:30]. MIT, Aberdeen, and the Centre for the Governance of AI look at trends of computation in machine learning, identifying three eras and trends, including a ‘large-scale model’ trend in which large corporations use massive training runs [13:37]. A tweet from the chief scientist at OpenAI, speculating on the ‘slightly conscious’ attribute of today’s large neural networks, sparks much discussion [17:23]. A white paper in the International Journal of Astrobiology examines what intelligence might look like at the planetary level, placing Earth as an immature technosphere [19:04]. And Kush Varshney at IBM publishes an open-access book on Trustworthy Machine Learning, examining issues of trust, safety, and much more [21:29]. Finally, CNA Russia Studies Program member Sam Bendett returns for a quick update on autonomy and AI in the Ukraine-Russia conflict [23:30].
ai with ai: Short Circuit RACER
/our-media/podcasts/ai-with-ai/season-5/5-9
Andy and Dave discuss the latest in AI news and research, starting with the Aircrew Labor In-Cockpit Automation System (ALIAS) program from DARPA, which flew a UH-60A Black Hawk autonomously and without pilots on board, to include autonomous (simulated) obstacle avoidance [1:05]. Another DARPA program, Robotic Autonomy in Complex Environments with Resiliency (RACER), entered its first phase, focused on high-speed autonomous driving in unstructured environments, such as off-road terrain [2:39]. The National Science Board releases its State of U.S. Science and Engineering 2022 report, which shows the U.S. continues to lose its leadership position in global science and engineering [4:30]. The Undersecretary of Defense for Research and Engineering, Heidi Shyu, formally releases the department’s technology priorities: 14 areas grouped into three categories – seed areas, effective adoption areas, and defense-specific areas [6:31]. In research, OpenAI creates InstructGPT in an attempt to align language models to follow human instructions better, resulting in a model with 100x fewer parameters than GPT-3 that produced user-favored output 70% of the time, though it still suffers from toxic output [9:37]. DeepMind releases AlphaCode, which has succeeded in programming competitions with an average ranking in the top 54% across 10 contests with more than 5,000 participants each, though it relies on more of a brute-force approach [14:42]. DeepMind and the EPFL’s Swiss Plasma Center also announce they have used reinforcement learning algorithms to control nuclear fusion (commanding the full set of control coils of a tokamak). Venture City publishes Timelapse of AI (2028 – 3000+), imagining how the next 1,000 years will play out for AI and the human race [18:25].
And finally, with the Russia-Ukraine conflict continuing to evolve, CNA’s Russia Program experts Sam Bendett and Jeff Edmonds return to discuss what Russia has in its inventory when it comes to autonomy and how they might use it in this conflict, wrapping up insights from their recent paper on Russian Military Autonomy in a Ukraine Conflict [22:52]. Listener Note: The interview with Sam Bendett and Jeff Edmonds was recorded on Tuesday, February 22 at 1 pm. At the time of recording, Russia had not yet launched a full-scale invasion of Ukraine.
ai with ai: K9mm
/our-media/podcasts/ai-with-ai/season-5/5-1
Welcome to Season 5.0 of AI with AI! Andy and Dave discuss the latest in AI news and research, including: The White House calls for an AI “bill of rights,” inviting public comments. In its 4th year, Nathan Benaich and Ian Hogarth publish their State of AI Report, 2021. [1:50] OpenAI uses reinforcement learning from human feedback and recursive task decomposition to improve algorithms’ abilities to summarize books. [3:14] IEEE Spectrum publishes a paper that examines the diminishing returns of deep learning, questioning the long-term viability of the technology. [5:12] In related news, Nvidia and Microsoft release a 530 billion-parameter language model, the Megatron-Turing Natural Language Generation model (MT-NLG). [6:54] DeepMind demonstrates the use of a GAN in improving high-resolution precipitation “nowcasting.” [10:05] Researchers from Waterloo, Guelph, and IIT Madras publish research on deep learning that can identify early warning signals of tipping points. [11:54] Military robot maker Ghost Robotics creates a robot dog with a rifle, the Special Purpose Unmanned Rifle, or SPUR. [14:25] And Dr. Larry Lewis joins Dave and Andy to discuss the latest report from CNA on Leveraging AI to Mitigate Civilian Harm, which describes the causes of civilian harm in military operations, identifies how AI could protect civilians from harm, and suggests ways to lessen the infliction of suffering, injury, and destruction overall. [16:36]
ai with ai: No Time to AI
/our-media/podcasts/ai-with-ai/season-4/4-33
Andy and Dave discuss the latest in AI news, starting with the US Consumer Products Safety Commission report on AI and ML. The Deputy Secretary of Defense outlines Responsible AI Tenets, along with mandating the JAIC to start work on four activities for developing a responsible AI ecosystem. The Director of the US Chamber of Commerce’s Center for Global Regulatory Cooperation outlines concerns with the European Commission’s newly drafted rules on regulating AI. Amnesty International crowd-sources an effort to identify surveillance cameras that the New York City Police Department has in use, resulting in a map of over 15,000 camera locations. The Royal Navy uses AI for the first time at sea against live supersonic missiles. And the Ghost Fleet Overlord unmanned surface vessel program completes its second autonomous transit from the Gulf Coast, through the Panama Canal, and to the West Coast. Finally, CNA Russia Program team members Sam Bendett and Jeff Edmonds join Andy and Dave for a discussion on their latest report, which takes a comprehensive look at the ecosystem of AI in Russia, including its policies, resourcing, infrastructure, and activities.
ai with ai: Someday My ‘Nets Will Code
/our-media/podcasts/ai-with-ai/season-4/4-32
Andy and Dave discuss the latest in AI news, including a report on Libya from the UN Security Council’s Panel of Experts, which notes the March 2020 use of the “fully autonomous” Kargu-2 to engage retreating forces; it’s unclear whether any person died in the incident, and many other important details are missing. The Biden Administration releases its FY22 DoD Budget, which increases the RDT&E request, including $874M in AI research. NIST proposes an evaluation model for user trust in AI and seeks feedback; the model includes definitions for terms such as reliability and explainability. EleutherAI has provided an open-source version of GPT-3, called GPT-Neo, which uses an 825GB data “Pile” to train, and comes in 1.3B and 2.7B parameter versions. CSET takes a hands-on look at how transformer models such as GPT-3 can aid disinformation, with their findings published in Truth, Lies, and Automation: How Language Models Could Change Disinformation. IBM introduces a project aimed at teaching AI to code, with CodeNet, a large dataset containing 500 million lines of code across 55 legacy and active programming languages. In a separate effort, researchers at Berkeley, Chicago, and Cornell publish results on using transformer models as “code generators,” creating a benchmark (the Automated Programming Progress Standard) to measure progress; they find that GPT-Neo could pass approximately 15% of introductory problems, with GPT-3’s 175B parameter model performing much worse (presumably due to the inability to fine-tune the larger model). The CNA Russia Studies Program releases an extensive report on AI and Autonomy in Russia, capping off their biweekly newsletters on the topic. Arthur Holland Michel publishes Known Unknowns: Data Issues and Military Autonomous Systems, which identifies known data issues that cause problems in autonomous systems.
The short story of the week comes from Asimov in 1956, with “Someday.” And the Naval Institute Press publishes a collection of essays in AI at War: How Big Data, AI, and Machine Learning Are Changing Naval Warfare. Finally, Diana Gehlhaus from Georgetown’s Center for Security and Emerging Technology (CSET) joins Andy and Dave to preview an upcoming event, “Requirements for Leveraging AI.” The interview with Diana Gehlhaus begins at 33:32.
ai with ai: the social bot network
/our-media/podcasts/ai-with-ai/season-4/4-1
Andy and Dave kick off Season 4.0 of AI with AI with a discussion on social media bots. CNA colleagues Meg McBride and Kasey Stricklin join to discuss the results of their recent research efforts, in which they explored the national security implications of social media bots. They describe the types of activities that social media bots engage in (distributing, amplifying, distorting, hijacking, flooding, and fracturing), how these activities might evolve in the near future, the legal frameworks (or lack thereof), and the implications for US special operations forces and the broader national security community.
ai with ai: Newton & the 3-Body Problem
/our-media/podcasts/ai-with-ai/season-3/3-2
Andy and Dave discuss the AI-related supplemental report to the President’s Budget Request. The California governor signs a bill banning facial recognition use by the state’s law enforcement agencies. The 2019 Association of the US Army meeting focuses on AI. A DoD panel discussion explores the Promise and Risk of the AI Revolution. And the 3rd Annual DoD AI Industry Day will be 13 November in Silver Spring, MD. Researchers at the University of Edinburgh, the University of Cambridge, and Leiden University announce using a deep neural network to solve the chaotic 3-body problem, providing accurate solutions up to 100 million times faster than a state-of-the-art solver. Research from MIT uses a convolutional neural network to recover or recreate probable ensembles of dimensionally collapsed information (such as a video collapsed to one single image). Kate Crawford and Meredith Whittaker take a look at 2019 and the Growing Pushback Against Harmful AI. Air University Press releases AI, China, Russia, and the Global Order, edited by Nicholas Wright, with contributions from numerous authors, including Elsa Kania and Sam Bendett. Michael Stumborg from CNA pens a response to the National Security Commission’s request for ideas, on AI’s Long Data Tail. Deisenroth, Faisal, and Ong make their Mathematics for Machine Learning available. Melanie Mitchell pens AI: A Guide for Thinking Humans. An article in the New Yorker by John Seabrook examines the role of AI/ML in writing, with The Next Word. And the Allen Institute for AI updates its Semantic Scholar, which now indexes more than 175 million scientific papers across even more fields of research.
ai with ai: Salvere Rex or Salve Getafix?
/our-media/podcasts/ai-with-ai/season-2/2-27
Professor Jennifer McArdle, Assistant Professor of Cyber Defense at Salve Regina University, joins Andy and Dave for a discussion on AI and machine learning. Jenny is leading a group of graduate students who are working on creating a strategic-level primer on AI, particularly aimed at those who may be less familiar with the technical aspects, as well as a War on the Rocks article on AI in training and synthetic environments. Her students are studying in a variety of areas, including cyber defense and digital forensics, cyber and synthetic training, cyber intelligence, healthcare and healthcare administration, and administrative justice. Graduate students Mackenzie Mandile and Saurav Chatterjee also join for a discussion on their research topics. In the photo (from left to right): Maria Hendrickson, Gabrielle Cusano, Abigail Verille, Erin Rorke, (John Cleese), Saurav Chatterjee, Allegra Graziano, Santiago Durango, Eric Baucke, Mackenzie Mandile, Dave Broyles, Jennifer McArdle, Andy Ilachinski, John Crooks, (Getafix), and Lt. Col. David Lyle.
Photos from Salve Regina’s visit to CNA