
Search Results

Your search found 2049 results.

ai with ai: Leggo my Stego!
/our-media/podcasts/ai-with-ai/season-5/5-14
Andy and Dave discuss the latest in AI news and research, including a report from the Government Accountability Office recommending that the Department of Defense improve its AI strategies and other AI-related guidance [1:25]. Another GAO report finds that the Navy should improve its approach to uncrewed maritime systems, particularly its lack of accounting for the full costs to develop and operate such systems; the report also recommends that the Navy establish an “entity” with oversight of the portfolio [4:01]. The Army is set to launch a swarm of 30 small drones during the 2022 Experimental Demonstration Gateway Exercise (EDGE 22), which will be the largest group of air-launched effects the Army has tested [5:55]. DoD announces its new Chief Digital and AI Officer, Dr. Craig Martell, former head of machine learning at Lyft and formerly of the Naval Postgraduate School [7:47]. And the National Geospatial-Intelligence Agency (NGA) takes over operational control of Project Maven’s GEOINT AI services [9:55]. Researchers from Princeton and the University of Chicago create a deep learning model of “superficial face judgments,” that is, how humans form impressions of what people are like based on their faces; the researchers note that their dataset deliberately reflects bias [12:05]. Researchers from MIT, Cornell, Google, and Microsoft present STEGO (self-supervised transformer with energy-based graph optimization), a new method for completely unsupervised label assignment to images, allowing the algorithm to find consistent groupings of labels in a largely automated fashion [18:35]. And elicit.org provides a “research discovery” tool that leverages GPT-3 to provide insights and ideas on research topics [24:24].
ai with ai: The Amulet of NeRFdor
/our-media/podcasts/ai-with-ai/season-5/5-13
Andy and Dave discuss the latest in AI news and research, including a proposal from the Ada Lovelace Institute with 18 recommendations to strengthen the EU AI Act [0:57]. NVIDIA updates its Neural Radiance Fields to Instant NeRF, which can reconstruct a 3D scene from 2D images nearly 1,000 times faster than other implementations [2:53]. Nearly 100 Chinese-affiliated researchers publish a 200-page position paper, a “roadmap,” on large-scale models [4:13]. In research, Google AI introduces PaLM (Pathways Language Model), at 540B parameters, which demonstrates the ability to perform logical inference and explain jokes [7:09]. OpenAI announces DALL-E 2, the successor to its previous image-from-text generator, which is no longer confused by a mislabeled item; interestingly, it demonstrates greater resolution and diversity than OpenAI’s similar GLIDE technology but is not rated as highly by humans, and DALL-E 2 still has challenges with ‘binding attributes’ [11:32]. A white paper from Gary Marcus, ‘Deep Learning Is Hitting a Wall: What would it take for AI to make real progress?’, includes an examination of a symbol-manipulation system that beat the best deep learning systems at playing the ASCII game NetHack [16:10]. And Professor Chad Jenkins from the University of Michigan returns to discuss the latest developments there, including a new Department of Robotics and a new robotics undergraduate degree [19:10].
ai with ai: Bridge on the River NukkAI
/our-media/podcasts/ai-with-ai/season-5/5-12
Andy and Dave discuss the latest in AI news and research, including DoD’s 2023 budget for research, development, testing, and engineering at $130B, around 9.5% higher than the previous year [0:59]. DARPA announces the “In the Moment” (ITM) program, which aims to create rigorous and quantifiable algorithms for evaluating situations where objective ground truth is not available [2:58]. The European Parliament’s Special Committee on AI in a Digital Age (AIDA) adopts its final recommendations; the report, still in draft, includes a recommendation that the EU should not regulate AI as a technology but rather focus on risk [6:22]. Other EP committees debated the proposal for an “AI Act” on 21 March, with speakers including Tegmark, Russell, and many others [8:19]. The OECD AI Policy Observatory provides an interactive visual database of national AI policies, initiatives, and strategies [10:46]. In research, a brain implant allows a fully paralyzed patient to communicate solely by “thought,” using neurofeedback [11:51]. Researchers from Collaborations Pharmaceuticals and King’s College London discover that they could repurpose their AI drug-discovery system to instead generate 40,000 possible chemical weapons [14:26]. NukkAI holds a bridge competition and claims its NooK AI “beats eight world champions,” though others take exception to the methods [18:16]. And Kevin Pollpeter, from CNA’s China Studies Program, joins to discuss the role (or absence) of Chinese technology in the Ukraine-Russia conflict, among other topics [21:52].
ai with ai: A PIG GR_PH
/our-media/podcasts/ai-with-ai/season-5/5-11
Andy and Dave discuss the latest in AI news and research, including an announcement that Ukraine’s defense ministry has begun to use Clearview AI’s facial recognition technology and that Clearview AI has not offered the technology to Russia [1:10]. In similar news, WIRED provides an overview of a topic mentioned in the previous podcast – using open-source information and facial recognition technology to identify Russian soldiers [2:46]. The Department of Defense announces its classified Joint All-Domain Command and Control (JADC2) implementation plan, and also provides an unclassified strategy [3:24]. Stanford University Human-Centered AI (HAI) releases its 2022 AI Index Report, with over 200 pages of information and trends related to AI [5:03]. In research, DeepMind, Oxford, and Athens University present Ithaca, a deep neural network for restoring ancient Greek texts that also provides geographic and chronological attribution; they designed the system to work *with* ancient historians, and the combination achieves a lower error rate (18.3%) than either alone [10:24]. NIST continues refining its taxonomy for identifying and managing bias in AI, to include systemic bias, human bias, and statistical/computational bias [13:51]. Springer-Verlag makes Metalearning, by Pavel Brazdil, Jan N. van Rijn, Carlos Soares, and Joaquin Vanschoren, available for download; the book provides a comprehensive introduction to metalearning and automated machine learning [15:28]. And finally, CNA’s Dr. Anya Fink joins Andy and Dave for a discussion about the uses of disinformation in the Ukraine-Russia conflict [17:15].
ai with ai: Slightly Unconscionable
/our-media/podcasts/ai-with-ai/season-5/5-10
Andy and Dave discuss the latest in AI news and research, including a GAO report on AI – Status of Developing and Acquiring Capabilities for Weapon Systems [1:01]. The U.S. Army has awarded a contract for the demonstration of an offensive drone swarm capability (the HIVE small Unmanned Aircraft System), seemingly similar to but distinct from DARPA’s OFFSET demo [4:11]. A ‘pitch deck’ from Clearview AI reveals its intent to expand beyond law enforcement, aiming to have 100B facial photos in its database within a year [5:51]. Tortoise Media releases a global AI index that benchmarks nations based on their level of investment, innovation, and implementation of AI [7:57]. Research from UC Berkeley and Lancaster University shows that humans can no longer distinguish between real faces and fake ones generated by GANs [10:30]. MIT, Aberdeen, and the Centre for the Governance of AI look at trends in computation in machine learning, identifying three eras, including a ‘large-scale model’ trend in which large corporations use massive training runs [13:37]. A tweet from the chief scientist at OpenAI, speculating that today’s large neural networks may be ‘slightly conscious,’ sparks much discussion [17:23]. A white paper in the International Journal of Astrobiology examines what intelligence might look like at the planetary level, characterizing Earth as an immature technosphere [19:04]. And Kush Varshney at IBM publishes, for open access, a book on Trustworthy Machine Learning, examining issues of trust, safety, and much more [21:29]. Finally, CNA Russia Studies Program member Sam Bendett returns for a quick update on autonomy and AI in the Ukraine-Russia conflict [23:30].
ai with ai: Short Circuit RACER
/our-media/podcasts/ai-with-ai/season-5/5-9
Andy and Dave discuss the latest in AI news and research, starting with the Aircrew Labor In-Cockpit Automation System (ALIAS) program from DARPA, which flew a UH-60A Black Hawk autonomously and without pilots on board, to include autonomous (simulated) obstacle avoidance [1:05]. Another DARPA program, Robotic Autonomy in Complex Environments with Resiliency (RACER), entered its first phase, focused on high-speed autonomous driving in unstructured environments, such as off-road terrain [2:39]. The National Science Board releases its State of U.S. Science and Engineering 2022 report, which shows the U.S. continues to lose its leadership position in global science and engineering [4:30]. The Undersecretary of Defense for Research and Engineering, Heidi Shyu, formally releases her office’s technology priorities: 14 areas grouped into three categories of seed areas, effective adoption areas, and defense-specific areas [6:31]. In research, OpenAI creates InstructGPT in an attempt to align language models to follow human instructions better, resulting in a model that, with 100x fewer parameters than GPT-3, produced user-favored output 70% of the time, though it still suffers from toxic output [9:37]. DeepMind releases AlphaCode, which has succeeded in programming competitions, achieving an average ranking in the top 54% across 10 contests with more than 5,000 participants each, though it takes more of a brute-force approach [14:42]. DeepMind and the EPFL’s Swiss Plasma Center also announce they have used reinforcement learning algorithms to control nuclear fusion (commanding the full set of control coils of a tokamak magnetic controller). Venture City publishes Timelapse of AI (2028 – 3000+), imagining how the next 1,000 years will play out for AI and the human race [18:25].
And finally, with the Russia-Ukraine conflict continuing to evolve, CNA’s Russia Program experts Sam Bendett and Jeff Edmonds return to discuss what Russia has in its inventory when it comes to autonomy and how they might use it in this conflict, wrapping up insights from their recent paper on Russian Military Autonomy in a Ukraine Conflict [22:52]. Listener Note: The interview with Sam Bendett and Jeff Edmonds was recorded on Tuesday, February 22 at 1 pm. At the time of recording, Russia had not yet launched a full-scale invasion of Ukraine.
ai with ai: Xenopus in Boots
/our-media/podcasts/ai-with-ai/season-5/5-8
Andy and Dave discuss the latest in AI news and research, including a report from the School of Public Health in Boston that shows why most “data for good” initiatives failed to impact the COVID-19 health crisis [0:45]. The Department of Homeland Security tests the use of robot dogs (from Ghost Robotics) for border patrol duties [5:00]. Researchers find that public trust in AI varies greatly depending on its application [7:52]. Researchers from Stanford University and Toyota Research Institute find extensive label and model errors in training data, such as over 70% of validation scenes (for publicly available autonomous vehicle datasets) containing at least one missing object box [12:05]. And principal researchers Josh Bongard and Mike Levin join Andy and Dave for more discussion on the latest Xenobots research [18:21].
ai with ai: Xenadu
/our-media/podcasts/ai-with-ai/season-5/5-7
Andy and Dave discuss the latest in AI news and research, including an update from the DARPA OFFSET (OFFensive Swarm-Enabled Tactics) program, which demonstrated the use of swarms in a field exercise, to include one event that used 130 physical drone platforms along with 30 simulated ones [0:33]. DARPA’s GARD (Guaranteeing AI Robustness against Deception) program has released a toolkit to help AI developers test their models against attacks. Undersecretary of Defense for Research and Engineering Heidi Shyu announces DoD’s technical priorities, including AI and autonomy, hypersonics, quantum, and others; Shyu expressed a focus on easy-to-use human/machine interfaces [3:35]. The White House AI Initiative Office opens an AI Public Researchers Portal to help connect AI researchers with various federal resources and grant-funding programs [8:44]. A Tesla driver faces felony charges (likely a first) for a fatal crash in which Autopilot was in use, though the criminal charges do not mention the technology [12:23]. In research, MIT’s CSAIL publishes (worrisome) research showing that convolutional neural networks can achieve high accuracy even in the absence of “semantically salient features” (such as with most of the image grayed out); the research also contains a useful list of known image-classifier model flaws [18:29]. David Ha and Yujin Tang, at Google Brain in Tokyo, publish a white paper surveying recent developments in Collective Intelligence for Deep Learning [19:46]. Roman Garnett makes available a graduate-level book on Bayesian Optimization. And Doug Blackiston returns to chat about the latest discoveries in the Xenobots research and kinematic self-replication [21:54].
ai with ai: Three Amecas!
/our-media/podcasts/ai-with-ai/season-5/5-6
Andy and Dave discuss the latest in AI news and research, including the signing of the 2022 National Defense Authorization Act, which contains a number of provisions related to AI and emerging technology [0:57]. The Federal Trade Commission wants to tackle data privacy concerns and algorithmic discrimination and is considering a wide range of options to do so, including new rules and guidelines [4:50]. The European Commission proposes a set of measures to regulate digital labor platforms in the EU. Engineered Arts unveils Ameca, a gray-faced humanoid robot with “natural-looking” expressions and body movements [7:07]. And DARPA launches its AMIGOS project, aimed at automatically converting training manuals and videos into augmented reality environments [13:16]. In research, scientists at Bar-Ilan University in Israel upend conventional wisdom on neural responses by demonstrating that the duration of the resting time (post-excitation) can exceed 20 milliseconds, that the resting period is sensitive to the origin of the input signal (e.g., left versus right), and that the neuron has a sharp transition from the refractory period to full responsiveness without an intermediate stutter phase [15:30]. Researchers at Victoria University use brain cells to play Pong via electric signals and demonstrate that the cells learn much faster than current neural networks, reaching after 10 or 15 rallies the point that computer-based AIs reach only after 5,000 rallies [19:37]. MIT researchers present evidence that ML is starting to look like human cognition, comparing various aspects of how neural networks and human brains accomplish their tasks [24:34]. And OpenAI creates GLIDE, a text-to-image generation model.
ai with ai: Is it Alive or is it Xeno-rex?
/our-media/podcasts/ai-with-ai/season-5/5-5
Andy and Dave discuss the latest in AI news and research, starting with the US Department of Defense creating the new position of Chief Digital and AI Officer, subsuming the Joint AI Center, the Defense Digital Service, and the office of the Chief Data Officer [0:32]. Member states of UNESCO adopt the first-ever global agreement on the ethics of AI, which includes recommendations on protecting data, banning social scoring and mass surveillance, monitoring and evaluation, and protecting the environment [3:26]. The European Digital Rights group and 119 civil society organizations launch a collective call for an AI Act that articulates fundamental rights (for humans) regarding AI technology and research [6:02]. The Future of Life Institute releases Slaughterbots 2.0: “if human: kill()” ahead of the 3rd session in Geneva of the Group of Governmental Experts discussing lethal autonomous weapons systems [7:15]. In research, Xenobots 3.0, the living robots made from frog cells, demonstrate the ability to replicate themselves kinematically, at least for a couple of generations (extended to four generations by using an evolutionary algorithm to model ideal structures for replication) [12:23]. And researchers from DeepMind, Oxford, and Sydney demonstrate the ability to collaborate with machine learning algorithms to discover new results in mathematics (in knot theory and representation theory), though another researcher questions the broader utility of the claims [17:57]. And finally, Dr. Mike Stumborg joins Dave and Andy to discuss research in Human-Machine Teaming, why it’s important, and where the research is going [21:44].