
Search Results

Your search for 7 found 135 results.

cna talks: Hurricane Harvey Recovery
/our-media/podcasts/cna-talks/2017/hurricane-harvey-recovery
As initial search and rescue operations in Houston shift to long-term recovery efforts, three CNA experts discuss the challenges Houston will face, based on their experiences with large-scale disaster recovery efforts. Monica Giovachino moderates as Jason McNamara and Dawn Thomas share perspectives gathered from years of working at FEMA and studying disaster response, covering major concerns from housing and mass care to important questions for federal and local agencies as one of America's largest cities begins to recover and rebuild.
cna talks: Emergency Management and Preparedness
/our-media/podcasts/cna-talks/2017/emergency-management-and-preparedness
Tim Beres, David Kaufman, Monica Giovachino, and Jason McNamara discuss CNA Safety and Security's work on homeland security, domestic preparedness, and emergency management over the last two decades. The four experts discuss the Oklahoma City bombing, the Tokyo subway sarin gas incident, the September 11, 2001 attacks, and the impact of these events on homeland security preparedness, as well as the planning and economic implications of natural disasters.
ai with ai: Up, Up, and Autonomy!
/our-media/podcasts/ai-with-ai/season-6/6-7
Andy and Dave discuss the latest in AI news and research, including the update of the Department of Defense Directive 3000.09 on Autonomy in Weapon Systems. NIST releases the first version of its AI Risk Management Framework. The National AI Research Resource (NAIRR) Task Force publishes its final report, in which it details its plans for a national research infrastructure, as well as its request for $2.6 billion over 6 years to fund the initiatives. DARPA announces the Autonomous Multi-domain Adaptive Swarms-of-Swarms (AMASS) program, a much larger effort (aiming for thousands of autonomous entities) than its previous OFFSET program. And finally, from the Naval Postgraduate School’s Energy Academic Group, Kristen Fletcher and Marina Lesse join to discuss their research and efforts in autonomous systems and maritime law and policy, including a discussion about the DoDD 3000.09 update and the high-altitude balloon incident.
ai with ai: Top Gan: Swarmaverick
/our-media/podcasts/ai-with-ai/season-5/5-16
Andy and Dave discuss the latest in AI news and research, starting with an announcement that DoD will be updating its Directive 3000.09 on “Autonomous Weapons,” with the new Emerging Capabilities Policy Office leading the way [1:25]. The DoD names Diane Staheli as the new chief for Responsible AI [5:19]. NATO launches an AI strategic initiative, Horizon Scanning, to better understand AI and its potential military implications [6:31]. China unveils an autonomous drone carrier ship, even as Dave wonders about the use of the terms unmanned and autonomous [8:59]. Stanford University and the Human-Centered AI Center build on their initiative for foundation models by releasing a call to the community for developing norms on the release of foundation models [10:42]. DECIDE-AI continues to develop its reporting guidelines for early-stage clinical evaluation of AI decision support systems [14:39]. The Army successfully demonstrates four waves of seven drones, launched by a single operator, during EDGE 22 [18:31]. Researchers from Zhejiang University and Hong Kong University of S&T demonstrate a swarm of physical micro flying robots, fully autonomous, able to navigate and communicate as a swarm, with fully onboard perception, localization, and control [19:58]. Google Research introduces a new text-to-image generator, Imagen, which uses diffusion models to increase the size and photorealism of an image [24:20]. Researchers discover that an AI algorithm can identify race from X-ray and CT images, even when correcting for variations such as body-mass index, but can’t explain why or how [31:21]. And Sonantic uses AI to create the voice lines for Val Kilmer in the new movie Top Gun: Maverick [34:18].
ai with ai: El Gato Altinteligento
/our-media/podcasts/ai-with-ai/season-5/5-15
Andy and Dave discuss the latest in AI news and research, starting with the European Parliament adopting the final recommendations of the Special Committee on AI in a Digital Age (AIDA), finding that the EU should not always regulate AI as a technology, but should use intervention proportionate to the type of risk, among other recommendations [1:31]. Synchron enrolled the first patient in the U.S. clinical trial of its brain-computer interface, Stentrode, which does not require drilling into the skull or open brain surgery; it is, at present, the only company to receive FDA approval to conduct clinical trials of a permanently implanted BCI [4:14]. MetaAI releases its 175B-parameter transformer for open use, Open Pre-trained Transformers (OPT), including the codebase used to train and deploy the model and the team's logbook of issues and challenges [6:25]. In research, DeepMind introduces Gato, a “single generalist agent,” which, with a single set of weights, is able to complete over 600 tasks, including chatting, playing Atari games, captioning images, and stacking blocks with a robotic arm; one DeepMind scientist used the results to claim that “the game is over” and it’s all about scale now, to which others countered that using massive amounts of data as a substitute for intelligence is perhaps “alt intelligence” [8:48]. In the opinion essay of the week, Steve Johnson pens “AI is mastering language, should we trust what it says?” [18:07]. Daedalus’s Spring 2022 issue focuses on AI and Society, with nearly 400 pages and over 25 essays on a variety of AI-related topics [19:06]. And finally, Professor Ido Kanter from Bar-Ilan University joins to discuss his latest neuroscience research, which suggests a new model for how neurons learn, using dendritic branches [20:48].
ai with ai: Leggo my Stego!
/our-media/podcasts/ai-with-ai/season-5/5-14
Andy and Dave discuss the latest in AI news and research, including a report from the Government Accountability Office recommending that the Department of Defense improve its AI strategies and other AI-related guidance [1:25]. Another GAO report finds that the Navy should improve its approach to uncrewed maritime systems, particularly its lack of accounting for the full costs to develop and operate such systems, and recommends the Navy establish an “entity” with oversight of the portfolio [4:01]. The Army is set to launch a swarm of 30 small drones during the 2022 Experimental Demonstration Gateway Exercise (EDGE 22), which will be the largest group of air-launched effects the Army has tested [5:55]. DoD announces its new Chief Digital and AI Officer, Dr. Craig Martell, former head of machine learning for Lyft and a former faculty member at the Naval Postgraduate School [7:47]. And the National Geospatial-Intelligence Agency (NGA) takes over operational control of Project Maven’s GEOINT AI services [9:55]. Researchers from Princeton and the University of Chicago create a deep learning model of “superficial face judgments,” that is, how humans form impressions of what people are like based on their faces; the researchers note that their dataset deliberately reflects bias [12:05]. Researchers from MIT, Cornell, Google, and Microsoft present a new method for completely unsupervised label assignment to images, STEGO (self-supervised transformer with energy-based graph optimization), allowing the algorithm to find consistent groupings of labels in a largely automated fashion [18:35]. And elicit.org provides a “research discovery” tool, leveraging GPT-3 to provide insights and ideas for research topics [24:24].
ai with ai: The Amulet of NeRFdor
/our-media/podcasts/ai-with-ai/season-5/5-13
Andy and Dave discuss the latest in AI news and research, including a proposal from the Ada Lovelace Institute with 18 recommendations to strengthen the EU AI Act [0:57]. NVidia updates its Neural Radiance Fields to Instant NeRF, which can reconstruct a 3D scene from 2D images nearly 1,000 times faster than other implementations [2:53]. Nearly 100 Chinese-affiliated researchers publish a 200-page position paper, a “roadmap,” on large-scale models [4:13]. In research, GoogleAI introduces PaLM (Pathways Language Model), at 540B parameters, which demonstrates the ability to perform logical inference and explain jokes [7:09]. OpenAI announces DALL-E 2, the successor to its previous image-from-text generator, which is no longer confused by a mislabeled item; interestingly, it demonstrates greater resolution and diversity than OpenAI's similar technology, GLIDE, but is not rated as highly by humans, and DALL-E 2 still has challenges with “binding attributes” [11:32]. A white paper from Gary Marcus looks at “Deep Learning Is Hitting a Wall: What would it take for AI to make real progress?”, which includes an examination of a symbol-manipulation system that beat the best deep learning systems at playing the ASCII game NetHack [16:10]. And Professor Chad Jenkins from the University of Michigan returns to discuss the latest developments, including a new Department of Robotics and a new robotics undergraduate degree [19:10].
ai with ai: Slightly Unconscionable
/our-media/podcasts/ai-with-ai/season-5/5-10
Andy and Dave discuss the latest in AI news and research, including a GAO report on AI: Status of Developing and Acquiring Capabilities for Weapon Systems [1:01]. The U.S. Army has awarded a contract for the demonstration of an offensive drone swarm capability (the HIVE small Unmanned Aircraft System), seemingly similar to but distinct from DARPA’s OFFSET demo [4:11]. A “pitch deck” from Clearview AI reveals its intent to expand beyond law enforcement and to have 100B facial photos in its database within a year [5:51]. Tortoise Media releases a global AI index that benchmarks nations on their level of investment, innovation, and implementation of AI [7:57]. Research from UC Berkeley and Lancaster University shows that humans can no longer distinguish between real and fake (GAN-generated) faces [10:30]. MIT, Aberdeen, and the Centre for the Governance of AI examine trends in machine learning computation, identifying three eras and trends, including a “large-scale model” trend in which large corporations run massive training runs [13:37]. A tweet from the chief scientist at OpenAI, speculating that today’s large neural networks may be “slightly conscious,” sparks much discussion [17:23]. Meanwhile, a white paper in the International Journal of Astrobiology examines what intelligence might look like at the planetary level, placing Earth as an immature technosphere [19:04]. And Kush Varshney at IBM publishes an open-access book on Trustworthy Machine Learning, examining issues of trust, safety, and much more [21:29]. Finally, CNA Russia Studies Program member Sam Bendett returns for a quick update on autonomy and AI in the Ukraine-Russia conflict [23:30].
ai with ai: Xenopus in Boots
/our-media/podcasts/ai-with-ai/season-5/5-8
Andy and Dave discuss the latest in AI news and research, including a report from the School of Public Health in Boston that shows why most “data for good” initiatives failed to impact the COVID-19 health crisis [0:45]. The Department of Homeland Security tests the use of robot dogs (from Ghost Robotics) for border patrol duties [5:00]. Researchers find that public trust in AI varies greatly depending on its application [7:52]. Researchers from Stanford University and Toyota Research Institute find extensive label and model errors in training data, such as over 70% of validation scenes (for publicly available autonomous vehicle datasets) containing at least one missing object box [12:05]. And principal researchers Josh Bongard and Mike Levin join Andy and Dave for more discussion on the latest Xenobots research [18:21].
ai with ai: Xenadu
/our-media/podcasts/ai-with-ai/season-5/5-7
Andy and Dave discuss the latest in AI news and research, including an update from the DARPA OFFSET (OFFensive Swarm-Enabled Tactics) program, which demonstrated the use of swarms in a field exercise, including one event that used 130 physical drone platforms along with 30 simulated ones [0:33]. DARPA’s GARD (Guaranteeing AI Robustness against Deception) program has released a toolkit to help AI developers test their models against attacks. Undersecretary of Defense for Research and Engineering Heidi Shyu announced DoD’s technical priorities, including AI and autonomy, hypersonics, quantum, and others; Shyu expressed a focus on easy-to-use human/machine interfaces [3:35]. The White House AI Initiative Office opened an AI Public Researchers Portal to help connect AI researchers with various federal resources and grant-funding programs [8:44]. A Tesla driver faces felony charges (likely a first) for a fatal crash in which Autopilot was in use, though the criminal charges do not mention the technology [12:23]. In research, MIT’s CSAIL publishes worrisome research on convolutional neural networks that still achieve high accuracy even in the absence of “semantically salient features” (such as when most of the image is grayed out); the research also contains a useful list of known image classifier model flaws [18:29]. David Ha and Yujin Tang, at Google Brain in Tokyo, published a white paper surveying recent developments in Collective Intelligence for Deep Learning [19:46]. Roman Garnett makes available a graduate-level book on Bayesian Optimization. And Doug Blackiston returns to chat about the latest discoveries in the Xenobots research and kinematic self-replication [21:54].