
Search Results

Your search for David found 111 results.

ai with ai: All Good Things
/our-media/podcasts/ai-with-ai/season-6/6-8
For the final (for now?) episode of AI with AI, Andy and Dave discuss the latest in AI news and research, including a political declaration from the US Department of State on the responsible military use of AI and autonomy. NATO begins work on an AI certification standard. The IEEE introduces a new program that provides free access to AI ethics and governance standards. Reported in February but performed in December, a joint Department of Defense team performed 12 flight tests (over 17 hours) in which AI agents piloted Lockheed Martin’s X-62A VISTA, an F-16 variant. Andy provides a run-down of a large number of recent ChatGPT-related stories. Wolfram “explains” how ChatGPT works. Paul Scharre publishes Four Battlegrounds: Power in the Age of AI. And to come full circle, we began this podcast 6 years ago with the story of AlphaGo beating the world champion, so we close the podcast with news that a non-professional Go player, Kellin Pelrine, beat a top AI system 14 games to one, having discovered a ‘not super-difficult’ method for humans to beat the machines. A heartfelt thanks to you all for listening over the years!
be behind paywall) Story in Ars Technica (open access) “David beats Go-liath” (by Gary Marcus)
ai with ai: Drawing Outside the Box
/our-media/podcasts/ai-with-ai/season-6/6-1
Andy and Dave discuss the latest in AI-related news and research, including a bill from the EU that will make it easier for people to sue AI companies for harm or damages caused by AI-related technologies. The US Office of S&T Policy releases a Blueprint for an AI Bill of Rights, which further lays the groundwork for potential legislation. The US signs the AI Training for the Acquisition Workforce Act into law, requiring federal acquisition officials to receive training on AI and requiring OMB to work with GSA to develop the curriculum. Various top robotics companies pledge not to add weapons to their technologies and to work actively to prevent their robots from being used for such purposes. Tesla reveals its Optimus robot at its AI Day. DARPA will hold a proposal session on 14 November for its AI Reinforcements effort. OpenAI makes DALL-E available to everybody, and Playground offers access to both DALL-E and Stable Diffusion. OpenAI also makes available the results of an NLP community meta-survey, conducted in conjunction with New York University, which provides AI researchers’ views on a variety of AI-related efforts and trends. And Nathan Benaich and Ian Hogarth release the State of AI Report 2022, which summarizes everything from research and politics to safety, along with some specific predictions for 2023. In research, DeepMind uses AlphaZero to explore matrix multiplication and discovers a slightly faster algorithm implementation for 4x4 matrices. Two research efforts look at turning text into video: Meta discusses its Make-A-Video for turning text prompts into video, leveraging text-to-image generators like DALL-E, and Google Brain discusses its Imagen Video (along with Phenaki, which produces long videos from a sequence of text prompts). Foundations of Robotics is the open-access book of the week from Damith Herath and David St-Onge. And the video of the week addresses AI and the Application of AI in Force Structure, with LtGen (ret) Groen, Dr. Sam Tangredi, and Mr. Brett Vaughan joining in on the discussion for a symposium at the US Naval Institute.
ai with ai: Xenadu
/our-media/podcasts/ai-with-ai/season-5/5-7
Andy and Dave discuss the latest in AI news and research, including an update from the DARPA OFFSET (OFFensive Swarm-Enabled Tactics) program, which demonstrated the use of swarms in a field exercise, including one event that used 130 physical drone platforms along with 30 simulated ones [0:33]. DARPA’s GARD (Guaranteeing AI Robustness against Deception) program has released a toolkit to help AI developers test their models against attacks. Undersecretary of Defense for Research and Engineering Heidi Shyu announced DoD’s technical priorities, including AI and autonomy, hypersonics, quantum, and others; Shyu expressed a focus on easy-to-use human/machine interfaces [3:35]. The White House AI Initiative Office opened an AI Public Researchers Portal to help connect AI researchers with various federal resources and grant-funding programs [8:44]. A Tesla driver faces felony charges (likely a first) for a fatal crash in which Autopilot was in use, though the criminal charges do not mention the technology [12:23]. In research, MIT’s CSAIL publishes (worrisome) research showing that high-scoring convolutional neural networks can still achieve high accuracy even in the absence of “semantically salient features” (such as when most of the image is grayed out); the research also contains a useful list of known image classifier model flaws [18:29]. David Ha and Yujin Tang, at Google Brain in Tokyo, publish a white paper surveying recent developments in Collective Intelligence for Deep Learning [19:46]. Roman Garnett makes available a graduate-level book on Bayesian Optimization. And Doug Blackiston returns to chat about the latest discoveries with the Xenobots research and kinematic self-replication [21:54].
ai with ai: Elementary, Dear GPT
/our-media/podcasts/ai-with-ai/season-3/3-42
In COVID-related AI news, Andy and Dave discuss a survey from Amazon Web Services that examines the current status of Internet of Things applications related to COVID-19, including scenarios that might help to reduce the severity of an outbreak. MIT publishes a combinatorial machine learning method to maximize the coverage of a COVID-19 vaccine. In "quick takes" on research, Andy and Dave discuss research from Microsoft, the University of Washington, and UC Irvine, which provides a checklist to help identify bugs in natural language processing algorithms. A paper from Element AI and Stanford examines whether benchmarks for natural language systems actually correspond to how we use those systems. The University of Illinois at Urbana-Champaign, Columbia University, and the US Army Research Lab introduce GAIA, which processes unstructured and heterogeneous multimedia data, creates a coherent knowledge base, and allows for text queries. Research published in Nature Neuroscience examines the brain connectivity of 130 mammalian species and finds that the efficiency of information transfer through the brain does not depend on the size or structure of any specific brain. And finally, Andy and Dave spend some time talking about the broader implications of GPT-3, the experiments that people are conducting with it, and how it is not an AGI. Dave concludes with an analogy from Star Trek: The Next Generation that he gets mostly correct, though he misattributes Geordi La Forge's action to Dr. Pulaski. If only he had a positronic matrix!
and God Philosophers On GPT-3: with Replies by GPT-3 "David Chalmers" (really, GPT-3) interviewed by a human on whether GPT-3 could be conscious GPT-3: The First Artificial General
ai with ai: COVium-Gatherum
/our-media/podcasts/ai-with-ai/season-3/3-23
Jvion has provided an online mapping tool to view regions of the United States and see the areas most vulnerable to issues related to COVID, a “COVID Vulnerability Map.” A video clip from Tectonix uses anonymized crowdsourced data to show how Spring Breakers at one Fort Lauderdale beach spread back across the United States, to demonstrate the ease with which a virus *could* spread. A new initiative from Boston Children’s Hospital and Harvard Medical School seeks to create a real-time way to get crowdsourced inputs on potential COVID infections, with “COVID Near You.” Kinsa, maker of smart thermometers, uses its information in an attempt to show county-level spread of COVID-19. On 23 March, CIFAR convened an International Roundtable on AI and COVID-19, which had over 60 participants; among other points, the group noted the stark gap between the data that is available to governments and what is available to epidemiologists and modelers. C3.ai Digital Transformation Institute, a newly formed research consortium dedicated to accelerating applications of AI, seeks research proposals for AI tools to help curb the effects of the coronavirus. The European Commission is seeking ideas for AI and robotic solutions to help combat COVID-19. The New York Times builds the first U.S. county-level COVID-19 database. Complexity Science Hub Vienna compiles a dataset of country- and U.S. state-policy changes related to COVID-19. The Stanford Institute for Human-Centered AI convenes a virtual conference on 1 April on COVID-19 and AI. And the ELLIS Society sponsors an online workshop on COVID-19 and AI. Finally, AI with AI producer John Stimpson interviews Dr. Alex Wong, co-founder of DarwinAI and Euclid Labs, on COVID-Net, an open-sourced convolutional neural network for detecting COVID-19 in chest x-rays.
/ Visualization (developed by David Schnurr, Data Vis, Twitter @dschnr): COVID-19 Databases: Country and U.S. State-Policy Changes CSH Covid-19 Control Strategies List (CCCSL) Announcement
ai with ai: RIDE of the COV-all-cures
/our-media/podcasts/ai-with-ai/season-3/3-22
In COVID-related news, Andy and Dave discuss ClosedLoop.ai and its release of an open-source toolkit for predicting people vulnerable to COVID-19. A Korean biotech company, Seegene, announces that it has used AI to create a coronavirus test. DarwinAI and researchers at the University of Waterloo announce COVID-Net, a convolutional neural network for detecting COVID-19 in chest x-rays. In non-COVID news, the White House releases its first annual report on AI. The U.S. intelligence community describes its interest in using explainable and interpretable AI. Microsoft introduces a checklist that attempts to bridge the gap between the AI ethics community and ML practitioners. And House Science Committee members introduce the National AI Initiative Act, which aims to accelerate and coordinate federal investments in AI. In research, the NIH monitors brains replaying memories in real time by examining neuron firing patterns for word-pair associations (such as camel and lime). Facebook AI Research announces Rewarding Impact-Driven Exploration (RIDE), in which agents are encouraged to take actions that have significant impact on the environment state. Researchers from the WHO and other institutions examine the landscape of AI applications to COVID-19. Andrea Gilli publishes The Brain and the Processor: Unpacking the Challenges of Human-Machine Interaction, a collection of papers on the topic. And David Foster’s book on Generative Deep Learning becomes available for free.
ai with ai: We’ve Got You COVID
/our-media/podcasts/ai-with-ai/season-3/3-21
Not surprisingly, COVID-19 has taken over the news section, but still as it all relates to AI and machine learning. Andy and Dave discuss the COVID-19 Open Research Dataset, a free resource of over 29,000 scholarly articles on the coronavirus family, made available by the Allen Institute, CSET, CZI, Microsoft Research, NIH, and the White House OSTP. In similar news, over 100 organizations have signed a Wellcome statement to make COVID-19 research and data open for access. The New England Complex Systems Institute provides a host of pandemic resources online. The CDC is using machine learning to forecast COVID-19 (adapting its efforts in forecasting influenza outbreaks). And Anodot launches a public machine learning-driven service to track COVID-19. In research, somehow not COVID-19 related, Google Brain and Google Research demonstrate AutoML-Zero, which discovers complete machine learning algorithms by using basic mathematical functions as building blocks. The report of the week comes from the Complex Multilayer Networks Lab along with Harvard, which provides a COVID-19 Infodemics Observatory, processing more than 100M tweets to quantify various sentiments as well as the reliability of information from around the globe (with Singapore topping the list for most reliable information). David Barber provides Bayesian Reasoning and Machine Learning for free. And the Bipartisan Commission on Biodefense and Max Brooks provide Germ Warfare: A Very Graphic History (published in 2019).
ai with ai: In the Year 20XX: 100th Episode Celebration!
/our-media/podcasts/ai-with-ai/season-2/2.38
Happy 100th Episode to AI with AI! Andy and Dave celebrate the 100th episode of the AI with AI podcast, starting with a new theme song, inspired by the Mega Man series of games. Andy and Dave take the time to look at the past two years of covering AI news and research, including how the podcast has grown from the first season to the second season. They also take a look back at some of the recurring themes and favorite topics, including GPT-2 and the Lottery Ticket Hypothesis, among many others; they also look forward to (hopefully!) all the latest and greatest news to come. Throughout this episode, we hear from listeners, supporters, and colleagues who have appeared on the podcast. Here’s to another 100, and thanks for listening!
): Probing fundamental limits New proof reveals fundamental limits of scientific knowledge, by David Wolpert, Santa Fe Institute (Podcast 31, Season 1): Learnability can be undecidable
ai with ai: Salvere Rex or Salve Getafix?
/our-media/podcasts/ai-with-ai/season-2/2-27
Jennifer McArdle, Assistant Professor of Cyber Defense at Salve Regina University, joins Andy and Dave for a discussion on AI and machine learning. Jenny is leading a group of graduate students who are working on creating a strategic-level primer on AI, particularly aimed at those who may be less familiar with the technical aspects, as well as a War on the Rocks article on AI in training and synthetic environments. Her students are studying a variety of areas, including cyber defense and digital forensics, cyber and synthetic training, cyber intelligence, healthcare and healthcare administration, and administrative justice. Graduate students Mackenzie Mandile and Saurav Chatterjee also join for a discussion on their research topics. In the photo (from left to right): Maria Hendrickson, Gabrielle Cusano, Abigail Verille, Erin Rorke, (John Cleese), Saurav Chatterjee, Allegra Graziano, Santiago Durango, Eric Baucke, Mackenzie Mandile, Dave Broyles, Jennifer McArdle, Andy Ilachinski, John Crooks, (Getafix), and Lt. Col. David Lyle.
ai with ai: A Neural Reading rAInbow
/our-media/podcasts/ai-with-ai/season-2/2-19
Andy and Dave discuss research from Neil Johnson, who looked to the movements of fly larvae to model financial systems in which a collection of agents share a common goal but have no way to communicate and coordinate their activities (a memory of five past events ends up being the ideal balance). Researchers at Carnegie Mellon demonstrate that random search with early stopping is a competitive Neural Architecture Search baseline, performing at least as well as “Efficient” NAS. Unrelated research, but near-simultaneously published, from AI Lab Swisscom shows that random search outperforms state-of-the-art NAS algorithms. Researchers at DeepMind investigate the possibility of creating an agent that can discover its world, and introduce NDIGO (Neural Differential Information Gain Optimization), designed to be “information seeking.” And the Electronics and Telecommunications Research Institute in South Korea creates SC-FEGAN, a face-editing GAN that builds off of a user’s sketches and other information. Georgetown University announces a $55M grant to create the Center for Security and Emerging Technology (CSET). Microsoft workers call on the company to cancel its military contract with the U.S. Army. DeepMind uses machine learning to predict wind turbine energy production. Australia’s Defence Department invests ~$5M to study how to make autonomous weapons behave ethically. And the U.K. government invests in its people, funding AI university courses with £115 million. Reports suggest that U.S. police departments are using biased data to train crime-predicting algorithms. Danqi Chen’s thesis, Neural Reading Comprehension and Beyond, becomes widely read. A report looks at the evaluation of citation graphs in AI research, and researchers provide a survey of deep learning for image super-resolution. Byron Reese blogs that we need new words to adjust to AI (to which Dave adds “AI-chemy” to the list). In Point and Counterpoint, David Silver argues that AlphaZero exhibits the “essence of creativity,” while Sean Dorrance Kelly argues that AI can’t be an artist. Interpretable Machine Learning by Christoph Molnar hits version 1.0, and Andy highlights Asimov’s classic short story, The Machine That Won the War. And finally, a symposium at Princeton’s Institute for Advanced Study examines deep learning: alchemy or science?