
Search Results

Your search for AI Ethics found 72 results.

ai with ai: HurriCOVID Season
/our-media/podcasts/ai-with-ai/season-3/3-29
In COVID-related AI news, Andy and Dave discuss an approach from FiveThirtyEight that uses a mini-model ensemble to predict possible trajectories for the COVID-19 death toll. MIT Tech Review has released a tracker for COVID-19 tracing trackers, which includes information on how they work and what policies they have in place. In non-COVID-related AI news, DIU releases a solicitation for Vigilante Keeper, an AI solution for detecting behavioral changes that might indicate increased vulnerability. OpenAI releases an analysis showing that the amount of computation needed to train an ImageNet classifier decreases by a factor of 2 every 16 months, which suggests that algorithmic progress has yielded more gains than increased hardware efficiency. The Library of Congress is using machine learning to digitize and organize photos from old newspapers. Microsoft unveils a new tool in Word that makes sentence-level suggestions. And MIT Tech Review Insights publishes an examination of Asia’s advantage in AI, with a look at the Asia-Pacific region in the Global AI Agenda. In research, Andy and Dave discuss RTFM (Read to Fight Monsters) from Facebook, which uses roguelike procedural generation to dynamically create goals, monsters, and other attributes, which agents then attempt to fight. The book of the week comes from Miroslav Kubat, with the second edition of An Introduction to Machine Learning. The Australian Defence College has announced the winners of its 2020 Sci-Fi Writing Competition. The full documentary AlphaGo – The Movie is now available on YouTube. The proceedings are now available from a federal health virtual forum on AI for COVID-19 Response. CSET will host a discussion on lessons learned for Algorithmic Warfare in DoD on 27 May. And a LessWrong post by Stuart Armstrong takes a look at Kurzweil’s predictions (from 1999) about 2019.
ai with ai: RIDE of the COV-all-cures
/our-media/podcasts/ai-with-ai/season-3/3-22
In COVID-related news, Andy and Dave discuss ClosedLoop.ai and its release of an open-source toolkit for predicting people vulnerable to COVID-19. A Korean biotech company, Seegene, announces that it has used AI to create a coronavirus test. DarwinAI and researchers at the University of Waterloo announce COVID-Net, a convolutional neural network for detecting COVID-19 in chest x-rays. In non-COVID news, the White House releases its first annual report on AI. The U.S. intelligence community describes its interest in using explainable and interpretable AI. Microsoft introduces a checklist that attempts to bridge the gap between the AI ethics community and ML practitioners. And House Science Committee members introduce the National AI Initiative Act, which aims to accelerate and coordinate federal investments in AI. In research, the NIH monitors brains replaying memories in real time, by examining neuron firing patterns for word-pair associations (such as camel and lime). Facebook AI Research announces Rewarding Impact-Driven Exploration (RIDE), where agents are encouraged to take actions that have a significant impact on the environment state. Researchers from the WHO and other institutions examine the landscape of AI applications to COVID-19. Andrea Gilli publishes The Brain and the Processor: Unpacking the Challenges of Human-Machine Interaction, a collection of papers on the topic. And David Foster’s book on Generative Deep Learning becomes available for free.
ai with ai: XLand, Simulation of Sweet Adventures
/our-media/podcasts/ai-with-ai/season-4/4-38
Andy and Dave discuss the latest in AI news, including a story from MIT Technology Review (which echoes observations made previously on AI with AI) that “hundreds of AI tools have been built to catch COVID. None of them helped.” DeepMind has used its AlphaFold program to identify the structure of 98.5 percent of roughly 20,000 human proteins, and will make the information publicly available. The Pentagon makes use of machine learning algorithms to create decision space in the latest of its Global Information Dominance Experiments. An Australian court rules that AI systems can be “inventors” under patent law (but not “owners”), and South Africa issues the world’s first patent to an “AI system.” The United States Special Operations Command put 300 of its personnel through a unique six-week crash course in AI, featuring leaders such as former Google CEO Eric Schmidt and former Defense Secretary Ash Carter. And President Biden nominates Stanford professor Ramin Toloui, who has experience with AI technologies and their impacts, as an Assistant Secretary of State for business. In research, DeepMind develops agents capable of “open-ended learning” in XLand, an environment with diverse tasks and challenges. A survey in the Journal of AI Research finds that AI researchers have varying amounts of trust in different organizations, companies, and governments. The Journal of Strategic Studies dedicates an issue to emerging technologies, with free access. Mine Cetinkaya-Rundel and Johanna Hardin make Introduction to Modern Statistics available open access, with proceeds going to OpenIntro, a US-based nonprofit. And Iyad Rahwan curates a collection of evil AI cartoons.
ai with ai: NOAA’s Arcade
/our-media/podcasts/ai-with-ai/season-3/3-20
In news items, Andy and Dave discuss an effort by Boston Children’s Hospital to use machine learning to help track the spread of COVID-19. Meanwhile, a proposal from researchers wants to use mobile phones to track the virus’s spread. Fifty-two organizations have come together to develop the “first-ever industry-led” standard for AI in healthcare. The National Oceanic and Atmospheric Administration (NOAA) announces its AI strategy. And IBM and Promare begin sea trials for Mayflower, an autonomous ship that, later this year, will make the reverse of the 1620 Mayflower transit, completely unmanned. In research, Google and Columbia University enable a robot to teach itself how to walk with minimal human intervention (bounding the terrain, and making the robot’s trial movements more cautious). Researchers at Harvard, MIT CSAIL, the MIT-IBM Watson AI Lab, and DeepMind introduce CLEVRER (Collision Events for Video Representation and Reasoning), a diagnostic video dataset for evaluating models on a wide range of reasoning tasks. And DeepMind proposes a new reinforcement learning technique that models human behavior, using a gifting game in which agents learn to trust each other. The Berkman Klein Center at Harvard updates its data map of ethical and rights-based approaches to principles for AI. The Center for the Study of the Drone releases its likely last paper, Unarmed and Dangerous, which looks at how non-weaponized drones can still have lethal effects. Cansu Canca has provided a database and interface that looks at the global dynamics of AI principles. Mario Alemi provides the book of the week, with The Amazing Journey of Reason: from DNA to Artificial Intelligence. And the livestream talks from the 34th AAAI Conference are now available online.
ai with ai: A Neural Reading rAInbow
/our-media/podcasts/ai-with-ai/season-2/2-19
Andy and Dave discuss research from Neil Johnson, who looked to the movements of fly larvae to model financial systems, where a collection of agents share a common goal but have no way to communicate and coordinate their activities (a memory of five past events ends up being the ideal balance). Researchers at Carnegie Mellon demonstrate that random search with early stopping is a competitive Neural Architecture Search baseline, performing at least as well as “Efficient” NAS. Unrelated research, but near-simultaneously published, from AI Lab Swisscom shows that random search outperforms state-of-the-art NAS algorithms. Researchers at DeepMind investigate the possibility of creating an agent that can discover its world, and introduce NDIGO (Neural Differential Information Gain Optimization), designed to be “information seeking.” And the Electronics and Telecommunications Research Institute in South Korea creates SC-FEGAN, a face-editing GAN that builds off of a user’s sketches and other information. Georgetown University announces a $55M grant to create the Center for Security and Emerging Technology (CSET). Microsoft workers call on the company to cancel its military contract with the U.S. Army. DeepMind uses machine learning to predict wind turbine energy production. Australia’s Defence Department invests ~$5M to study how to make autonomous weapons behave ethically. And the U.K. government invests in its people and funds AI university courses with £115 million. Reports suggest that U.S. police departments are using biased data to train crime-predicting algorithms. A thesis on Neural Reading Comprehension and Beyond by Danqi Chen becomes highly read. A report looks at the evaluation of citation graphs in AI research, and researchers provide a survey of deep learning for image super-resolution. Byron Reese blogs that we need new words to adjust to AI (to which Dave adds “AI-chemy” to the list).
In Point and Counterpoint, David Silver argues that AlphaZero exhibits the “essence of creativity,” while Sean Dorrance Kelly argues that AI can’t be an artist. Interpretable Machine Learning by Christoph Molnar hits version 1.0, and Andy highlights Asimov’s classic short story, The Machine that Won the War. And finally, a symposium at the Institute for Advanced Study in Princeton examines deep learning – alchemy or science?
ai with ai: This is Feyn
/our-media/podcasts/ai-with-ai/season-3/3-26
Andy and Dave discuss the initial results from King’s College London’s COVID Symptom Tracker, which found fatigue, loss of taste and smell, and cough to be the most common symptoms. MIT’s CSAIL and a clinical team at Heritage Assisted Living announce Emerald, a wi-fi box that uses machine learning to analyze wireless signals and record (non-invasively) a person’s vital signs. Landing AI has developed a tool that monitors the distance between people and can send an alert when they get too close. And Johns Hopkins University updates its COVID tracker to provide greater levels of detail on information in the US. In non-COVID news, OpenAI releases Microscope, which contains visualizations of the layers and neurons of eight vision systems (such as AlexNet). The JAIC announces its “Responsible AI Champions” for AI Ethics Principles, and also issues a new RFI for new testing and evaluation technologies. In research, Udrescu and Tegmark publish AI Feynman, an improved algorithm that can find symbolic expressions that match data from an unknown function; they apply the method to 100 equations from Feynman’s Lectures on Physics, and it discovers all of them. The report of the week comes from nearly 60 authors across 30 organizations, a publication on Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. The review paper of the week provides an overview of the State of the Art on Neural Rendering. The book of the week takes a look at the history of DARPA, in Transformative Technologies: Perspectives on DARPA. Stuart Kauffman gives his thoughts on complexity science and prediction, as they relate to COVID-19. The ELLIS society holds its second online workshop on COVID on 15 April. Matt Reed creates Zoombot, a personalized chatbot to take your place in Zoom meetings. And Ali Aliev creates Avatarify, to make yourself look like somebody else in real time for your next Zoom meeting.
ai with ai: NIPS, Docs, and Clips
/our-media/podcasts/ai-with-ai/season-1/1-10
Andy and Dave continue their discussion on the 31st Annual Conference on Neural Information Processing Systems (NIPS), covering Sokoban, chemical reactions, and a variety of video disentanglement and recognition capabilities. They also discuss a number of breakthroughs in medicine that involve artificial intelligence: a robot passing a medical licensing exam, an algorithm that can diagnose pneumonia better than expert radiologists, a venture between GE Healthcare and NVIDIA to tap into volumes of unrealized medical data, and deep-brain stimulation. Finally, for reading material and reference, Andy recommends a technical lecture on reinforcement learning, as well as two books on robot ethics.
ai with ai: Schrödinger’s Slime Mold
/our-media/podcasts/ai-with-ai/season-4/4-21
Andy and Dave discuss the latest AI news, which includes lots of new reports, starting with the release of the final report of the National Security Commission on AI, over 750 pages outlining steps the U.S. must take to use AI responsibly for national security and defense. The Stanford University Institute for Human-Centered AI (HAI) releases its fourth and most comprehensive report of its AI Index, which covers global R&D, technical performance, education, and other topics in AI. Peter Layton at the Defence Research Centre in Australia publishes Fighting AI Battles: Operational Concepts for Future AI-Enabled Wars, with a look at war at sea, on land, and in the air. Drone Wars UK and the Centre for War Studies in Denmark release Meaning-Less Human Control: Lessons from Air Defence Systems on Meaningful Human Control for the Debate on AWS, examining automation and autonomy in 28 air defense systems used around the world. And the European Union Agency for Cybersecurity publishes a report on Cybersecurity Challenges in the Uptake of AI in Autonomous Driving. In research, scientists demonstrate that an organism without a nervous system, slime mold, can encode memory of its environment through the hierarchy of its own tube diameter structure. And the Fun Site of the Week uses GPT-3 to generate classic “title/description/question” thought experiments.
ai with ai: A.I. in the Sky
/our-media/podcasts/ai-with-ai/season-4/4-8
Andy and Dave welcome Arthur Holland Michel to the podcast for a discussion on predictability and understandability in military AI. Arthur is an Associate Researcher at the United Nations Institute for Disarmament Research, a Senior Fellow at the Carnegie Council for Ethics in International Affairs, and author of the book Eyes in the Sky: the Secret Rise of Gorgon Stare and How It Will Watch Us All. Arthur recently published The Black Box, Unlocked: Predictability and Understandability in Military AI, and the three discuss the inherent challenges of artificial intelligence and the challenges of creating definitions to enable meaningful global discussion on AI.
ai with ai: From A to Z
/our-media/podcasts/ai-with-ai/season-2/2.42
Two special guests join Andy and Dave for a discussion about research in AI and autonomy. First, Dr. Andrea Gilli is a researcher at the NATO Defense College in Rome, where he works on defense innovation, military transformation, and armed forces modernization. And second, Ms. Zoe Stanley-Lockman is a fellow at the Maritime Security Programme of the Institute of Defence and Strategic Studies at the S. Rajaratnam School of International Studies in Singapore, where she is researching, among other things, the roles of ethics in AI.