
Search Results

Your search for AI Ethics found 72 results.

ai with ai: Bots Without Ethics - Safety Dance
/our-media/podcasts/ai-with-ai/season-2/2-2
Andy and Dave focus on a variety of big news items, including: Google bows out of the bidding for the Pentagon's "JEDI" cloud contract valued at $10 billion; the Government Accountability Office releases a 50-page report on the poor state of the cybersecurity of U.S. weapons systems; "The Big Hack" makes big news, with Bloomberg reporting that China inserted a tiny chip on hardware in order to infiltrate U.S. networks; the U.S. Department of Transportation looks to rewrite safety rules in order to accommodate fully driverless vehicles on public roads; two leaders in collaborative robots (Rethink and Jibo) close their doors; and DeepMind announces efforts to discuss "Technical AI safety," including the areas of specification (true intentions), robustness (safety upon perturbation), and assurance (understanding and control). The latter topic launches further discussion into ethics-related efforts for AI, including the UK Machine Intelligence Garage Ethics Committee; a paper on the motivations and risks of machine ethics; and research from North Carolina State University showing that the Association for Computing Machinery (ACM) code of ethics does not appear to affect the decisions made by software developers. All the excitement somehow causes Dave to invoke Jean Valjean when he means to say Javert. C'est la vie! Finally, Andy describes a couple of motherlodes of papers; Biostorm by Anthony DeCapite makes the story of the week; ZDNet ranks 36 of the best movies on AI; AutoML is prepping an open access book on AutoML; and Dave goes fanboy over the Automata web series from Penny Arcade.
ai with ai: A Tesseract to Follow
/our-media/podcasts/ai-with-ai/season-3/3-37
In COVID-related AI news, Purdue University has built a website that tracks global response to social distancing, by pulling live footage and images from over 30,000 cameras in 100 countries. Simon Fong, Nilanjan Dey, and Jyotismita Chaki have published Artificial Intelligence for Coronavirus Outbreak, which examines AI's contribution to combating COVID-19. Researchers at Harvard and Boston Children's Hospital use a "regular" Bayesian model to identify COVID-19 hotspots more than 14 days before they occur. In non-COVID AI news, the acting director of the JAIC announces a shift to enabling joint warfighting operations. The DoD Inspector General releases an Audit of Governance and Protection of DoD AI Data and Technology, which reveals a variety of gaps and weaknesses in AI governance across DoD. Detroit Police Chief James Craig reveals that the police department's experience with facial recognition technology resulted in misidentifications about 96% of the time. Over 1,400 mathematicians sign and deliver a letter to the American Mathematical Society, urging researchers to stop working on predictive-policing algorithms. DARPA awards the Meritorious Public Service Medal to Professor Hava Siegelmann for her creation of and research in the Lifelong Learning Machines Program. And Horace Barlow, one of the founders of modern visual neuroscience, passed away on 5 July at the age of 98. In research, Udrescu and Tegmark release AI Feynman 2.0, with unsupervised learning of equations of motion by viewing objects in raw and unlabeled video. Researchers at CSAIL, NVIDIA, and the University of Toronto create the Visual Causal Discovery Network, which learns to recognize underlying dependency structures for simulated fabrics, such as shirts, pants, and towels. In reports, the Montreal AI Ethics Institute publishes its State of AI Ethics. In the video of the week, Max Tegmark discusses the previously mentioned research on equations of motion and also discusses progress in symbolic regression. And GanBreeder upgrades to ArtBreeder, which can create realistic-looking images from paintings, cartoons, or just about anything.
ai with ai: The World Ends with Robots
/our-media/podcasts/ai-with-ai/season-2/2-21
Andy and Dave begin with an AI-generated podcast, using the "dumbed down" GPT-2 with the repository of podcast notes; GPT-2 ends the faux podcast with a video called "The World Ends with Robots," and Dave later discovers that a Google search on the title brings up zero hits. Ominous! Andy and Dave continue with a discussion of the Boeing 737 MAX crashes and the implications for autonomous systems. Stanford University launches the Stanford Institute for Human-Centered Artificial Intelligence (HAI), which seeks to advance AI research to improve the human condition. Ahead of the Convention on Certain Conventional Weapons in Geneva, Japan announces its intention to submit a plan for maintaining control over lethal autonomous weapons systems. A new report from Hal Hodson at the Economist reveals that, should DeepMind successfully create artificial general intelligence, its Ethics Board will have legal "control" of the entity. And Steve Walker and Vint Cerf discuss other US Department of Defense projects that Google is working on, including identifying deep fakes and exploring new architectures to create more computing power. NVIDIA announces a $99 AI development kit, the AI Playground, and GauGAN. In research topics, Google explores whether neural networks show gestalt phenomena, looking specifically at the law of closure. Researchers with IBM Watson and Oxford examine supervised learning with quantum-enhanced feature spaces. Shashua and co-workers explore quantum entanglement in deep learning architectures. Dan Falk takes a look at how AI is changing science. And researchers at Facebook AI and Google AI examine the pitfalls of measuring emergent communication between agents. The World Intellectual Property Organization releases its 2019 trends in AI. A report takes a survey of the European Union's AI ecosystem, while another paper surveys the field of robotic construction. Kieran Healy releases a book on Data Visualization. Allen Downey publishes Think Bayes: Bayesian Statistics Made Simple. The Defense Innovation Board releases a video from its public listening session on AI ethics at CMU from 14 March. The 2019 Human-Centered AI Institute Symposium releases a video. And Irina Raicu compiles a list of readings about AI ethics.
ai with ai: 52 Views of HOListic Imagination
/our-media/podcasts/ai-with-ai/season-2/2-31
In news items, Andy and Dave discuss China's call for international cooperation on a code of ethics for AI. The Organisation for Economic Co-operation and Development (OECD) unveils the first intergovernmental standards for AI policies, with support from 42 countries. The US Army has invited the design of prototypes for the Next-Generation Squad Weapon, which may include wind-sensing and even facial-recognition technology. DARPA's Spectrum Collaboration Challenge (SC2) presents an essay at IEEE Spectrum, which describes the challenges of making the most out of an increasingly crowded electromagnetic spectrum, including running contests for better spectrum management, and using Colosseum as the testing ground. Google announces the 'AI Workshop,' which offers early access to AI capabilities and experiments. In research, Google DeepMind announces an AI that has achieved human-level performance in Quake III Arena Capture the Flag mode; among other things, human players rated the AI as "more collaborative than other humans" (though they had mixed reactions to having the AI as a teammate). Google Research presents HOList, an environment for machine learning of higher-order theorem proving. Research from Oxford University creates a model for human-like machine thinking by mimicking the prefrontal cortex for language-guided imagination. A paper from Jeff Clune at Uber AI Labs suggests a different approach to Artificial General Intelligence, by means of AI-generating algorithms that learn how to produce AGI. MacroPolo produces a series of 6 charts on Chinese AI talent. CBInsights compiles the views of 52 "experts" on "How AI Will Go Out of Control." Blum, Hopcroft, Kannan, and Microsoft release Foundations of Data Science; Hutter, Kotthoff, Vanschoren, and Springer-Verlag make Automated Machine Learning available. The Purdue Symposium on Ethics, Technology, and the Future of War and Security releases a video on the Ethical, Legal, and Social Implications of Autonomy and AI in Warfare. The University of Colorado Boulder creates an Index of Complex Networks (ICON). And Alexander Reben creates a repository of 1 million fake AI-generated faces.
ai with ai: Remember, Remember, the Fakes of November
/our-media/podcasts/ai-with-ai/season-3/3-41
In COVID-related AI news, Andy and Dave discuss an article from Wired that describes how COVID confounded most predictive models (such as those in finance). And NIST investigates the effect of face masks on facial recognition software. In regular-AI news, CSET and the Bipartisan Policy Center released a report on "AI and National Security," the first of four "meant to be a roadmap for Washington's future efforts on AI." The Intelligence Community releases its AI Ethics Principles and AI Ethics Framework. Researchers from the University of Chicago announce "Fawkes," a way to "cloak" images and befuddle facial recognition software. In research, OpenAI demonstrates that GPT-2, a generator designed for text, can also generate pixels (instead of words) to fill out 2D pictures. Researchers at Texas A&M, the University of Science and Technology of China, and the MIT-IBM Watson AI Lab create a 3D adversarial logo to cloak people from facial recognition. And other research explores how the brain rewires when given an additional thumb. CSET publishes Deepfakes: A Grounded Threat Assessment. And MyHeritage provides a "photo enhancer" that uses machine learning to restore old photos.
ai with ai: The Brainy Bunch
/our-media/podcasts/ai-with-ai/season-3/3-27
In COVID-related AI news, hospitals across the US are using an AI system called the Deterioration Index to provide a snapshot of patients' risks, even though the software has not yet been validated as effective for those with COVID-19. Meanwhile, Qure.ai has retooled its qXR system, designed for chest x-rays, to detect COVID-induced pneumonia, and a preliminary validation study with 11,000 images found 95% accuracy in distinguishing patients with and without COVID-19. The Digital Ethics Lab at the University of Oxford has provided a set of ethical guidelines (16 yes/no questions) for those making COVID-19 Digital Tracking and Tracing (DTT) systems. And Carnegie Mellon provides five interactive maps for COVID-related issues in the US. The Joint AI Center unveils Salus, a prototype AI tool for examining where COVID-19 might impact logistics and supply chains. And Reuters spends time debunking a false claim about the relation of AI to COVID-19. In regular AI news, Washington State passes major facial recognition legislation, defining how state and local government may use facial recognition. DARPA selects Georgia Tech and Intel to lead its Guaranteeing AI Robustness against Deception (GARD) program. And the Association for the Understanding of AI launches AIhub.org, to connect the public and the AI community. In research, two German institutes investigate the roles of different neurons in neural networks and find populations that serve different functions; in addition, these populations could be extracted to a new network without having to train the new network on the same knowledge. Research from Bar-Ilan University demonstrates human brain learning mechanisms that outperform common AI learning algorithms, including the finding that observing the same image 10 times in one second is more effective than observing it 1,000 times over one month. The book of the week comes from Matthieu Thiboust, with Insights from the Brain, which aims to provide "neuroscience chunks of information related to AI." And CBS News 60 Minutes has a report on BlueDot, the company that warned its clients about the COVID-19 outbreak a week before the CDC.
ai and autonomy in russia: Issue 41, June 27, 2022
/our-media/newsletters/ai-and-autonomy-in-russia/issue-41-a
and productivity.  Russia advances ethical regulations surrounding AI  Three more regions will sign the Code of Ethics in the field of artificial intelligence (AI), including Khanty-Mansiysk, Innopolis ... . Before the establishment of this Code of Ethics, there were no regulatory guidelines on the ethics of AI implementation and development. As discussed in issue 39 of AI in Russia, the document consists of two sections: ethical AI implementation and development, and personal and information security within AI technology and innovation.  In addition to the new Code of Ethics, there is a new
ai with ai: All Good Things
/our-media/podcasts/ai-with-ai/season-6/6-8
For the final (for now?) episode of AI with AI, Andy and Dave discuss the latest in AI news and research, including a political declaration from the US Department of State on the responsible military use of AI and autonomy. NATO begins work on an AI certification standard. The IEEE introduces a new program that provides free access to AI ethics and governance standards. Reported in February but performed in December, a joint Department of Defense team conducted 12 flight tests (totaling over 17 hours) in which AI agents piloted Lockheed Martin's X-62A VISTA, an F-16 variant. Andy provides a run-down of a large number of recent ChatGPT-related stories. Wolfram "explains" how ChatGPT works. Paul Scharre publishes Four Battlegrounds: Power in the Age of AI. And to come full circle: we began this podcast 6 years ago with the story of AlphaGo beating the world champion, so we close the podcast with news that a non-professional Go player, Kellin Pelrine, beat a top AI system 14 games to one, having discovered a "not super-difficult" method for humans to beat the machines. A heartfelt thanks to you all for listening over the years!
ai with ai: Through the Looking Glass (Part 2)
/our-media/podcasts/ai-with-ai/season-2/2-36b
An AI that Can Visualize Objects Using Touch: Nontechnical summary; Technical paper
Artificial Skin Can Sense Temperature, Pressure, and Humidity: Nontechnical summary; Technical paper
Bill Gates' "Top 10 Breakthrough" predictions
Reports of the Week: The Ethics of AI Ethics: An Evaluation of Guidelines (15 page paper); AI: An Overview of State Initiatives (42 page report)
Book of the Week: Statistics with Julia: Fundamentals for Data Science, Machine Learning, and AI (Book; Julia homepage)
Resources of the Week: Meta-Academy Roadmaps – A Package Manager for Knowledge
Classic Paper of the Week: "A Mathematical Theory of Communication" by Claude Shannon; "The Early Days of Information Theory," by J. R. Pierce, IEEE Transactions on Information Theory; A Mind at Play, by Jimmy Soni and Rob Goodman, Simon and Schuster, July 2017
Video of the Week: Stephen Wolfram's testimony about AI at a hearing of the US Senate Commerce Committee's Subcommittee on Communications, Technology, Innovation and the Internet; Full video (2.4 hrs); (14 page) Transcript of Wolfram's testimony; Wolfram's own write-up about his testimony
Interesting Site of the Week: A Technical Look at Creating an AI to Restore and Colorize Photos; Essay (Russian site: careful)
ai with ai: When This Savvy Slime Mold Encountered a Morphogenic Robotic Swarm, You Won't Believe What Happened Next...!
/our-media/podcasts/ai-with-ai/season-2/2-11
Andy and Dave discuss Rodney Brooks' predictions on AI from early 2018 and his ongoing review of those predictions. The European Commission releases a report on AI and Ethics, a framework for "Trustworthy AI." DARPA announces the Knowledge-directed AI Reasoning over Schemas (KAIROS) program, aimed at understanding "complex events." The Standardized Project Gutenberg Corpus attempts to provide researchers with broader data across the project's complete data holdings. And MORS announces a special meeting on AI and Autonomy at JHU/APL in February. In research, Andy and Dave discuss work from Keio University, which shows that slime mold can approximate solutions to NP-hard problems in linear time (and differently from other known approximations). Researchers in Spain, the UK, and the Netherlands demonstrate that kilobots (small 3 cm robots) with basic communication rule-sets will self-organize. Research from UCLA and Stanford creates an AI system that mimics how humans visualize and identify objects by feeding the system many pieces of an object, called "viewlets." NVIDIA shows off its latest GAN, which can generate fictional human faces that are essentially indistinguishable from real ones; further, they structure their generator to provide more control over various properties of the latent space (such as pose, hair, face shape, etc.). Other research attempts to judge a paper on how good it looks. And in the "click-bait" of the week, Andy and Dave discuss an article from TechCrunch, which misrepresented bona fide (and dated) AI research from Google and Stanford. Two surveys provide overviews on different topics: one on the safety and trustworthiness of deep neural networks, and the other on mini-UAV-based remote sensing. A report from CIFAR summarizes national and regional AI strategies (minus the US and Russia). In books of the week, Miguel Hernán and James Robins are working on a Causal Inference Book, and Michael Nielsen has provided a book on Neural Networks and Deep Learning. CW3 Jesse R. Crifasi provides a fictional peek into a combat scenario involving AI. And Samim Winiger has started a mini-documentary series, "LIFE," on the intersection of humans and machines.