
Search Results

Your search for AI Ethics found 73 results.

Artificial Intelligence in Russia Issue 20
/reports/2021/02/artificial-intelligence-in-russia-issue-20
This report, the twentieth in a series of biweekly updates, is part of an effort by CNA to provide timely, accurate, and relevant information and analysis of the field of civilian and military artificial intelligence (AI) in Russia and, in particular, how Russia is applying AI to its military capabilities. It relies on Russian-language open source material.
14 More COVID-19 Resources that Use Artificial Intelligence
/our-media/indepth/2020/06/14-more-covid-19-resources-that-use-artificial-intelligence
Andy and Dave, co-hosts of AI with AI, CNA’s popular podcast on artificial intelligence, have compiled a second annotated list of AI developments and resources related to COVID-19.
Checklist Advances the Ethical Use of Artificial Intelligence
/our-media/press-releases/2022/01-24
CNA introduces tool to implement ethics policies for autonomous systems
Dimensions of Autonomous Decision Making
/reports/2021/12/dimensions-of-autonomous-decision-making
We identify the dimensions of autonomous decision-making: the potential risks one should consider before transferring decision-making to an intelligent autonomous system.
ai with ai: How to Train Your DrAIgon (for good, not for bad)
/our-media/podcasts/ai-with-ai/season-1/1-35
In recent news, Andy and Dave discuss a recent Brookings report on the view of AI and robots based on internet search data; a Chatham House report on AI anticipates disruption; Microsoft computes the future with its vision and principles on AI; the first major AI patent filings from DeepMind are revealed; biomimicry returns, with IBM using "analog" synapses to improve neural net implementation, and Stanford University researchers developing an artificial sensory nervous system; and Berkeley DeepDrive provides the largest self-driving car dataset for free public download. Next, the topic of "hard exploration games with sparse rewards" returns, with a Deep Curiosity Search approach from the University of Wyoming, where the AI gets more freedom and reward from exploring ("curiosity") than from performing tasks as dictated by the researchers. From Cognition Expo 18, work from Martinez-Plumed attempts to "Forecast AI," but largely highlights the challenges in making comparisons due to neglected or unreported aspects of developments, such as the data, human oversight, computing cycles, and much more. From the Google AI Blog, researchers improve deep learning performance by finding and describing the transformation policies of the data and using that information to increase the amount and diversity of the training dataset. Then, Andy and Dave discuss attempts to use drone surveillance to identify violent individuals (for good reasons only, not for bad ones). And in a more sporty application, "AI enthusiast" Chintan Trivedi describes his efforts to train a bot to play a soccer video game by observing his playing. Finally, Andy recommends an NSF workshop report, the book Artificial Intelligence: Foundations of Computational Agents, the novel Permutation City, and over 100 video hours of the CogX 2018 conference.
ai with ai: GPT Is My CoPilot
/our-media/podcasts/ai-with-ai/season-4/4-36
Andy and Dave discuss the latest in AI news, including a report that the Israel Defense Forces used a swarm of small drones in mid-May in Gaza to locate, identify, and attack Hamas militants, using Thor, a 9-kilogram quadrotor drone. A paper in the Journal of the American Medical Association examines an early warning system for sepsis and finds that it misses most cases (67%) and frequently issues false alarms (findings the developer contests). A new bill, the Consumer Safety Technology Act, directs the US Consumer Product Safety Commission to run a pilot program to use AI to help in safety inspections. A survey from FICO on The State of Responsible AI (2021) shows, among other things, a disinterest in the ethical and responsible use of AI among business leaders (with 65% of companies saying they can't explain how specific AI model predictions are made, and only 22% of companies having an AI ethics board to consider questions on AI ethics and fairness). In a similar vein, a survey from the Pew Research Center and Elon University's Imagining the Internet Center found that 68% of respondents (from across 602 leaders in the AI field) believe that AI ethical principles will NOT be employed by most AI systems within the next decade; the survey includes a summary of the respondents' worries and hopes, as well as some additional commentary. GitHub partners with OpenAI to launch CoPilot, a "Programming Partner" that uses contextual cues to suggest new code. Researchers from Stanford University, UC San Diego, and MIT present Physion, a visual and physical prediction benchmark that measures predictions about commonplace real-world physical events (such as objects colliding, dropping, rolling, or toppling like dominoes). CSET releases a report on Machine Learning and Cybersecurity: Hype and Reality, finding that it is unlikely that machine learning will fundamentally transform cyber-defense.
Bengio, LeCun, and Hinton join together to pen a white paper on the role of deep learning in AI, not surprisingly eschewing the need for symbolic systems. Aston Zhang, Zachary C. Lipton, and Alex J. Smola release the latest version of Dive into Deep Learning, now over 1,000 pages and available only online.
ai with ai: The Fake That Launched 1,000 Clips (Part 1)
/our-media/podcasts/ai-with-ai/season-2/2-34
Andy and Dave discuss the update to the US National AI Research and Development Strategic Plan, which establishes 8 objectives for federally funded AI research. Meanwhile, the European Commission starts its pilot phase for ethics guidelines for trustworthy AI, with the first AI Alliance Assembly meeting in Brussels and the High-Level Expert Group on AI (AI HLEG). The Joint AI Center, in conjunction with CMU, CrowdAI, and DIU, plans to make available xBD (x-Building-Damage), an open-source labeled dataset of satellite imagery of some of the largest natural disasters in the past decade; it will contain ~700k building annotations across over 5,000 km^2 of imagery from 15 countries. The JAIC also announced a partnership with Singapore's Defence Science and Technology Agency to collaborate on AI in humanitarian assistance and disaster relief. A white paper by Pactera suggests that 85% of AI projects fail. A new DARPA program, Virtual Intelligence Processing (VIP), aims to explore "brain-inspired" methods for dealing with incomplete, sparse, and noisy data. Facebook releases AI Habitat, an open-source environment for training and testing AI agents. And NIST's RFI on AI Standards receives nearly 100 responses. Researchers at Adobe Research and Berkeley use AI to detect facial image manipulations that were done by Photoshop's "Face-Aware Liquify" feature; while humans were able to spot an altered face 53% of the time, the convolutional neural network tool achieved results as high as 99%.
ai with ai: Game of Drones - AI Winter Is Coming
/our-media/podcasts/ai-with-ai/season-1/1-34
In breaking news, Andy and Dave discuss Google's decision not to renew the contract for Project Maven, as well as its AI Principles; the Royal Australian Air Force holds a biennial Air Power Conference with a theme of AI and cyber; the Defense Innovation Unit Experimental (DIUx) releases its 2017 annual report; China holds a Defense Conference on AI in cybersecurity; and NVidia's new Xavier chip packs $10k worth of power into a $1,299 box. Next, Andy and Dave discuss a benevolent application of adversarial attack methods: a "privacy filter" for photos designed to stop AI face detection (reducing detection from nearly 100 percent to 0.5 percent). MIT used AI in the development of nanoparticles, training neural nets to "learn" how a nanoparticle's structure affects its behavior. Then the remaining topics dip deep into the philosophical realm, starting with a discussion on empiricism and the limits of gradient descent, and how philosophical concepts of empiricist induction compare with critical rationalism. Next, the topic of a potential AI Winter continues to percolate with a viral blog post from Piekniewski, leading into a paper from Berkeley/MIT that discovers a 4-15% reduction in accuracy for CIFAR-10 classifiers on a new set of similar training images (casting doubt on the robustness of these systems). Andy shares a possibly groundbreaking paper on "graph networks" that provides a new conceptual framework for thinking about machine learning. And finally, Andy and Dave close with some media selections, including Blood Music by Greg Bear and Swarm by Frank Schätzing.
ai with ai: the sentience of the lamdas
/our-media/podcasts/ai-with-ai/season-5/5-18
Andy and Dave discuss the latest in AI news and research, starting with the Department of Defense release of its Responsible AI Strategy. In the UK, the Ministry of Defence publishes its Defence AI Strategy. The Federal Trade Commission warns policymakers about relying on AI to combat online problems and instead urges them to develop legal frameworks to ensure AI tools do not cause additional harm. YouTuber Yannic Kilcher trains an AI on 4chan's "infamously toxic" Politically Incorrect board, creating a predictably toxic bot, GPT-4chan; he then uses the bot to generate 15,000 posts on the board, quickly drawing condemnation from the academic community. Google suspends and then fires an engineer who claimed that one of its chatbots, LaMDA, had achieved sentience; former Google employees Gebru and Mitchell write an opinion piece saying they had warned this would happen. For the Fun Site of the Week, a mini version of DALL-E comes to Hugging Face. And finally, IBM researcher Kush Varshney joins Andy and Dave to discuss his book, Trustworthy Machine Learning, which provides AI researchers with practical tools and concepts for developing machine learning systems.
ai with ai: Face/Off
/our-media/podcasts/ai-with-ai/season-5/5-3
Andy and Dave discuss the latest in AI news and research, including the Defense Innovation Unit releasing Responsible AI Guidelines in Practice, which seeks to ensure tech contractors adhere to the Department of Defense's existing ethical principles for AI [0:53]. "Meta" (the Facebook re-brand) announces that it will end its use of facial recognition software and delete data on more than a billion people, though it will retain the technology for other products in its metaverse [3:12]. Australia's information and privacy commissioners order Clearview AI to stop collecting facial biometrics from Australian citizens and to destroy all existing data [5:16]. The US Marine Corps releases a Talent Management 2030 report, which describes the need for more cognitively mature Marines and seeks to "leverage the power of AI" and to be "at the vanguard of service efforts to operationalize AI" [7:39]. DOD releases its 2021 Report on Military and Security Developments Involving the People's Republic of China, which describes China's use of AI technology in influence operations, the digital silk road, military capabilities, and more [10:46]. A competition using unrestricted adversarial examples at the 2021 Conference on Computer Vision and Pattern Recognition includes as co-authors several members of the Army Engineering University of the People's Liberation Army [11:43]. Research from Okinawa and Australia demonstrates that deep reinforcement learning can produce accurate quantum control, even with noisy measurements, using a small particle moving in a double-well [14:31]. MIT Press makes available a nearly 700-page book, Algorithms for Decision Making, organized around four sources of uncertainty (outcome, model, state, and interaction) [18:01]. And Dr. Amanda Kerrigan and Kevin Pollpeter join Andy and Dave to discuss their latest research into what China is doing with AI technology, including a biweekly newsletter on the topic and a preliminary analysis of China's view of Intelligent Warfare [20:06].