Search Results
Your search for AI Ethics found 72 results.
- ai with ai: Just the Tip of the Skyborg
- /our-media/podcasts/ai-with-ai/season-4/4-31
- Andy and Dave discuss the latest in AI news, including the first flight of a drone equipped with the Air Force’s Skyborg autonomy core system. The UK Office for AI publishes a new set of guidance on automated decision-making in government, with Ethics, Transparency and Accountability Framework for Automated Decision-Making. The International Red Cross calls for new international rules on how governments use autonomous weapons. Senators introduce two AI bills to improve the US’s AI readiness, with the AI Capabilities and Transparency Act and the AI for the Military Act. Defense Secretary Lloyd Austin lays out his vision for the Department of Defense in his first major speech, stressing the importance of emerging technology and rapid increases in computing power. A report from the Allen Institute for AI shows that China is closing in on the US in AI research, expecting to become the leader in the top 1% of most-cited papers in 2023. In research, Ziming Liu and Max Tegmark introduce AI Poincaré, an algorithm that auto-discovers conserved quantities using trajectory data from unknown dynamical systems. Researchers enable a paralyzed man to “text with his thoughts,” reaching 16 words per minute. The Stimson Center publishes A New Agenda for US Drone Policy and the Use of Lethal Force. The Onlife Manifesto: Being Human in a Hyperconnected Era, first published in 2015, is available for open access. And Cade Metz publishes Genius Makers, with stories of the pioneers behind AI.
- ai with ai: The GPT Blob
- /our-media/podcasts/ai-with-ai/season-3/3-32
- In this week's COVID-related AI news, Andy and Dave discuss "SciFact" from the Allen Institute for AI, which built on the neural network VeriSci and can link to supporting or refuting materials for claims about COVID-19. Berkeley Labs releases COVIDScholar, which uses natural language processing text-mining to search over 60,000 papers and draw insights and connections. Berkeley Labs also announces plans to use machine learning to estimate COVID-19's seasonal cycle. In non-COVID AI news, Google publishes a response to the European Commission's white paper on AI, cautioning that their definition of AI is far too broad and risks stifling innovation. CSET maps where AI talent is produced in the U.S., where it gets concentrated, and where AI equity funding goes. In research, OpenAI releases GPT-3, a 175B parameter NLP model, and shows that massively scaling up the language model greatly improves task-agnostic few-shot performance. A report from the European Parliament's Panel for the Future of Science and Technology shows the ethics initiatives of nations around the globe. A review paper in Science suggests that progress in AI has stalled (perhaps as much as 10 years) in some fields. Abbass, Scholz, and Reid publish Foundations of Trusted Autonomy, a collection of essays and reports on trustworthiness and autonomy. And in the video of the week, CSIS sponsored a conversation with (now retired) JAIC Director, Lt Gen Shanahan.
- ai with ai: Beauty Is in the AI of the Perceiver
- /our-media/podcasts/ai-with-ai/season-4/4-40
- Andy and Dave discuss the latest in AI news, including an upgraded version of OpenAI’s CoPilot, called Codex, which can not only complete code but also create it (based on natural language inputs from its users). The National Science Foundation is providing $220 million in grants to 11 new National AI Research Institutes (including two fully funded by the NSF). A new DARPA program seeks to explore how AI systems can share their experiences with each other, in Shared-Experience Lifelong Learning (ShELL). The Senate Committee on Homeland Security and Governmental Affairs introduces two AI-related bills: the AI Training Act (to establish a training program to educate the federal acquisition workforce), and the Deepfake Task Force Act (to task DHS to produce a coordinated plan on how a “digital content provenance” standard might assist with decreasing the spread of deepfakes). And the Inspectors General of the NSA and DoD partner to conduct a joint evaluation of NSA’s integration of AI into signals intelligence efforts. In research, DeepMind creates the Perceiver IO architecture, which works across a wide variety of input and output spaces, challenging the idea that different kinds of data need different neural network architectures. DeepMind also publishes PonderNet, which learns to adapt the amount of computation based on the complexity of the problem (rather than the size of the inputs). Research from MIT uses the corpus of US patents to predict the rate of technological improvements for all technologies. The European Parliamentary Research Service publishes a report on Innovative Technologies Shaping the 2040 Battlefield. Quanta Magazine publishes an interview with Melanie Mitchell, which includes a deeper discussion on her research in analogies. And Springer-Verlag makes available for free An Introduction to Ethics in Robotics and AI (by Christoph Bartneck, Christoph Lütge, Alan Wagner, and Sean Welsh).
- ai with ai: Some Pigsel
- /our-media/podcasts/ai-with-ai/season-3/3-44
- In COVID-related AI news, Andy and Dave discuss an effort from Google and Harvard to provide county-level forecasts on COVID-19 for hospitals and first responders. The National Library of Medicine, National Center of Biotechnology Information, and NIH provide COVID-19 literature analysis with interesting data analytic and visualization tools. In regular AI news, Elon Musk demonstrates the latest iteration of Neuralink, complete with pig implantees. The UK attempted a prediction system for Most Serious Violence, but found that it had serious flaws. Amazon awards a $500k “Alexa Prize” to Emory University students for their Emora chatbot, which scored a 3.81 average rating across categories. The Bipartisan Policy Center releases two reports on AI. And Russell Kirsch, inventor of the pixel and other groundbreaking technology, passed away on 11 August at the age of 91. In research, three papers tackle the problem of reconstructing 3D (in some cases, 4D) models of locations based on tourist photos taken from different vantage points and at different times: the NeRF (Neural Radiance Fields) model and the Plenoptic model. The Human Rights Watch releases a report summarizing Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control. Springer-Verlag releases yet another freebie with An Introduction to Ethics in Robotics and AI. And the Conference on Computer Vision & Pattern Recognition has posted the papers and videos from its June 2020 session.
- ai with ai: Chasing AIMe
- /our-media/podcasts/ai-with-ai/season-4/4-44
- Andy and Dave discuss the latest in AI news and research, including: [1:28] Researchers from several universities in biomedicine establish the AIMe registry, a community-driven reporting platform for providing information and standards of AI research in biomedicine. [4:15] Reuters publishes a report with insight into examples at Google, Microsoft, and IBM, where ethics reviews have curbed or canceled projects. [8:11] Researchers at the University of Tübingen create an AI method for significantly accelerating super-resolution microscopy, which makes heavy use of synthetic training data. [13:21] The US Navy establishes Task Force 59 in the Middle East, which will focus on the incorporation of unmanned and AI systems into naval operations. [15:44] The Department of Commerce establishes the National AI Advisory Committee, in accordance with the National AI Initiative Act of 2020. [19:02] Jess Whittlestone and Jack Clark publish a white paper on Why and How Governments Should Monitor AI Development, with predictions into the types of problems that will occur with inaction. [19:02] The Center for Security and Emerging Technology publishes a series of data-snapshots related to AI research, from over 105 million publications. [23:53] In research, Google Research, Brain Team, and University of Montreal take a broad look at deep reinforcement learning research and find discrepancies between conclusions drawn from point estimates (fewer runs, due to high computational costs) versus more thorough statistical analysis, calling for a change in how to evaluate performance in deep RL. [30:13] Quebec AI Institute publishes a survey of post-hoc interpretability on neural natural language processing. [31:39] MIT Technology Review dedicates its Sep/Oct 2021 issue to The Mind, with articles all about the brain.
[32:05] Katy Borner publishes Atlas of Forecasts: Modeling and Mapping Desirable Futures, showing how models, maps, and forecasts inform decision-making in education, science, technology, and policy-making. [33:16] DeepMind, in collaboration with University College London, offers a comprehensive introduction to modern reinforcement learning, with 13 lectures (~1.5 hours each) on the topic.
- AI Safety Navy Action Plan
- /reports/2019/10/ai-safety-navy-action-plan
- In light of the Navy’s stated commitment to using AI, and given the strategic importance of AI safety, we provide the Navy with a first step towards a comprehensive approach to safety. We use a risk management approach to frame our treatment of AI safety risks: identifying risks, analyzing them, and suggesting concrete actions for the Navy to begin addressing them. The first type of safety risk, being technical in nature, will require a collaborative effort with industry and academia to address. The second type of risk, associated with specific military missions, can be addressed through a combination of military experimentation, research, and concept development to find ways to promote effectiveness along with safety. For each type of risk, we use examples to show concrete ways of managing and reducing the risk of AI applications. We then discuss institutional changes that would help promote safety in the Navy’s AI efforts.
- in the DOD AI strategy, with one of the four lines of effort being AI ethics and safety. That said, in current DOD AI efforts and in public discourse about DOD AI applications, safety can appear to be the forgotten stepchild of ethics. Ethics as a term is used to cover the broad set of concerns regarding military use of AI. The DOD’s Defense Innovation Board is developing a set of ethical ... be consistent with American values and principles. While there is a strong focus on ethics, issues of AI safety—including negative operational outcomes such as civilian casualties, fratricide
- ai and autonomy in russia: Issue 43, August 8, 2022
- /our-media/newsletters/ai-and-autonomy-in-russia/issue-43
- and regulation. He began his position as acting head of the organization on July 20, 2022. FIRST REGIONAL CODE OF ETHICS FOR AI WAS SIGNED IN NIZHNY NOVGOROD The first signing of the Code of Ethics ... it a symbolic location for the first signing of the Code. Both Deputy Prime Minister of Russia Chernyshenko and Nizhny Novgorod Governor Nikitin were present. There is also a plan to sign a code of AI ethics in Khanty-Mansiysk, Innopolis, and in the Far East. In total, 20 federal departments have confirmed the intention to sign a code of ethics on AI. As discussed in issue 41 of AI in Russia , the Code
- ai with ai: Oura-boros
- /our-media/podcasts/ai-with-ai/season-3/3-33
- In COVID-related AI news, Andy and Dave discuss an announcement from WVU Rockefeller Neuroscience Institute, WVU Medicine, and Oura Health, with the ability to predict COVID-19 related symptoms up to three days in advance via biometric monitoring. Japan's M3 is teaming with Alibaba's AI Tech to provide CT-scan capability to hospitals that can identify COVID-related pneumonia. The Pentagon taps into the virus-relief CARES Act to use AI for virus cure and vaccine efforts. Rockefeller announces efforts to use GPT-2 to automatically summarize COVID-19 medical research articles, but the results aren’t that great. In regular AI news, IBM announces it is no longer offering general-purpose facial recognition or analysis software, due to concerns about the technology being used to promote racism. And in a related announcement, Amazon places a one-year moratorium on allowing law enforcement to use its Rekognition facial recognition platform. USSOCOM has posted an RFI for potential contractors to provide its Global Analytics Platform, a $300-600M contract that would follow its previous eMAPS contract. And NASA launches its Entrepreneurs Challenge, seeking new ideas for space exploration. In research, from the University of Pennsylvania, UC Berkeley, Google Brain, University of Toronto, Carnegie Mellon University, and Facebook AI, comes a different approach to defining intrinsic motivation for taskless problems, wherein agents seek out future inputs that are expected to be novel. The report of the week comes from the Stanley Center for Peace and Security, with a look at The Militarization of AI. Researchers at Beijing Academy and Cambridge University come together to pen a white paper calling for "cross-cultural cooperation" on AI ethics and governance. Efron, Hastie, and Cambridge University Press provide Computer Age Statistical Inference for free. And DeepMind and the UCL Centre for AI are producing a Deep Learning Lecture Series.
- china ai and autonomy report: Issue 5, December 16, 2021
- /our-media/newsletters/china-ai-and-autonomy-report/issue-5
- The China AI and Autonomy Report, issue 5, is a biweekly newsletter published by CNA, on artificial intelligence and autonomy in China.
- and regulations in relation to our business in all material respects in the jurisdictions where we conduct business. Our AI Ethics Council, comprising both internal and external experts, ensures that our business strictly adheres to recognized ethical principles and standards. We have developed a Code of Ethics for AI Sustainable Development, and we collaborate closely with third-party institutions ...
- china ai and autonomy report: Issue 1, November 2, 2021
- /our-media/newsletters/china-ai-and-autonomy-report/issue-1
- The China AI and Autonomy Report, issue 1, is a biweekly newsletter published by CNA, on artificial intelligence and autonomy in China.
- to support the effort. 21 AI POLICY AND GOVERNANCE On September 26, the PRC published its first guidelines on AI ethics, which emphasize user rights and data control (see original document ... Intelligence Ethics Code’ Released” (《新一代人工智能伦理规范》发布), Ministry of Science and Technology, Sept. 26, 2021, http://www.most.gov.cn/kjbgz/202109/t20210926_177063.html . 23 Xinmei Shen, “Chinese AI ...