
Search Results

Your search found 2,049 results.

ai with ai: The Shadow of What Is Going to Be (Part 1)
/our-media/podcasts/ai-with-ai/season-2/2-35
Andy and Dave discuss a scathing report on Scotland Yard’s facial recognition software, which researchers at the University of Essex found to have an 81% error rate (though the Met Police say it has an error rate of 0.1%); a worked example of how both figures can describe the same system follows this entry. In related news, Axon announced that it will ban the use of facial recognition systems on its devices; Axon supplies 47 of the 69 largest police agencies in the U.S. with body cameras and software. DARPA announces the Intent-Defined Adaptive Software (IDAS) program, an attempt to reduce the need for manual software modifications. NIST posts the first draft guideline for developing AI technical standards. Elon Musk says that Neuralink is almost ready for the first human volunteers; Neuralink uses ultra-fine threads that can be implanted into the brain to detect the activity of neurons. And the Bank of England announced that Alan Turing will appear on the new fifty-pound note. In research, Andy and Dave discuss Pluribus, the latest AI for multiplayer poker from CMU and Facebook AI, which won a 12-day poker marathon in 6-player no-limit Texas hold’em; the AI runs on two Intel processors and a “modest” 128 GB of memory during play.
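How can both error rates be true at once? They use different denominators: the researchers measured errors as a fraction of the alerts the system raised, while the Met measured them as a fraction of all faces scanned. A minimal sketch, with purely illustrative counts (hypothetical numbers chosen only to produce figures of roughly this magnitude, not the Essex study’s data):

```python
# Illustrative only: how one trial can yield both an ~81% and a ~0.1%-scale
# "error rate" depending on the denominator. All counts are hypothetical.

faces_scanned = 42_000   # hypothetical total faces the system processed
alerts_raised = 42       # hypothetical matches the system flagged
true_matches = 8         # hypothetical alerts confirmed correct on review

false_alerts = alerts_raised - true_matches   # 34

# Researchers' framing: of the alerts raised, how many were wrong?
false_discovery_rate = false_alerts / alerts_raised   # ~0.81

# Police framing: of all faces scanned, how many triggered a false alert?
per_face_error_rate = false_alerts / faces_scanned    # ~0.0008

print(f"errors per alert: {false_discovery_rate:.0%}")  # ~81%
print(f"errors per scan:  {per_face_error_rate:.2%}")   # ~0.08%
```

Neither number is wrong on its own terms; they simply answer different questions.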
ai with ai: The Fake That Launched 1,000 Clips (Part 2)
/our-media/podcasts/ai-with-ai/season-2/2-34b
More research, from Berkeley and the University of Southern California, creates a method to “protect” world leaders against deep fakes by identifying, among other things, 17 Facial Action Units (such as subtle movements of the eyebrows, cheeks, and nose during speech). And research from MIT can take an audio clip and convert it to a generic human face. A report from RAND looks at Ethics in Scientific Research. Deakin University and Harvard provide a survey of deep reinforcement learning in cyber security. Another survey, from Dublin University and Intel Labs, looks at Generative Adversarial Networks and their taxonomy. Vishal Maini and Samer Sabri provide Machine Learning for Humans. Andy recommends Ludwig von Bertalanffy’s General System Theory from 1968. Matt Turek takes a look at the history of media forensics. The House Homeland Security Subcommittee on Intelligence and Counterterrorism holds a hearing on AI and Counterterrorism. And the Computer Vision and Pattern Recognition 2019 conference begins to post its tutorials, workshops, and 80-page program guide.
ai with ai: The Fake That Launched 1,000 Clips (Part 1)
/our-media/podcasts/ai-with-ai/season-2/2-34
Andy and Dave discuss the update to the US National AI Research and Development Strategic Plan, which establishes 8 objectives for federally funded AI research. Meanwhile, the European Commission starts its pilot phase for ethics guidelines for trustworthy AI, with the first AI Alliance Assembly meeting in Brussels and the High-Level Expert Group on AI (AI HLEG). The Joint AI Center, in conjunction with CMU, CrowdAI, and DIU, plans to make available xBD (x-Building-Damage), an open-source labeled data set of satellite imagery of some of the largest natural disasters in the past decade; it will contain ~700k building annotations across over 5,000 km² of imagery from 15 countries. The JAIC also announced a partnership with Singapore’s Defence Science and Technology Agency to collaborate on AI in humanitarian assistance and disaster relief. A white paper by Pactera suggests that 85% of AI projects fail. A new DARPA program, Virtual Intelligence Processing (VIP), aims to explore “brain-inspired” methods for dealing with incomplete, sparse, and noisy data. Facebook releases AI Habitat, an open-source environment for training and testing AI agents. And NIST’s RFI on AI Standards receives nearly 100 responses. Researchers at Adobe Research and Berkeley use AI to detect facial image manipulations made by Photoshop’s “Face-Aware Liquify” feature; while humans were able to spot an altered face 53% of the time, the convolutional neural network tool achieved accuracy as high as 99%.
ai with ai: For Your AIs Only
/our-media/podcasts/ai-with-ai/season-2/2-33
Russia expert Sam Bendett joins Andy and Dave for a discussion and update on Russia’s latest developments and efforts in AI and autonomy. The group discusses a 30 May meeting in which Russian President Vladimir Putin outlined the national AI priorities; the Russian AI strategy, originally expected in June, is now expected in the June-to-October timeframe. They also discuss the growing AI infrastructure and the opening of AI centers across the country with a mindset similar to a “startup culture,” as Russian AI developers gain international recognition. The group touches on relations between Russia and China, particularly in the wake of the Huawei issues. The “Army-2019” military expo in June should also provide useful insights into Russian military development and employment of AI and related capabilities.
ai with ai: Who Manipulates the Manipulators? (Part 2)
/our-media/podcasts/ai-with-ai/season-2/2-32b
Researchers at the University of Tübingen demonstrate that virtual neurons spontaneously develop a “number sense” when assessing the number of visual items (such as dots) in a set. The Allen Institute for AI creates Grover, a neural network that can generate fake news but can also detect NN-generated fake news; Grover uses the same architecture as GPT-2 (the previous “unreleasable for the safety of humanity” algorithm), but these researchers highlight the importance of making such generators available. In related news, Witness Media Lab releases a report on the current state of deepfake tech; a CNN report looks at how Finland is fighting fake news, and a NY Times article examines the “weaponization” of AI-generated disinformation. A Mashable article from Marcus Gilmer looks at the state of software that attempts to identify deepfakes. The International Committee of the Red Cross releases a report on a “human-centered approach” to AI and machine learning in armed conflict. A paper from Springer-Verlag provides a history and references for the “neural-symbolic debate.” Hiroki Sayama at SUNY Binghamton makes available “Introduction to the Modeling and Analysis of Complex Systems.” The US-China Commission releases testimony from a day-long session with experts on three topics, including the US-China Competition in AI. The Allen Institute makes its brain atlases available for exploration online. The 36th International Conference on Machine Learning meets in Long Beach, CA, with over 6,000 participants. Meanwhile, CogX meets in King’s Cross, London. And former Secretary of Defense Ash Carter pens a “letter to a young Googler” on the morality of defending America.
ai with ai: Who Manipulates the Manipulators? (Part 1)
/our-media/podcasts/ai-with-ai/season-2/2-32
Andy and Dave discuss early thoughts from the House Intelligence Committee hearing on deep fakes, manipulated media, and AI; artists take a shot at Mark Zuckerberg to demonstrate the power of fake videos; the House Armed Services Committee doubles Joint AI funding; Google AI releases the Google Research Football Environment; a study examines the amount of CO2 released when training AI models; Microsoft provides an AI curriculum for government decision-makers; Microsoft also removes access to a database with 10 million “celebrity” images; and Rodney Brooks and Gary Marcus launch startup Robust.AI, which aims to build the first industrial-grade cognitive platform for robots. Research from CMU, Google AI, and Stanford “peeks into the future” by predicting the future activities and locations of people in videos.
ai with ai: 52 Views of HOListic Imagination
/our-media/podcasts/ai-with-ai/season-2/2-31
In news items, Andy and Dave discuss China’s call for international cooperation on a code of ethics for AI. The Organisation for Economic Co-operation and Development (OECD) unveils the first intergovernmental standards for AI policies, with support from 42 countries. The US Army has invited prototype designs for the Next-Generation Squad Weapon, which may include wind-sensing and even facial-recognition technology. DARPA’s Spectrum Collaboration Challenge (SC2) presents an essay in IEEE Spectrum, which describes the challenges of making the most of an increasingly crowded electromagnetic spectrum, including running contests for better spectrum management and using Colosseum as the testing ground. Google announces the ‘AI Workshop,’ which offers early access to AI capabilities and experiments. In research, Google DeepMind announces an AI that has achieved human-level performance in Quake III Arena Capture the Flag mode; among other things, human players rated the AI as “more collaborative than other humans” (though they had mixed reactions to having AIs as teammates). Google Research presents HOList, an environment for machine learning of higher-order theorem proving. Research from Oxford University creates a model for human-like machine thinking by mimicking the prefrontal cortex for language-guided imagination. A paper from Jeff Clune at Uber AI Labs suggests a different approach to Artificial General Intelligence, by means of AI-generating algorithms that learn how to produce AGI. MacroPolo produces a series of 6 charts on Chinese AI talent. CBInsights compiles the views of 52 “experts” on “How AI Will Go Out of Control.” Blum, Hopcroft, Kannan, and Microsoft release Foundations of Data Science; Hutter, Kotthoff, Vanschoren, and Springer-Verlag make Automated Machine Learning available. The Purdue Symposium on Ethics, Technology, and the Future of War and Security releases a video on the Ethical, Legal, and Social Implications of Autonomy and AI in Warfare. The University of Colorado Boulder creates an Index of Complex Networks (ICON). And Alexander Reben creates a repository of 1 million fake AI-generated faces.
ai with ai: We All Live in a Neuro Subroutine (Side B)
/our-media/podcasts/ai-with-ai/season-2/2-30b
Continuing in research topics, Andy and Dave discuss research from MIT that treats image-classification adversarial examples not as bugs but as features, using intentionally mislabeled pictures; the work provides evidence that adversarial vulnerability is caused by non-robust features and is not inherently tied to the standard training framework (a sketch of the generic adversarial-example mechanism follows this entry). The Bulletin of the Atomic Scientists releases The Global Competition for AI Dominance in its May 2019 issue. Isaac Godfrie provides a summary of “few-shot” learning papers that were presented at ICLR 2019. A research paper surveys the interface between machine learning and the physical sciences. A new survey from Alegion and Dimensional Research examines the data issues impacting AI/ML research (for example, 96% of companies surveyed said they ran into problems with data quality). Georgios Mastorakis examines issues that arise from taking a human-like approach to training algorithms. Mohri, Rostamizadeh, and Talwalkar release a graduate-level book on Foundations of Machine Learning through MIT Press. CollegeHumor produces “A Computer Co-Wrote this Sketch,” in which the characters appear to become aware of their situation. And finally, the Genetic and Evolutionary Computation Conference is scheduled for 13-17 July 2019 in Prague, Czech Republic.
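For readers unfamiliar with the adversarial examples discussed above, here is a minimal sketch of the general mechanism, assuming a toy linear classifier with made-up weights (this is not the MIT paper’s robust/non-robust feature construction, just the basic fast-gradient-sign idea such work builds on):

```python
# A toy demonstration of an adversarial perturbation: a small, targeted
# nudge to the input pushes a linear classifier's score toward the
# opposite class. Weights and input are random stand-ins for a real model.

import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)   # hypothetical trained weights ("the model")
x = rng.normal(size=64)   # hypothetical input ("the image")

def predict(v):
    """Probability of class 1 under a logistic model."""
    return 1.0 / (1.0 + np.exp(-w @ v))

# Fast-gradient-sign-style step: move every input dimension slightly in
# whichever direction shifts the score toward the other class. For this
# model the gradient with respect to the input is proportional to w.
eps = 0.25
direction = -np.sign(w) if predict(x) > 0.5 else np.sign(w)
x_adv = x + eps * direction

print(f"clean score:       {predict(x):.3f}")
print(f"adversarial score: {predict(x_adv):.3f}")  # pushed toward other class
```

Each dimension moves by at most eps, so the per-pixel change is small, but the effects accumulate across all 64 dimensions; that accumulation is what makes high-dimensional classifiers easy to fool.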
ai with ai: We All Live in a Neuro Subroutine (Side A)
/our-media/podcasts/ai-with-ai/season-2/2-30
Andy and Dave discuss a new IARPA program, Camera Network Research Data Collection, which intends to identify and track subjects across areas as large as six square miles via security camera footage of varying type and quality. DARPA announces the recipients of its Next-Generation Non-Surgical Neurotechnology (N3) program, which includes efforts to read from and write to the brain. The Joint Artificial Intelligence Center adds two new areas of focus: cybersecurity and robotic process automation. Roborder, a European project developing autonomous swarms of heterogeneous robots for border surveillance, will be running three pilot programs in Europe. Ford announced a team-up with Agility Robotics to launch a self-driving vehicle service by 2021, using the bipedal robot Digit to deliver packages to doorsteps. The Computing Community Consortium and the Association for the Advancement of AI have made a request for comments on a draft of a “20-Year Community Roadmap for AI Research in the US.” In research items, Facebook AI, UT Austin, and UC Berkeley announced research that uses “active observation completion” to demonstrate the emergence of look-around behaviors. And other research from UC Berkeley explores the benefits of self-driving vehicles using “social perception” of nearby drivers in order to gain additional information.
ai with ai: Elfnark’s Lottery Ticket
/our-media/podcasts/ai-with-ai/season-2/2-29
Andy and Dave take a look at the reintroduction of the "AI in Government Act," a bill that intends to get more AI technical experts into the US Government. San Francisco bans facial recognition software (but leaves the door open for the future), while Moscow announces plans to weave AI facial recognition into its urban surveillance net. Facebook opens up its data to academic researchers for analysis. DARPA announces the Air Combat Evolution (ACE) program, to automate air-to-air combat; DARPA also announces Teaching AI to Leverage Overlooked Residuals (TAILOR), to make soldiers fitter, happier, and more productive. And IARPA announces Trojans in AI (TrojAI), an effort to inspect AI systems for embedded Trojan attacks. In research, Andy and Dave discuss research from Frankle at MIT that proposes a "Lottery Ticket" hypothesis, which suggests only certain "winning combinations" are necessary for training neural networks, and that researchers have been training neural networks that are much larger than they need to be in order to increase the chances of including one of these winning combinations (a sketch of the pruning procedure follows this entry). Leon Bottou at Facebook AI proposes a method for using AI to identify causal relationships in data (which goes against the common modern practice of combining data sets into one giant dataset). And research from Cambridge, Georgia Tech, and the University of Pennsylvania demonstrates that Magic: the Gathering is officially the world’s most complicated game (and is Turing complete). In reports of the week, the Stockholm International Peace Research Institute releases the Impact of AI on Strategic Stability and Nuclear Risk. IKV Pax Christi releases The State of AI. Analytics Vidhya has compiled a list of 25 open datasets for deep learning. Benedek Rozemberczki has curated a list of decision tree research papers. The IEEE Spectrum releases a report on Accelerating Autonomous Vehicle Technology. The May 2019 issue of The Scientist contains 15 articles on how biology is tackling AI. David Kriesel provides A Brief Introduction to Neural Networks. COL Jasper Jeffers wins the 2019 Sci-Fi Writing Contest with AN41. ICLR 2019 provides video of four talks, including Frankle’s Lottery Ticket hypothesis and Bottou’s Causal Invariance. Melanie Mitchell gives a TED Talk on the collapse of AI and the possibility of an AI winter. And the National Academies-Royal Society Public Symposium will be meeting in DC on 24 May for an International Dialogue on AI.
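The "Lottery Ticket" procedure amounts to: train the full network, prune the smallest-magnitude weights, rewind the survivors to their original initialization, and retrain the sparse subnetwork. A minimal sketch, assuming a toy linear model on synthetic data and a single pruning round (Frankle’s experiments use deep networks and repeated rounds):

```python
# Toy illustration of lottery-ticket-style magnitude pruning: train,
# prune the smallest weights, rewind survivors to their initial values,
# and retrain the sparse "winning ticket". Data and model are synthetic.

import numpy as np

rng = np.random.default_rng(42)

# Synthetic binary classification: the label depends on the first 5 features.
X = rng.normal(size=(512, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(float)

def train(w_init, mask, steps=300, lr=0.1):
    """Gradient descent on logistic loss; pruned weights stay frozen at zero."""
    w = w_init * mask
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * (X.T @ (p - y) / len(y)) * mask
    return w

w0 = 0.1 * rng.normal(size=20)   # the original "lottery" initialization
mask = np.ones(20)

# One round of magnitude pruning: train, drop the smallest 50% of weights...
w_trained = train(w0, mask)
threshold = np.quantile(np.abs(w_trained[mask == 1]), 0.5)
mask = (np.abs(w_trained) > threshold).astype(float)

# ...then rewind the survivors to w0 and retrain the sparse subnetwork.
w_ticket = train(w0, mask)

acc = ((1.0 / (1.0 + np.exp(-X @ w_ticket)) > 0.5) == y).mean()
print(f"winning ticket keeps {int(mask.sum())}/20 weights, accuracy {acc:.2f}")
```

The hypothesis’s claim is that the pruned subnetwork trains well only when rewound to its original initialization w0; reinitializing it randomly tends to hurt, which is why the dense network’s extra size matters mainly for buying more lottery tickets.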