Search Results
Your search returned 2049 results.
- ai with ai: Eleventh Voyage into Morphospace
- /our-media/podcasts/ai-with-ai/season-2/2-9
- The Joint Artificial Intelligence Center is up and running, and Andy and Dave discuss some of the newly revealed details. And the rebranded NeurIPS (originally NIPS), the largest machine learning conference of the year, holds its 32nd annual conference in Montreal, Canada, with a keynote discussion on “What Bodies Think About” by Michael Levin. And a group of graduate students has created a community-driven database to provide links to tasks, data, metrics, and results on the “state of the art” for AI. In other news, one of the “best paper” awards at NeurIPS goes to Neural Ordinary Differential Equations, research from the University of Toronto that replaces the discrete nodes and connections of typical neural networks with one continuous computation of differential equations. DeepMind publishes its paper on AlphaZero, which details last year’s announcements on the ability of the neural network to play chess, shogi, and Go “from scratch.” And AlphaFold from DeepMind brings machine learning methods to a protein-folding competition. In reports of the week, the AI Now Institute at New York University releases its third annual report on understanding the social implications of AI. With a blend of technology and philosophy, Arsiwalla and co-workers break up the complex “morphospace” of consciousness into three categories (computational, autonomous, and social) and map various examples into this space. For interactive fun generating images with a GAN, check out “Ganbreeder,” though maybe not before going to sleep. In videos of the week, “Earworm” tells the tale of an AI that deleted a century; and CIMON, the ISS robot, interacts with the space crew. And finally, Russia24 joins a long history of people dressing up and pretending to be robots.
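The Neural ODE idea mentioned above can be sketched in a few lines: instead of a stack of discrete layers h(t+1) = h(t) + f(h(t)), the hidden state evolves continuously under a learned differential equation dh/dt = f(h, t), which is then numerically integrated. A minimal illustration follows; the dynamics function and random weights here are toy placeholders (not the paper's model), and the actual paper uses adaptive ODE solvers with adjoint-based gradients rather than the fixed-step Euler method shown.

```python
import numpy as np

def dynamics(h, t, W, b):
    """Learned dynamics f(h, t; theta) -- here just a single tanh layer."""
    return np.tanh(W @ h + b)

def odeint_euler(h0, t0, t1, steps, W, b):
    """Fixed-step Euler integration of the hidden state from t0 to t1.
    (Chen et al. use adaptive solvers; Euler keeps the sketch simple.)"""
    h, t = h0.copy(), t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * dynamics(h, t, W, b)  # h <- h + dt * f(h, t)
        t += dt
    return h

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 0.1  # toy, untrained parameters
b = np.zeros(4)
h0 = rng.standard_normal(4)

# "Depth" is now integration time: evolve the 4-dim hidden state over t in [0, 1].
h1 = odeint_euler(h0, 0.0, 1.0, steps=100, W=W, b=b)
print(h1.shape)  # (4,)
```

In training, the solver's output would feed a loss, and gradients with respect to W and b would flow back through (or around, via the adjoint method) the integration.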
- ai with ai: AI with AI: Montezuma’s Regulation
- /our-media/podcasts/ai-with-ai/season-2/2-8
- This week, Andy and Dave discuss the US Department of Commerce’s announcement that it will consider regulating AI as an export; counter to that idea, Amazon makes freely available 45+ hours of training materials on machine learning, with tailored learning paths; Oren Etzioni proposes ideas for broader regulation of AI research that attempt to balance the benefits with the potential harms; DARPA tests its CODE program for autonomous drone operations in the presence of GPS and communications jamming; a Chinese researcher announces the use of CRISPR to produce the first gene-edited babies; and the 2018 ACM Gordon Bell Prize goes to Lawrence Berkeley National Lab for achieving the first exascale (10^18) application, running on over 27,000 NVIDIA GPUs. Uber AI announces advances in the exploration and curiosity of an algorithm that help it “win” Montezuma’s Revenge. Research from Facebook AI suggests that pre-training convolutional neural nets may provide fewer benefits over random initialization than previously thought. Google Brain examines how well ImageNet architectures transfer to other tasks. A paper from INDOPACOM describes the exploitation of big data for special operations forces. Yuxi Li publishes a technical paper on deep reinforcement learning, and a recent paper explores self-organized criticality as a fundamental property of neural systems. Christopher Bishop’s Pattern Recognition and Machine Learning is available online, and Architects of Intelligence provides one-on-one conversations with 23 AI researchers. Maxim Pozdorovkin releases “The Truth About Killer Robots” on HBO, and finally, a Financial Times article over-hypes (anti-hypes?) a questionable graph on Chinese AI investments.
- ai with ai: AI with AI: It Can Only Be Attributable to Human Error
- /our-media/podcasts/ai-with-ai/season-2/2-7
- In the latest news, Andy and Dave discuss OpenAI releasing “Spinning Up in Deep RL,” an online educational resource; Google AI and the New York Times team up to digitize over 5 million photos and find “untold stories;” China is recruiting its brightest children to develop AI “killer bots;” China unveils the world’s first AI news anchor; and Douglas Rain, the voice of HAL 9000, has died at age 90. In research topics, Andy and Dave discuss research from MIT’s Tegmark and Wu that attempts to improve unsupervised machine learning by using a framework that more closely mirrors scientific thought and process. Albrecht and Stone examine the issue of autonomous agents modeling other agents, which leads to an interesting list of open problems for future research. Research from Stanford makes an empirical examination of bias and generalization in deep generative models, and Andy notes striking similarities to previously reported experiments in cognitive psychology. Other research surveys data collection for machine learning, from the perspective of the data. In blog posts of the week, the Mad Scientist Initiative reveals the results from a recent competition, which suggest themes of the impacts of AI on the future battlefield; and Piekniewski follows up his May 2018 post “Is an AI Winter On Its Way?” by reviewing cracks appearing in the AI façade, with particular focus on self-driving vehicles. And Melanie Mitchell provides some insight about AI hitting the barrier of meaning. CSIS publishes a report on the Importance of the AI Ecosystem, and another paper draws on the social sciences to offer insights into AI.
Finally, MIT Press has updated one of the major sources on reinforcement learning with a second edition; AI Superpowers examines the global push toward AI; The Eye of War examines how perceptual technologies have shaped the history of war; SparkCognition publishes HyperWar, a collection of essays from leaders in defense and emerging technology; Major Voke’s entire presentation on AI for C2 of Airpower is now available; and the Bionic Bug Podcast has an interview with CNA’s own Sam Bendett to talk AI and robotics.
- ai with ai: AI with AI: It’s Neurons All the Way Down
- /our-media/podcasts/ai-with-ai/season-2/2-6
- Andy and Dave discuss research from Hasani and colleagues that uses a natural method for growing a neural network, which they use to demonstrate that a 12-neuron network can be trained to steer and park a rover robot at a given spot. Jeff Hawkins and co-workers describe a new theory of intelligence, positing that every part of the human neocortex learns complete models of objects and concepts, resulting in a "thousand brains theory of intelligence." The UK publishes a 2,000+ page report on the state of the AI industry in the UK. A technical paper asks whether multiagent deep reinforcement learning is the answer or the question. The books of the week include Sejnowski’s The Deep Learning Revolution and Gerrish’s How Smart Machines Think. And the videos of the week include the Deep Learning Summer School series and the Reinforcement Learning Summer School series.
- ai with ai: AI with AI: But Is It Art(ificial)?
- /our-media/podcasts/ai-with-ai/season-2/2-5
- In the latest news, Andy and Dave discuss Microsoft’s announcement that it will sell artificial intelligence and other advanced technology to the Pentagon; Google is giving $25M to projects that use artificial intelligence for humanitarian purposes; Stanford announces the Human-Centered AI initiative; AdaNet offers fast and flexible AutoML with “learning guarantees;” and a “human brain” supercomputer (using neuromorphic computing) with 1 million processors is switched on for the first time. In other stories, Andy and Dave discuss the AI-generated portrait that sold at a Christie’s auction for $432,500. The MIT Media Lab announces the results of its “Moral Machine” experiment, which asked people around the globe to choose how a self-driving vehicle should behave in different moral dilemmas. And Google AI describes its “fluid annotation” method, an exploratory machine learning-powered interface for faster image annotation.
- ai with ai: AI with AI: The Society of Mind Your Step
- /our-media/podcasts/ai-with-ai/season-2/2-4
- Deep generative models can generate “spurious” samples (i.e., errors). Researchers from Université Paris-Saclay and PSL Research University explore a basic question: “Is it possible to get rid of all spurious samples [in deep generative models] without sacrificing coverage of a model?” Their research suggests a “Heisenberg uncertainty”-like tradeoff between full coverage and spurious objects. DeepMind announces large-scale GAN training for high-fidelity natural image synthesis. And Andy discusses Topaz’s “AI Gigapixel,” an AI-driven software capability that intelligently adds information to photos to increase their resolution/size. In the paper of the week, researchers flip the Turing Test and ask humans what one word they would use to convince a human judge that they’re alive; the results are underwhelming. On a related note, Andy recalls Brian Christian’s achievement of being The Most Human Human. For books of the week, the UK’s Development, Concepts and Doctrine Centre publishes the 6th edition of Global Strategic Trends; papers from the 3rd Conference on the Philosophy and Theory of AI are available in a single publication; and Minsky’s The Society of Mind gets a free hyperlinked online version (with the classic illustrations). In the video of the week, the Center for Technology Innovation asks, “Who should answer the ethical questions surrounding AI?” And in the “silliness of the week,” a robot appears at a UK parliamentary meeting and “talks” to MPs about the future of AI in the classroom.
- ai with ai: AI with AI: Common Dents
- /our-media/podcasts/ai-with-ai/season-2/2-3
- Andy and Dave discuss the latest corporate buzz on the Department of Defense’s JEDI contract, in which Microsoft employees publish an open letter accusing the company of straying from its AI principles; a new DARPA program seeks to codify humans’ basic common sense through computational models and repositories; MIT establishes the Stephen A. Schwarzman College of Computing, a $1B initiative and the single largest such investment by an American academic institution; MIT also announces an Autonomous Vehicle Technology study, a data-driven effort for “safe and enjoyable” human-AI interaction in driving; Wired takes a look at initial data on accidents involving self-driving vehicles; and researchers (at least 23!) publish a complete electron microscopy volume of the brain of the fruit fly. In deeper topics, Andy and Dave discuss research from the University of Louisville that shows the failure of neural networks to understand optical illusions. Researchers from UPenn, ARL, and NYU demonstrate a drone that can be controlled by your eyes. Stocco and colleagues demonstrate BrainNet, a “social network” that allows three people to transmit “thoughts” to each other. And researchers at Ecole Centrale de Lyon have created a new framework that may allow robots to autonomously optimize their own hyper-parameters, about which Dave tries to look on the bright side.
- ai with ai: AI with AI: Bots Without Ethics - Safety Dance
- /our-media/podcasts/ai-with-ai/season-2/2-2
- Andy and Dave focus on a variety of big news items: Google bows out of the bidding for the Pentagon’s “JEDI” cloud contract, valued at $10 billion; the Government Accountability Office releases a 50-page report on the poor state of the cybersecurity of U.S. weapons systems; “The Big Hack” makes big news, with Bloomberg reporting that China inserted a tiny chip on hardware in order to infiltrate U.S. networks; the U.S. Department of Transportation looks to rewrite safety rules in order to accommodate fully driverless vehicles on public roads; two leaders in collaborative robots (Rethink and Jibo) close their doors; and DeepMind announces efforts to discuss “technical AI safety,” including the areas of specification (true intentions), robustness (safety upon perturbation), and assurance (understanding and control). The latter topic launches further discussion into ethics-related efforts for AI, including the UK Machine Intelligence Garage Ethics Committee; a paper on the motivations and risks of machine ethics; and research from North Carolina State University showing that the Association for Computing Machinery’s code of ethics does not appear to affect the decisions made by software developers. All the excitement somehow causes Dave to invoke Jean Valjean when he means to say Javert. C’est la vie! Finally, Andy describes a couple of motherlodes of papers; Biostorm by Anthony DeCapite makes the story of the week; ZDNet ranks 36 of the best movies on AI; an open-access book on AutoML is in preparation; and Dave goes fanboy over the Automata web series from Penny Arcade.
- ai with ai: AI with AI: Quickly Followed by DARPA’s Counter-Balrog Challenge
- /our-media/podcasts/ai-with-ai/season-2/2-1
- Welcome to Version 2.0 of AI with AI! Dave starts off by trying to explain the weird podcast titles, and he plugs Andy’s (@ai_ilachinski) and his own (@crypticnarwhal) Twitter accounts. Andy and Dave then get down to business, discussing Britain’s “successful” trials of using AI (“SAPIENT”) in urban battlefield scanning to identify enemy movements; the IEEE launches an ethics certification program for autonomous and intelligent systems; the U.S. Department of Energy invests $218M in Quantum Information Science; and DARPA announces the Subterranean Challenge, for technologies to augment underground operations, wherein Dave makes a dire prediction of Tolkien proportions! Andy and Dave then delve greedily and deeply into a series of topics on counter-AI. They start by discussing Dedrone, which has developed a capability to detect and track swarms (of robots/drones). Researchers in Korea use an AI-enabled drone to herd flocks of birds (diverting them from designated airspace). Researchers at the University at Albany, with GE, demonstrate the ability to attack object detectors (Faster Regional Convolutional Neural Networks) using imperceptible patches on the background; and researchers at the Georgia Institute of Technology, with Intel, announce ShapeShifter, a targeted physical attack on the Faster R-CNN object detectors found in “state-of-the-art” systems (such as the current generation of self-driving vehicles).
On the other side, Luca de Alfaro at the University of California, Santa Cruz, publishes research into creating neural networks with built-in resistance to adversarial attacks by reducing the networks’ “local linearity.” After a quick touch on research from Google on simplifying and compacting neural networks (for resource-constrained devices) without floating-point operations or multiplications, Andy recommends a paper on Learning Causality; August Cole’s Angry Trident makes the story of the week; Interpretable Machine Learning (by Molnar) is the book of the week, along with Pattern Classification by Duda, Hart, and Stork; and Christopher Moore explores the Limits of Computation in a two-part video series.
- ai with ai: As Easy As T-B-D!
- /our-media/podcasts/ai-with-ai/season-1/1-50
- Andy and Dave discuss the “Transparency by Design Network” (TbD-net), research from MIT Lincoln Lab that uses a collection of modular neural nets to perform specific image identification subtasks. The resulting output places heat-map blobs over objects in an image, which allows a human analyst to see how a module is interpreting the image (and to use that information to further improve the model’s accuracy). In research from DeepMind and the University of Oxford, researchers attempt to solve the problem that neural nets do not manipulate numerical information well outside the range of values encountered during training. Researchers created a Neural Accumulator and a Neural Arithmetic Logic Unit (in essence, representing numerical quantities as individual neurons without a nonlinearity) to allow a system to learn to represent and manipulate numbers in a systematic way. Georgia Tech has developed a machine learning-based method to automate the generation of novel video games, using Super Mario Bros., Mega Man, and Kirby’s Adventure as inputs. And Kate Crawford and Vladan Joler have created a massive visualization of the many processes that make an Amazon Echo work, in the “Anatomy of an AI System.” DARPA celebrates its 60th anniversary with a 184-page paper that highlights its research over the last 60 years; Google launches a “What-If Tool” for probing datasets at a non-coding level; Neural Networks and Learning Machines (3rd Edition) by Simon Haykin is available for free. Robin R. Murphy curates information on “Robotics Through Science Fiction” (and more); all of the keynotes and presentations from the Joint Multi-Conference on Human-Level Artificial Intelligence are available online, likely requiring a week of vacation to view them all; and the 11th International Conference on Swarm Intelligence will be in Rome at the end of October 2018.
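The Neural Accumulator described above can be illustrated concretely: its effective weight matrix is the elementwise product tanh(Ŵ)·σ(M̂), which softly biases each weight toward {-1, 0, 1}, so the unit computes plain signed sums of its inputs with no nonlinearity on the output, and that linearity is what lets it extrapolate arithmetic beyond the training range. A minimal sketch follows; the parameters are hand-set (not learned) and chosen to approximate two-input addition, purely for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nac_forward(x, W_hat, M_hat):
    """Neural Accumulator (Trask et al., 2018): the effective weights
    W = tanh(W_hat) * sigmoid(M_hat) are softly constrained toward
    {-1, 0, 1}, so the output a = W x is a signed sum of inputs."""
    W = np.tanh(W_hat) * sigmoid(M_hat)
    return W @ x

# Hand-picked (hypothetical) parameters: large positive values drive both
# tanh and sigmoid toward 1, so the effective weights approximate [1, 1],
# i.e. the unit learns addition.
W_hat = np.array([[10.0, 10.0]])
M_hat = np.array([[10.0, 10.0]])

x = np.array([3.0, 4.0])
print(nac_forward(x, W_hat, M_hat))  # approximately [7.]
```

Because the output stays linear in the inputs, the same unit gives approximately 70 for inputs [30, 40] even if it only ever saw small numbers in training; the full NALU adds a log-space gate on top of this to handle multiplication and division.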