
Search Results


ai with ai: Beauty Is in the AI of the Perceiver
/our-media/podcasts/ai-with-ai/season-4/4-40
Andy and Dave discuss the latest in AI news, including Codex, an upgraded version of the OpenAI model that powers GitHub Copilot, which not only completes code but also creates it (based on natural-language inputs from its users). The National Science Foundation is providing $220 million in grants to 11 new National AI Research Institutes (including two fully funded by the NSF). A new DARPA program, Shared-Experience Lifelong Learning (ShELL), seeks to explore how AI systems can share their experiences with each other. The Senate Committee on Homeland Security and Governmental Affairs introduces two AI-related bills: the AI Training Act (to establish a training program to educate the federal acquisition workforce) and the Deepfake Task Force Act (to task DHS with producing a coordinated plan on how a “digital content provenance” standard might help decrease the spread of deepfakes). And the Inspectors General of the NSA and DoD partner to conduct a joint evaluation of NSA’s integration of AI into signals intelligence efforts. In research, DeepMind creates the Perceiver IO architecture, which works across a wide variety of input and output spaces, challenging the idea that different kinds of data need different neural network architectures. DeepMind also publishes PonderNet, which learns to adapt the amount of computation to the complexity of the problem (rather than the size of the inputs). Research from MIT uses the corpus of US patents to predict the rate of technological improvement across all technologies. The European Parliamentary Research Service publishes a report on Innovative Technologies Shaping the 2040 Battlefield. Quanta Magazine publishes an interview with Melanie Mitchell, including a deeper discussion of her research on analogies. And Springer-Verlag makes An Introduction to Ethics in Robotics and AI (by Christoph Bartneck, Christoph Lütge, Alan Wagner, and Sean Welsh) available for free.
ai with ai: AI Today, Tomorrow, & Forever
/our-media/podcasts/ai-with-ai/season-4/4-39
Andy and Dave welcome the hosts of the weekly podcast AI Today, Kathleen Walch and Ronald Schmelzer. On AI Today, Kathleen and Ron discuss how AI is making an impact around the globe, focusing on conversations with industry and business leaders about their perspectives on AI technologies, applications, and implementation challenges. Ron and Kathleen also co-founded Cognilytica, an AI research, education, and advisory firm. The four podcast hosts discuss a variety of topics, including the origins of the AI Today podcast, AI trends in industry and business, AI winters, and the importance of education.
ai with ai: XLand, Simulation of Sweet Adventures
/our-media/podcasts/ai-with-ai/season-4/4-38
Andy and Dave discuss the latest in AI news, including a story from MIT Technology Review (which echoes observations made previously on AI with AI) that “hundreds of AI tools have been built to catch COVID. None of them helped.” DeepMind has used its AlphaFold program to identify the structures of 98.5 percent of roughly 20,000 human proteins, and will make the information publicly available. The Pentagon makes use of machine learning algorithms to create decision space in the latest of its Global Information Dominance Experiments. An Australian court rules that AI systems can be “inventors” under patent law (but not “owners”), and South Africa issues the world’s first patent to an “AI system.” The United States Special Operations Command put 300 of its personnel through a unique six-week crash course in AI, with instructors including former Google CEO Eric Schmidt and former Defense Secretary Ash Carter. And President Biden nominates Stanford professor Ramin Toloui, who has experience with AI technologies and their impacts, as an Assistant Secretary of State for business. In research, DeepMind develops agents capable of “open-ended learning” in XLand, an environment with diverse tasks and challenges. A survey in the Journal of AI Research finds that AI researchers place varying amounts of trust in different organizations, companies, and governments. The Journal of Strategic Studies dedicates an issue to emerging technologies, with free access. Mine Çetinkaya-Rundel and Johanna Hardin make Introduction to Modern Statistics open access (with proceeds going to OpenIntro, a US-based nonprofit). And Iyad Rahwan curates a collection of evil AI cartoons.
ai with ai: The AI Is Smarter on the Other Side of the FENCE
/our-media/podcasts/ai-with-ai/season-4/4-37
Andy and Dave discuss the latest in AI news and research, including the new DARPA FENCE program (Fast Event-based Neuromorphic Camera and Electronics), which seeks to create event-based cameras that focus only on the pixels that have changed in a scene. NIST proposes an approach for reducing the risk of bias in AI and invites the public to comment and help improve it. Researchers from the University of Colorado, Boulder use a machine learning model to learn the physical properties of electronics building blocks (such as clumps of silicon and germanium atoms) as a way to predict how larger electronics components will work or fail. Researchers in South Korea create an artificial skin that mimics human tactile recognition and couple it with a deep learning algorithm to classify surface structures (with an accuracy of 99.1%). A survey from IE University shows, among other things, that 75% of people surveyed in China support replacing parliamentarians with AI, while in the US, 60% were opposed to the idea. A scientist uses machine learning to learn Rembrandt’s style and then recreate missing pieces of the painter’s “The Night Watch.” Researchers at Harvard, UC San Diego, Fujitsu, and MIT present methodical research demonstrating how classification neural networks are susceptible to small 2D transformations and shifts, image crops, and changes in object colors. The GAO releases a report on facial recognition technology, surveying 42 federal agencies and finding a general lack of accountability in the use of the technology. The WHO releases a report on Ethics and Governance of AI for Health. In rebuttal to DeepMind’s “Reward is enough” paper, Roitblat and Byrnes pen separate essays on why “reward is not enough.” An open-access book by Wang and Barabási looks at the Science of Science. Julia Schneider and Lena Ziyal join forces to provide a comic essay on AI: We Need to Talk, AI. And the National Security Commission on AI holds an all-day summit on Global Emerging Technology.
ai with ai: GPT Is My CoPilot
/our-media/podcasts/ai-with-ai/season-4/4-36
Andy and Dave discuss the latest in AI news, including a report that the Israel Defense Forces used a swarm of small drones in mid-May in Gaza to locate, identify, and attack Hamas militants, using Thor, a 9-kilogram quadrotor drone. A paper in the Journal of the American Medical Association examines an early warning system for sepsis and finds that it misses most cases (67%) and frequently issues false alarms (results the developer contests). A new bill, the Consumer Safety Technology Act, directs the US Consumer Product Safety Commission to run a pilot program using AI to help in safety inspections. A survey from FICO on The State of Responsible AI (2021) shows, among other things, a disinterest in the ethical and responsible use of AI among business leaders (with 65% of companies saying they can’t explain how specific AI model predictions are made, and only 22% of companies having an AI ethics board to consider questions on AI ethics and fairness). In a similar vein, a survey from the Pew Research Center and Elon University’s Imagining the Internet Center finds that 68% of respondents (from across 602 leaders in the AI field) believe that ethical AI principles will NOT be employed by most AI systems within the next decade; the survey includes a summary of the respondents’ worries and hopes, as well as some additional commentary. GitHub partners with OpenAI to launch Copilot, a “programming partner” that uses contextual cues to suggest new code. Researchers from Stanford University, UC San Diego, and MIT introduce Physion, a visual and physical prediction benchmark that measures predictions about commonplace real-world physical events (such as objects colliding, dropping, rolling, or toppling like dominoes). CSET releases a report on Machine Learning and Cybersecurity: Hype and Reality, finding it unlikely that machine learning will fundamentally transform cyber defense. Bengio, LeCun, and Hinton join together to pen a white paper on the role of deep learning in AI, not surprisingly eschewing the need for symbolic systems. Aston Zhang, Zachary C. Lipton, and Alex J. Smola release the latest version of Dive into Deep Learning, now over 1,000 pages and available only online.
ai with ai: Journey to the Cause of Reason
/our-media/podcasts/ai-with-ai/season-4/4-35
Andy and Dave discuss the latest in AI news, including research from the UC San Diego School of Medicine, which used an AI algorithm to analyze terabytes of gene expression data in response to viral infections, identifying 20 genes that predict the severity of a patient’s response (across many different viruses). Deputy Secretary of Defense Kathleen Hicks announces a new AI and Data Acceleration initiative, which includes operational data teams and flyaway technical experts. China says it has AI fighter jet pilots that can beat human pilots in simulated dogfights. A study from Stanford estimates the density of CCTV cameras in large cities around the globe (by using computer vision algorithms on street-view image data). NIST holds a workshop on AI Measurement and Evaluation, with an interesting 22-page read-ahead document. Appen updates its State of AI and Machine Learning report, examining various business-related views and metrics on AI and showing a general maturing of the AI market. Researchers from Tübingen and Max Planck show that the behavioral difference between human and machine vision is narrowing, but still has room for improvement (particularly with out-of-distribution data). Researchers from Stanford, University College London, and MIT develop a counterfactual simulation model to provide quantitative predictions of how people think about causation, possibly serving as a bridge between psychology and AI. Adam Wagner uses a reinforcement learning approach to search for examples that would disprove conjectures in graph theory, and finds examples that disprove five such conjectures. Justin Solomon’s Numerical Algorithms provides the core methods for machine learning. And Budiansky publishes a look at the life of Kurt Gödel in Journey to the Edge of Reason.
ai with ai: Reward of the Coprophages
/our-media/podcasts/ai-with-ai/season-4/4-34
Andy and Dave discuss the latest in AI news, including the launch of the National AI Research Resource Task Force, which will serve as a federal advisory committee and produce at least two reports to Congress (a roadmap and implementation plan) by November 2022. Google and Harvard University release a 1.4 PB reconstruction of a cubic millimeter of human brain tissue. Google reports a deep reinforcement-learning system that outperforms humans in designing floorplans for microchips, in both time and efficiency. Researchers from the UK, Germany, and China fuse electronics to the Madagascar hissing cockroach to create an insect-computer hybrid for autonomous search and rescue. The Navy’s MQ-25 tanker drone refuels a manned aircraft for the first time. Researchers use large-scale experiments and machine learning to discover new theories of human decision-making. OpenAI introduces a Process for Adapting Language Models to Society (PALMS) as a way to try to mitigate bias in transformer models such as GPT-3. A concept paper from DeepMind argues that reward maximization is enough to constitute a solution to artificial general intelligence. And Richard Sutton and Andrew Barto publish the second edition of Reinforcement Learning: An Introduction.
ai with ai: No Time to AI
/our-media/podcasts/ai-with-ai/season-4/4-33
Andy and Dave discuss the latest in AI news, starting with the US Consumer Product Safety Commission report on AI and ML. The Deputy Secretary of Defense outlines Responsible AI Tenets and directs the JAIC to start work on four activities for developing a responsible AI ecosystem. The Director of the US Chamber of Commerce’s Center for Global Regulatory Cooperation outlines concerns with the European Commission’s newly drafted rules on regulating AI. Amnesty International crowd-sources an effort to identify surveillance cameras that the New York City Police Department has in use, resulting in a map of over 15,000 camera locations. The Royal Navy uses AI for the first time at sea against live supersonic missiles. And the Ghost Fleet Overlord unmanned surface vessel program completes its second autonomous transit from the Gulf Coast, through the Panama Canal, to the West Coast. Finally, CNA Russia Program team members Sam Bendett and Jeff Edmonds join Andy and Dave for a discussion of their latest report, which takes a comprehensive look at the ecosystem of AI in Russia, including its policies, resourcing, infrastructure, and activities.
ai with ai: Someday My ‘Nets Will Code
/our-media/podcasts/ai-with-ai/season-4/4-32
Andy and Dave discuss the latest in AI news, including a report on Libya from the UN Security Council’s Panel of Experts, which notes the March 2020 use of the “fully autonomous” Kargu-2 to engage retreating forces; it is unclear whether anyone died in the incident, and many other important details are missing. The Biden Administration releases its FY22 DoD budget, which increases the RDT&E request, including $874M for AI research. NIST proposes an evaluation model for user trust in AI and seeks feedback; the model includes definitions for terms such as reliability and explainability. EleutherAI has provided an open-source alternative to GPT-3, called GPT-Neo, which is trained on an 825 GB dataset (“The Pile”) and comes in 1.3B and 2.7B parameter versions. CSET takes a hands-on look at how transformer models such as GPT-3 can aid disinformation, with its findings published in Truth, Lies, and Automation: How Language Models Could Change Disinformation. IBM introduces a project aimed at teaching AI to code, with CodeNet, a large dataset containing 500 million lines of code across 55 legacy and active programming languages. In a separate effort, researchers at Berkeley, Chicago, and Cornell publish results on using transformer models as “code generators,” creating a benchmark (the Automated Programming Progress Standard) to measure progress; they find that GPT-Neo could pass approximately 15% of introductory problems, with GPT-3’s 175B parameter model performing much worse (presumably due to the inability to fine-tune the larger model). The CNA Russia Studies Program releases an extensive report on AI and Autonomy in Russia, capping off its biweekly newsletters on the topic. Arthur Holland Michel publishes Known Unknowns: Data Issues and Military Autonomous Systems, which clearly identifies known issues in autonomous systems that cause problems. The short story of the week comes from Asimov in 1956, with “Someday.” And the Naval Institute Press publishes a collection of essays in AI at War: How Big Data, Artificial Intelligence, and Machine Learning Are Changing Naval Warfare. Finally, Diana Gehlhaus from Georgetown’s Center for Security and Emerging Technology (CSET) joins Andy and Dave to preview an upcoming event, “Requirements for Leveraging AI.” The interview with Diana Gehlhaus begins at 33:32.
ai with ai: Just the Tip of the Skyborg
/our-media/podcasts/ai-with-ai/season-4/4-31
Andy and Dave discuss the latest in AI news, including the first flight of a drone equipped with the Air Force’s Skyborg autonomy core system. The UK Office for AI publishes new guidance on automated decision-making in government, the Ethics, Transparency and Accountability Framework for Automated Decision-Making. The International Committee of the Red Cross calls for new international rules on how governments use autonomous weapons. Senators introduce two AI bills to improve US AI readiness: the AI Capabilities and Transparency Act and the AI for the Military Act. Defense Secretary Lloyd Austin lays out his vision for the Department of Defense in his first major speech, stressing the importance of emerging technology and rapid increases in computing power. A report from the Allen Institute for AI shows that China is closing in on the US in AI research and is expected to become the leader in the top 1% of most-cited papers by 2023. In research, Ziming Liu and Max Tegmark introduce AI Poincaré, an algorithm that automatically discovers conserved quantities from trajectory data of unknown dynamical systems. Researchers enable a paralyzed man to “text with his thoughts,” reaching 16 words per minute. The Stimson Center publishes A New Agenda for US Drone Policy and the Use of Lethal Force. The Onlife Manifesto: Being Human in a Hyperconnected Era, first published in 2015, is available open access. And Cade Metz publishes Genius Makers, with stories of the pioneers behind AI.