Search Results
Your search found 2049 results.
- ai with ai: the social bot network
- /our-media/podcasts/ai-with-ai/season-4/4-1
- Andy and Dave kick off Season 4.0 of AI with AI with a discussion on social media bots. CNA colleagues Meg McBride and Kasey Stricklin join to discuss the results of their recent research efforts, in which they explored the national security implications of social media bots. They describe the types of activities that social media bots engage in (distributing, amplifying, distorting, hijacking, flooding, and fracturing), how these activities might evolve in the near future, the legal frameworks (or lack thereof), and the implications for US special operations forces and the broader national security community.
- ai with ai: CONSORTing with the GPT
- /our-media/podcasts/ai-with-ai/season-3/3-46
- In COVID-related AI news, another concerning report, this time in Nature Medicine, found “serious concerns” with 20,000 studies on AI systems in clinical trials, with many reporting only the best-case scenarios; in response, an international consortium has developed CONSORT-AI, reporting guidelines for clinical trials involving AI. In Nature, an open dataset provides a collection and overview of governmental interventions in response to COVID-19. In regular AI news, the DoD wraps up its 2020 AI Symposium. And the White House nominates USMC Maj. Gen. Groen to lead the JAIC. The latest report from NIST shows that facial recognition technology still struggles to identify people of color. Portland, Oregon passes the toughest ban on facial recognition technology in the US. And The Guardian uses GPT-3 to generate some hype. In research, OpenAI demonstrates the ability to apply transformer-based language models to the task of automated theorem proving. Research from Berkeley, Columbia, and Chicago proposes a new test to measure a text model’s multitask accuracy, with 16,000 multiple-choice questions across 57 task areas. A report from AI Now takes a look at regulating biometrics, which includes tech such as facial recognition. And the 37th International Conference on Machine Learning makes its proceedings available online.
- ai with ai: [Abstraction Intensifies]
- /our-media/podcasts/ai-with-ai/season-3/3-45
- In COVID-related AI news, a report from Cambridge University and the University of Manchester examines recent studies on using chest x-rays and CT scans to detect and diagnose COVID, and finds that only 29 of 168 studies had reproducible results; the report further found that all of the studies had high or unclear risk of bias, such that none of the studies had value for use in clinics. CSET provides an overview of how China has used AI in its COVID-19 response. In non-COVID AI news, a GAO report finds systemic problems with facial recognition technology at U.S. airports. University College London provides an overview of AI’s use in crime, with deepfakes ranked as the most concerning. Researchers at the University of Warwick and the Alan Turing Institute develop a machine-learning algorithm to identify potential planets from astronomy data. And NASA uses an algorithm to predict more accurately when hurricanes will rapidly intensify. In research, MIT, MIT-IBM Watson AI Lab, and Columbia University present a machine learning model to abstract relations in videos about everyday actions. Researchers in the Netherlands demonstrate that (large!) adversarial patches can work for surveillance imagery of military assets on the ground. The UN Interregional Crime and Justice Research Institute releases a Special Collection on AI. Researchers in Germany and Korea provide a view of continual and open-world learning. And Georgia Tech provides the People Map as a way to discover research expertise at an institution.
- ai with ai: Some Pigsel
- /our-media/podcasts/ai-with-ai/season-3/3-44
- In COVID-related AI news, Andy and Dave discuss an effort from Google and Harvard to provide county-level forecasts on COVID-19 for hospitals and first responders. The National Library of Medicine, National Center of Biotechnology Information, and NIH provide COVID-19 literature analysis with interesting data analytic and visualization tools. In regular AI news, Elon Musk demonstrates the latest iteration of Neuralink, complete with pig implantees. The UK attempted a prediction system for Most Serious Violence, but found that it had serious flaws. Amazon awards a $500k “Alexa Prize” to Emory University students for their Emora chatbot, which scored a 3.81 average rating across categories. The Bipartisan Policy Center releases two reports on AI. And Russell Kirsch, inventor of the pixel and other groundbreaking technology, passed away on 11 August at the age of 91. In research, three papers tackle the problem of reconstructing 3D (in some cases, 4D) models of locations based on tourist photos taken from different vantage points and at different times: the NeRF (Neural Radiance Fields) model and the Plenoptic model. The Human Rights Watch releases a report summarizing Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control. Springer-Verlag releases yet-another-freebie with An Introduction to Ethics in Robotics and AI. And the Conference on Computer Vision & Pattern Recognition has posted the papers and videos from its June 2020 session.
- ai with ai: Highway to the Danger Zone
- /our-media/podcasts/ai-with-ai/season-3/3-43
- With Season 3 drawing to a close, Andy and Dave decided to focus this discussion entirely on the latest results from DARPA’s Air Combat Evolution (ACE) program. On 20 August, DARPA held a contest among 8 competitors, pitting their AI agents against each other in simulated combat, and then against a human pilot (who used a VR system). Heron Systems won the event, beating out the other AI agents and never allowing the human pilot to attain a valid targeting solution. Andy and Dave discuss the results, the limitations, and the broader context of these results in light of other research and announcements.
- ai with ai: Elementary, Dear GPT
- /our-media/podcasts/ai-with-ai/season-3/3-42
- In COVID-related AI news, Andy and Dave discuss a survey from Amazon Web Services that examines the current status of Internet of Things applications related to COVID-19, including scenarios that might help to reduce the severity of an outbreak. MIT publishes a combinatorial machine learning method to maximize the coverage of a COVID-19 vaccine. In "quick takes" on research, Andy and Dave discuss research from Microsoft, University of Washington, and UC Irvine, which provides a checklist to help identify bugs in natural language processing algorithms. A paper from Element AI and Stanford examines whether benchmarks for natural language systems actually correspond to how we use those systems. University of Illinois at Urbana-Champaign, Columbia University, and US Army Research Lab introduce GAIA, which processes unstructured and heterogeneous multimedia data, creates a coherent knowledge base, and allows for text queries. Research published in Nature Neuroscience examines the brain connectivity of 130 mammalian species and finds that the efficiency of information transfer through the brain does not depend on the size or structure of any specific brain. And finally, Andy and Dave spend some time talking about the broader implications of GPT-3, the experiments that people are conducting with it, and how it is not an AGI. Dave concludes with an analogy from Star Trek: The Next Generation, which he gets mostly correct, though he misattributes Geordi La Forge's action to Dr. Pulaski. If only he had a positronic matrix!
- ai with ai: Remember, Remember, the Fakes of November
- /our-media/podcasts/ai-with-ai/season-3/3-41
- In COVID-related AI news, Andy and Dave discuss an article from Wired that describes how COVID confounded most predictive models (such as finance). And NIST investigates the effect of face masks on facial recognition software. In regular-AI news, CSET and the Bipartisan Policy Center release a report on “AI and National Security,” the first of four “meant to be a roadmap for Washington’s future efforts on AI.” The Intelligence Community releases its AI Ethics Principles and AI Ethics Framework. Researchers from the University of Chicago announce “Fawkes,” a way to “cloak” images and befuddle facial recognition software. In research, OpenAI demonstrates that GPT-2, a generator designed for text, can also generate pixels (instead of words) to fill out 2D pictures. Researchers at Texas A&M, University of S&T of China, and MIT-IBM Watson AI Lab create a 3D adversarial logo to cloak people from facial recognition. And other research explores how the brain rewires when given an additional thumb. CSET publishes Deepfakes: A Grounded Threat Assessment. And MyHeritage provides a "photo enhancer" that uses machine learning to restore old photos.
- ai with ai: Bots Behaving Badly
- /our-media/podcasts/ai-with-ai/season-3/3-40
- In COVID-related AI news, Tencent AI Labs publishes a "machine learning" model that can predict the risk of a coronavirus patient developing severe illness. Unsupervised machine learning on data from the U.K.'s COVID Symptom Tracker, which has more than 4 million users, suggests patients cluster into roughly 6 different symptom types. Amazon Web Services releases its version of a scientific literature search on COVID-19. Aminer.org offers an open access knowledge graph of COVID-19. And "Digital Contact Tracing for Pandemic Response" takes a look at global approaches and results with implementing contact tracing. In regular AI news, the National Security Commission on AI releases its latest quarterly report, with 35 recommendations. The latest Congressional Research Service Report covers Emerging Military Technologies, including AI and LAWS. Facebook rolls out a "bot army" to simulate "bad behavior" on a parallel version of its platform, in an effort to understand and combat online abuse. In research, DeepMind publishes findings on reinforcement learning, with a meta-learning approach that discovers an update rule that includes "what to predict" as well as "how to learn from it." Research from Berkeley, DeepMind, and MIT explores exploration by comparing how children and reinforcement learning agents learn in a unified environment. Military Review publishes an article by Courtney Crosby, which describes a framework for operationalizing AI for algorithmic warfare. DeepMind and University College London examine deep reinforcement learning and its implications for neuroscience. And MIT makes available online a full lecture series by Marvin Minsky on "The Society of Mind."
- ai with ai: [Abstraction Intensifies]
- /our-media/podcasts/ai-with-ai/season-3/3-39
- In COVID-related AI news, Andy and Dave discuss research that provides a comprehensive survey on applications of AI in fighting COVID-19. The Stanford Institute for Human-Centered AI and the AI Initiative at the Future Society launch a global alliance: Collective and Augmented Intelligence against COVID-19 (CAIAC). MIT and the IBM Watson AI Lab publish a paper that suggests a computational limit to progress in deep learning. The Atlas of Surveillance provides an open-source look at technologies that law enforcement are using across the US, including facial recognition and drones. Similarly, Surfshark has compiled information on the status of facial recognition technology around the globe, along with additional useful information. MIT finds systematic shortcomings in the ImageNet dataset, with an observation that the crowdsourcing data collection pipeline can cause "misalignments." Research from Google Brain shows that "self-attention" can allow agents to identify task-critical visual hints and ignore task-irrelevant elements. UC Berkeley, Google, CMU, and Facebook demonstrate "one policy to rule them all," where they use one global policy to control the movement of a wide variety of agent morphologies (which would normally require training and tuning for each separate morphology). The Army's Cyber Institute releases the "Invisible Force" graphic novel, which examines potential uses of AI technology in a future fictional scenario. Alife 2020 makes a compilation of its July conference available, clocking in at nearly 800 pages. And Gwern examines the creative side of GPT-3 through poetry, humor, and other probing interactions.
- ai with ai: Life Is Like a Box of Matrices
- /our-media/podcasts/ai-with-ai/season-3/3-38
- Andy and Dave start with COVID-related AI news, and efforts from the Roche Data Science Coalition for UNCOVER (the United Network for COVID-19 Data Exploration and Research), which includes a curated collection of over 200 publicly available COVID-19-related datasets; efforts from Akai Kaeru are included. The Biomedical Engineering Society publishes an overview of emerging technologies to combat COVID-19. Zetane Systems uses machine learning to search the DrugVirus database and information from the National Center for Biotechnology to identify existing drugs that might be effective against COVID. And researchers at the Walter Reed Army Institute of Research are using machine learning to narrow down a space of 41 million compounds to identify candidates for further testing. And the IEEE hosted a conference on 9 July, "Does your COVID-19 tracing app follow you forever?" In non-COVID-related AI news, MIT takes offline the TinyImages dataset, due to its inclusion of derogatory terms and images. The second (actually first) wrongful arrest from facial recognition technology (again by the Detroit Police Department) comes to light. Appen Limited releases its annual "State of AI and ML" report, with a look at how businesses are (or aren’t) considering AI technologies. Anaconda releases its 2020 State of Data Science survey results. And the International Baccalaureate Educational Foundation turns to machine learning algorithms to predict student grades, due to COVID-related cancelations of actual testing, and much to the frustration of numerous students and parents. Research from the Vector Institute and the University of Toronto tackles analogy and the Raven Progressive Matrices with an ensemble of three neural networks for objects, attributes, and relationships. Researchers at the University of Sydney and Imperial College London have established CompEngine, a collection of time-series data (over 24,000 series initially) from a variety of fields, and have placed them into a common feature space; CompEngine then self-organizes the information based on empirical properties. Garfinkel, Shevtsov, and Guo make Modeling Life available for free. Meanwhile, Russell and Norvig release the not-so-free 4th Edition of AI: A Modern Approach. Lex Fridman interviews Norvig in a video podcast. And Elias Henriksen creates the Computer Prophet, which generates metaphors from a database of collected sayings.