
AI with AI

Episode 3.41: Remember, Remember, the Fakes of November

In COVID-related AI news, Andy and Dave discuss an article from Wired that describes how COVID confounded most predictive models (such as those used in finance), and NIST investigates the effect of face masks on facial recognition software. In regular AI news, CSET and the Bipartisan Policy Center release a report on “AI and National Security,” the first of four “meant to be a roadmap for Washington’s future efforts on AI.” The Intelligence Community releases its AI Ethics Principles and AI Ethics Framework. Researchers from the University of Chicago announce “Fawkes,” a way to “cloak” images and befuddle facial recognition software. In research, OpenAI demonstrates that GPT-2, a generator designed for text, can also generate pixels (instead of words) to fill out 2D pictures. Researchers at Texas A&M, the University of Science and Technology of China, and the MIT-IBM Watson AI Lab create a 3D adversarial logo to cloak people from facial recognition. Other research explores how the brain rewires when given an additional thumb. CSET publishes Deepfakes: A Grounded Threat Assessment. And MyHeritage provides a “photo enhancer” that uses machine learning to restore old photos.

CNA Office of Communications

John Stimpson, Communications Associate