AI with AI
Episode 2.32: Who Manipulates the Manipulators? (Part 2)
Researchers at the University of Tübingen demonstrate that virtual neurons spontaneously develop a “number sense” when assessing the number of visual items (such as dots) in a set. The Allen Institute for AI creates Grover, a neural network that can generate fake news but can also detect neural-network-generated fake news; Grover uses the same architecture as GPT-2 (the previous “unreleasable for the safety of humanity” algorithm), but these researchers highlight the importance of making such generators publicly available.

In related news, the Witness Media Lab releases a report on the current state of deepfake technology; a CNN report looks at how Finland is fighting fake news; a NY Times article examines the “weaponization” of AI-generated disinformation; and a Mashable article by Marcus Gilmer looks at the state of software that attempts to identify deepfakes.

The International Committee of the Red Cross releases a report on a “human-centered approach” to AI and machine learning in armed conflict. A paper from Springer-Verlag provides a history of, and references for, the “neural-symbolic debate.” Hiroki Sayama at SUNY Binghamton makes “Introduction to the Modeling and Analysis of Complex Systems” available. The US-China Commission releases testimony from a day-long session with experts on three topics, including US-China competition in AI. The Allen Institute makes its brain atlases available for exploration online.

The 36th International Conference on Machine Learning meets in Long Beach, CA, with over 6,000 participants. Meanwhile, CogX meets in King’s Cross, London. And former Secretary of Defense Ash Carter pens a “letter to a young Googler” on the morality of defending America.