Introduction to the Cybersecurity Futures 2025 Scenarios

Scenario 1 — Quantum Leap

The year is 2025, and the first countries to achieve practical quantum computing capabilities have spent the past several years trying to construct a non-proliferation regime that would preserve the economic, strategic and military advantages the technology has begun to generate. But other countries – and even large cities – that are behind in the race have resisted the offer to access watered-down quantum services from the few elite providers in return for restraint in development. Instead, many attempt to pursue “quantum autonomy”. Technology development accelerates almost to the exclusion of ethical, economic and other sociopolitical concerns as quantum leaks into the “deviant globalization” sphere of drug cartels and other worldwide criminal networks. Ultimately, the carrots of a restrictive non-proliferation bargain aimed at governments have not been enticing enough (and the sticks not fearsome enough) to hold a regime together, and the model that more or less worked to contain the spread of nuclear weapons in a previous era fails with quantum. In 2025, the Americans and the Chinese in particular are starting to wonder if their next best move is to reverse course and speed up the dissemination of quantum computing to their respective friends and allies, while the deviant sector is racing ahead.

Scenario 2 — The New Wiggle Room

This is a world in which the promise of secure digital technology, the Internet of Things (IoT) and large-scale machine learning (ML) – to transform a range of previously messy human phenomena into precise metrics and predictive algorithms – turns out to be in many respects a poisoned chalice. The fundamental reason is the loss of “wiggle room” in human and social life. In the 2020s, societies confront a problem opposite to the one with which they have grappled for centuries: now, instead of not knowing enough and struggling with imprecision about the world, we know too much, and we know it too accurately. Security has improved to the point where many important digital systems can operate with extremely high confidence, and this creates a new set of dilemmas as precision knowledge takes away the valuable lubricants that made social and economic life manageable. As the costs mount of not being able to look the other way from uncomfortable truths, or make constructively ambiguous agreements, or agree to disagree about “facts” without having to say so, people find themselves seeking a new source of wiggle room. They find it in the manipulation of identity – or multiple and fluid identities. This effort to subtly reintroduce constructive uncertainty and recreate wiggle room overlaps with the emergence of new security concerns and changing competitive dynamics among countries.

Scenario 3 — Barlow’s Revenge

As digital security deteriorates dramatically at the end of the 2010s, a broad coalition of firms and people around the world comes to a shared recognition that the patchwork quilt of governments, firms, engineering standards bodies and others that had evolved to try to regulate digital society during the previous decade was no longer tenable. But while there was consensus that partial measures, piecemeal reforms and marginal modifications were not a viable path forward, there was also radical disagreement on what a comprehensive reformulation should look like. Two very different pathways emerged. In some parts of the world, governments have essentially removed themselves from the game and ceded the playing field for the largest firms to manage. This felt like an ironic reprise of John Perry Barlow’s 1996 ideological manifesto, “A Declaration of the Independence of Cyberspace”. In other parts of the world, governments have taken the opposite path and embraced a full-bore internet nationalism in which digital power is treated unabashedly as a source and objective of state power. In 2025, it is at the overlaps and intersections between these two self-consciously distinctive models, existing almost on different planes, that the most challenging tensions but also surprising similarities are emerging.

Scenario 4 — Trust Us

This is a world in which digital insecurity in the late 2010s brings the internet economy close to the brink of collapse, and in doing so, drives companies to take the dramatic step of offloading security functions to an artificial intelligence (AI) mesh network, “SafetyNet”, that is capable of detecting anomalies and intrusions, and patching systems without humans in the loop. Fears that AI would disrupt labour markets are turned on their head as the AI network actually helps the economy claw its way back from the brink, and restores a sense of stability to digital life. But a new class of vulnerabilities is introduced, and while SafetyNet is for many purposes a much less risky place, the security of the AI itself is consistently questioned. In 2025, most people experience the digital environment as a fractured space: an insecure and unreliable internet, and a highly secured but constantly surveilled SafetyNet organized and protected by algorithms. Institutions can breathe a little easier as they segregate their activities into either environment. But many individuals are wondering whether the features of reality that matter to them – the values they see as worth securing – have been trampled along the way.