Some books you read and file away. Some books you read and find yourself still thinking about three months later, applying the frame to situations you did not expect. Mustafa Suleyman’s The Coming Wave is firmly in the second category. I picked it up expecting a well-informed perspective on AI risk from someone who helped build DeepMind. What I got was a genuinely unsettling reframe of the entire conversation around technology strategy, governance, and leadership responsibility.

Suleyman’s credentials are hard to dismiss. He co-founded DeepMind, which Google acquired for a reported 500 million dollars in 2014, and which has produced some of the most consequential AI research of the past decade. He subsequently founded Inflection AI and, at the time of writing, serves as CEO of Microsoft AI. This is not a theoretician speculating about AI from the outside. This is someone who has been building the wave for fifteen years.

The core argument of the book is what Suleyman calls the containment problem. Every powerful technology in history has eventually proliferated beyond its creators’ control: nuclear weapons, the internet, social media. AI and synthetic biology, he argues, represent a new category where the proliferation dynamic is faster, the capabilities are broader, and the consequences of misuse are potentially civilizational in scale. And unlike previous technologies, these are not constrained by industrial infrastructure. A teenager with a laptop and internet access can engage with AI capabilities that would have required a major research institution five years ago.

The great dilemma

The dilemma at the heart of the book is genuinely difficult: if you restrict AI development too aggressively, you create conditions for authoritarian surveillance states and stifle the potential benefits that could address climate change, disease, and economic inequality. If you allow unconstrained proliferation, you create conditions for catastrophic misuse at unprecedented scale. There is no clean third option. Suleyman is honest about this, which is one of the reasons the book is valuable. He is not selling a solution. He is describing a problem with unusual clarity.

For CIOs, the book’s immediate relevance is not in its macro-scale analysis but in the governance framework it implicitly argues for. The containment problem framing is a useful lens for enterprise AI strategy. Every AI capability you deploy inside your organization creates a version of the same tension: the more powerful the tool, the greater the potential benefit and the greater the potential for harm or misuse. The question is not whether to deploy, but what controls, oversight mechanisms, and accountability structures you put in place alongside it.

Why read it now

The updated 2025 paperback edition includes new material on the agentic AI wave that was still nascent when the first edition appeared in 2023. The additions are well-integrated and bring the analysis current in ways that matter for technology leaders trying to understand where the landscape is heading.

Read alongside Ethan Mollick’s Co-Intelligence, The Coming Wave forms a productive pairing. Mollick gives you the practitioner’s guide to working productively with AI today. Suleyman gives you the strategic context for why the decisions you make about AI governance today are more consequential than they might appear. One book tells you how to act; the other reminds you why the stakes are high. Every CIO library should have both.

My one honest caveat: parts of the book are dense, and Suleyman occasionally retreats into a level of abstraction that requires patience. Push through it. The argument rewards the effort, and the chapters on what he calls the narrow path through responsible AI development are among the most thoughtful policy frameworks I have read from a technology practitioner. This is not comfortable reading. It is important reading.
