The order matters. If you start with the doom books, you’ll panic. If you start with the optimism books, you’ll be complacent. Neither state is useful. Both feel like knowledge but function as paralysis.
I’ve read most of the major AI books published in the last three years, and the single biggest mistake I see people make is reading them in the wrong sequence. They pick up whatever’s trending, get an incomplete picture, and form opinions based on whichever author got to them first.
Here’s the reading order I’d recommend. Seven books, four stages. Each stage builds on the one before it, so by the time you reach the end, you have something more valuable than any single book can give you: a framework for thinking about all of this on your own terms.
Stage 1: Understand what’s happening
Before you form opinions, you need to know what’s actually going on — both the real capabilities and the real hype. These two books, read back to back, give you that foundation.
The Coming Wave
Start here. Suleyman co-founded DeepMind and now runs Microsoft AI, so he’s seen this from the inside. His central argument is what he calls the “containment problem”: we’re building technologies (AI, synthetic biology, quantum computing) that are simultaneously too powerful to release and too powerful to suppress. There is no historical precedent for this kind of dilemma.
What makes this the right starting point is tone. Suleyman isn’t trying to scare you or sell you anything. He’s laying out a structural problem with clear eyes. The book was published in 2023, and several of his warnings have already come true ahead of schedule, which says something about the quality of his thinking.
What you’ll take away: A clear-eyed understanding of the problem’s scale. You’ll stop thinking of AI as a product and start thinking of it as a force.
AI Snake Oil
Now that Suleyman has given you the big picture, Narayanan and Kapoor bring you back to earth. They’re Princeton computer scientists who have spent years cataloging the gap between what AI companies claim and what their systems actually do. Predictive policing, hiring algorithms, content moderation — they take apart each one with the patience of people who genuinely enjoy proving things wrong.
This is the corrective. After reading The Coming Wave, you’ll understand the scope of the problem. After reading AI Snake Oil, you’ll know what AI actually can and can’t do. That combination — respecting the power without buying the hype — is the foundation everything else rests on.
What you’ll take away: A reliable filter for separating real capability from marketing. You’ll stop being impressed by the wrong things.
Stage 2: See the human side
Now that you understand the technology, the next question is: what does this mean for you? Not abstractly. Personally. These two books approach that question from opposite ends — one practical, one philosophical — and together they cover the full range.
Co-Intelligence: Living and Working with AI
Mollick is a Wharton professor who has logged thousands of hours working directly with AI systems in his classroom and research. This isn’t someone speculating from the sidelines — these are field notes from someone who has been in the trenches since GPT-4 launched.
His argument is that AI works best as a collaborator, not a replacement. He calls it a “co-intelligence.” The book is full of specific, tested approaches to working alongside these systems without losing your own judgment in the process. If you need to integrate AI into your work life tomorrow, this is the manual.
What you’ll take away: Practical strategies for collaboration. You’ll know how to use AI without becoming dependent on it.
The Last Skill: What AI Will Never Own
Full disclosure: I wrote this one. But it exists because none of the other books on this list did the thing I needed most — they didn’t start with the fear.
Forty-one percent of workers are scared AI will take their jobs. Therapists report a surge in what they call “FOBO” — fear of becoming obsolete. The Last Skill takes that fear seriously instead of waving it away, and then moves through it toward something I believe is true: there are four proofs of human irreplaceability.
- Creativity — genuine novelty, not recombination of existing patterns.
- Governance — choosing the value hierarchy when values conflict.
- Decision-making — absorbing the real downside when the call goes wrong.
- Reputation — the externally verified trail of all three, built over time.

Together they point to what I call “agency under consequence” — the willingness to be the one who answers for it. Machines can process, predict, and generate. They cannot stake themselves on an outcome.
Where Mollick gives you the practical collaboration guide, The Last Skill gives you the identity guide — the philosophical foundation for why you still matter. Part III lays out what I call the Freedom Architecture: a concrete framework for applying these four proofs to your own work and life.
What you’ll take away: A clear answer to the question “What am I still for?” — and a practical architecture for living it.
Stage 3: Get the big picture
You understand the tech. You’ve worked through the personal implications. Now zoom out. These two books place AI inside the larger story of human civilization — one through history, one through power.
Nexus: A Brief History of Information Networks from the Stone Age to AI
Harari does what he does best: zooms out until the thing you thought was unprecedented looks like the latest chapter in a very old story. Nexus argues that AI is not a technological revolution but an information revolution — and we’ve survived those before. Writing, printing, the internet. Each one reshaped power. Each one created new forms of manipulation alongside new forms of progress.
The uncomfortable insight is that AI will be no different. Reading this after the first four books gives you something powerful: you’ll see the current moment not as chaos but as pattern.
What you’ll take away: Historical perspective. AI stops feeling like an alien invasion and starts feeling like a chapter you can read.
Empire of AI
Hao is an investigative journalist who got closer to the inside of OpenAI than almost anyone outside the company. Where Harari gives you the thousand-year view, Hao gives you the view from last Tuesday — the boardroom fights, the safety compromises, the genuine tensions between speed and responsibility.
This was the most-rated AI book of 2025 on both Amazon and Goodreads. The reporting fills in gaps you didn’t know you had. By this point in the reading list, you’ll have the context to understand not just what Hao is describing but why it matters.
What you’ll take away: A clear map of who is building AI, what motivates them, and where the pressure points are.
Stage 4: Choose your path
Here’s where the reading list branches. You’ve done the work — you understand the technology, you’ve reckoned with the personal stakes, you’ve seen the big picture. Now pick the book that matches where your head is at.
Superagency: What Could Go Right with AI
Pick this if you’re feeling optimistic. Hoffman is a LinkedIn co-founder and one of the most prominent AI investors in Silicon Valley. His argument is that AI will amplify human capability rather than replace it. Yes, he has a financial stake in that being true. But his optimism is more nuanced than the title suggests, and after the five books you’ve already read, you’ll have the critical framework to take what’s useful and leave what’s convenient.
What you’ll take away: A credible case for the upside — read with the skepticism you’ve earned.
If Anyone Builds It, Everyone Dies
Pick this if you need to face the worst case. This is the serious existential risk argument — not sensationalized, not clickbait, but a rigorous examination of the scenarios where advanced AI goes genuinely wrong. It’s not comfortable reading. It’s not supposed to be.
I put this last, not first, precisely because it’s so potent. Read without the foundation of the previous books, it can tip into despair. Read with that foundation, it becomes what it should be: the most important stress test for your thinking.
What you’ll take away: An honest reckoning with the tail risks — and enough context to carry that knowledge without being crushed by it.
Why this order works
Each stage prepares you for the next. You can’t properly evaluate the human implications (Stage 2) without first understanding what the technology actually does (Stage 1). You can’t place AI in historical and institutional context (Stage 3) without first knowing what it means for you personally. And you can’t responsibly choose optimism or pessimism (Stage 4) without all three preceding stages as ballast.
Most people I talk to have read one or two books from this list in isolation. That’s fine, but it’s like reading a single chapter of a novel. You get a scene. You don’t get the arc.
Seven books. That’s a few months of reading. By the end, you won’t have all the answers — nobody does, and anyone who claims to is selling something. But you’ll have the framework to find your own. And in a moment when the ground is shifting this fast, your own framework is the most valuable thing you can build.
Related reading
- A Reader’s Guide to the AI Book Boom of 2025–2026
- The 10 Best Books About AI and What It Means to Be Human (2026)
- What to Read After ChatGPT Changed Everything
Juan C. Guerrero is the founder of Anthropic Press and the author of The Last Skill: What AI Will Never Own. He writes from Costa Rica, where the monkeys are loud and the Wi-Fi is surprisingly good.