
I read fifteen AI books in 2025. Most of them blurred together.

This isn’t a complaint, exactly. They were mostly good. Some were excellent. But after the third or fourth book explaining how large language models predict the next token, I started getting a familiar feeling — the one you get when you’ve been at a party too long and every conversation starts sounding the same.

The AI book market in 2025 did what book markets always do when a topic gets hot: it flooded. Publishers rushed to fill shelves with anything that had “AI” in the title. Some of those books will be read for decades. Most will be forgotten by 2027. What surprised me wasn’t how many bad books there were — there weren’t that many. What surprised me was how many good books managed to say almost exactly the same thing.


The four books every list includes

You know the ones. If you’ve read any “best AI books” roundup in the past year, you’ve seen them: Harari’s Nexus, Mollick’s Co-Intelligence, Suleyman’s The Coming Wave, Hao’s Empire of AI.

They deserve their spots. Harari puts AI in a ten-thousand-year context that makes you feel less hysterical about the whole thing. Mollick gives you the most practical and honest field report on working with AI that exists. Suleyman lays out the containment problem with the clarity of someone who helped build what he’s warning about. And Hao’s investigative reporting on the power structures behind AI is the kind of journalism that actually matters.

I recommend all four. I’m not going to pretend otherwise. But after reading them — along with eleven others — I was left with a strange hollowness. Like I’d learned a lot about the machine and almost nothing about myself.


What most AI books miss

Here’s the pattern I noticed across those fifteen books. Almost all of them frame AI as one of three kinds of problem: a technology problem (how does it work?), a policy problem (how do we regulate it?), or an economic problem (who wins and who loses?).

These are real problems. They matter. But they’re not the problem that keeps people awake at three in the morning.

The problem that keeps people awake is an identity problem. “Will AI take my job?” sounds like an economic question, but listen to how people actually ask it and you hear something different. What they’re really asking is: Am I still worth something? If a machine can write what I write, design what I design, analyze what I analyze — then what exactly is the point of me?

By one global survey’s count, forty-one percent of workers fear AI will make them obsolete. Therapists report a surge in what they’re calling “FOBO” — fear of becoming obsolete. This is not a policy problem. It’s an existential one. And almost none of the books I read in 2025 had the vocabulary for it.

The technology books couldn’t touch it because they were busy explaining architectures. The policy books couldn’t touch it because existential dread doesn’t have a regulatory framework. The business books couldn’t touch it because the answer to “Am I still worth something?” is not “Learn to prompt better.”


The one that changed my thinking

The book that finally broke the pattern wasn’t from 2025 at all. It was Brian Christian’s The Most Human Human, published in 2011 — before the deep learning boom, before GPT, before any of this.

Christian entered the Loebner Prize, a real-life Turing test where human “confederates” chat with judges alongside chatbots. The judges try to figure out who’s human. The confederate who gets identified as human most often wins the title of “Most Human Human.” Christian wanted to win.

What makes the book extraordinary is the question it forces: if you had to prove you were human in a conversation, how would you do it? What would you say? What would you be?

Christian doesn’t start from the AI side and work toward the human. He starts from the human side. He reads poetry, studies conversation theory, talks to linguists and philosophers and chess grandmasters, all to answer one question: what does a human do that a machine can’t? And his answer — arrived at long before ChatGPT existed — turns out to be more precise and more useful than anything written in the current wave.

His answer, roughly: humans are at their most human when they’re most present. When they surprise themselves. When they respond to the actual moment instead of producing the statistically expected output. When they’re weird, and specific, and uncertain, and alive in a way that no optimization function can replicate.

I read that book in October 2025, fourteen AI books deep, and felt like someone had opened a window in a room I didn’t realize was stuffy.


Why I wrote my own

After those fifteen books, I knew what was missing. Not more analysis of the technology. Not more policy proposals. Not more breathless predictions about AGI timelines. What was missing was a book that began where most people actually are: scared.

Not scared of Terminator scenarios. Scared of irrelevance. Scared of waking up one morning and realizing that the thing they spent twenty years getting good at can now be done in seconds by something that doesn’t even understand what it’s doing.

That’s why I wrote The Last Skill. Not to add another technology book to the pile, but to write the book I couldn’t find — one that started with the fear and didn’t flinch from it, then moved through it toward something real. Not career advice. Not a five-step framework. An answer to the identity question: what are you worth when the machines can do the work?

My answer is that there are four proofs of human irreplaceability — creativity (genuine novelty, not recombination), governance (choosing the value hierarchy), decision-making (absorbing the real downside of the cut you make), and reputation (the externally verified trail of all three). The book calls their sum “agency under consequence” — the willingness to be the one who answers for it. These aren’t skills you can learn from a course. They’re what you already are. AI just made them visible by contrast.

Christian’s book gave me the philosophy. The fifteen books gave me the context. But the feeling — the one where you realize you’re not obsolete, you’re just standing at the edge of something that demands you become more fully yourself — that’s what I tried to put on the page.


The best AI books aren’t really about AI

Looking back across all fifteen, the ones that stayed with me weren’t the ones with the best technical explanations or the scariest predictions. They were the ones that used AI as a mirror — held it up and said, look, here’s what you are by contrast with what this is.

That’s what Christian did in 2011. It’s what I tried to do in The Last Skill. And it’s what I think the next great wave of AI books will have to do, because the technology questions are getting answered and the policy questions are getting debated, but the identity question is still sitting there, mostly untouched, keeping people up at night.

If you read one AI book this year, make it the one that scares you a little — not about the machines, but about yourself. Make it the one that asks what you’re for, now that the machines can do so much of what you used to do.

That question doesn’t have a technical answer. Which is exactly why it matters.


Juan C. Guerrero is a Costa Rican founder, the creator of Anthropic Press, and the author of The Last Skill: What AI Will Never Own. He writes about what stays human in an increasingly automated world.