
If you’ve been told humans are about to become obsolete, read these eight books. Then decide for yourself.

The narrative has become oddly uniform. Every week, a new headline announces that AI has mastered another domain — writing, coding, diagnosing, composing, lawyering, designing. The implication is always the same: the machines are gaining ground, and we are losing it. If you read enough of these stories, a quiet dread sets in. Maybe you’ve felt it. Maybe you’re feeling it right now.

I spent the better part of two years chasing that feeling down — through research papers, through interviews with therapists who treat it, through my own long nights staring at a blinking cursor. What I found is that the obsolescence narrative is loud, but it’s also thin. It falls apart under scrutiny. It collapses the moment you ask it to account for agency, consequence, moral weight, or the stubborn fact that someone has to be responsible when things go wrong.

These eight books are the ones that helped me understand why. They come from different angles — philosophy, cognitive science, Silicon Valley optimism, historical sweep, computer science. They don’t all agree with each other. But every one of them, in its own way, arrives at the same conclusion: human beings are not spare parts waiting to be swapped out.


01

The Last Skill: What AI Will Never Own

Full disclosure: I wrote this one. I wrote it because I went looking for the book that would answer the fear honestly, and it didn’t exist yet.

The Last Skill opens with the fear itself — the 41% of workers who say AI will take their jobs, the therapists reporting a wave of what they call “FOBO” (fear of becoming obsolete), the quiet crisis of people who feel their skills shrinking in real time. The book sits with that fear. It doesn’t dismiss it.

Then it moves through it. At the core are four proofs of human irreplaceability: creativity (generating genuine novelty, not recombining existing patterns), governance (choosing which values sit at the top of the hierarchy), decision-making (absorbing the real downside when the cut is made), and reputation (the externally verified trail that binds the first three together). Together, these four point to what I call agency under consequence — the willingness to be the one who answers for it. Machines process. Humans answer.

The book also introduces the concept of Proof of Human — a way of thinking about authenticity and authorship in an era when AI can generate anything. And it draws a hard line between what is useful and what is irreplaceable. AI is enormously useful. That does not make it a replacement for the person who carries the weight of the outcome.

Read this if: you want a book that starts where you actually are — somewhere between awe and dread — and builds a framework for understanding why you still matter.

Available on Amazon Kindle →

02

Co-Intelligence: Living and Working with AI

Mollick is a Wharton professor who spent thousands of hours working alongside GPT-4 and its successors in his own classroom. The result is the best practical book about AI collaboration written to date — and also, quietly, one of the strongest arguments that humans remain central to any system that matters.

His premise is that AI should be treated as a “co-intelligence,” a collaborator rather than either a threat or a servant. What makes the book land is how specific he gets. Mollick doesn’t deal in abstractions. He shows you what happens when a human and an AI work on the same problem, and where each one breaks down without the other.

The quiet message underneath the practical advice: AI without human judgment is aimless. The human provides direction, context, and the willingness to care about whether the output is actually good. Remove the human and you get volume. You do not get value.

Read this if: you want to understand how to work with AI in a way that amplifies what you bring to the table, rather than erasing it.

03

Superagency: What Could Go Right with AI

Hoffman is a LinkedIn co-founder and one of the most visible AI investors in the world. His bias is obvious. He has money on AI succeeding. And yet Superagency makes an argument worth taking seriously: that AI’s greatest potential is to expand human capability, to give individuals powers that were previously reserved for institutions.

The word “superagency” is the tell. Hoffman is arguing that AI gives humans more agency, not less. A teacher with AI can do what a department used to do. A solo founder with AI can build what used to require a team. The technology amplifies the person — it does not replace the person.

There are moments where the book oversells this thesis. Not every AI-enabled person will become a one-person institution, and Hoffman underestimates the transition costs. But as a counterargument to the doom crowd, it carries weight precisely because it keeps the human at the center of the story.

Read this if: the doom narratives have gotten too loud and you need a well-argued reminder that amplification is not the same as replacement.

04

The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive

This book was published fifteen years ago, before deep learning had its moment, before ChatGPT existed, before any of the current panic had started. It remains one of the most profound things ever written about what separates us from machines.

Christian competed in the Loebner Prize — a live Turing test where human judges try to distinguish between humans and chatbots. He set out to win the “Most Human Human” award (given to the human contestant the judges found most convincingly not a machine), and he used the experience to explore what it means to be present, authentic, and genuinely engaged in conversation.

What makes this book so valuable now — more than when it was first published — is that the machines have gotten dramatically better since 2011, and Christian’s argument still holds. The things that made him most convincingly human were not speed, knowledge, or fluency. They were surprise, vulnerability, the willingness to say something that didn’t optimize for anything. Machines are better at conversation than ever. The qualities Christian identified are still the ones they cannot reach.

Read this if: you want beautiful, rigorous writing about the irreducible strangeness of being a person — and you want to understand why that strangeness is a feature, not a bug.

05

The Next Renaissance: How AI Unlocks Human Creativity

Kass makes a historical argument that reframes the entire AI-vs-human conversation. His thesis: every major technological disruption in history — the printing press, the camera, the synthesizer — was supposed to kill human creativity, and every single one ended up expanding it instead. He calls AI the catalyst for a new renaissance.

The book is full of examples drawn from art, music, architecture, and design. Kass shows how the photograph didn’t kill painting — it freed painters from the obligation to represent reality and launched impressionism, expressionism, and abstraction. He argues AI will do something similar: handle the mechanical aspects of creation, which will push human creators toward the work that requires genuine imagination, taste, and the courage to make something strange.

This is the most optimistic book on this list, and it earns its optimism by grounding it in history rather than speculation. Kass doesn’t deny that the transition will be painful. He argues that the outcome, for human creativity, will be extraordinary.

Read this if: you’re a creative person who has been rattled by AI image generators and AI writing tools, and you need a historically grounded reason to believe your work will survive.

06

Life 3.0: Being Human in the Age of Artificial Intelligence

Tegmark is an MIT physicist who co-founded the Future of Life Institute. Life 3.0 is the big-picture book — the one that asks what happens to the meaning of human existence when intelligence is no longer bound to biology.

Why include it on a list about human importance? Because Tegmark, for all his willingness to entertain far-future scenarios, keeps returning to a central question: What kind of future do we want? The answer, he argues, cannot come from machines. It has to come from us. The choice of values, the vision of what a good civilization looks like, the willingness to decide what intelligence should be for — all of that is human work. AI can build the future. Only humans can choose which future to build.

Some of the specific predictions in the book have aged unevenly (it was published in 2017), but the philosophical framework is sharper than ever. The longer AI advances, the more urgent Tegmark’s central question becomes.

Read this if: you want to think about AI on the longest timescale — not next quarter, but next century — and you want to understand why human choice is the variable that matters most.

07

Nexus: A Brief History of Information Networks from the Stone Age to AI

Harari’s talent is the long zoom. Nexus places AI in a lineage that includes writing, the printing press, and the internet — each one an information revolution that reshaped civilization, created new forms of power, and forced humanity to adapt or fracture.

The relevance to this list: Harari’s history shows that humans have survived every previous information revolution, and that each one ultimately made human judgment more important, not less. The printing press didn’t replace thinkers; it created the conditions under which thinking became essential. Harari argues that AI follows the same pattern — it raises the stakes, which raises the importance of the people who have to navigate those stakes.

Harari doesn’t downplay the dangers. He is unflinching about what happens when information networks are controlled by the wrong people. But the through-line is clear: every revolution in information has increased humanity’s need for wisdom. AI will be no different.

Read this if: you want to understand AI as a chapter in a much longer story, and you want evidence that humans have always risen to meet the challenge — even when the challenge looked impossible.

08

Human Compatible: Artificial Intelligence and the Problem of Control

Russell is one of the most respected AI researchers alive. He co-wrote, with Peter Norvig, the textbook (Artificial Intelligence: A Modern Approach) that trained a generation of computer scientists. When he says we need to rethink how we build AI, it carries weight.

Human Compatible argues that the current paradigm of AI — optimize for a fixed objective — is fundamentally broken. Machines that pursue fixed goals with maximum efficiency are dangerous precisely because they don’t understand what we actually want. Russell proposes a new model: AI systems that are uncertain about human preferences and defer to us when the stakes are high.

The argument for human importance here is structural, not sentimental. Russell shows that any safe AI system must be designed to treat human values as the ground truth — not because humans are always right, but because we are the only entities that bear the consequences of getting it wrong. The human is the anchor. Remove the anchor and the system drifts into territory nobody intended.

Read this if: you want the technical argument for why AI alignment requires keeping humans at the center, written by someone who has spent a career at the frontier of AI research.


The pattern across all eight

These books were written across fifteen years by people working in different fields, with different temperaments and different stakes in the outcome. They do not all agree on the details. Some are optimistic, others cautious, a few genuinely worried. But read together, they converge on a single idea that the doomerism crowd keeps missing.

Machines are powerful. They are getting more powerful. And none of that power changes the fact that someone has to decide what the power is for. Someone has to choose the values. Someone has to bear the consequences. Someone has to be responsible when the system fails — and systems always, eventually, fail.

That someone is not a model. It is a person. It is you.

The real danger of the obsolescence narrative is not that it’s accurate — it isn’t — but that it erodes the confidence people need to stay engaged. If you believe you’re about to become irrelevant, you stop investing in yourself. You stop making hard decisions. You defer to the machine. And that’s when the machine actually becomes dangerous — not because it grew too powerful, but because the humans around it stopped exercising the judgment it was never designed to have.

These eight books will not make the fear disappear. What they will do is give you reasons — grounded in evidence, philosophy, history, and lived experience — to believe that the fear is wrong. Human beings are not spare parts. We are the point of the entire exercise.

Read them. Then get back to the work that only you can do.

Juan C. Guerrero is a Costa Rican founder, the creator of Anthropic Press, and the author of The Last Skill: What AI Will Never Own. He writes about what stays human in an increasingly automated world.