
“AI will surpass human intelligence within a decade, and once it does, it will render most of humanity economically useless. There is no plan for what happens next.”

You’ve heard some version of this. Maybe from a tech CEO hedging his bets on a podcast, maybe from a researcher at a congressional hearing, maybe from that one friend who reads LessWrong and won’t shut up about p(doom). The details vary. The structure is always the same: AI goes up, humans stay flat, disaster follows.

There’s one thing this gets wrong. And it’s not a minor thing. It’s the load-bearing assumption underneath the entire argument.

The static human fallacy

Every doomsday prediction I’ve read — and I’ve read a lot of them — relies on the same mental model. Draw two lines on a graph. One line represents AI capability: it curves upward, steep and accelerating. The other line represents human capability: it’s flat. A horizontal rule. Fixed.

The prediction writes itself. The lines cross. Humans lose.

This is a one-variable model of a two-variable system. It gets AI roughly right (capability is increasing fast) and humans completely wrong (we are not a fixed quantity). It treats eight billion people like a physical constant — like the speed of light or the charge of an electron. Something that just is, and doesn’t respond to changed circumstances.

But that’s never been true. Not once in the history of our species.

Humans are dynamic. When the environment changes, we reorganize. We shift what we value. We find new things to be good at, new capacities to develop, new ways to matter. We’ve been doing this for ten thousand years, and we’re not going to stop because a language model learned to write passable legal briefs.

The doomers aren’t wrong that AI is powerful. They’re wrong that we’ll sit still while it gets more powerful. That mistake changes everything about the prediction.
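
If you want to see how much work that flat line is doing, here is a toy sketch of both models. Every number in it is a made-up illustrative assumption (the growth rate, the constants, the “adaptation rate”), not a forecast. The point is structural: in the one-variable model, human capability ends exactly where it started, so the crossing reads as defeat. Let the human line respond to the AI line, even partially, and the same steep AI curve produces a rising human curve instead of a flat one.

```python
# A toy version of the doomsday graph. Every number below is an
# illustrative assumption, not a forecast; only the structure matters.

def ai_capability(year):
    # Assumption: AI capability compounds at 8% per year.
    return 1.08 ** year

def static_human(year):
    # The one-variable model: human capability as a constant.
    return 3.0

def adaptive_human(year):
    # The two-variable model: humans reorganize around each AI gain,
    # so their capability is a function of AI's, not a constant.
    # The 0.6 "adaptation rate" is a made-up illustrative parameter.
    return 3.0 + 0.6 * (ai_capability(year) - 1.0)

for year in (0, 10, 20, 30, 40):
    print(f"year {year:2d}: AI={ai_capability(year):6.2f}  "
          f"static human={static_human(year):.2f}  "
          f"adaptive human={adaptive_human(year):6.2f}")
```

Run it and the static line sits at 3.0 forever while the adaptive line quintuples. Whether the AI curve is steeper isn’t the interesting question. Whether the human curve moves at all is.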

The historical pattern no one wants to talk about

The calculator didn’t kill mathematicians. It killed arithmetic.

That’s an important distinction, and the pattern repeats with eerie consistency across every technological disruption worth studying. The technology doesn’t eliminate the human role. It eliminates the specific version of the human role that the technology can replicate, and humans reorganize around what’s left.

ATMs were supposed to kill bank tellers. Nearly every serious forecast in the 1980s said so. Instead, ATMs made it cheaper to operate a bank branch, so banks opened more branches, and the number of tellers actually increased. What changed was the job. Tellers stopped counting cash and started selling financial products, handling complex transactions, building relationships. The rote part of the job evaporated. The human part expanded.

The internet was supposed to kill journalism. Instead, it killed the specific distribution model that newspapers depended on — classified ads, physical delivery, regional monopolies. Journalism itself reorganized. Some of the reorganization was ugly (we lost a lot of local newsrooms, and that matters). But the practice of journalism — investigating, interviewing, making sense of the world for an audience — didn’t go anywhere. It changed shape.

The printing press was supposed to destroy memory. Socrates (well, Plato’s Socrates) made the same argument about writing itself — that it would rot our ability to remember. He was right, in a narrow sense. We don’t memorize epic poems anymore. But we freed up cognitive capacity for other things, and what we built with that freed capacity turned out to be more interesting than what we lost.

The pattern is so consistent it should be boring by now: technology absorbs the predictable part of a human activity, and humans migrate to the unpredictable part. Every time. Without exception.

So when someone tells you AI will “replace” writers, or lawyers, or doctors, or programmers, ask them: replace which part? The part that’s already routine and pattern-matchable? Probably, yes. The part that requires judgment under uncertainty, creative recombination, or the ability to sit across from another human being and actually understand what they need? No. That part is about to become more valuable, not less.

What AI doom gets right

I don’t want to be glib about this. The doomers are wrong about the endpoint, but they’re right about the pain.

Transitions hurt. The fact that bank tellers eventually found new roles doesn’t help the specific teller who was forty-seven years old in 1985 and couldn’t afford to retrain. The fact that journalism reorganized doesn’t bring back the Albuquerque Tribune. History’s long arc may bend toward adaptation, but individual humans live in the short arc, and the short arc can be brutal.

The speed of this particular transition is also genuinely new. Previous technological disruptions played out over decades. The steam engine took a century to fully reshape the labor market. The internet took twenty years. AI is compressing that timeline into something closer to five. When I started writing about this topic two years ago, AI could barely hold an argument together for a page. Now it can draft legal contracts and debug production code. That acceleration is real, and dismissing it is as dishonest as claiming it leads to extinction.

There’s also a distribution problem. The benefits of AI are accruing to people who already have capital and technical fluency. The costs are falling on people who don’t. That’s not an AI problem specifically — it’s the same pattern we saw with globalization and the internet — but the speed and scale make the inequality sharper.

So the real risks aren’t existential. They’re distributional. The danger isn’t that AI destroys humanity. It’s that the transition period is so fast and so uneven that millions of people get crushed in the gears of an adjustment that, in the long run, will probably be fine. “Probably fine in the long run” is cold comfort if you’re the one getting crushed right now.

Anyone who dismisses these concerns isn’t paying attention. Anyone who inflates them into species-level extinction isn’t being honest.

A better mental model

Here’s what I’d replace the doomsday graph with.

Stop thinking “AI vs. Humans.” That framing is zero-sum, and zero-sum framing makes you stupid. It pushes you toward either panic (they win, we lose) or denial (we win, they lose), and both responses are equally useless.

Instead, ask a different question: What do humans do in a world where AI handles the predictable?

That’s a generative question. It opens doors instead of closing them. And the answers are more interesting than the doom crowd admits.

If AI handles the predictable, humans handle the unpredictable. That means judgment in ambiguous situations. It means creative work that isn’t recombination of existing patterns but genuine surprise. It means the parts of medicine, law, teaching, and management that depend on reading a room, carrying emotional weight, making a call when the data is incomplete. It means everything that requires a stake — something on the line, skin in the game, a reason to care about getting it right that goes beyond optimization.

Machines don’t have stakes. They don’t have anything to lose. And it turns out that a staggering amount of what we value in human work — trust, accountability, courage, the willingness to make a hard call and stand behind it — depends on the worker having something to lose.

This is the argument I tried to make in The Last Skill: that the capacities AI can’t replicate aren’t bugs in the human system. They’re features. And they’re about to become the most valuable things we have.

I’m not saying the transition will be easy. I’m saying the destination is not extinction. It’s reorganization. It’s always been reorganization.

The real question

The AI doomsday narrative is comforting in a perverse way. It gives you permission to stop trying. If the machines are going to win no matter what, you might as well scroll Twitter and wait for the end.

The reality is harder. The reality says: the world is changing fast, the change will be uneven and painful, and your job is to figure out where you fit in what comes next. That’s not a story about doom. It’s a story about work — the difficult, unglamorous, human work of adapting.

Every previous generation did that work. The weavers who survived the power loom. The accountants who survived the spreadsheet. The designers who survived Canva. They didn’t survive by being better than the machine at the machine’s game. They survived by changing games entirely.

We will, too. Not because humans are magical or because AI is overhyped. But because adaptation is what we do. It’s what we’ve always done. And the predictions that say otherwise are modeling a species that doesn’t exist — a species that holds still while the world moves around it.

That’s not us. It’s never been us.

The one thing every AI doomsday prediction gets wrong is the variable it doesn’t bother to model: the human response. And that variable, historically, has made all the difference.

Juan C. Guerrero is a Costa Rican founder, the creator of Anthropic Press, and the author of The Last Skill: What AI Will Never Own. He writes about what stays human in an increasingly automated world.