
Last fall I watched a senior developer at a Fortune 500 company get replaced by three junior developers and a well-tuned Claude workflow. He was good at his job. Fifteen years of experience. Clean code, strong reviews, respected by his team. None of that mattered. The economics shifted underneath him in about six months.

I tell you this because I’m not here to write a comfort piece. AI is replacing people. It’s replacing specific kinds of work at a pace that makes the Industrial Revolution look leisurely. The World Economic Forum’s 2025 Future of Jobs Report estimates that 92 million jobs will be displaced globally by 2030. That number is real. It should make you uncomfortable.

But here’s what I keep coming back to, after two years of research and writing about this stuff (including The Last Skill, which forced me to confront my own fears about obsolescence head-on): the same research that quantifies the threat also reveals something remarkable. There are specific human capacities that AI cannot replicate. Not “hasn’t yet replicated.” Cannot. The architecture won’t allow it. And the research on why is more rigorous than most people realize.

Here are seven of them.


1. Empathy — The Real Kind, Not the Performed Kind

Let me be precise here, because this is where people get confused. AI can simulate empathetic language. It’s actually pretty good at it. A 2023 study in JAMA Internal Medicine found that evaluators preferred ChatGPT’s responses to patient questions over actual physicians’ responses 78.6% of the time, and rated them significantly more empathetic. That sounds like a point for the machines, right?

Wrong. Read the study closely. What the AI produced was empathetic-sounding text. What it did not produce — what it structurally cannot produce — is the shared emotional resonance that makes empathy actually work in a therapeutic, leadership, or caregiving context. Harvard Business School professor Amy Edmondson’s research on psychological safety (published across multiple papers and her 2018 book The Fearless Organization) shows that what makes teams functional isn’t empathetic language — it’s the felt sense that another person genuinely understands your risk. A person who has been fired understands the terror of being fired. A language model that has processed ten thousand firing stories understands the word patterns around the terror of being fired. These are not the same thing.

Think about the last time someone said the right words to you but you knew they didn’t mean it. You felt the emptiness instantly. That’s the gap. And it isn’t closing.

2. Moral Judgment Under Genuine Uncertainty

AI can apply ethical frameworks. It can tell you what a utilitarian would do, what a Kantian would do, what a virtue ethicist would do. What it cannot do is choose between those frameworks when the stakes are real and the answer isn’t obvious.

This is the finding at the heart of a widely cited MIT Sloan Management Review analysis on AI and organizational decision-making. The researchers found that AI systems consistently fail at what they call “moral uncertainty” — situations where reasonable ethical frameworks conflict and someone has to make a call that they’ll have to live with. A hospital administrator deciding whether to allocate a ventilator to a younger patient or an older one. A manager deciding whether to lay off a high performer who’s also a single parent. A journalist deciding whether to publish a story that’s true but will destroy someone’s life.

The key phrase is have to live with. Moral judgment requires moral weight. You have to be the kind of entity that bears consequences — guilt, regret, pride, responsibility. AI has no skin in the game. It never will. And judgment without skin in the game is just calculation.

3. Creative Vision (Not Creative Execution)

This distinction matters enormously, and almost nobody makes it clearly enough.

AI is astonishingly good at creative execution. It can paint in the style of Monet, write sonnets in the meter of Shakespeare, compose music that sounds like it belongs in a Miyazaki film. The execution problem is largely solved. If you need a thing that looks like a creative work, AI will give you one in seconds.

But here’s what it has never done and shows no signs of doing: deciding that something needs to exist in the first place. When Toni Morrison sat down to write Beloved, she wasn’t executing a prompt. She was responding to a specific, personal, historically situated agony about what slavery had done to Black motherhood in America. The vision — the recognition that this story needed to be told this way at this moment — that’s the part AI can’t touch.

The World Economic Forum’s 2025 report ranks “creative thinking” among the most important skills for workers going forward. But read the fine print: what they mean by creative thinking is the ability to identify which problems are worth solving and to envision outcomes that don’t yet exist. That’s vision. That’s the human part. The execution — the rendering, the drafting, the composing — that’s increasingly the machine’s part. And honestly? Good. Let it be.

4. Physical Presence and Skilled Touch

This one is so obvious that intellectuals keep overlooking it.

We are bodies. We exist in physical space. And an enormous amount of human skill — the kind that actually matters in daily life — depends on the integration of perception, movement, proprioception, and real-time physical adaptation that AI is decades away from matching, if it ever does.

MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has been working on robotic dexterity for years. Their findings are humbling. As of the latest published research, robots still struggle with tasks that a three-year-old handles effortlessly — picking up an egg without breaking it, tying a shoe, folding a towel that isn’t perfectly flat. Moravec’s Paradox, named after the roboticist Hans Moravec, states it perfectly: what’s hard for humans (chess, calculus, pattern recognition in massive datasets) is easy for machines, and what’s easy for humans (walking on uneven ground, catching a ball, reading a room when you walk in) is absurdly hard for machines.

Think about a physical therapist working with a stroke patient. They’re reading muscle resistance in real time. They’re adjusting pressure based on a wince the patient didn’t even know they made. They’re doing a hundred calculations per second that live in their hands, not their heads. No algorithm replicates that, and the gap between current robotics and that level of embodied intelligence is not a gap that’s shrinking fast.

5. Humor and Irony

AI can tell jokes. Some of them are even funny. But here’s the thing about humor — real humor, the kind that makes you laugh so hard you cry, the kind that makes a room full of strangers suddenly feel like co-conspirators — it requires shared vulnerability. It requires the possibility of bombing.

A 2024 study published in the Proceedings of the National Academy of Sciences (PNAS) tested AI-generated humor against human-generated humor across several thousand participants. The AI performed competently on puns and wordplay. It failed spectacularly on irony, self-deprecation, and contextual humor — the kinds that depend on a shared understanding of what’s absurd about being alive. The researchers concluded that effective humor requires what they called “common ground modeling” — an ongoing, real-time model of what your specific audience knows, fears, and finds ridiculous. AI can approximate this. It cannot inhabit it.

I think about this every time a chatbot tries to be witty with me. The words are arranged correctly. The structure of the joke is sound. But the danger is missing — the risk that this might not land, that I might embarrass myself. Humor without risk is just pattern matching. And we can feel the difference in our bones.

6. Existential Meaning-Making

Why are you here? Not “here” as in reading this article. Here as in alive. What makes your particular existence worth the trouble?

AI cannot ask this question and it cannot answer it. Not because the question is too complex — AI handles complexity fine — but because the question requires a questioner who has something at stake in the answer. Viktor Frankl built an entire school of psychotherapy (Man’s Search for Meaning, 1946) around the insight that humans can endure almost anything if they have a why. The search for meaning isn’t an information-processing task. It’s a survival mechanism rooted in mortality.

Harvard’s Human Flourishing Program, led by Tyler VanderWeele, has published extensive research showing that meaning-making is among the strongest predictors of psychological resilience, physical health, and longevity. Their data shows that people with a strong sense of purpose have a 15.9% lower mortality risk over a given follow-up period compared to those who lack one. This isn’t soft science. It’s epidemiology.

And it requires something AI doesn’t have: a death. A finite window of consciousness that forces the question of what to do with it. I spent a long time thinking about this while writing The Last Skill, and I keep arriving at the same conclusion: meaning is not a problem to be solved. It’s a condition of being mortal. Machines aren’t mortal. So the question doesn’t arise for them, and their answers to it — however eloquent — are always borrowed from us.

7. Taste and Curation

Here’s a dirty secret about the AI age: the more content AI produces, the more valuable human taste becomes. We are drowning in generated output. A million AI-written articles, a hundred thousand AI-generated images, ten thousand AI-composed songs — all technically competent, all forgettable. The scarce resource is no longer creation. It’s selection.

This is what the World Economic Forum means when they rank “analytical thinking” among the most critical skills for the future workforce. Analytical thinking, in their framework, includes the ability to evaluate, prioritize, and judge quality — to look at a sea of options and say this one. That’s taste. That’s curation. And it requires a lifetime of accumulated experience, personal preference, cultural context, and aesthetic conviction that no model possesses.

Think about the best editor you’ve ever worked with. (If you haven’t worked with a great editor, think about the friend whose restaurant recommendations never miss.) What they have isn’t a formula. It’s a sensibility — built over years of reading, eating, watching, living — that allows them to instantly distinguish between “good enough” and “genuinely great.” AI can rank things by metrics. It can sort by popularity, by engagement, by semantic similarity to a reference. What it cannot do is care whether something is great. Caring is the engine of taste, and caring requires a self that has preferences rooted in lived experience.


What This Means (and What It Doesn’t)

I want to be honest about the limits of this argument. Saying “AI can’t replace these seven skills” does not mean your job is safe. Plenty of jobs don’t require much empathy, moral judgment, or creative vision. If your work is primarily execution — producing outputs from well-defined inputs — the threat is real and it’s here now. I’m not going to pretend otherwise.

But here’s what I believe, based on the research and based on my own experience building software and publishing books in the middle of this revolution: the people who will thrive are the ones who double down on the skills that remain human. Empathy. Judgment. Vision. Presence. Humor. Meaning. Taste. These aren’t soft skills. That term has always been a lie. They’re the hardest skills — the ones that take a lifetime to develop and that no shortcut, no automation, no architecture of floating-point operations can replicate.

The WEF estimates that 59% of workers will need reskilling by 2030. Most of the conversation around reskilling focuses on technical competencies — learn to prompt, learn to code, learn data analysis. Fine. Do that. But the research consistently points somewhere deeper: the skills that will matter most in an AI-saturated world are the ones that make us human in the first place.

That senior developer I mentioned at the top of this piece? He’s fine, by the way. He pivoted into a role where he mentors junior engineers, makes architectural decisions that require understanding the business (not just the code), and serves as the person his team trusts when something goes wrong and nobody knows what to do. His value didn’t disappear. It migrated — from execution to judgment, from output to presence.

That migration is available to all of us. But only if we stop pretending the machines are coming for everything, and start getting serious about what they can’t touch.


Juan C. Guerrero is a Costa Rican founder, the creator of Anthropic Press, and the author of The Last Skill: What AI Will Never Own. He writes about what stays human in an increasingly automated world.