Let me start with what AI can do, because denying it would make everything else I say here untrustworthy.
In the last three years, AI has learned to write code that passes senior-level technical interviews. It drafts legal briefs that practicing attorneys use verbatim. It generates images indistinguishable from photographs. It translates between languages with a fluency that would have seemed miraculous a decade ago. It diagnoses certain cancers from medical imaging more accurately than radiologists with twenty years of experience. It composes music, designs logos, summarizes research papers, writes marketing copy, tutors students, and manages customer service interactions — all at a scale and speed that no human team can match.
This is not hype. This is Tuesday.
And if you’re paying attention to all of this and feeling a low hum of anxiety about your own relevance, you’re not being dramatic. You’re being rational.
But here’s the thing I keep running into, after two years of researching and writing The Last Skill and spending more hours than I can count talking to researchers, founders, therapists, teachers, and artists about what’s actually changing: there are things AI cannot do. Not “hasn’t done yet.” Cannot. Not because the engineering isn’t there yet, but because the architecture of what AI is makes certain capacities structurally impossible.
This distinction matters more than almost anything else in the conversation about AI and human futures. And most people — including most people writing about AI — aren’t making it clearly enough.
The Capability Spectrum
Before I get into specifics, I want to introduce a framework that I’ve found useful for thinking about this honestly. I call it the capability spectrum, and it has three zones.
Things AI genuinely cannot do. These are capacities that require something AI doesn’t have and can’t be given: embodied experience, mortality, genuine stakes, a self that persists through time and bears consequences. Feeling real empathy. Taking actual accountability. Making moral judgments where you are the one who has to live with the outcome. No amount of training data or architectural innovation gives a machine these things, because they aren’t information-processing problems. They’re conditions of being alive.
Things AI does partially, and the gap is deceptive. These are the dangerous middle ground — tasks where AI produces output that looks like the real thing but isn’t. AI can generate text that sounds empathetic. It can produce art that looks original. It can mimic humor, simulate mentorship, approximate cultural sensitivity. The output passes a casual inspection. But anyone on the receiving end of the real thing — a truly empathetic response from someone who’s been where you are, a genuinely original creative vision, a joke that risks bombing — can feel the difference instantly. This zone is where most of the confusion lives.
Things AI nearly does, and will probably close the gap. Certain technical capabilities that people still think of as “human” are, in fact, on their way out. Simultaneous interpretation of live conversations. Complex multi-step reasoning about well-defined problems. Generating photorealistic video from text descriptions. These are engineering challenges, not structural ones. The gap is real today, but it’s shrinking fast. Building your career on these is building on sand.
The honest conversation about AI isn’t “AI can do everything” or “AI can’t do anything important.” It’s learning to tell these three zones apart. That’s what this piece is about.
Why These Gaps Aren’t Closing
There’s a common assumption buried in most AI discourse: that every current limitation is temporary. Give it another few years, another few trillion parameters, another architectural breakthrough, and AI will do everything humans do. This assumption is wrong, and understanding why it’s wrong is the key to navigating what’s coming.
The core thesis of The Last Skill rests on three structural realities about AI that no amount of engineering can change:
AI lacks agency under consequence. A human manager who fires someone carries that decision. It shows up at 3 a.m. when they can’t sleep. It shapes how they make the next decision. An AI that recommends a layoff generates a probability distribution and moves to the next query. The absence of consequence isn’t a bug to be fixed. It’s the fundamental nature of the system. And judgment without consequence is just calculation.
AI lacks embodied experience. All human understanding is rooted in a body that has been cold, hungry, tired, exhilarated, sick, aroused, and afraid. When a human reads about grief, they have a somatic reference point. When AI processes text about grief, it has statistical patterns of word co-occurrence. These produce different things. Radically, categorically different things. You cannot engineer a body. You cannot simulate the subjective experience of having one.
AI lacks genuine stakes. Nothing is at risk for AI. It cannot lose its reputation, its relationships, its health, or its life. And it turns out that stakes are not incidental to human intelligence — they constitute it. The reason a surgeon’s judgment is valuable isn’t just that they know anatomy. It’s that a person’s life is in their hands and they know it and that knowledge changes how they think. Remove the stakes and you get a different kind of thinking entirely.
These aren’t engineering problems waiting for solutions. They’re structural features of what AI is. And everything that follows in this piece flows from them.
The 7 Things AI Still Can’t Do
1. Feel Genuine Empathy
AI produces empathetic-sounding language better than most humans. A 2023 study in JAMA Internal Medicine found that a panel of licensed healthcare professionals preferred ChatGPT’s answers to patient questions over physicians’ answers nearly 80% of the time, and rated them significantly more empathetic. But what the AI produced was empathetic text, not empathy. Empathy requires a shared substrate of experience — you understand my fear because you have known fear. AI has processed descriptions of fear. These are not the same thing, and the people who need empathy most — in crisis, in grief, in the worst moments of their lives — can tell the difference immediately.
2. Take Real Accountability
When things go wrong, someone has to own it. Not generate a statement about owning it — actually bear the weight. Accountability requires a self that persists through time, that has a reputation to protect and relationships that depend on follow-through. When a leader says “this was my fault” and means it, something happens in the room. Trust gets rebuilt. Direction gets clarified. When AI generates accountability language, nothing happens. Nobody’s career is on the line. Nobody is losing sleep. The words are there but the weight is missing, and the weight is the whole point.
3. Be Truly Original (Not Recombinatory)
AI recombines existing patterns with extraordinary sophistication. It can blend styles, merge concepts, interpolate between reference points in ways that produce novel-looking output. But origination — the decision that something entirely new needs to exist, born from a specific human life lived in a specific cultural moment — is a different act entirely. When Toni Morrison wrote Beloved, she wasn’t recombining existing narratives about slavery. She was responding to something only she could feel, from a position only she occupied. The vision preceded the execution. AI can execute. It cannot envision.
4. Build Trust Through Vulnerability
Trust between humans is built on a paradox: we trust people who show us their weakness, not their strength. A leader who admits uncertainty. A friend who says “I don’t know what to do either.” A therapist who sits in silence because there are no good words. Vulnerability requires something to lose. AI has nothing to lose. It can simulate the language of vulnerability, but it cannot be vulnerable, and we are wired at a biological level to detect the difference. This is why the deepest human bonds — the ones that actually sustain us — always involve risk that no machine can share.
5. Make Moral Judgments With Real Stakes
AI can apply ethical frameworks with precision. It can tell you what a utilitarian would do, what a Kantian would recommend, what virtue ethics suggests. What it cannot do is choose between these frameworks when the answer isn’t clear and someone’s life is affected by the choice. A hospital administrator deciding who gets the last ventilator. A journalist deciding whether to publish a story that’s true but will destroy a family. These decisions require a decision-maker who will carry the consequences — the guilt, the doubt, the 3 a.m. reckonings. Moral judgment without moral weight is just a lookup table.
6. Understand Cultural Context From Inside
AI can describe cultural practices, translate idioms, and summarize anthropological research. What it cannot do is inhabit a culture — know what it feels like to be an outsider in a room where everyone else shares an unspoken understanding, or to carry the weight of a tradition that your grandmother taught you with her hands, not her words. Cultural understanding isn’t information. It’s participation. It’s the difference between reading about a funeral rite and burying your father. AI has access to every ethnographic paper ever written. It has never been the stranger at the table.
7. Exercise Wisdom (Not Just Knowledge)
Knowledge is knowing what to do. Wisdom is knowing when not to do it. AI has access to more knowledge than any human who has ever lived, but wisdom requires something knowledge alone cannot provide: a life lived with mistakes, losses, recoveries, and the slow accumulation of judgment that comes from having been wrong in ways that cost you something. A wise mentor doesn’t just know the right answer. They know which right answer you’re ready to hear, and which one will break you if delivered too soon. That timing — that restraint — comes from having been broken themselves and rebuilt. No dataset teaches that.
What This Means For You
If you’re reading this and wondering what it means for your career, your life, your sense of purpose — here’s what I’d say.
Stop competing with AI on execution. If your primary value is producing well-defined outputs from well-defined inputs — writing standard copy, generating routine code, processing predictable data — the economics are going to get harder every year. That’s not a judgment. It’s arithmetic.
Start investing in the seven gaps. Every item on the list above is a skill that can be developed. Empathy grows with practice and intention. Accountability is a muscle. Originality comes from living a specific, attentive life. Trust is built one vulnerable conversation at a time. Moral judgment sharpens with experience. Cultural understanding deepens through genuine engagement. Wisdom accumulates, slowly, through the unglamorous process of making mistakes and learning from them.
Understand the spectrum. Not everything AI does poorly today will stay in that category. Be honest about which of your skills live in the “AI will eventually close this gap” zone and which ones are structurally safe. The former requires adaptation. The latter requires deepening.
Use AI as a tool, not a replacement for thinking. The people who will thrive in the next decade are the ones who use AI to handle execution while they focus on judgment, connection, and vision. The tool is extraordinary. But without a human wielding it with purpose, it produces output, not direction.
The question has never been whether AI is powerful. It is. The question is whether you’re building the parts of yourself that power can’t replicate.
Go Deeper: The Last Skill
Everything in this piece scratches the surface. If you want the full argument — the research, the framework, the practical implications for your career and your life — that’s what The Last Skill: What AI Will Never Own was written for. It’s the book I needed to read and couldn’t find, so I wrote it.
Read The Last Skill on Amazon →
Explore the Evidence
Each of the questions below takes one specific claim — “Can AI do this?” — and examines it with the depth it deserves. The answers are more nuanced than either the optimists or the pessimists want you to believe.
- Can AI Write a Novel?
- Can AI Be Truly Creative?
- Can AI Feel Empathy?
- Can AI Make Ethical Decisions?
- Can AI Be a Good Manager?
- Can AI Replace Therapy?
- Can AI Teach Children?
- Can AI Lead a Team?
- Can AI Make Real Art?
- Can AI Write Poetry?
- Can AI Be Funny?
- Can AI Fall in Love?
- Can AI Raise Children?
- Can AI Run a Business?
- Can AI Be a Good Friend?
- Can AI Judge Character?
- Can AI Handle a Crisis?
- Can AI Give an Inspiring Speech?
- Can AI Write a Memoir?
- Can AI Be Truly Original?
- Can AI Replace Human Connection?
- Can AI Understand Context?
- Can AI Be Held Accountable?
- Can AI Build Trust?
- Can AI Show Real Courage?
- Can AI Have Good Taste?
- Can AI Be Wise?
- Can AI Compose Music That Moves You?
- Can AI Understand Culture?
- Can AI Be a Mentor?
- Can AI Replace Intuition?
- Can AI Have Common Sense?
- Can AI Be Authentic?
- Can AI Negotiate a Deal?
- Can AI Read a Room?
Related reading
- 7 Skills AI Will Never Replace (According to Research from MIT, Harvard, and the World Economic Forum)
- Scared AI Will Replace You? Here’s What the Research Actually Says
- How to Be Irreplaceable in the Age of AI
Juan C. Guerrero is a Costa Rican founder, the creator of Anthropic Press, and the author of The Last Skill: What AI Will Never Own. He writes about what stays human in an increasingly automated world.