The question I get asked more than any other — at conferences, in DMs, over coffee — is some version of the same thing: How do I stay relevant?

I understand why people ask it that way. It feels like the ground is shifting. Every week there’s a new model that writes better, codes faster, designs cheaper. The natural response is to figure out how to keep up. How to stay in the race.

But here’s the thing: “How do I stay relevant?” is the wrong question. It assumes the game is about keeping pace with a machine that will always be faster than you. It puts you on a treadmill with no finish line.

The better question is: How do I become irreplaceable?

Relevance is about doing what’s needed today. Irreplaceability is about being the kind of person no system can substitute for, regardless of what tomorrow’s model can do. The difference isn’t semantic. It changes everything about where you invest your time, how you build your career, and what you practice when nobody’s watching.


Stop competing with AI on AI’s terms

Most of the advice floating around right now goes something like this: learn to use AI tools, become more productive, automate your workflows, move faster. And look, that advice isn’t wrong exactly. You should know how to use these tools. I use them every day in my own work across growth, product, and marketing.

But if your entire strategy is “be more productive with AI,” you’ve already lost. Here’s why: everyone will learn to use the tools. They’re getting easier, not harder. Within two years, prompting an AI will be as unremarkable as Googling something. The tools are table stakes, not a competitive advantage.

Worse, if you define your value by speed and output volume, you’re competing directly with the thing that will always be faster and never gets tired. That’s not a race you win. That’s a race where you burn out trying.

The move — the real move — is to stop competing on AI’s terms entirely. Instead of asking “How can I do this faster with AI?” start asking “What can I do that AI genuinely cannot?” Not what it can’t do yet (that goalpost will keep moving), but what it cannot do by design. What is structurally impossible for a system that processes tokens but has never lived a single day?

That’s where the real leverage is.


The five capacities that can’t be automated

After spending two years thinking about this — running companies, building with AI daily, and writing The Last Skill — I’ve landed on five capacities that I believe are structurally resistant to automation. Not because today’s models are weak, but because these capacities require something no architecture of pattern-matching can replicate: a life at stake.

1. Judgment under ambiguity

AI is spectacular when the problem is well-defined. Give it clean data, clear constraints, and a measurable objective, and it will outperform most humans every time. But the most important decisions in business and in life don’t come with clean data. They come wrapped in contradiction, incomplete information, and competing values.

Should you fire a co-founder who’s underperforming but who your team loves? Should you pivot your company when the numbers are ambiguous — not clearly bad, not clearly good? Should you take the safe job or the risky one when you’ve got a kid on the way?

These are judgment calls. They require you to weigh factors that can’t be quantified, to hold uncertainty without collapsing into false certainty, and to act anyway. AI can give you a pros-and-cons list. It cannot make the call. The call requires a person who will live with the consequences.

2. Emotional resonance

Last year I watched a founder pitch investors. Her deck was fine — probably AI-assisted, honestly. What won the room was the sixty seconds where she talked about watching her mother navigate a broken healthcare system in rural Costa Rica. Her voice changed. The room changed. She raised her round.

AI can generate empathetic-sounding text. It can mimic the structure of emotional storytelling. But it cannot resonate. Resonance requires a real person who has actually felt something, transmitting that feeling to another real person who recognizes it. It’s not information transfer. It’s recognition between two beings who know what it costs to be alive.

This matters in leadership, in sales, in any work that involves moving people to action. You cannot automate the moment when someone looks you in the eye and thinks, this person understands.

3. Taste and curation

AI can generate ten thousand options. It cannot tell you which one is right. That’s taste — the ability to look at a sea of possibilities and say this one, and be right in a way that you can feel but can’t fully explain.

Think about a great restaurant menu. It’s not great because it has everything. It’s great because someone decided what to leave off. Think about a well-curated bookstore, a thoughtfully designed product, a brand that just feels right. Behind each of those is a human who made a thousand small decisions about what belongs and what doesn’t.

Taste is trained by living — by eating bad meals and great ones, by reading widely, by traveling, by failing in public and learning what embarrassment teaches you. It’s the residue of a life lived with attention. You can’t train it on data alone.

4. Stakes-based decision making

Here’s something I think about a lot: AI has no skin in the game. It doesn’t lose anything when it’s wrong. It doesn’t have a reputation, a family, a mortgage, a body that suffers consequences. And that absence of stakes changes everything about how it operates.

When a surgeon decides to operate, that surgeon is putting their career, their license, and their patient’s life on the line. When a CEO decides to enter a new market, they’re betting the company. When you decide to leave a stable job to start something, you’re betting years of your life.

These decisions carry a weight that AI fundamentally cannot bear. And people know the difference. We trust the doctor who will be held accountable, the leader who has something to lose, the advisor who is putting their own money where their mouth is. Skin in the game is a signal of seriousness that no algorithm can fake.

5. Creative vision (not execution)

This is the one people get confused about. AI is already very good at creative execution — generating images, writing copy, composing music in the style of whatever you want. If your value was in execution alone, yes, you have a problem.

But execution is the last step. Before execution comes vision: the ability to see something that doesn’t exist yet and believe it should. The ability to say “the world needs this” when no data supports you, when no trend report confirms it, when the only evidence is your own conviction born from your own experience of being alive and paying attention.

Wes Anderson didn’t get his visual style from analyzing successful films. He got it from a life spent noticing specific things and caring about them enough to build a world around them. The same is true for any genuinely original creative act. AI can execute a vision brilliantly. It cannot originate one. Origination requires a point of view, and a point of view requires a life.


What to do Monday morning

Frameworks are nice. But I know from experience that if you don’t walk away from an article with something concrete to do, nothing changes. So here’s what I’d actually do this week if I were rebuilding my career strategy around these ideas.

First, audit your work for replaceability. Take your last two weeks of work and sort every task into two columns: “AI could do this at 80% of my quality” and “AI genuinely can’t do this.” Be honest. For most people, the first column is bigger than they want to admit. That’s fine — awareness is the first step. The goal isn’t to panic. The goal is to start deliberately shifting your time toward column two.

Second, put yourself in rooms where stakes are real. Volunteer to lead the project no one wants because the outcome is uncertain. Take the meeting with the angry client. Make the presentation to the board instead of sending a memo. Every time you put yourself in a situation where the outcome depends on your judgment, your presence, your ability to read a room and respond in real time — you’re building muscle that AI cannot build.

Third, develop your taste deliberately. This means consuming widely, and with opinions, outside your field. Read fiction. Go to galleries. Eat at restaurants where the chef is doing something weird. Travel somewhere that confuses you. Taste isn’t built in your comfort zone. It’s built at the edges, where you encounter things that are different from what you already know and have to decide what you think about them.

Fourth, practice making decisions with incomplete information — and living with them. Most people avoid this. They wait for more data, more consensus, more certainty. But the ability to decide under ambiguity, to commit when you only have 60% of the picture, is a trainable skill. Start small. Make faster calls on things that don’t matter much. Build your tolerance for uncertainty. Then gradually raise the stakes.


The real goal

I want to be direct about something: the goal here is not to survive AI. Survival is a defensive posture. It makes you small. It makes you reactive. And frankly, it’s exhausting.

The goal is to become more human because of AI. To let the machines take the work that was always a little beneath what you’re capable of — the rote stuff, the formatting, the first-draft grinding — and to pour yourself into the work that actually requires you. Your judgment. Your experience. Your taste. Your willingness to put something on the line.

I genuinely believe we’re entering an era where the most valuable people won’t be the ones who are best at using AI. They’ll be the ones who are most fully, irreducibly human — the ones who bring something to the table that no model, no matter how large, can generate.

That’s not a threat. That’s an invitation.

The question was never “How do I stay relevant?” The question is “What kind of person do I want to become?” Answer that one honestly, and the relevance takes care of itself.

Juan C. Guerrero is a Costa Rican founder, the creator of Anthropic Press, and the author of The Last Skill: What AI Will Never Own. He writes about what stays human in an increasingly automated world.