It’s 3 a.m. and you’re staring at the ceiling. You saw a demo today — someone typed a prompt and in twelve seconds an AI produced work that would have taken you a full day. It was good work. Maybe not perfect, but close enough. Close enough that the gap between you and the machine suddenly felt like something you could fall through.

You tell yourself to go back to sleep. You tell yourself it’s just a tool. But the thought keeps circling: If a machine can do what I do, what am I for?

If this sounds familiar, you are not alone. Not even close.

What therapists are actually hearing

In January 2026, CNBC reported that therapists across the United States are seeing a significant surge in clients presenting with AI-related anxiety. Not vague dread about robots — specific, concrete fear about their own futures. Designers who watched Midjourney eat their client pipeline. Copywriters whose retainer contracts vanished in a single quarter. Paralegals whose firms quietly started “experimenting” with AI document review.

The American Psychological Association found in 2024 that 41% of workers fear AI will make some or all of their job duties obsolete. Scientific American has covered the rising clinical phenomenon. And therapists have started using a new term for what they’re seeing: FOBO — Fear of Becoming Obsolete.

FOBO isn’t just career anxiety with a new label. The therapists I’ve spoken to describe something more destabilizing. Their clients aren’t coming in saying “I’m worried about money.” They’re coming in saying “I don’t know who I am anymore.”

A graphic designer in her forties who spent two decades mastering her craft. A freelance journalist who prided himself on being fast and precise. A translator who spoke four languages and built a career on the nuance between them. Each one sitting in a therapist’s chair, trying to articulate why AI doesn’t just threaten their income — it threatens something they can’t quite name.

Harvard Business Review ran an analysis on why AI at work makes people anxious, and their findings cut deeper than the obvious economic explanation. The anxiety isn’t primarily about unemployment. It’s about meaninglessness.

Why this fear runs deeper than job loss

Here’s what I think most AI optimists get wrong: they treat the fear as an information problem. If people just understood AI better, they wouldn’t be scared. So they write threads about prompt engineering and post charts showing new job categories that AI will create. And none of it helps, because the fear was never about information.

The fear is about identity.

In modern economies, we have fused who we are with what we do. “What do you do?” is the first question at every dinner party, every networking event, every first date. Your work isn’t just how you pay rent. It’s how you explain yourself to yourself. The designer doesn’t just make layouts — she is a designer. The writer doesn’t just arrange words — he is a writer. The craft became the self.

So when an AI can produce a passable logo in seconds, it doesn’t just threaten the designer’s invoice. It threatens the story she tells about why she matters. And that is a fundamentally different kind of threat than automation has posed before. Factory workers in the 1980s faced devastating job losses, but most of them didn’t define their souls by their position on the assembly line. Knowledge workers and creatives do. That’s the difference, and it’s enormous.

The freelancer data makes this concrete. Reports show that since generative AI tools became mainstream, freelance job postings have declined sharply: roughly 33% for designers, 28% for writers, and 28% for photographers. These aren’t hypothetical losses. They’re happening now, to real people, and the people they’re happening to built their identities around the work that’s disappearing.

What the fear gets right

I want to be honest here, because I think the worst thing you can do with someone’s fear is dismiss it.

Some of this fear is completely rational. AI is replacing certain tasks. Certain freelance markets have contracted. Companies are laying off workers and citing AI efficiency gains. If you’re a mid-career professional whose skill set overlaps heavily with what large language models and image generators can do, your anxiety is not irrational. It’s pattern recognition.

The people who say “AI won’t replace you, a person using AI will replace you” think they’re being reassuring. They’re not. They’re confirming the fear and adding a layer of blame: now it’s your fault if you get replaced, because you didn’t adapt fast enough. That framing is cruel, and it misses the point entirely.

The point is that we built an economic system where human value is measured by productive output, and then we built machines that can match or exceed that output for a growing number of tasks. Of course people are terrified. The fear isn’t a bug in their thinking. It’s a rational response to an irrational situation.

What the fear gets wrong

But fear, even rational fear, has a way of distorting the picture. And there are specific distortions worth naming.

The first is binary thinking. The fear says: either I’m essential or I’m obsolete. Either AI can’t do my job or it can, and if it can, I’m done. But that’s not how displacement actually works. It’s gradual, partial, and uneven. AI is extraordinary at pattern replication and terrible at judgment, context, and the kind of taste that comes from having lived a specific life. Most jobs are a mixture of both, and the mixture is shifting — but it isn’t collapsing into a binary.

The second is catastrophic extrapolation. You see a demo of AI generating a decent first draft and your brain fast-forwards to a world where no human ever writes anything again. But demos are curated. They show the best outputs, not the average ones. They don’t show the revision cycles, the hallucinated facts, the uncanny flatness that creeps into AI-generated work when you read enough of it. The gap between “impressive demo” and “replaces a human professional end to end” is wider than the fear allows.

The third, and maybe the most important, is the assumption that capability equals replacement. A machine can generate a poem. That doesn’t mean anyone wants to read a poem written by a machine. A machine can compose a melody. That doesn’t mean the melody means anything. There is something in human work that has value because it comes from a human — because it carries the weight of experience, limitation, mortality. We don’t value art just for its technical properties. We value it because someone made it, and that someone had something at stake.

What actually helps

I’m not going to tell you to learn prompt engineering. I’m not going to tell you to “upskill.” If one more LinkedIn post tells anxious professionals to “embrace the change,” I think we’ll all lose our minds.

What actually helps, based on everything I’ve read and everyone I’ve talked to, comes down to three things.

First: name what you’re actually afraid of. Most people, when they dig past the surface, find that their fear isn’t really about money. It’s about mattering. It’s the terror that their years of effort and accumulated skill might amount to nothing. Naming that fear precisely — saying it out loud to a therapist, a friend, yourself — doesn’t make it disappear, but it does take away some of its power. Unnamed fear is a fog. Named fear is a problem, and problems can be worked.

Second: separate your identity from your output. This is the hard one, and it’s a long project, not a weekend exercise. But the people I see navigating this moment best are the ones who have started to disentangle who they are from what they produce. You are not your portfolio. You are not your billable hours. You are the consciousness behind the work — the one who decides what matters, who cares about the outcome, who brings the specific texture of a life lived to every decision you make. AI doesn’t have that. It never will.

Third: understand what makes you genuinely irreplaceable. Not in the motivational poster sense, but in the structural sense. What can you do that a statistical model cannot? You can take responsibility. You can hold ambiguity without resolving it prematurely. You can sit with a client who’s grieving and know what not to say. You can make a judgment call that accounts for context no training dataset could contain. You can care about the outcome — actually care, with something at stake.

This is what I tried to work through in The Last Skill. I wrote that book partly because I was feeling this fear myself. I work with AI every day. I’ve watched it get better at things I thought only I could do. And I needed to find — not assume, but actually find — the line between what AI can replicate and what it can’t. That search changed how I think about my own work, and it’s the most honest answer I have to the 3 a.m. question.

The real question underneath

Here’s what I keep coming back to. The AI anxiety crisis is real, and it deserves to be taken seriously — by therapists, by employers, by anyone building these systems. But it’s also revealing something that was true before AI arrived: we had built our sense of self on a fragile foundation. We told people their worth was their productivity, and then we were surprised when a productivity machine made them feel worthless.

AI didn’t create this crisis. It exposed it.

The question isn’t really “will AI take my job?” The question is “was my job ever the right container for my identity in the first place?” And the answer, for most of us, is complicated. The work mattered. The craft mattered. But we mattered before the work, and we’ll matter after it changes shape.

If you’re lying awake at 3 a.m. — and I know some of you are — I won’t insult you by saying don’t worry. Worry if you need to. But also know this: the fact that you’re afraid means you care. It means the work meant something to you. And that capacity to care, to find meaning, to be troubled by the possibility of losing it — that’s the thing no machine will ever have. Not because we haven’t built one smart enough yet, but because caring requires a life. And you have one.

Juan C. Guerrero is a Costa Rican founder, the creator of Anthropic Press, and the author of The Last Skill: What AI Will Never Own. He writes about what stays human in an increasingly automated world.