I didn’t set out to read 10,000 Reddit comments about AI. But once I started, I couldn’t stop.

It began as background research. I was writing about how people actually feel about artificial intelligence — not what surveys say they feel, not what pundits claim they feel, but the raw, unfiltered version. And the rawest place I could think of was Reddit.

So I went in. I read through threads on r/cscareerquestions, r/Futurology, r/artificial, r/antiwork, r/freelance, r/graphic_design, and dozens of smaller communities. Posts from early 2024 through late 2025. Thousands of comments, sorted by top and controversial, bookmarked and categorized over the course of several weeks.

This isn’t formal research. I didn’t run sentiment analysis or build a dataset. What I did was read — carefully, for a long time — and pay attention to what kept repeating. Five patterns emerged so consistently that by the end, I could predict the shape of a thread before scrolling past the first reply.

Here’s what I found.


Pattern 1: The fear is about identity, not income

The most common assumption about AI anxiety is that people are afraid of losing their paychecks. And yes, that’s part of it. But it’s not the center of it.

The comments that get upvoted the most — the ones that spark the longest threads — aren’t about money. They’re about meaning. Over and over, people write some version of the same question: “What’s the point of learning this if a machine does it better?”

A graphic designer on r/graphic_design: “I spent four years in school and ten years building a portfolio. Now a client can type three sentences into Midjourney and get something ‘good enough.’ I’m not worried about starving. I’m worried about what my life was for.”

A junior developer on r/cscareerquestions: “I chose this career because I loved building things. If Copilot writes better code than I do before I even hit mid-level, what was the point of any of it?”

This is an existential crisis dressed as a career question. People aren’t just asking “Will I have a job?” They’re asking “Will I matter?” And that second question is harder to answer with economic data or productivity statistics. It sits in a place that policy papers don’t reach.

When someone has tied their identity to a craft — when they are the designer, the writer, the coder — the threat of replacement doesn’t just attack their livelihood. It attacks the story they’ve told themselves about who they are. That’s a different kind of fear. Deeper, and much harder to fix with a career pivot.


Pattern 2: The people closest to AI are the most scared

Here’s the part that surprised me most. The loudest anxiety isn’t coming from truck drivers or factory workers or retail cashiers — the groups that automation experts have been warning about for years. It’s coming from programmers. Designers. Writers. Translators. The knowledge workers who were supposed to be safe.

On r/cscareerquestions, the mood has shifted dramatically since early 2024. Threads that used to be about salary negotiation and which FAANG company to target now read like group therapy. “Is it even worth getting a CS degree anymore?” is posted almost weekly. The answers are split between bitter veterans and anxious newcomers, and nobody sounds confident.

The writers on r/freelance have watched their rates collapse. The illustrators on r/graphic_design have seen clients disappear. The translators have been watching neural machine translation eat their margins for years, but the latest models made it feel final.

What’s striking is that proximity to AI doesn’t produce comfort — it produces dread. The people who understand these tools best are the ones who can see most clearly what the tools will be able to do next. They’re not afraid of today’s AI. They’re afraid of next year’s. And the year after that. And the pace at which the gap between human output and machine output is narrowing.

The factory worker who hasn’t interacted with an LLM has abstract concerns. The software engineer who uses one eight hours a day has concrete ones.


Pattern 3: Nobody trusts the optimists

Every AI thread on Reddit has at least one commenter who offers the standard reassurance: “AI will create more jobs than it destroys. That’s what happened with every previous technology.”

These comments get downvoted into oblivion.

The skepticism isn’t ignorant. It’s specific. Redditors push back with pointed questions: Which jobs? When? For whom? And what happens to the people who can’t retrain? The “new jobs will appear” argument is treated the way a drowning person treats someone shouting encouragement from shore — technically true, completely unhelpful.

There’s a deeper skepticism too. People notice who is being optimistic. When a tech CEO says AI will be great for everyone, the response is immediate and acidic: “Easy to say when you own the AI.” When a venture capitalist writes a blog post about abundance, the top reply is usually something like “abundance for you, unemployment for me.”

The optimism is read as class warfare disguised as futurism. Whether or not that’s fair, it’s the dominant sentiment. And it means that the people who most need to hear a credible positive case for AI are the least likely to trust the people making it.

This is a communication crisis as much as a technological one. The message isn’t necessarily wrong — technology has created new categories of work before. But the messenger is suspect, the timeline is vague, and the people hearing it have bills due this month.


Pattern 4: People want permission to feel scared

The most upvoted comments in AI anxiety threads are almost never advice. They’re validation.

“You’re not crazy for being worried.” “I feel this too.” “Anyone who tells you not to worry about this isn’t paying attention.” These are the comments that get hundreds, sometimes thousands of upvotes. The advice comments — even the good ones — rarely compete.

There’s a reason for this. In most professional spaces, expressing fear about AI carries a stigma. It marks you as a Luddite, a pessimist, someone who “doesn’t get it.” Tech culture in particular rewards optimism and punishes doubt. If you’re a developer who says “I’m scared AI will make me obsolete,” the expected response from your peers is “just learn to use the tools.” Helpful, maybe. But not what you needed to hear first.

Reddit offers something rare: a space where the fear is allowed. Where you can say “I’m terrified” and have a hundred strangers say “me too.” That collective exhale is doing real emotional work. It’s not solving anything, but it’s making the problem survivable. And survivable is the first step.

What I take from this is that the emotional dimension of AI disruption is radically underserved. We have conferences about AI strategy. We have white papers about AI governance. We have almost nothing that says, simply and seriously, “This is frightening, and your fear is rational.”


Pattern 5: The advice gap

When advice does appear in these threads, the most common varieties are met with open hostility.

“Learn to code” — mocked, especially by the people who already code and can see what’s coming. “Upskill” — hated. It’s treated as a meaningless corporate word that puts the burden of adaptation entirely on the individual. “Learn to use AI as a tool” — accepted grudgingly, but followed immediately by “and then what? When the tool doesn’t need me to operate it?”

The frustration isn’t with the concept of adaptation. People understand that the world changes and skills must change with it. The frustration is with the vagueness. Upskill to what? The people posting on Reddit are looking for something concrete. They want someone to tell them, specifically, what to do on Monday morning. And nobody can.

Part of the problem is that the honest answer is uncomfortable: nobody fully knows yet. The technology is moving too fast for anyone to give reliable five-year career advice. But “nobody knows” is terrifying when your mortgage is due and your industry is shrinking. So people keep asking, and the answers keep disappointing.

The advice that does land — the rare comment that gets upvoted and saved — tends to be brutally specific and honest. “Here’s what I did when my freelance income dropped 40%.” “Here’s the exact thing I learned that got me a different kind of client.” Lived experience beats theory every time. Grand frameworks are ignored. Survival stories are shared and bookmarked.


What this means

After weeks inside these threads, what struck me most is that Reddit has stumbled onto something that most professional commentary about AI has missed.

The real question isn’t “Will AI take my job?”

It’s “What am I when the job is gone?”

That’s an identity question. And it’s the one that no amount of reskilling programs or economic forecasts can answer. When your sense of self is built on what you produce — on your skill, your craft, your usefulness — the arrival of something that produces faster and cheaper doesn’t just threaten your income. It threatens your reason for getting up in the morning.

This is something I spent a long time thinking about while writing The Last Skill. The book’s core argument is that useful is not the same as irreplaceable — and that the distinction matters enormously. AI is extraordinarily useful. That doesn’t mean it can replace what humans actually are when they’re operating at their highest capacity. The four proofs — Creativity, Governance, Decision-Making, and Reputation — are capacities that require what I call “agency under consequence”: the willingness to be the one who answers for the choice. Machines can optimize. They can generate. They can even impress. But they cannot carry the weight of a decision that costs something real. They have no Proof of Human.

What’s remarkable is that the Reddit community has arrived at a version of this insight independently, from the ground up. They didn’t read a framework. They lived it. They felt the absence of something they couldn’t name, and they described it in a thousand different ways across a thousand different threads: What am I for? What’s the point of me? What makes me different from the output?

Those are the right questions. They’re also the ones that most public discourse about AI refuses to take seriously, because they’re emotional, and messy, and don’t fit neatly into an op-ed or a policy recommendation.

But they’re the questions that will define this era. Not how fast the models get. Not which company wins the AI race. But whether ordinary people — designers, developers, writers, translators, the people who built their lives around being good at something — can find a version of themselves that holds up when the tool gets better than them at the task.


Reddit isn’t a research paper. The sample is self-selecting. The loudest voices aren’t necessarily the most representative. I know this.

But 10,000 unfiltered voices telling you the same thing is its own kind of data. And what they’re telling you is this: people aren’t just worried about AI. They’re grieving something. A version of their future they thought was secure. A story about themselves that made sense until last year.

If we want to talk honestly about AI and its effects on people, we have to start there. Not with the economics. Not with the capabilities. With the grief.

Everything else comes after.

Juan C. Guerrero is the founder of Anthropic Press and the author of The Last Skill: What AI Will Never Own. Born in Costa Rica, he writes about the questions machines can generate but only humans can answer.