[Image: fountain pen lying on a blank sheet of cream paper]

Last week I asked Claude to write a paragraph about loneliness. It came back clean, balanced, emotionally aware. If you read it without knowing the source, you would assume a person wrote it. A thoughtful person. Maybe even a person who had been lonely.

That paragraph was better than most first drafts I’ve written. Structurally tighter. Fewer wasted words. And it took four seconds.

So here is the question I keep circling: if AI can produce prose this fluent, why did I spend a year writing The Last Skill? Why does anyone write anything anymore?

I don’t think the answer is what most people expect.
The fluency trap

We spent decades worrying about the wrong thing. We worried AI text would be bad — stilted, robotic, obvious. That turned out to be a temporary problem. The real problem is the opposite: AI text is good. Often very good. Good enough that most readers cannot tell the difference.

A 2025 study from the University of Tübingen found that people correctly identified AI-generated text only 52% of the time. Coin-flip accuracy. When the researchers added a fake human byline, the detection rate dropped further. We are already past the point where fluency signals humanity.

This matters more than we’re admitting.

Because fluency was our shortcut. When we read a well-constructed sentence, we assumed it came from someone who had thought carefully. Someone who had revised, weighed each word, meant it. Fluency was a proxy for intention. That proxy is now broken.

Writing versus generating

Here is the distinction I keep returning to: there is a difference between writing and generating, and it has nothing to do with quality.

Generating is the production of plausible text from statistical patterns. It is what language models do. They are extraordinary at it. The output can be moving, precise, surprising. But the model has no relationship to what it produces. It doesn’t believe the sentences. It doesn’t stand behind them. If the text turns out to be wrong or harmful, nothing happens to the model. There is no consequence.

Writing is different. When I write, I am putting my name on a claim about reality. I am saying: I believe this enough to attach my reputation to it. If I’m wrong, that follows me. If I’m dishonest, people will eventually find out. My history, my experiences, my visible track record of being right and being wrong — all of that travels with the text.

That is accountability. And accountability is the thing that separates authorship from output.

An AI has no reputation to risk. It has no career that suffers if the argument falls apart. It has no body that lived through the experience it describes. It has no skin in the game. This isn’t a flaw in current AI that future versions will fix. It is a structural feature of what these systems are.

The inflation of words

Think about what happens to any currency when the supply becomes unlimited.

It loses value.

We are watching this happen to written language in real time. The internet was already drowning in content before generative AI. Now the volume is increasing by orders of magnitude. LinkedIn posts, blog articles, product descriptions, email campaigns, even condolence messages — all generated, all fluent, all weightless.

When everything reads well, nothing reads as meaningful. The signal disappears into the noise. A beautifully written product page used to suggest that someone at the company cared enough to write one. Now it suggests they had an API key.

I run growth and marketing for products I’ve built. I use AI to draft emails, write ad copy, summarize research, brainstorm headlines. I am not a purist. I am neck-deep in this technology every day. And precisely because I am neck-deep in it, I can see the problem clearly: when readers stop trusting that a human wrote something, they stop trusting the something.

Not consciously. Not yet. But the erosion is already underway.

The reader’s contract

Every book is a contract between two people. The author says: I spent months or years thinking about this. I organized it. I chose what to include and what to leave out. I signed my name. The reader says: I will give you my attention, my time, my trust that you earned the right to hold them.

That contract requires both parties to be human.

When you read a generated article, you might learn something. You might enjoy it. But you are not in a relationship with an author. There is no one on the other end. The text exists the way a reflection exists — it looks like something real, but there is no depth behind the surface.

I have read AI-generated essays that made me think. I have never read one that made me trust the person behind it, because there is no person behind it. Trust requires a person who can be held to what they said. A person who chose to say this and not that. A person who will still exist tomorrow, carrying the weight of today’s words.

Why I wrote a book anyway

I wrote The Last Skill for the same reason anyone writes anything worth reading: because I had something at stake.

I am a founder in Costa Rica who works with AI every day. I have watched it change what my work looks like, what my team can do, what “skilled” means in my industry. I have felt the fear — the specific, 3 a.m., staring-at-the-ceiling fear — that the things I spent years learning might not matter in five years. That fear is real for me. It lives in my body. An AI can describe that fear fluently, but it has never felt it. It has never lost sleep.

The book came from that place. From needing to think through the fear and come out the other side with something honest. Every sentence in it carries the risk that I am wrong — that my argument won’t hold up, that readers will disagree, that I will look back in ten years and wince. That risk is the entire point. It is what makes the book a book and not a generation.

Could AI have written something structurally similar? Probably. It could have produced 60,000 words on human identity in the age of automation. The sentences would have been clean. The structure would have been logical. And it would have meant nothing, because no one would have been behind it.

This is not an anti-AI argument

I want to be clear about this because the nuance matters.

I am not arguing that AI writing is bad. I am not arguing that people should stop using AI to write. I use it constantly. It makes me faster, helps me think, catches patterns I miss. AI is one of the most powerful tools I have ever worked with.

But a tool and an author are different things.

A hammer helps build a house. We don’t say the hammer built it. We say the architect and the builders did, because they made the decisions, took the risks, put their names on the blueprints. If the house collapses, the hammer is not responsible. The people are.

I am arguing for something simple: that we should keep noticing the difference between a person who wrote something and a system that generated it. That the difference matters even when — especially when — we can’t tell from the words alone.

What stays human

Authorship is a commitment. It says: I was here. I thought this. I was willing to be judged for it.

Machines will keep getting better at producing text. The fluency gap closed a while ago. The style gap is closing now. Soon there will be no surface-level feature you can point to and say, that’s how you know a human wrote it.

Good. Let the surface-level features go. They were never the point.

The point is that a human author has a life that shaped what they wrote. They have biases they are trying to see past, and biases they can’t see at all. They have a body that gets tired, a history that left marks, a future they are worried about. All of that bleeds into the work — not in the grammar or the sentence structure, but in the choices. What to say. What to leave unsaid. Where to push and where to hold back.

Those choices are authorship. Everything else is typography.

I wrote a book because I had something to say that only I could say — not because my words were better than what a model could produce, but because they were mine. Staked. Signed. Accountable. That turns out to be the last thing that matters, long after the prose itself becomes indistinguishable.

The machines write beautifully now. What they cannot do is mean it.

Juan C. Guerrero is a Costa Rican founder, the creator of Anthropic Press, and the author of The Last Skill: What AI Will Never Own. He writes about what stays human in an increasingly automated world.