I could have had AI write this article. You wouldn’t have known.

That sentence should bother you. It bothers me. I run a publishing company, I use AI every day in my work, and I’m telling you plainly: a large language model could have produced something that reads like this essay, with my name on it, and most readers would have accepted it without question. The grammar would be clean. The arguments would be structured. The tone would be mine — or close enough.

And that is precisely why I wrote it myself.

Not because AI is bad at writing. It’s disturbingly good. But because authorship was never about the quality of the sentences. Authorship is about who stands behind them.

Authorship as accountability

Here is the distinction that matters: AI generates text. Humans author it. The difference isn’t style or grammar or even originality. The difference is consequence.

When I put my name on this essay, I’m making a bet. I’m betting my reputation that these ideas hold up, that the reasoning is honest, that I haven’t misrepresented something to make a point. If I’m wrong, I pay for it. My credibility takes the hit. People trust me less. That cost is real, and it’s mine alone.

An AI can’t make that bet. It has no reputation to lose, no career to damage, no relationships that fray when it gets something wrong. It produces text in response to a prompt. If the text is misleading or hollow or quietly false, nothing happens to the model. It doesn’t care. It can’t care. It has no stake in the outcome.

This is what I mean when I talk about agency under consequence — the willingness to be the one who answers for it. In The Last Skill, I argue that this is the irreducible thing machines cannot replicate. Not because they lack processing power, but because they lack a life that’s on the line. Useful is not the same as irreplaceable. A machine can be enormously useful without ever being accountable.

Authorship is the oldest form of accountability in intellectual life. You write your name. You own what follows. That transaction — name for trust, trust for influence — has governed publishing, journalism, science, and law for centuries. We are now in danger of breaking it.

The inflation problem

We are living through a content inflation event. The supply of written text is exploding — blog posts, articles, social media threads, reports, emails — and much of it is generated by machines that have no opinion, no experience, and no skin in the game. The words are fine. The words are always fine. But they carry no weight.

Think about what happens when any currency inflates. The unit becomes worth less. You need more of it to buy the same thing. This is happening to written language right now. A thousand-word article used to signal that someone sat down and thought about something. It no longer signals that. It might mean someone typed a prompt and hit enter.

The result is a trust collapse. Readers are already starting to discount everything they encounter online. “Is this real?” is becoming the default question, and that skepticism doesn’t just punish the generated content. It punishes everyone. When readers can’t tell which words were authored and which were generated, they start trusting none of them.

This is not hypothetical. If you run any kind of publication, you’ve felt it. Open rates decline. Engagement thins. People scroll past things that would have held their attention two years ago. The flood of machine-generated content hasn’t just diluted the feed — it has trained readers to disengage from text itself.

The scarce thing now is not content. It’s conviction. Someone who means it. Someone who put their name on it because they believe it, and who will answer for it if they’re wrong.

The Fourth Proof

In The Last Skill, I identify four proofs of human irreplaceability: Creativity, Governance, Decision-Making, and Reputation. The first three are about what you do — generating genuine novelty, choosing value hierarchies, absorbing the real downside of the decisions you make. But the fourth — Reputation — is about how all of it accumulates over time into something verifiable. It is the externally verified trail of the other three.

Reputation is what I call the “Proof of Human.” It’s the mechanism by which the world knows you mean what you say and you’ve earned the right to say it. It cannot be faked by a machine because it requires a history of real consequences — of being wrong in public, of correcting course, of building trust through a track record that has cost you something to maintain.

This is the authorship argument in its purest form. When a human author publishes a book, they are not merely producing text. They are converting their accumulated reputation into a claim: I stand behind this. They are offering themselves as collateral. The reader doesn’t just evaluate the words — they evaluate the person. And that evaluation happens against a record of prior work, prior claims, prior consequences.

A language model has no prior work in this sense. It has training data. It has outputs. But it has no career, no record of stakes met or failed, no history of putting itself on the line and being held to account. It cannot build reputation because it cannot risk anything.

This is why I believe human authorship is becoming more valuable, not less. In an environment saturated with generated text, the signed human voice — backed by a real name, a real history, a real capacity for consequence — becomes the scarce resource. It becomes the thing worth paying attention to.

What Anthropic Press stands for

I started Anthropic Press because I saw this problem coming. Not from the outside, as a critic of AI, but from the inside, as someone who builds with it every day. I co-founded Blockchain Jungle. I run Dojo Coding. I use AI tools constantly. I’m not here to argue that machines are bad or that we should go back to typewriters.

But I am here to argue that accountability matters. That when someone publishes a book, readers deserve to know a human being staked their name on it. That the relationship between author and reader is a trust relationship, and trust requires someone who can break it.

Anthropic Press is not anti-AI. We are pro-accountability. Every book we publish goes through a process designed to ensure that the person whose name is on the cover actually authored the ideas, actually stands behind the arguments, actually accepts responsibility for what the book claims. We use AI as a tool in our workflow — for research, for editing assistance, for production. But the authorship is human. The accountability is human. The name on the line is real.

This is a publishing philosophy, but it’s also a bet. I’m betting that readers will increasingly seek out work they can trust — not because of a “certified human” badge, but because the author has a track record, a reputation, a history of showing up and answering for their claims. I’m betting that in a world drowning in generated content, the human voice with something at stake will be the one that cuts through.

The freedom in being accountable

There is a chapter in The Last Skill — Part III, “The Freedom Architecture” — where I lay out the practical implications of all this. Not just the philosophy, but what you actually do with it. And one of the things I come back to is that accountability is not a burden. It is a freedom.

When you sign your name to something, you are free to be wrong. You are free to change your mind later. You are free to say something unpopular, because the cost of being wrong is yours to bear. That freedom is not available to a machine. A machine cannot choose to take a risk with its reputation, because it has none. It cannot decide that an idea is worth defending at personal cost, because nothing is personal to it.

The act of authorship — real authorship, with your name and your neck — is one of the last spaces where individual agency still means something concrete. You chose these words. You own their consequences. Nobody else — and no machine — gets the credit or the blame.

That matters. In a world where more and more of our intellectual life is being automated, delegated, and generated, the decision to sit down and write something yourself, under your own name, with full knowledge that you’ll be held to it — that decision is an act of freedom.

The whole point

I don’t know what publishing looks like in ten years. I don’t know how much of what we read will be generated, or what new tools will emerge, or whether readers will develop better instincts for telling human work from machine output. The technology will keep changing. The business models will keep shifting.

But I know this: the relationship between an author and a reader is, at bottom, a relationship of trust. And trust requires someone who can be held responsible. Someone who chose to say this thing, in this way, with their name attached. Someone who will still be here tomorrow to answer for it.

Every book we publish at Anthropic Press is someone’s name on the line. That’s the whole point.

Juan C. Guerrero is a Costa Rican founder, the creator of Anthropic Press, and the author of The Last Skill: What AI Will Never Own. He writes about what stays human in an increasingly automated world.