Full disclosure: I wrote one of these books. But I’m going to try something that might be stupid — an honest comparison of my book with the one I think is its best counterpart.
I’m doing this because I keep getting asked the same question: “Should I read The Last Skill or Co-Intelligence?” People treat them as competitors. I don’t think they are. I think they’re two halves of a conversation that nobody is having in full. But you deserve to decide that for yourself, so here’s my honest attempt at a comparison — biases and all.
The shared question
Both books start from the same place: the ground is shifting under our feet. AI systems can now write, reason, code, design, and analyze at a level that would have seemed like science fiction five years ago. Both Ethan Mollick and I watched this happen and felt the same thing — a need to respond. To say something honest about what this means for the people who actually have to live through it.
The question both books try to answer is: What should humans do now?
But here’s where the paths split. Mollick’s answer is essentially: work with it. Mine is: know what can’t be shared. These aren’t contradictory answers, but they come from very different places, and they lead to very different kinds of comfort.
What Co-Intelligence argues
Mollick is a Wharton professor who has been using AI in his classroom since GPT-4 dropped, and Co-Intelligence is the book of someone who has logged the hours. His central thesis is that AI should be treated as a collaborator — a “co-intelligence” — rather than as a replacement or a toy. He proposes a set of principles for working alongside these systems: always invite AI to the table, always be the human in the loop, always treat the output as a draft.
The book’s greatest strength is its practicality. Mollick doesn’t just theorize; he reports. He tested AI on his students’ assignments. He ran experiments. He watched what happened when people with and without AI access tried to solve the same problems. The data is real, the observations are sharp, and the advice is immediately usable. If you read Co-Intelligence on a Sunday, you can work differently on Monday.
Mollick is also genuinely fair-minded. He doesn’t pretend AI is flawless. He documents hallucinations, overconfidence, and the strange ways people become either too trusting or too dismissive of AI output. He takes the technology seriously without worshipping it.
Where I think the book has limits — and I say this with respect, because these limits are also what keep the book so useful — is in its framing. Co-Intelligence assumes you want a partner. It assumes the right response to AI is to figure out how to collaborate with it. For many readers, that assumption works perfectly. But for others, the fear isn’t about productivity. It’s about identity. It’s not “how do I use this tool?” but “what am I worth if this tool can do what I do?”
Mollick’s book doesn’t ignore that question. But it doesn’t sit with it, either. It moves past it, toward solutions. And sometimes the question needs to be sat with before you can move anywhere.
What The Last Skill argues
My book starts in a different place. It starts with the fear.
I wrote The Last Skill because I kept meeting people — writers, designers, analysts, teachers — who were genuinely afraid. Not of losing their jobs next quarter, necessarily, but of something harder to name: the feeling that the thing they’d spent their lives getting good at might not matter anymore. That their expertise, their craft, their identity as someone who does this thing well, was being made irrelevant by a system that never gets tired and never asks for a raise.
My thesis is that some human capacities are not collaborative. They’re sovereign. The book lays out four proofs of this irreplaceability: creativity (genuine novelty, not recombination), governance (choosing the value hierarchy and absorbing its consequences), decision-making (making the cut and paying for it with something real — time, money, reputation, sleep), and reputation (the externally verified trail of all three — what I call “Proof of Human”). A language model can generate a paragraph about grief, but it has never grieved. It can produce a business plan, but it has never bet its savings on one.
The book argues that this isn’t a sentimental distinction — it’s a structural one. The four proofs are different angles of the same truth, and their sum is what I call “agency under consequence” — the willingness to be the one who answers for it. Not just being intelligent, but being invested.
The strength of this argument, I think, is that it meets people where they actually are emotionally. It validates the fear before trying to fix it. It says: you’re not crazy for feeling what you’re feeling, and here’s why the thing you’re afraid of losing can’t actually be taken.
The weakness — and I’ll be honest about this — is that the book is more philosophical than practical. If you finish The Last Skill on a Sunday, you probably won’t work differently on Monday. You might feel differently. You might think about your work differently. But I didn’t write a step-by-step guide to using AI. I wrote a case for why certain things about you can’t be automated. Those are different projects, and mine is the less immediately useful one.
Who should read which
Here’s the honest version:
Read Co-Intelligence if you need to work with AI right now and want practical, tested guidance. If your company is adopting AI tools and you want to be the person who uses them well rather than the person who gets replaced by them. If you’re a manager trying to figure out policy. If you want something grounded, evidence-based, and immediately applicable. Mollick wrote the best practical book on AI collaboration I’ve read. That’s not a small thing.
Read The Last Skill if you’re lying awake at 2 a.m. wondering whether you still matter. If the question isn’t “how do I use AI?” but “who am I if AI can do what I do?” If you need someone to take the existential weight of this moment seriously before telling you what to do about it. If you want a book that treats your fear as rational rather than something to be optimized away.
If you’re smart, read both. Read Mollick for the Monday morning. Read mine for the 2 a.m. questions. They cover different ground, and together they give you something neither offers alone: a complete picture of what it means to be human alongside something that is becoming very, very good at imitating us.
The Last Skill on Amazon →
Co-Intelligence on Amazon →
What they agree on
For all their differences, these books share a conviction that matters more than any disagreement: the doom narrative is wrong.
Neither Mollick nor I believe that AI makes humans obsolete. Neither of us thinks the correct response is to panic, withdraw, or accept irrelevance. We both believe humans have something irreplaceable. We just define that something differently.
Mollick defines it in terms of collaboration — the human in the loop, the judgment that guides the machine, the creativity that prompts the system in the right direction. His irreplaceable human is the one who knows how to work with intelligence, whether it’s artificial or not.
I define it in terms of sovereignty — four proofs (creativity, governance, decision-making, reputation) that together form “agency under consequence.” My irreplaceable human is the one who creates genuine novelty, chooses the direction, absorbs the downside, and builds a verifiable trail of all three. Their contribution requires having a life that can be affected by the outcome.
These aren’t competing definitions. They’re complementary ones. The question of how to work with AI and the question of what AI can never take from you are both real questions. Answering one doesn’t make the other go away.
A conversation between books
I’ve been thinking about why these two books keep getting compared. Part of it is timing — they’re both relevant right now, and they both address the same cultural anxiety. But I think the deeper reason is that readers sense these books are talking to each other, even though they were written independently.
Mollick is saying: Don’t be afraid. Here’s how to work with this. I’m saying: Your fear is valid. And here’s why the thing you’re afraid of losing was never really at risk. One gives you tools. The other gives you ground to stand on. You need both.
The best response to AI isn’t a single book. It’s a conversation between books — between the practical and the philosophical, between the collaborative and the sovereign, between Monday morning and 2 a.m. If these two books were a dialogue, we’d be closer to the full truth than either one gets alone.
I don’t know if comparing my own book to its closest counterpart is brave or foolish. Probably both. But I’d rather be honest about what my book does and doesn’t do than pretend it’s the only one you need. It isn’t. And neither is Mollick’s. The age of AI is too big for one answer.
Related reading
- What The Last Skill Gets Right That Other AI Books Miss
- The 10 Best Books About AI and What It Means to Be Human (2026)
- The 5 Best Books About AI and the Future of Work (2026 Reading List)
Juan C. Guerrero is the founder of Anthropic Press and the author of The Last Skill: What AI Will Never Own. He writes from Costa Rica about the things that make us human in an age that keeps asking whether we still are.