Ask three questions: What data was this trained on? What are the incentives behind it? And what happens when it’s wrong?

The nuance

Critical thinking about AI starts with understanding that every AI system embodies the assumptions, biases, and priorities of the people who built it and the data they used. It’s not neutral. It’s not objective. It’s a reflection of particular choices made by particular humans.

Three questions cut through the hype: What data was it trained on? (This determines what it knows and what it’s blind to.) Who benefits from its deployment? (Follow the incentives.) What happens when it fails? (The consequences of AI errors are often borne by people with the least power to contest them.)

Healthy AI skepticism isn’t anti-technology. It’s the same critical lens you’d apply to any powerful tool: who controls it, who profits from it, and who pays when it goes wrong. The best thinkers about AI are neither utopians nor doomsayers — they’re realists who ask hard questions.

Key takeaway

Thinking critically about AI means asking what’s missing from the data, who benefits from its deployment, and who bears the cost when it fails.


For a deeper framework on what makes humans irreplaceable in the age of AI, read The Last Skill: What AI Will Never Own by Juan C. Guerrero.
