Extremely unlikely. The existential risk from AI is not zero, but it’s far more probable that AI causes harm through misuse, bias, and concentration of power than through autonomous destruction.

The nuance

The “AI destroys humanity” scenario typically involves a superintelligent system pursuing a goal in ways that conflict with human survival, as in Nick Bostrom’s classic “paperclip maximizer” thought experiment. While intellectually interesting, the scenario rests on assumptions that don’t hold today: that we would build a system with unbounded autonomy, deploy it with no kill switch or human oversight, and fail to notice its goals diverging from human welfare before it became uncontrollable.

The actual risks from AI are less cinematic but more immediate. Algorithmic bias in criminal justice. Deepfakes undermining democratic discourse. Autonomous weapons making kill decisions without human oversight. Job displacement outpacing retraining programs. These are real harms happening now, not hypothetical future scenarios.

The productive approach is to treat AI safety as an engineering and governance challenge rather than a cause for existential panic. We need robust oversight, accountability frameworks, and the political will to regulate. The existential doom narrative, ironically, can distract from these practical steps by making the problem feel too large and too speculative to address.

Key takeaway

AI is far more likely to cause harm through human misuse than through autonomous destruction. Focus on governance, not doomsday.


For a deeper framework on what makes humans irreplaceable in the age of AI, read The Last Skill: What AI Will Never Own by Juan C. Guerrero.

More: Why AI doomsday predictions keep getting it wrong · 7 skills AI will never replace · Will AI replace humans?