(...but also gets the most important part right.)
Bentham's Bulldog (BB), a prominent EA/philosophy blogger, recently reviewed If Anyone Builds It, Everyone Dies. In my eyes, a review is good if it uses sound reasoning and encourages deep thinking on important topics, regardless of whether I agree with its bottom line. Bentham's Bulldog definitely encourages deep, thoughtful engagement on things that matter. He's smart, substantive, and clearly engaging in good faith. I laughed multiple times reading his review, and I encourage others to read his thoughts, both on IABIED and in general.
One aspect of the piece I want to call out in particular is its presence of the mood that is typically missing among skeptics of AI x-risk:
Overall with my probabilities you end up with a credence in extinction from misalignment of 2.6%. Which, I want to make clear, is totally fucking insane. I am, by the standards of people who have looked into the topic, a rosy optimist. And yet even on my view, I think odds are one in fifty that AI will kill you and everyone you love, or leave the world no longer in humanity's hands. I think [...]
---
Outline:
(02:38) Confidence
(05:38) The Multi-stage Fallacy
(09:43) The Three Theses of IABI
(11:57) Stages of Doom
(16:49) We Might Never Build It
(18:30) Alignment by Default
(23:31) The Evolution Analogy
(36:40) What Does Ambition Look Like?
(41:34) Solving Alignment
(46:15) Superalignment
(52:20) Warning Shots
(56:16) ASI Might Be Incapable of Winning
(59:33) Conclusion
The original text contained 10 footnotes which were omitted from this narration.
---
First published:
January 29th, 2026
Source:
https://www.lesswrong.com/posts/RNKK6GXxYDepGk8sA/bentham-s-bulldog-is-wrong-about-ai-risk
---
Narrated by TYPE III AUDIO.
---