Doom Debates

Liron Shapira

Latest episode

131 episodes

  • His P(Doom) Is Only 2.6% — AI Doom Debate with Bentham's Bulldog, a.k.a. Matthew Adelstein

    10 February 2026 | 2 hr 27 min
    Get ready for a rematch with the one & only Bentham’s Bulldog, a.k.a. Matthew Adelstein! Our first debate covered a wide range of philosophical topics.
    Today’s Debate #2 is all about Matthew’s new argument against the inevitability of AI doom. He comes out swinging with a calculated P(Doom) of just 2.6%, based on a multi-step probability chain that I challenge as potentially falling into a “Type 2 Conjunction Fallacy” (a.k.a. the Multiple Stage Fallacy); see the arithmetic sketch after this summary.
    We clash on whether to expect “alignment by default” and the nature of future AI architectures. While Matthew sees current RLHF success as evidence that AIs will likely remain compliant, I argue that we’re building “Goal Engines” — superhuman optimization modules that act like nuclear cores wrapped in friendly personalities. We debate whether these engines can be safely contained, or if the capability to map goals to actions is inherently dangerous and prone to exfiltration.
    Despite our different forecasts (my 50% vs his sub-10%), we actually land in the “sane zone” together on some key policy ideas, like the potential necessity of a global pause.
    While Matthew’s case for low P(Doom) hasn’t convinced me, I consider his post and his engagement with me to be super high quality and good faith. We’re not here to score points; we just want to better predict how the intelligence explosion will play out.
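    A minimal sketch of the arithmetic behind the “Multiple Stage Fallacy” objection discussed above (the stage names and probabilities below are hypothetical illustrations, not Matthew’s actual figures): once an outcome is split into enough conjunctive stages, even “moderate” per-stage estimates multiply out to a tiny total, whether or not the stages are really independent.

    # Hypothetical illustration of the Multiple Stage Fallacy
    # (a.k.a. Type 2 Conjunction Fallacy). The stage names and
    # probabilities below are made up for illustration only.
    stages = {
        "AGI is built this century": 0.9,
        "It becomes superintelligent": 0.7,
        "It ends up misaligned": 0.5,
        "Misalignment goes undetected": 0.5,
        "It seeks power over humans": 0.5,
        "It defeats our countermeasures": 0.5,
        "The outcome is extinction-level": 0.5,
    }

    p_doom = 1.0
    for stage, p in stages.items():
        p_doom *= p
        print(f"{stage}: {p:.0%} (running product: {p_doom:.1%})")

    print(f"Final P(Doom) from this chain: {p_doom:.1%}")  # ~2.0% with these illustrative numbers

    The objection is that treating correlated stages as independent multipliers mechanically drives the product toward zero as the stage count grows, regardless of the underlying evidence.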
    Timestamps
    00:00:00 — Teaser
    00:00:35 — Bentham’s Bulldog Returns to Doom Debates
    00:05:43 — Higher-Order Evidence: Why Skepticism Is Warranted
    00:11:06 — What’s Your P(Doom)™
    00:14:38 — The “Multiple Stage Fallacy” Objection
    00:21:48 — The Risk of Warring AIs vs. Misalignment
    00:27:29 — Historical Pessimism: The “Boy Who Cried Wolf”
    00:33:02 — Comparing AI Risk to Climate Change & Nuclear War
    00:38:59 — Alignment by Default via Reinforcement Learning
    00:46:02 — The “Goal Engine” Hypothesis
    00:53:13 — Is Psychoanalyzing Current AI Valid for Future Systems?
    01:00:17 — Winograd Schemas & The Fragility of Value
    01:09:15 — The Nuclear Core Analogy: Dangerous Engines in Friendly Wrappers
    01:16:16 — The Discontinuity of Unstoppable AI
    01:23:53 — Exfiltration: Running Superintelligence on a Laptop
    01:31:37 — Evolution Analogy: Selection Pressures for Alignment
    01:39:08 — Commercial Utility as a Force for Constraints
    01:46:34 — Can You Isolate the “Goal-to-Action” Module?
    01:54:15 — Will Friendly Wrappers Successfully Control Superhuman Cores?
    02:04:01 — Moral Realism and Missing Out on Cosmic Value
    02:11:44 — The Paradox of AI Solving the Alignment Problem
    02:19:11 — Policy Agreements: Global Pauses and China
    02:26:11 — Outro: PauseCon DC 2026 Promo
    Links
    Bentham’s Bulldog Official Substack — https://benthams.substack.com
    The post we debated — https://benthams.substack.com/p/against-if-anyone-builds-it-everyone
    Apply to PauseCon DC 2026 via https://pauseai-us.org
    Forethought Institute’s paper: Preparing for the Intelligence Explosion
    Tom Davidson (Forethought Institute)’s post: How quick and big would a software intelligence explosion be?
    Scott Alexander on the Coffeepocalypse Argument

    ---
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏


    Get full access to Doom Debates at lironshapira.substack.com/subscribe
  • What Dario Amodei Misses In "The Adolescence of Technology" — Reaction With MIRI's Harlan Stewart

    4 February 2026 | 1 hr 15 min
    Harlan Stewart works in communications for the Machine Intelligence Research Institute (MIRI).
    In this episode, Harlan and I give our honest opinions on Dario Amodei's new essay "The Adolescence of Technology".
    Timestamps
    0:00:00 — Cold Open
    0:00:47 — How Harlan Stewart Got Into AI Safety
    0:02:30 — What’s Your P(Doom)?™
    0:04:09 — The “Doomer” Label
    0:06:13 — Overall Reaction to Dario’s Essay: The Missing Mood
    0:09:15 — The Rosy Take on Dario’s Essay
    0:10:42 — Character Assassination & Low Blows
    0:13:39 — Dario Amodei is Shifting the Overton Window in The Wrong Direction
    0:15:04 — Object-Level vs. Meta-Level Criticisms
    0:17:07 — The “Inevitability” Strawman Used by Dario
    0:19:03 — Dario Refers to Doom as a Self-Fulfilling Prophecy
    0:22:38 — Dismissing Critics as “Too Theoretical”
    0:43:18 — The Problem with Psychoanalyzing AI
    0:56:12 — “Intellidynamics” & Reflective Stability
    1:07:12 — Why Is Dario Dismissing an AI Pause?
    1:11:45 — Final Takeaways
    Links
    Harlan’s X — https://x.com/HumanHarlan
    “The Adolescence of Technology” by Dario Amodei — https://www.darioamodei.com/essay/the-adolescence-of-technology
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏


    Get full access to Doom Debates at lironshapira.substack.com/subscribe
  • Q&A: Is Liron too DISMISSIVE of AI Harms? + New Studio, Demis Would #PauseAI, AI Water Use Debate

    27 January 2026 | 2 hr 9 min
    Check out the new Doom Debates studio in this Q&A with special guest Producer Ori! Liron gets into a heated discussion about whether doomers must validate short-term risks, like data center water usage, in order to build a successful political coalition.
    Originally streamed on Saturday, January 24.
    Timestamps
    00:00:00 — Cold Open
    00:00:26 — Introduction and Studio Tour
    00:08:17 — Q&A: Alignment, Accelerationism, and Short-Term Risks
    00:18:15 — Dario Amodei, Davos, and AI Pause
    00:27:42 — Producer Ori Joins: Locations and Vibes
    00:35:31 — Legislative Strategy vs. Social Movements (The Tobacco Playbook)
    00:45:01 — Ethics of Investing in or Working for AI Labs
    00:54:23 — Defining Superintelligence and Human Limitations
    01:02:58 — Technical Risks: Self-Replication and Cyber Warfare
    01:19:08 — Live Debate with Zane: Short-Term vs. Long-Term Strategy
    01:53:15 — Marketing Doom Debates and Guest Outreach
    01:56:45 — Live Call with Jonas: Scenarios for Survival
    02:05:52 — Conclusion and Mission Statement
    Links
    Liron’s X Post about Destiny — https://x.com/liron/status/2015144778652905671?s=20
    Why Laws, Treaties, and Regulations Won’t Save Us from AI | For Humanity Ep. 77 — https://www.youtube.com/watch?v=IUX00c5x2UM
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏


    Get full access to Doom Debates at lironshapira.substack.com/subscribe
  • Taiwan's Cyber Ambassador-At-Large Says Humans & AI Can FOOM Together

    20 January 2026 | 1 hr 52 min
    Audrey Tang was the youngest minister in Taiwanese history. Now she's working to align AI with democratic principles as Taiwan's Cyber Ambassador.
    In this debate, I probe her P(Doom) and stress-test her vision for safe AI development.
    Timestamps
    00:00:00 — Episode Preview
    00:01:43 — Introducing Audrey Tang, Cyber Ambassador of Taiwan
    00:07:20 — Being Taiwan’s First Digital Minister
    00:17:19 — What's Your P(Doom)?™
    00:21:10 — Comparing AI Risk to Nuclear Risk
    00:22:53 — The Statement on AI Extinction Risk
    00:27:29 — Doomerism as a Hyperstition
    00:30:51 — Audrey Explains Her Vision of "Plurality"
    00:37:17 — Audrey Explains Her Principles of Civic Ethics, The "6-Pack of Care"
    00:45:58 — AGI Timelines: "It's Already Here"
    00:54:41 — The Apple Analogy
    01:03:09 — What If AI FOOMs?
    01:11:19 — What AI Can vs What AI Will Do
    01:15:20 — Lessons from COVID-19
    01:19:59 — Is Society Ready? Audrey Reflects on a Personal Experience with Mortality
    01:23:50 — AI Alignment Cannot Be Top-Down
    01:34:04 — AI-as-Mother vs AI-as-Gardener
    01:37:26 — China and the Geopolitics of AI Chip Manufacturing in Taiwan
    01:40:47 — Red Lines, International Treaties, and the Off Button
    01:48:26 — Debate Wrap-Up

    Links
    Plurality: The Future of Collaborative Technology and Democracy by Glen Weyl and Audrey Tang — https://www.amazon.com/Plurality-Future-Collaborative-Technology-Democracy/dp/B0D98RPKCK
    Audrey’s X — https://x.com/audreyt
    Audrey’s Wikipedia — https://en.wikipedia.org/wiki/Audrey_Tang

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏


    Get full access to Doom Debates at lironshapira.substack.com/subscribe
  • Liron Enters Bannon's War Room to Explain Why AI Could End Humanity

    13 January 2026 | 30 min
    I joined Steve Bannon’s War Room Battleground to talk about AI doom.
    The segment is hosted by Joe Allen; we cover AGI timelines, raising kids with a high P(Doom), and why improving our survival odds requires a global wake-up call.
    Timestamps
    00:00:00 — Episode Preview
    00:01:17 — Joe Allen opens the show and introduces Liron Shapira
    00:04:06 — Liron: What’s Your P(Doom)?
    00:05:37 — How Would an AI Take Over?
    00:07:20 — The Timeline to AGI
    00:08:17 — Benchmarks & AI Passing the Turing Test
    00:14:43 — Liron Is Typically a Techno-Optimist
    00:18:00 — Raising a Family with a High P(Doom)
    00:23:48 — Mobilizing a Grassroots AI Survival Campaign
    00:26:45 — Final Message: A Wake-Up Call
    00:29:23 — Joe Allen’s Closing Message to the War Room Posse
    Links
    Joe’s Substack — https://substack.com/@joebot
    Joe’s Twitter — https://x.com/JOEBOTxyz
    Bannon’s War Room Twitter — https://x.com/Bannons_WarRoom
    WarRoom Battleground EP 922: AI Doom Debates with Liron Shapira on Rumble — https://rumble.com/v742oo4-warroom-battleground-ep-922-ai-doom-debates-with-liron-shapira.html
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏


    Get full access to Doom Debates at lironshapira.substack.com/subscribe


About Doom Debates

It's time to talk about the end of the world! lironshapira.substack.com
Podcast website
