His P(Doom) Doubles At The End — AI Safety Debate with Liam Robins, GWU Sophomore
Liam Robins is a math major at George Washington University who's diving deep into AI policy and rationalist thinking. In Part 1, we explored how AI is transforming college life. Now in Part 2, we ride the Doom Train together to see if we can reconcile our P(Doom) estimates. 🚂

Liam starts with a P(Doom) of just 3%, but as we go through the stops on the Doom Train, something interesting happens: he actually updates his beliefs in real time!

We get into heated philosophical territory around moral realism, psychopaths, and whether intelligence naturally yields moral goodness.

By the end, Liam's P(Doom) jumps from 3% to 8% - one of the biggest belief updates I've ever witnessed on the show. We also explore his "Bayes factors" approach to forecasting, debate the reliability of superforecasters vs. AI insiders, and discuss why most AI policies should be Pareto optimal regardless of your P(Doom).

This is rationality in action: watching someone systematically examine their beliefs, engage with counterarguments, and update accordingly.

0:00 - Opening
0:42 - What's Your P(Doom)™
01:18 - Stop 1: AGI timing (15% chance it's not coming soon)
01:29 - Stop 2: Intelligence limits (1% chance AI can't exceed humans)
01:38 - Stop 3: Physical threat assessment (1% chance AI won't be dangerous)
02:14 - Stop 4: Intelligence yields moral goodness - the big debate begins
04:42 - Moral realism vs. evolutionary explanations for morality
06:43 - The psychopath problem: smart but immoral humans exist
08:50 - Game theory and why psychopaths persist in populations
10:21 - Liam's first major update: 30% down to 15-20% on moral goodness
12:05 - Stop 5: Safe AI development process (20%)
14:28 - Stop 6: Manageable capability growth (20%)
15:38 - Stop 7: AI conquest intentions - breaking down into subcategories
17:03 - Alignment by default vs. deliberate alignment efforts
19:07 - Stop 8: Superalignment tractability (20%)
20:49 - Stop 9: Post-alignment peace (80% - surprisingly optimistic)
23:53 - Stop 10: Unaligned ASI mercy (1% - "just cope")
25:47 - Stop 11: Epistemological concerns about doom predictions
27:57 - Bayes factors analysis: Why Liam goes from 38% to 3%
30:21 - Bayes factor 1: Historical precedent of doom predictions failing
33:08 - Bayes factor 2: Superforecasters think we'll be fine
39:23 - Bayes factor 3: AI insiders and government officials seem unconcerned
45:49 - Challenging the insider knowledge argument with concrete examples
48:47 - The privileged-access epistemology debate
56:02 - Major update: Liam revises his Bayes factors, P(Doom) jumps to 8%
58:18 - Odds ratios vs. percentages: Why 3% to 8% is actually huge
59:14 - AI policy discussion: Pareto optimal solutions across all P(Doom) levels
1:01:59 - Why there's low-hanging fruit in AI policy regardless of your beliefs
1:04:06 - Liam's future career plans in AI policy
1:05:02 - Wrap-up and reflection on rationalist belief updating

Show Notes

* Liam Robins on Substack -
* Liam's Doom Train post -
* Liam's Twitter - @liamhrobins
* Anthropic's "Alignment Faking in Large Language Models" - The paper that updated Liam's beliefs on alignment by default

---

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
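For anyone wondering why a jump from 3% to 8% counts as a big update, here's the odds arithmetic sketched out (a back-of-the-envelope working, not a calculation read out verbatim in the episode):

\[
\mathrm{odds}(3\%) = \frac{0.03}{0.97} \approx 0.031, \qquad
\mathrm{odds}(8\%) = \frac{0.08}{0.92} \approx 0.087, \qquad
\frac{0.087}{0.031} \approx 2.8
\]

In odds terms the belief shifted by nearly a factor of three, which is why a move of five percentage points at the low end of the scale reads as a major update.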
--------
1:05:12
--------
AI Won't Save Your Job — Liron Reacts to Replit CEO Amjad Masad
Amjad Masad is the founder and CEO of Replit, a full-featured AI-powered software development platform whose revenue reportedly just shot up from $10M/yr to $100M/yr+. Last week, he went on Joe Rogan to share his vision that "everyone will become an entrepreneur" as AI automates away traditional jobs.

In this episode, I break down why Amjad's optimistic predictions rely on abstract hand-waving rather than concrete reasoning. While Replit is genuinely impressive, his claims about AI limitations—that they can only "remix" and do "statistics" but can't "generalize" or create "paradigm shifts"—fall apart when applied to specific examples.

We explore the entrepreneurial bias problem, why most people can't actually become successful entrepreneurs, and how Amjad's own success stories (like quality assurance automation) actually undermine his thesis. Plus: Roger Penrose's dubious consciousness theories, the "Duplo vs. Lego" problem in abstract thinking, and why Joe Rogan invited an AI doomer the very next day.

00:00 - Opening and introduction to Amjad Masad
03:15 - "Everyone will become an entrepreneur" - the core claim
08:45 - Entrepreneurial bias: Why successful people think everyone can do what they do
15:20 - The brainstorming challenge: Human vs. AI idea generation
22:10 - "Statistical machines" and the remixing framework
28:30 - The abstraction problem: Duplos vs. Legos in reasoning
35:50 - Quantum mechanics and paradigm shifts: Why bring up Heisenberg?
42:15 - Roger Penrose, Gödel's theorem, and consciousness theories
52:30 - Creativity definitions and the moving goalposts
58:45 - The consciousness non-sequitur and Silicon Valley "hubris"
01:07:20 - Ahmad George success story: The best case for Replit
01:12:40 - Job automation and the 50% reskilling assumption
01:18:15 - Quality assurance jobs: Accidentally undermining your own thesis
01:23:30 - Online learning and the contradiction in AI capabilities
01:29:45 - Superintelligence definitions and learning in new environments
01:35:20 - Self-play limitations and literature vs. programming
01:41:10 - Marketing creativity and the Think Different campaign
01:45:45 - Human-machine collaboration and the prompting bottleneck
01:50:30 - Final analysis: Why this reasoning fails at specificity
01:58:45 - Joe Rogan's real opinion: The Roman Yampolskiy follow-up
02:02:30 - Closing thoughts

Show Notes

Source video: Amjad Masad on Joe Rogan - July 2, 2025
Roman Yampolskiy on Joe Rogan - https://www.youtube.com/watch?v=j2i9D24KQ5k
Replit - https://replit.com
Amjad's Twitter - https://x.com/amasad
Doom Debates episode where I react to Emmett Shear's Softmax - https://www.youtube.com/watch?v=CBN1E1fvh2g
Doom Debates episode where I react to Roger Penrose - https://www.youtube.com/watch?v=CBN1E1fvh2g

---

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
--------
1:45:48
--------
Every Student is CHEATING with AI — College in the AGI Era (feat. Sophomore Liam Robins)
Liam Robins is a math major at George Washington University who recently had his own "AGI awakening" after reading Leopold Aschenbrenner's Situational Awareness. I met him at my Manifest 2025 talk about stops on the Doom Train.

In this episode, Liam confirms what many of us suspected: pretty much everyone in college is cheating with AI now, and they're completely shameless about it.

We dive into what college looks like today: how many students are still "rawdogging" lectures, how professors are coping with widespread cheating, how social life has changed, and what students think they'll do when they graduate.

* 00:00 - Opening
* 00:50 - Introducing Liam Robins
* 05:27 - The reality of college today: Do they still have lectures?
* 07:20 - The rise of AI-enabled cheating in assignments
* 14:00 - College as a credentialing regime vs. actual learning
* 19:50 - "Everyone is cheating their way through college" - the epidemic
* 26:00 - College social life: "It's just pure social life"
* 31:00 - Dating apps, social media, and Gen Z behavior
* 36:21 - Do students understand the singularity is near?

Show Notes

Guest:
* Liam Robins on Substack - https://thelimestack.substack.com/
* Liam's Doom Train post - https://thelimestack.substack.com/p/my-pdoom-is-276-heres-why
* Liam's Twitter - @liamrobins

Key References:
* Leopold Aschenbrenner - "Situational Awareness"
* Bryan Caplan - "The Case Against Education"
* Scott Alexander - Astral Codex Ten
* Jeffrey Ding - ChinAI Newsletter
* New York Magazine - "Everyone Is Cheating Their Way Through College"

Events & Communities:
* Manifest Conference
* LessWrong
* Eliezer Yudkowsky - "Harry Potter and the Methods of Rationality"

Previous Episodes:
* Doom Debates Live at Manifest 2025 - https://www.youtube.com/watch?v=detjIyxWG8M

Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
--------
38:40
--------
Carl Feynman, AI Engineer & Son of Richard Feynman, Says Building AGI Likely Means Human EXTINCTION!
Carl Feynman got his Master’s in Computer Science and B.S. in Philosophy from MIT, followed by a four-decade career in AI engineering. He’s known Eliezer Yudkowsky since the ‘90s, and witnessed Eliezer’s AI doom argument taking shape before most of us were paying any attention!

He agreed to come on the show because he supports Doom Debates’ mission of raising awareness of imminent existential risk from superintelligent AI.

00:00 - Teaser
00:34 - Carl Feynman’s Background
02:40 - Early Concerns About AI Doom
03:46 - Eliezer Yudkowsky and the Early AGI Community
05:10 - Accelerationist vs. Doomer Perspectives
06:03 - Mainline Doom Scenarios: Gradual Disempowerment vs. Foom
07:47 - Timeline to Doom: Point of No Return
08:45 - What’s Your P(Doom)™
09:44 - Public Perception and Political Awareness of AI Risk
11:09 - AI Morality, Alignment, and Chatbots Today
13:05 - The Alignment Problem and Competing Values
15:03 - Can AI Truly Understand and Value Morality?
16:43 - Multiple Competing AIs and Resource Competition
18:42 - Alignment: Wanting vs. Being Able to Help Humanity
19:24 - Scenarios of Doom and Odds of Success
19:53 - Mainline Good Scenario: Non-Doom Outcomes
20:27 - Heaven, Utopia, and Post-Human Vision
22:19 - Gradual Disempowerment Paper and Economic Displacement
23:31 - How Humans Get Edged Out by AIs
25:07 - Can We Gaslight Superintelligent AIs?
26:38 - AI Persuasion & Social Influence as Doom Pathways
27:44 - Riding the Doom Train: Headroom Above Human Intelligence
29:46 - Orthogonality Thesis and AI Motivation
32:48 - Alignment Difficulties and Deception in AIs
34:46 - Elon Musk, Maximal Curiosity & Mike Israetel’s Arguments
36:26 - Beauty and Value in a Post-Human Universe
38:12 - Multiple AIs Competing
39:31 - Space Colonization, Dyson Spheres & Hanson’s “Alien Descendants”
41:13 - What Counts as Doom vs. Not Doom?
43:29 - Post-Human Civilizations and Value Function
44:49 - Expertise, Rationality, and Doomer Credibility
46:09 - Communicating Doom: Missing Mood & Public Receptiveness
47:41 - Personal Preparation vs. Belief in Imminent Doom
48:56 - Why Can't We Just Hit the Off Switch?
50:26 - The Treacherous Turn and Redundancy in AI
51:56 - Doom by Persuasion or Entertainment
53:43 - Differences with Eliezer Yudkowsky: Singleton vs. Multipolar Doom
55:22 - Why Carl Chose Doom Debates
56:18 - Liron’s Outro

Show Notes

Carl’s Twitter — https://x.com/carl_feynman
Carl’s LessWrong — https://www.lesswrong.com/users/carl-feynman
Gradual Disempowerment — https://gradual-disempowerment.ai
The Intelligence Curse — https://intelligence-curse.ai
AI 2027 — https://ai-2027.com
Alcor cryonics — https://www.alcor.org
The LessOnline Conference — https://less.online

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
--------
57:05
--------
Richard Hanania vs. Liron Shapira — AI Doom Debate
Richard Hanania is the President of the Center for the Study of Partisanship and Ideology. His work has been praised by Vice President JD Vance, Tyler Cowen, and Bryan Caplan among others.

In his influential newsletter, he’s written about why he finds AI doom arguments unconvincing. He was gracious enough to debate me on this topic. Let’s see if one of us can change the other’s P(Doom)!

0:00 Intro
1:53 Richard's politics
2:24 The state of political discourse
3:30 What's your P(Doom)?™
6:38 How to stop the doom train
8:27 Statement on AI risk
9:31 Intellectual influences
11:15 Base rates for AI doom
15:43 Intelligence as optimization power
31:26 AI capabilities progress
53:46 Why isn't AI yet a top blogger?
58:02 Diving into Richard's Doom Train
58:47 Diminishing Returns on Intelligence
1:06:36 Alignment will be relatively trivial
1:15:14 Power-seeking must be programmed
1:21:27 AI will simply be benevolent
1:27:17 Superintelligent AI will negotiate with humans
1:33:00 Super AIs will check and balance each other out
1:36:54 We're mistaken about the nature of intelligence
1:41:46 Summarizing Richard's AI doom position
1:43:22 Jobpocalypse and gradual disempowerment
1:49:46 Ad hominem attacks in AI discourse

Show Notes

Subscribe to Richard Hanania's Newsletter: https://richardhanania.com
Richard's blogpost laying out where he gets off the AI "doom train": https://www.richardhanania.com/p/ai-doomerism-as-science-fiction
Richard's interview with Steven Pinker: https://www.richardhanania.com/p/pinker-on-alignment-and-intelligence
Richard's interview with Robin Hanson: https://www.richardhanania.com/p/robin-hanson-says-youre-going-to
My Doom Debate with Robin Hanson: https://www.youtube.com/watch?v=dTQb6N3_zu8
My reaction to Steven Pinker's AI doom position, and why his arguments are shallow: https://www.youtube.com/watch?v=-tIq6kbrF-4
"The Betterness Explosion" by Robin Hanson: https://www.overcomingbias.com/p/the-betterness-explosionhtml

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/watch?v=9CUFbqh16Fg

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com