
Doom Debates

Liron Shapira
Latest episode

Available episodes

5 of 112
  • Former MIRI Researcher Solving AI Alignment by Engineering Smarter Human Babies
    Former Machine Intelligence Research Institute (MIRI) researcher Tsvi Benson-Tilsen is championing an audacious path to prevent AI doom: engineering smarter humans to tackle AI alignment. I consider this one of the few genuinely viable alignment solutions, and Tsvi is at the forefront of the effort. After seven years at MIRI, he co-founded the Berkeley Genomics Project to advance the human germline engineering approach.
    In this episode, Tsvi lays out how to lower P(doom), arguing we must stop AGI development and stigmatize it like gain-of-function virus research. We cover his AGI timelines, the mechanics of genomic intelligence enhancement, and whether super-babies can arrive fast enough to save us.
    I’ll be releasing my full interview with Tsvi in 3 parts. Stay tuned for part 2 next week!
    Timestamps
    0:00 Episode Preview & Introducing Tsvi Benson-Tilsen
    1:56 What’s Your P(Doom)™
    4:18 Tsvi’s AGI Timeline Prediction
    6:16 What’s Missing from Current AI Systems
    10:05 The State of AI Alignment Research: 0% Progress
    11:29 The Case for PauseAI
    15:16 Debate on Shaming AGI Developers
    25:37 Why Human Germline Engineering
    31:37 Enhancing Intelligence: Chromosome vs. Sperm vs. Egg Selection
    37:58 Pushing the Limits: Head Size, Height, Etc.
    40:05 What About Human Cloning?
    43:24 The End-to-End Plan for Germline Engineering
    45:45 Will Germline Engineering Be Fast Enough?
    48:28 Outro: How to Support Tsvi’s Work
    Show Notes
    Tsvi’s organization, the Berkeley Genomics Project — https://berkeleygenomics.org
    If you’re interested to connect with Tsvi about germline engineering, you can reach out to him at [email protected]
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏
    Get full access to Doom Debates at lironshapira.substack.com/subscribe
    --------  
    49:27
  • Robert Wright Interrogates the Eliezer Yudkowsky AI Doom Position
    Today I'm sharing my interview on Robert Wright's Nonzero Podcast, where we unpack Eliezer Yudkowsky's AI doom arguments from his bestselling book, "If Anyone Builds It, Everyone Dies." Bob is an exceptionally thoughtful interviewer who asks sharp questions and pushes me to defend the Yudkowskian position, leading to a rich exploration of the AI doom perspective. I highly recommend getting a premium subscription to his podcast.
    0:00 Episode Preview
    2:43 Being a "Stochastic Parrot" for Eliezer Yudkowsky
    5:38 Yudkowsky's Book: "If Anyone Builds It, Everyone Dies"
    9:38 AI Has NEVER Been Aligned
    12:46 Liron Explains "Intellidynamics"
    15:05 Natural Selection Leads to Maladaptive Behaviors — AI Misalignment Foreshadowing
    29:02 We Summon AI Without Knowing How to Tame It
    32:03 The "First Try" Problem of AI Alignment
    37:00 Headroom Above Human Capability
    40:37 The PauseAI Movement: The Silent Majority
    47:35 Going into Overtime
    Get full access to Doom Debates at lironshapira.substack.com/subscribe
    --------  
    47:36
  • Climate Change Is Stupidly EASY To Stop — Andrew Song, Cofounder of Make Sunsets
    Today I’m taking a rare break from AI doom to cover the dumbest kind of doom humanity has ever created for itself: climate change. We’re talking about a problem that costs less than $2 billion per year to solve. For context, that’s what the US spent on COVID relief every 7 hours during the pandemic. Bill Gates could literally solve this himself.
    My guest Andrew Song runs Make Sunsets, which launches weather balloons filled with sulfur dioxide (SO₂) into the stratosphere to reflect sunlight and cool the planet. It’s the same mechanism volcanoes use—Mount Pinatubo cooled Earth by 0.5°C for a year in 1991. The physics is solid, the cost is trivial, and the coordination problem is nonexistent.
    So why aren’t we doing it? Because people are squeamish about “playing God” with the atmosphere, even while we’re building superintelligent AI. Because environmentalists would rather scold you into turning off your lights than support a solution that actually works.
    This conversation changed how I think about climate change. I went from viewing it as an intractable coordination problem to realizing it’s basically already solved—we’re just LARPing that it’s hard! 🙈 If you care about orders of magnitude, this episode will blow your mind. And if you feel guilty about your carbon footprint: you can offset an entire year of typical American energy usage for about 15 cents. Yes, cents.
    Timestamps
    * 00:00:00 - Introducing Andrew Song, Cofounder of Make Sunsets
    * 00:03:08 - Why the company is called “Make Sunsets”
    * 00:06:16 - What’s Your P(Doom)™ From Climate Change
    * 00:10:24 - Explaining geoengineering and solar radiation management
    * 00:16:01 - The SO₂ dial we can turn
    * 00:22:00 - Where to get SO₂ (gas supply stores, sourcing from oil)
    * 00:28:44 - Cost calculation: Just $1-2 billion per year
    * 00:34:15 - “If everyone paid $3 per year”
    * 00:42:38 - Counterarguments: moral hazard, termination shock
    * 00:44:21 - Being an energy hog is totally fine
    * 00:52:16 - What motivated Andrew (his kids, Luke Iseman)
    * 00:59:09 - “The stupidest problem humanity has created”
    * 01:11:26 - Offsetting CO₂ from OpenAI’s Stargate
    * 01:13:38 - Playing God is good
    Show Notes
    Make Sunsets
    * Website: https://makesunsets.com
    * Tax-deductible donations (US): https://givebutter.com/makesunsets
    People Mentioned
    * Casey Handmer: https://caseyhandmer.wordpress.com/
    * Emmett Shear: https://twitter.com/eshear
    * Palmer Luckey: https://twitter.com/PalmerLuckey
    Resources Referenced
    * Book: Termination Shock by Neal Stephenson
    * Book: The Rational Optimist by Matt Ridley
    * Book: Enlightenment Now by Steven Pinker
    * Harvard SCoPEx project (the Bill Gates-funded project that got blocked)
    * Climeworks (direct air capture company): https://climeworks.com
    Data/Monitoring
    * NOAA (National Oceanic and Atmospheric Administration): https://www.noaa.gov
    * ESA Sentinel-5P TROPOMI satellite data
    ---
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏
    Get full access to Doom Debates at lironshapira.substack.com/subscribe
    --------  
    1:17:41
  • David Deutschian vs. Eliezer Yudkowskian Debate: Will AGI Cooperate With Humanity? — With Brett Hall
    I’ve been puzzled by David Deutsch’s AI claims for years. Today I finally had the chance to hash it out: Brett Hall, one of the foremost educators of David Deutsch’s ideas around epistemology & science, was brave enough to debate me!
    Brett has been immersed in Deutsch’s philosophy since 1997 and teaches it on his Theory of Knowledge podcast, which has been praised by tech luminary Naval Ravikant. He agrees with Deutsch on 99.99% of issues, especially the dismissal of AI as an existential threat.
    In this debate, I stress-test the Deutschian worldview, and along the way we unpack our diverging views on epistemology, the orthogonality thesis, and pessimism vs. optimism.
    Timestamps
    0:00 — Debate preview & introducing Brett Hall
    4:24 — Brett’s opening statement on techno-optimism
    13:44 — What’s Your P(Doom)?™
    15:43 — We debate the merits of Bayesian probabilities
    20:13 — Would Brett sign the AI risk statement?
    24:44 — Liron declares his “damn good reason” for AI oversight
    35:54 — Debate milestone: We identify our crux of disagreement!
    37:29 — Prediction vs. prophecy
    44:28 — The David Deutsch CAPTCHA challenge
    1:00:41 — What makes humans special?
    1:15:16 — Reacting to David Deutsch’s recent statements on AGI
    1:24:04 — Debating what makes humans special
    1:40:25 — Brett reacts to Roger Penrose’s AI claims
    1:48:13 — Debating the orthogonality thesis
    1:56:34 — The powerful AI data center hypothetical
    2:03:10 — “It is a dumb tool, easily thwarted”
    2:12:18 — Clash of worldviews: goal-driven vs. problem-solving
    2:25:05 — Ideological Turing test: We summarize each other’s positions
    2:30:44 — Are doomers just pessimists?
    Show Notes
    Brett’s website — https://www.bretthall.org
    Brett’s Twitter — https://x.com/TokTeacher
    The Deutsch Files by Brett Hall and Naval Ravikant
    * https://nav.al/deutsch-files-i
    * https://nav.al/deutsch-files-ii
    * https://nav.al/deutsch-files-iii
    * https://nav.al/deutsch-files-iv
    Books:
    * The Fabric of Reality by David Deutsch
    * The Beginning of Infinity by David Deutsch
    * Superintelligence by Nick Bostrom
    * If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky
    ---
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏
    Get full access to Doom Debates at lironshapira.substack.com/subscribe
    --------  
    2:37:59
  • Debating People On The Street About AI Doom
    We took Eliezer Yudkowsky and Nate Soares’s new book, If Anyone Builds It, Everyone Dies, on the streets to see what regular people think.
    Do people think that artificial intelligence is a serious existential risk? Are they open to considering the argument before it’s too late? Are they hostile to the idea? Are they totally uninterested?
    Watch this episode to see the full spectrum of reactions from a representative slice of America!
    ---
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏
    Get full access to Doom Debates at lironshapira.substack.com/subscribe
    --------  
    23:02

More Business & Personal Finance podcasts

About Doom Debates

It's time to talk about the end of the world! lironshapira.substack.com
Podcast website

Listen to Doom Debates, Het Beurscafé and many other podcasts from all over the world with the radio.net app

Get the free radio.net app

  • Stations and podcasts to bookmark
  • Stream via Wi-Fi or Bluetooth
  • Supports CarPlay & Android Auto
  • Many other app features