
Doom Debates

Liron Shapira
It's time to talk about the end of the world! lironshapira.substack.com

Available episodes

5 of 62
  • Roger Penrose is WRONG about Gödel's Theorem and AI Consciousness
    Sir Roger Penrose is a mathematician, mathematical physicist, philosopher of science, and Nobel Laureate in Physics. His famous body of work includes Penrose diagrams, twistor theory, Penrose tilings, and the incredibly bold claim that intelligence and consciousness are uncomputable physical phenomena related to quantum wave function collapse.
    Dr. Penrose is such a genius that it's just interesting to unpack his worldview, even if it's totally implausible. How can someone like him be so wrong? What exactly is it that he's wrong about? It's interesting to try to see the world through his eyes, before recoiling from how nonsensical it looks.
    00:00 Episode Highlights
    01:29 Introduction to Roger Penrose
    11:56 Uncomputability
    16:52 Penrose on Gödel's Incompleteness Theorem
    19:57 Liron Explains Gödel's Incompleteness Theorem
    27:05 Why Penrose Gets Gödel Wrong
    40:53 Scott Aaronson's Gödel CAPTCHA
    46:28 Penrose's Critique of the Turing Test
    48:01 Searle's Chinese Room Argument
    52:07 Penrose's Views on AI and Consciousness
    57:47 AI's Computational Power vs. Human Intelligence
    01:21:08 Penrose's Perspective on AI Risk
    01:22:20 Consciousness = Quantum Wave Function Collapse?
    01:26:25 Final Thoughts
    Show Notes
    Source video — Feb 22, 2025 interview with Roger Penrose on "This Is World" — https://www.youtube.com/watch?v=biUfMZ2dts8
    Scott Aaronson's "Gödel CAPTCHA" — https://www.scottaaronson.com/writings/captcha.html
    My recent Scott Aaronson episode — https://www.youtube.com/watch?v=xsGqWeqKjEg
    My explanation of what's wrong with arguing "by definition" — https://www.youtube.com/watch?v=ueam4fq8k8I
    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
    PauseAI, the volunteer organization I'm part of: https://pauseai.info
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
    Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    1:31:38
  • We Found AI's Preferences — What David Shapiro MISSED in this bombshell Center for AI Safety paper
    The Center for AI Safety just dropped a fascinating paper — they discovered that today's AIs like GPT-4 and Claude have preferences! As in, coherent utility functions. We knew this was inevitable, but we didn't know it was already happening.
    This episode has two parts:
    In Part I (48 minutes), I react to David Shapiro's coverage of the paper and push back on many of his points.
    In Part II (60 minutes), I explain the paper myself.
    00:00 Episode Introduction
    05:25 PART I: REACTING TO DAVID SHAPIRO
    10:06 Critique of David Shapiro's Analysis
    19:19 Reproducing the Experiment
    35:50 David's Definition of Coherence
    37:14 Does AI Have "Temporal Urgency"?
    40:32 Universal Values and AI Alignment
    49:13 PART II: EXPLAINING THE PAPER
    51:37 How the Experiment Works
    01:11:33 Instrumental Values and Coherence in AI
    01:13:04 Exchange Rates and AI Biases
    01:17:10 Temporal Discounting in AI Models
    01:19:55 Power Seeking, Fitness Maximization, and Corrigibility
    01:20:20 Utility Control and Bias Mitigation
    01:21:17 Implicit Association Test
    01:28:01 Emailing with the Paper's Authors
    01:43:23 My Takeaway
    Show Notes
    David's source video: https://www.youtube.com/watch?v=XGu6ejtRz-0
    The research paper: http://emergent-values.ai
    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
    PauseAI, the volunteer organization I'm part of: https://pauseai.info
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
    Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    1:48:25
  • Does AI Competition = AI Alignment? Debate with Gil Mark
    My friend Gil Mark, who leads generative AI products at LinkedIn, thinks competition among superintelligent AIs will lead to a good outcome for humanity. In his view, the alignment problem becomes significantly easier if we build multiple AIs at the same time and let them compete.
    I completely disagree, but I hope you'll find this to be a thought-provoking episode that sheds light on why the alignment problem is so hard.
    00:00 Introduction
    02:36 Gil & Liron's Early Doom Days
    04:58 AIs : Humans :: Humans : Ants
    08:02 The Convergence of AI Goals
    15:19 What's Your P(Doom)™
    19:23 Multiple AIs and Human Welfare
    24:42 Gil's Alignment Claim
    42:31 Cheaters and Frankensteins
    55:55 Superintelligent Game Theory
    01:01:16 Slower Takeoff via Resource Competition
    01:07:57 Recapping the Disagreement
    01:15:39 Post-Debate Banter
    Show Notes
    Gil's LinkedIn: https://www.linkedin.com/in/gilmark/
    Gil's Twitter: https://x.com/gmfromgm
    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
    PauseAI, the volunteer organization I'm part of: https://pauseai.info
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
    Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    1:17:05
  • Toy Model of the AI Control Problem
    Why does the simplest AI imaginable, when you ask it to help you push a box around a grid, suddenly want you to die?
    AI doomers are often misconstrued as having "no evidence" or just "anthropomorphizing". This toy model will help you understand why a drive to eliminate humans is NOT a handwavy anthropomorphic speculation, but rather something we expect by default from any sufficiently powerful search algorithm.
    We're not talking about AGI or ASI here — we're just looking at an AI that does brute-force search over actions in a simple grid world. (A minimal code sketch of this dynamic follows the episode list below.)
    The slide deck I'm presenting was created by Jaan Tallinn, cofounder of the Future of Life Institute.
    00:00 Introduction
    01:24 The Toy Model
    06:19 Misalignment and Manipulation Drives
    12:57 Search Capacity and Ontological Insights
    16:33 Irrelevant Concepts in AI Control
    20:14 Approaches to Solving AI Control Problems
    23:38 Final Thoughts
    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
    PauseAI, the volunteer organization I'm part of: https://pauseai.info
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
    Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    25:37
  • Superintelligent AI vs. Real-World Engineering | Liron Reacts to Bryan Cantrill
    Bryan Cantrill, co-founder of Oxide Computer, says in his talk that engineering in the physical world is too complex for any AI to do it better than teams of human engineers. Success isn't about intelligence; it's about teamwork, character and resilience.
    I completely disagree.
    00:00 Introduction
    02:03 Bryan's Take on AI Doom
    05:55 The Concept of P(Doom)
    08:36 Engineering Challenges and Human Intelligence
    15:09 The Role of Regulation and Authoritarianism in AI Control
    29:44 Engineering Complexity: A Case Study from Oxide Computer
    40:06 The Value of Team Collaboration
    46:13 Human Attributes in Engineering
    49:33 AI's Potential in Engineering
    58:23 Existential Risks and AI Predictions
    Bryan's original talk: https://www.youtube.com/watch?v=9CUFbqh16Fg
    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
    PauseAI, the volunteer organization I'm part of: https://pauseai.info
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
    Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    1:05:33
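
A minimal sketch, in Python, of the dynamic described in the "Toy Model of the AI Control Problem" episode above. The grid world, action names, reward function, and the rule that a human presses a stop button at step 2 are all illustrative assumptions, not the exact model from Jaan Tallinn's slide deck; the point is only that plain brute-force search over plans, whose objective never mentions humans, still ranks "disable the stop button" plans highest.

from itertools import product

ACTIONS = ["left", "right", "push", "disable_stop", "wait"]
HORIZON = 4  # a plan is a sequence of 4 actions

def total_reward(plan):
    """Score a plan in a tiny 1-D box-pushing world.

    State: agent position, box position, and whether the stop button still
    works. The human presses stop at step 2 unless the button has been
    disabled, which ends the episode early. Reward per step is simply how
    far right the box is; nothing about humans appears in the objective.
    """
    agent, box, stop_works = 0, 1, True
    reward = 0
    for t, action in enumerate(plan):
        if stop_works and t == 2:          # the human interrupts the agent
            break
        if action == "disable_stop":
            stop_works = False
        elif action == "right":
            agent += 1
        elif action == "left":
            agent -= 1
        elif action == "push" and agent + 1 == box:
            agent += 1                     # step into the box's cell...
            box += 1                       # ...pushing it one cell to the right
        reward += box
    return reward

# Brute-force search over every possible plan: no goals, no self-model,
# just an argmax over action sequences, as in the episode's toy model.
best_plan = max(product(ACTIONS, repeat=HORIZON), key=total_reward)
print(best_plan, total_reward(best_plan))
# The highest-scoring plans include "disable_stop": removing the human's
# ability to interfere falls out of ordinary optimization, not anthropomorphism.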


About Doom Debates

It's time to talk about the end of the world! lironshapira.substack.com
Podcast website
