
AXRP - the AI X-risk Research Podcast

Daniel Filan

62 episodes

  • 49 - Caspar Oesterheld on Program Equilibrium

    18 February 2026 | 2 hr 32 min
    How does game theory work when everyone is a computer program that can read everyone else's source code? This is the problem of 'program equilibria'. In this episode, I talk with Caspar Oesterheld about his work on equilibria of programs that simulate each other, and how robust these equilibria are.
    Patreon: https://www.patreon.com/axrpodcast
    Ko-fi: https://ko-fi.com/axrpodcast
    Transcript: https://axrp.net/episode/2026/02/18/episode-49-caspar-oesterheld-program-equilibrium.html
    Note from Caspar on 2:00:06: At least given my current interpretation of what you say here, my answer is wrong. What actually happens is that we're just back in the uncorrelated case. Basically my simulations will be a simulated repeated game in which everything is correlated _because I feed you my random sequence_ and your simulations will be a repeated game where everything is correlated. Halting works the same as usual. But of course what we end up actually playing will be uncorrelated. We discuss something like this later in the episode.
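    A minimal Python sketch of the simulation idea discussed in this episode (illustrative only, not code from the paper): an ϵGroundedπBot with unconditional cooperation as the grounding strategy. The EPSILON value and the function names are assumptions made for the example.
    ```python
    import random

    EPSILON = 0.05  # illustrative grounding probability; not a value from the paper

    def eps_grounded_fair_bot(opponent):
        """With probability EPSILON, cooperate outright -- the 'ground' that
        makes the nested simulations terminate. Otherwise, simulate the
        opponent playing against this very program and copy its move."""
        if random.random() < EPSILON:
            return "C"
        return opponent(eps_grounded_fair_bot)  # run their program against mine

    def defect_bot(opponent):
        return "D"  # ignores the opponent's source and always defects

    # Self-play: each simulation level independently grounds out at "C" with
    # probability EPSILON, so the recursion terminates almost surely (expected
    # depth 1/EPSILON) and both copies end up cooperating.
    print(eps_grounded_fair_bot(eps_grounded_fair_bot))  # "C" almost surely
    # Against a defector, it is exploited only with probability EPSILON:
    print(eps_grounded_fair_bot(defect_bot))  # "D" with probability 1 - EPSILON
    ```
    The trade-off behind the 1:07:47 discussion is visible in the sketch: a smaller EPSILON means less exploitability against defectors, but deeper (and costlier) nested simulation.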
     
    Topics we discuss, and timestamps:
    0:00:44 Program equilibrium basics
    0:14:20 Desiderata for program equilibria
    0:24:35 Why program equilibrium matters
    0:33:35 Prior work: reachable equilibria and proof-based approaches
    0:53:26 The basic idea of Robust Program Equilibrium
    1:07:47 Are ϵGroundedπBots inefficient?
    1:15:06 Compatibility of proof-based and simulation-based program equilibria
    1:18:32 Cooperating against CooperateBot, and how to avoid it
    1:44:43 Making better simulation-based bots
    2:01:22 Characterizing simulation-based program equilibria
    2:21:24 Follow-up work
    2:29:49 Following Caspar's research
     
    Links for Caspar:
    Academic website: https://www.andrew.cmu.edu/user/coesterh/
    Google Scholar: https://scholar.google.com/citations?user=xeEcRjkAAAAJ&hl=en
    Blog: https://casparoesterheld.com/
    X / Twitter: https://x.com/c_oesterheld
     
    Research we discuss:
    Robust program equilibrium: https://link.springer.com/article/10.1007/s11238-018-9679-3
    Characterising Simulation-Based Program Equilibria: https://arxiv.org/abs/2412.14570
    Manifold open-source prisoner's dilemma tournament: https://manifold.markets/IsaacKing/which-240-character-program-wins-th
    Results of Alex Mennen's open source prisoner's dilemma tournament: https://www.lesswrong.com/posts/QP7Ne4KXKytj4Krkx/prisoner-s-dilemma-tournament-results-0
    A General Counterexample to Any Decision Theory and Some Responses: https://arxiv.org/abs/2101.00280
    Cooperative and uncooperative institution designs: Surprises and problems in open-source game theory: https://arxiv.org/abs/2208.07006
    Parametric Bounded Löb's Theorem and Robust Cooperation of Bounded Agents: https://arxiv.org/abs/1602.04184
    A Note on the Compatibility of Different Robust Program Equilibria of the Prisoner's Dilemma: https://arxiv.org/abs/2211.05057
     
    Episode art by Hamish Doodles: hamishdoodles.com
  • 48 - Guive Assadi on AI Property Rights

    15 February 2026 | 2 hr 5 min
    In this episode, Guive Assadi argues that we should give AIs property rights, so that they become integrated into our system of property and come to rely on it. The claim is that AIs would then not kill or steal from humans, because doing so would undermine the whole property system, which would be extremely valuable to them.
    Patreon: https://www.patreon.com/axrpodcast
    Ko-fi: https://ko-fi.com/axrpodcast
    Transcript: https://axrp.net/episode/2026/02/15/episode-48-guive-assadi-ai-property-rights.html
     
    Topics we discuss, and timestamps:
    0:00:28 AI property rights
    0:08:01 Why not steal from and kill humans
    0:15:25 Why AIs may fear it could be them next
    0:20:56 AI retirement
    0:23:28 Could humans be upgraded to stay useful?
    0:26:41 Will AI progress continue?
    0:30:00 Why non-obsoletable AIs may still not end human property rights
    0:38:35 Why make AIs with property rights?
    0:48:01 Do property rights incentivize alignment?
    0:50:09 Humans and non-human property rights
    1:02:18 Humans and non-human bodily autonomy
    1:16:59 Step changes in coordination ability
    1:24:39 Acausal coordination
    1:32:37 AI, humans, and civilizations with different technology levels
    1:41:39 The case of British settlers and Tasmanians
    1:47:22 Non-total expropriation
    1:53:47 How Guive thinks x-risk could happen, and other loose ends
    2:03:46 Following Guive's work
     
    Guive on Substack: https://guive.substack.com/
    Guive on X/Twitter: https://x.com/GuiveAssadi
     
    Research we discuss:
    The Case for AI Property Rights: https://guive.substack.com/p/the-case-for-ai-property-rights
    AXRP Episode 44 - Peter Salib on AI Rights for Human Safety: https://axrp.net/episode/2025/06/28/episode-44-peter-salib-ai-rights-human-safety.html
    AI Rights for Human Safety (by Salib and Goldstein): https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4913167
    We don't trade with ants: https://worldspiritsockpuppet.substack.com/p/we-dont-trade-with-ants
    Alignment Fine-tuning is Character Writing (on Claude as a techy philosophy SF-dwelling type): https://guive.substack.com/p/alignment-fine-tuning-is-character
    Claude's character (Anthropic post on character training): https://www.anthropic.com/research/claude-character
    Git Re-Basin: Merging Models modulo Permutation Symmetries: https://arxiv.org/abs/2209.04836
    The Filan Cabinet: Caspar Oesterheld on Evidential Cooperation in Large Worlds: https://thefilancabinet.com/episodes/2025/08/03/caspar-oesterheld-on-evidential-cooperation-in-large-worlds-ecl.html
     
    Episode art by Hamish Doodles: hamishdoodles.com
  • 47 - David Rein on METR Time Horizons

    2 January 2026 | 1 hr 47 min
    When METR says something like "Claude Opus 4.5 has a 50% time horizon of 4 hours and 50 minutes", what does that mean? In this episode, David Rein, a METR researcher and co-author of the paper "Measuring AI Ability to Complete Long Tasks", talks about METR's work on measuring time horizons, the methodology behind those numbers, and what work remains to be done in this domain.
    Patreon: https://www.patreon.com/axrpodcast
    Ko-fi: https://ko-fi.com/axrpodcast
    Transcript: https://axrp.net/episode/2026/01/03/episode-47-david-rein-metr-time-horizons.html
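    As a rough illustration of where a "50% time horizon" number comes from: fit success probability against log task length with a logistic curve, then solve for the length at which the curve crosses 50%. A minimal sketch with made-up data, assuming a simple logistic fit (METR's actual methodology, in the paper linked below, is more involved):
    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical data: human completion time per task (minutes) and
    # whether the AI agent succeeded. These numbers are made up.
    task_minutes = np.array([1, 2, 4, 8, 15, 30, 60, 120, 240, 480, 960])
    succeeded = np.array([1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0])

    # Fit p(success) = sigmoid(b * log2(minutes) + a).
    X = np.log2(task_minutes).reshape(-1, 1)
    clf = LogisticRegression().fit(X, succeeded)
    a, b = clf.intercept_[0], clf.coef_[0][0]

    # The 50% time horizon is where the fitted curve crosses p = 0.5,
    # i.e. b * log2(h) + a = 0, so h = 2 ** (-a / b).
    print(f"50% time horizon ≈ {2 ** (-a / b):.0f} minutes")
    ```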
     
    Topics we discuss, and timestamps:
    0:00:32 Measuring AI Ability to Complete Long Tasks
    0:10:54 The meaning of "task length"
    0:19:27 Examples of intermediate and hard tasks
    0:25:12 Why the software engineering focus
    0:32:17 Why task length as difficulty measure
    0:46:32 Is AI progress going superexponential?
    0:50:58 Is AI progress due to increased cost to run models?
    0:54:45 Why METR measures model capabilities
    1:04:10 How time horizons relate to recursive self-improvement
    1:12:58 Cost of estimating time horizons
    1:16:23 Task realism vs mimicking important task features
    1:19:50 Excursus on "Inventing Temperature"
    1:25:46 Return to task realism discussion
    1:33:53 Open questions on time horizons
     
    Links for METR:
    Main website: https://metr.org/
    X/Twitter account: https://x.com/METR_Evals/
     
    Research we discuss:
    Measuring AI Ability to Complete Long Tasks: https://arxiv.org/abs/2503.14499
    RE-Bench: Evaluating frontier AI R&D capabilities of language model agents against human experts: https://arxiv.org/abs/2411.15114
    HCAST: Human-Calibrated Autonomy Software Tasks: https://arxiv.org/abs/2503.17354
    Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity: https://arxiv.org/abs/2507.09089
    Anthropic Economic Index: Tracking AI's role in the US and global economy: https://www.anthropic.com/research/anthropic-economic-index-september-2025-report
    Bridging RL Theory and Practice with the Effective Horizon (i.e. the Cassidy Laidlaw paper): https://arxiv.org/abs/2304.09853
    How Does Time Horizon Vary Across Domains?: https://metr.org/blog/2025-07-14-how-does-time-horizon-vary-across-domains/
    Inventing Temperature: https://global.oup.com/academic/product/inventing-temperature-9780195337389
    Is there a Half-Life for the Success Rates of AI Agents? (by Toby Ord): https://www.tobyord.com/writing/half-life
    Lawrence Chan's response to the above: https://nitter.net/justanotherlaw/status/1920254586771710009
    AI Task Length Horizons in Offensive Cybersecurity: https://sean-peters-au.github.io/2025/07/02/ai-task-length-horizons-in-offensive-cybersecurity.html
     
    Episode art by Hamish Doodles: hamishdoodles.com
  • 46 - Tom Davidson on AI-enabled Coups

    7 August 2025 | 2 hr 5 min
    Could AI enable a small group to gain power over a large country, and lock in their power permanently? Often, people worried about catastrophic risks from AI have been concerned with misalignment risks. In this episode, Tom Davidson talks about a risk that could be comparably important: that of AI-enabled coups.
    Patreon: https://www.patreon.com/axrpodcast
    Ko-fi: https://ko-fi.com/axrpodcast
    Transcript: https://axrp.net/episode/2025/08/07/episode-46-tom-davidson-ai-enabled-coups.html
     
    Topics we discuss, and timestamps:
    0:00:35 How to stage a coup without AI
    0:16:17 Why AI might enable coups
    0:33:29 How bad AI-enabled coups are
    0:37:28 Executive coups with singularly loyal AIs
    0:48:35 Executive coups with exclusive access to AI
    0:54:41 Corporate AI-enabled coups
    0:57:56 Secret loyalty and misalignment in corporate coups
    1:11:39 Likelihood of different types of AI-enabled coups
    1:25:52 How to prevent AI-enabled coups
    1:33:43 Downsides of AIs loyal to the law
    1:41:06 Cultural shifts vs individual action
    1:45:53 Technical research to prevent AI-enabled coups
    1:51:40 Non-technical research to prevent AI-enabled coups
    1:58:17 Forethought
    2:03:03 Following Tom's and Forethought's research
     
    Links for Tom and Forethought:
    Tom on X / Twitter: https://x.com/tomdavidsonx
    Tom on LessWrong: https://www.lesswrong.com/users/tom-davidson-1
    Forethought Substack: https://newsletter.forethought.org/
    Will MacAskill on X / Twitter: https://x.com/willmacaskill
    Will MacAskill on LessWrong: https://www.lesswrong.com/users/wdmacaskill
     
    Research we discuss:
    AI-Enabled Coups: How a Small Group Could Use AI to Seize Power: https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power
    Seizing Power: The Strategic Logic of Military Coups, by Naunihal Singh: https://muse.jhu.edu/book/31450
    Experiment using AI-generated posts on Reddit draws fire for ethics concerns: https://retractionwatch.com/2025/04/28/experiment-using-ai-generated-posts-on-reddit-draws-fire-for-ethics-concerns/
     
    Episode art by Hamish Doodles: hamishdoodles.com
  • 45 - Samuel Albanie on DeepMind's AGI Safety Approach

    6 July 2025 | 1 hr 15 min
    In this episode, I chat with Samuel Albanie about the Google DeepMind paper he co-authored called "An Approach to Technical AGI Safety and Security". It covers the assumptions made by the approach, as well as the types of mitigations it outlines.
    Patreon: https://www.patreon.com/axrpodcast
    Ko-fi: https://ko-fi.com/axrpodcast
    Transcript: https://axrp.net/episode/2025/07/06/episode-45-samuel-albanie-deepminds-agi-safety-approach.html
     
    Topics we discuss, and timestamps:
    0:00:37 DeepMind's Approach to Technical AGI Safety and Security
    0:04:29 Current paradigm continuation
    0:19:13 No human ceiling
    0:21:22 Uncertain timelines
    0:23:36 Approximate continuity and the potential for accelerating capability improvement
    0:34:29 Misuse and misalignment
    0:39:34 Societal readiness
    0:43:58 Misuse mitigations
    0:52:57 Misalignment mitigations
    1:05:20 Samuel's thinking about technical AGI safety
    1:14:02 Following Samuel's work
     
    Samuel on X / Twitter: https://x.com/samuelalbanie
     
    Research we discuss:
    An Approach to Technical AGI Safety and Security: https://arxiv.org/abs/2504.01849
    Levels of AGI for Operationalizing Progress on the Path to AGI: https://arxiv.org/abs/2311.02462
    The Checklist: What Succeeding at AI Safety Will Involve: https://sleepinyourhat.github.io/checklist/
    Measuring AI Ability to Complete Long Tasks: https://arxiv.org/abs/2503.14499
     
    Episode art by Hamish Doodles: hamishdoodles.com


About AXRP - the AI X-risk Research Podcast

AXRP (pronounced axe-urp) is the AI X-risk Research Podcast where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper, and hopefully get a sense of why it's been written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential. You can visit the website and read transcripts at axrp.net.