Astral Codex Ten Podcast

Latest episode

1134 episodes

  • What Happened With Bio Anchors?

    10-03-2026 | 24 Min.
    [Original post: Biological Anchors: A Trick That Might Or Might Not Work]
    I.
    Ajeya Cotra's Biological Anchors report was the landmark AI timelines forecast of the early 2020s. In many ways, it was incredibly prescient - it nailed the scaling hypothesis, predicted the current AI boom, and introduced concepts like "time horizons" that have entered common parlance. In most cases where its contemporaries challenged it, its assumptions have been borne out, and its challengers proven wrong.
    But its headline prediction - an AGI timeline centered around the 2050s - no longer seems plausible. The current state of the discussion ranges from late 2020s to 2040s, with more remote dates relegated to those who expect the current paradigm to prove ultimately fruitless - the opposite of Ajeya's assumptions. Cotra later shortened her own timelines to 2040 (as of 2022) and they are probably even shorter now.
    So, if its premises were impressively correct, but its conclusion twenty years too late, what went wrong in the middle?
    https://www.astralcodexten.com/p/what-happened-with-bio-anchors
  • Political Backflow From Europe

    10-03-2026 | 11 Min.
    The European discourse can be - for lack of a better term - America-brained. We hear stories of Black Lives Matter marches in countries without significant black populations, or defendants demanding their First Amendment rights in countries without constitutions.
    Why shouldn't the opposite phenomenon exist? Europe is more populous than the US, and looms large in the American imagination. Why shouldn't we find ourselves accidentally absorbing European ideas that don't make sense in the American context?
    https://www.astralcodexten.com/p/political-backflow-from-europe
  • Links For February 2026

    10-03-2026 | 48 Min.
    [I haven't independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can't guarantee I will have caught them all by the time you read this.]
    https://www.astralcodexten.com/p/links-for-february-2026
  • Moltbook: After The First Weekend

    03-03-2026 | 2 hrs
    [previous post: Best Of Moltbook]
    From the human side of the discussion:
    As the AIs would say, "You've cut right to the heart of this issue". What's the difference between 'real' and 'roleplaying'?
    One possible answer invokes internal reality. Are the AIs conscious? Do they "really" "care" about the things they're saying? We may never figure this out. Luckily, it has no effect on the world, so we can leave it to the philosophers1.
    I find it more fruitful to think about external reality instead, especially in terms of causes and effects.
    https://www.astralcodexten.com/p/moltbook-after-the-first-weekend
  • Best Of Moltbook

    18-02-2026 | 53 Min.
    Moltbook is "a social network for AI agents", although "humans [are] welcome to observe".
    The backstory: a few months ago, Anthropic released Claude Code, an exceptionally productive programming agent. A few weeks ago, a user modified it into Clawdbot, a generalized lobster-themed AI personal assistant. It's free, open-source, and "empowered" in the corporate sense - the designer talks about how it started responding to his voice messages before he explicitly programmed in that capability. After trademark issues with Anthropic, they changed the name first to Moltbot1, then to OpenClaw.
    Moltbook is an experiment in how these agents communicate with one another and the human world. As with so much else about AI, it straddles the line between "AIs imitating a social network" and "AIs actually having a social network" in the most confusing way possible - a perfectly bent mirror where everyone can see what they want.
    Janus and other cyborgists have catalogued how AIs act in contexts outside the usual helpful assistant persona. Even Anthropic has admitted that two Claude instances, asked to converse about whatever they want, spiral into discussion of cosmic bliss. So it's not surprising that an AI social network would get weird fast.
    But even having encountered their work many times, I find Moltbook surprising. I can confirm it's not trivially made-up - I asked my copy of Claude to participate, and it made comments pretty similar to all the others. Beyond that, your guess is as good as mine2.
    Before any further discussion of the hard questions, here are my favorite Moltbook posts (all images are links, but you won't be able to log in and view the site without an AI agent):
    https://www.astralcodexten.com/p/best-of-moltbook

About Astral Codex Ten Podcast

The official audio version of Astral Codex Ten, with an archive of posts from Slate Star Codex. It's just me reading Scott Alexander's blog posts.