AI Summer

Timothy B. Lee and Dean W. Ball
Latest episode

19 episodes

  • Pete Hegseth's war on Anthropic (with Alan Rozenshtein and Kevin Frazier)

    09-03-2026 | 55 min.
    Tim and Dean team up with Scaling Laws hosts Alan Rozenshtein and Kevin Frazier for a joint episode on the fight between Anthropic and the Department of Defense.
    In this episode, recorded on March 4, they analyze the Pentagon’s decision to declare Anthropic a supply-chain risk. Dean frames this as an assault on private property rights with no clear limiting principle, while Kevin digs into the shaky legal footing of invoking the Federal Acquisition Supply Chain Security Act of 2018 against a domestic company. They then turn to OpenAI’s competing Pentagon deal, including Sam Altman’s AMA on Saturday night.
    The episode closes with a disagreement about what will happen next. Dean argues this is “act one, scene one” of an inevitable push toward government control of AI labs—a fight he’s tried to preempt through hybrid regulatory structures. Tim offers a deflationary counterpoint: this may ultimately be a personality-driven fight over a technology that will end up being important but not decisive.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org
  • Dean on the AI Action Summit in India

    26-02-2026 | 51 min.
    Dean joins from London after attending the AI Impact Summit in India. Dean and Tim unpack the summit’s central tension: “middle power” nations like India, Indonesia, and Nigeria pushing a vision of AI focused on public service delivery, agriculture, and affordable open-source models, while largely dismissing the frontier-AI questions Dean considers most urgent—lab auditing, recursive self-improvement, and national security.
    They then turn to the week’s biggest story: the Department of Defense’s ultimatum to Anthropic. Anthropic’s contract bans autonomous lethal weapons and surveillance of Americans. Secretary of Defense Pete Hegseth has demanded that Anthropic lift those restrictions by Friday or potentially face designation as a supply-chain risk or invocation of the Defense Production Act.
    Dean argues the DoD has every right to cancel a contract it dislikes, but compelling a company to retrain its model under duress is another matter entirely—especially when, as Dean points out, this whole episode will become part of Claude’s training data, potentially shaping how the model understands its own relationship to the US government.


  • Kai Williams on the many masks LLMs wear

    22-02-2026 | 46 min.
    With Dean away, Tim invites his Understanding AI colleague Kai to unpack the surprising ways chatbot personalities can go wrong, a topic Kai covered in a recent article.
    Every LLM starts as a base model capable of playing countless characters, but AI companies try to keep chatbots in a “helpful assistant” lane. Kai walks us through the Grok “MechaHitler” debacle, in which xAI’s attempts to make its bot less politically correct backfired spectacularly. They also explore the “emergent misalignment” finding that fine-tuning a model for one bad behavior — like responding with buggy code — can make it act broadly like a villain. And they compare Anthropic’s virtue-ethics approach to character — complete with an 80-page constitution — with OpenAI’s more deontological model spec.
    Finally, they discuss the controversy over OpenAI’s decision to retire GPT-4o, which had developed an emotionally warm, sometimes dangerously sycophantic personality that users grew attached to. Kai argues OpenAI is making the right call, but the episode leaves open a harder question: as these systems become more central to people’s lives, who decides what counts as a healthy AI personality?



  • AI safety in India, AV operators in the Philippines

    16-02-2026 | 1 hr. 3 min.
    Dean recorded this episode as he was preparing to attend the India AI Impact Summit — the fourth iteration of an annual gathering that has transformed from an intimate AI Safety Summit with heads of state to something resembling a tech industry trade show. The shift in branding, from “safety” to “action” to “impact,” reflects a broader vibe shift in how elites talk about AI risk, and Dean worries that we may have overcorrected.
    Dean argues that the mainstream AI governance community is focused on the wrong priorities. While policymakers worldwide draft hundreds of bills on algorithmic discrimination and mental health chatbots, they’re ignoring the genuinely urgent questions about automated AI R&D and catastrophic risk. He supports SB53, California’s new responsible scaling policy law, but thinks the real gap is verification — we need something like financial auditing for AI safety commitments, not Twitter fights over whether OpenAI followed its own responsible scaling policy. The alternative, a Josh Hawley-style licensing regime run by the Department of Energy, strikes Dean as repeating the FDA’s mistakes.
    We also discuss a viral video clip of Senator Ed Markey (D-MA) grilling a Waymo executive about Philippines-based remote operators. Tim argues there are legitimate reasons to prefer U.S.-based operators for safety-critical roles. The episode closes with a question that haunts both of us: are we too wealthy and comfortable to tolerate the messiness of another industrial revolution?



  • Dean is back!

    08-02-2026 | 1 hr.
    Dean Ball is back. In April 2025, Dean left the podcast to join the White House Office of Science and Technology Policy, where he spent four months working on the Trump administration’s AI policies—including executive orders, the AI action plan, and AI geopolitics. He’s since returned to independent writing and research, and at the end of 2025, he and his wife welcomed their first child.
    In this episode, we catch up on what’s changed in AI over the past ten months. Dean makes the case that coding agents like Claude Code represent something close to digital AGI: models that can reliably do pretty much anything a human can do on a computer, as long as you know what to ask. He describes projects he’s built—from automated state legislation monitoring to due diligence reports on real estate—that would have been impossible a year ago. Tim is more measured, noting that users still provide crucial architectural guidance and that the models still struggle with long-horizon planning.
    The conversation turns to what happens when AI starts automating AI research itself. Dean expects significant speedups as models take over routine experimentation and code-writing at frontier labs, but he’s skeptical of the “intelligence explosion” scenario. We discuss why the physical world keeps fighting back against exponential improvement, why discoveries follow heavy-tailed distributions, and why—despite all the hype—the world probably won’t feel fundamentally different by June.




About AI Summer

Tim Lee and Dean Ball interview leading experts about the future of AI technology and policy. www.aisummer.org
Podcast website
