
The puzzle pieces that can defuse the US-China AI race dynamic, with Kayla Blomquist
23-12-2025 | 35 Min.
Almost every serious discussion about options to constrain the development of advanced AI results in someone raising the question: “But what about China?” The worry behind this question is that slowing down AI research and development in the US and Europe will allow China to race ahead.

It's true: the relationship between China and the rest of the world has many complications. That’s why we’re delighted that our guest in this episode is Kayla Blomquist, the Co-founder and Director of the Oxford China Policy Lab, or OCPL for short. OCPL describes itself as a global community of China and emerging technology researchers at Oxford, who produce policy-relevant research to navigate risks in the US-China relationship and beyond.

In parallel with her role at OCPL, Kayla is pursuing a DPhil at the Oxford Internet Institute. She is a recent fellow at the Centre for Governance of AI, and the lead researcher and contributing author of the Oxford China Briefing Book. She holds an MSc from the Oxford Internet Institute and a BA with Honours in International Relations, Public Policy, and Mandarin Chinese from the University of Denver. She also studied at Peking University and is professionally fluent in Mandarin.

Kayla previously worked as a diplomat in the U.S. Mission to China, where she specialized in the governance of emerging technologies, human rights, and improving the use of new technology within government services.

Selected follow-ups:
- Kayla Blomquist - Personal site
- Oxford China Policy Lab
- The Oxford Internet Institute (OII)
- Google AI defeats human Go champion (Ke Jie)
- AI Safety Summit 2023 (Bletchley Park, UK)
- United Kingdom: Balancing Safety, Security, and Growth - OCPL
- China wants to lead the world on AI regulation - report from APEC 2025
- China's WAICO proposal and the reordering of global AI governance
- Impact of AI on cyber threat from now to 2027
- Options for the future of the global governance of AI - London Futurists Webinar
- A Tentative Draft of a Treaty - online appendix to the book If Anyone Builds It, Everyone Dies
- An International Agreement to Prevent the Premature Creation of Artificial Superintelligence

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

C-Suite Perspectives
Elevate how you lead with insight from today’s most influential executives.
Listen on: Apple Podcasts / Spotify

Jensen Huang and the zero billion dollar market, with Stephen Witt
16-12-2025 | 45 Min.
Our guest in this episode is Stephen Witt, an American journalist and author who writes about the people driving the technological revolutions. He is a regular contributor to The New Yorker, and is famous for deep-dive investigations.

Stephen's new book is "The Thinking Machine: Jensen Huang, Nvidia, and the World's Most Coveted Microchip", which has just won the 2025 Financial Times and Schroders Business Book of the Year Award. It is a definitive account of the rise of Nvidia, from its foundation in a Denny's restaurant in 1993 as a video game component manufacturer, to becoming the world's most valuable company, and the hardware provider for the current AI boom.

Stephen's previous book, “How Music Got Free”, is a history of music piracy and the MP3, and was also a finalist for the FT Business Book of the Year.

Selected follow-ups:
- Stephen Witt - personal site
- Articles by Stephen Witt on The New Yorker
- The Thinking Machine: Jensen Huang, Nvidia, and the World's Most Coveted Microchip - book site
- Stephen Witt wins FT and Schroders Business Book of the Year - Financial Times
- Nvidia Executives
- Battle Royale (Japanese film) - IMDb
- The Economic Singularity - book by Calum Chace
- A Cubic Millimeter of a Human Brain Has Been Mapped in Spectacular Detail - Nature
- NotebookLM - by Google

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

What's your p(Pause)? with Holly Elmore
05-12-2025 | 44 Min.
Our guest in this episode is Holly Elmore, who is the Founder and Executive Director of PauseAI US. The website pauseai-us.org starts with this headline: “Our proposal is simple: Don’t build powerful AI systems until we know how to keep them safe. Pause AI.”

But PauseAI isn’t just a talking shop. They’re probably best known for organising public protests. The UK group has demonstrated in Parliament Square in London, with Big Ben in the background, and also outside the offices of Google DeepMind. A group of 30 PauseAI protesters gathered outside the OpenAI headquarters in San Francisco. Other protests have taken place in New York, Portland, Ottawa, São Paulo, Berlin, Paris, Rome, Oslo, Stockholm, and Sydney, among other cities.

Previously, Holly was a researcher at the think tank Rethink Priorities in the area of Wild Animal Welfare. And before that, she studied evolutionary biology in Harvard’s Organismic and Evolutionary Biology department.

Selected follow-ups:
- Holly Elmore - Substack
- PauseAI US
- PauseAI - global site
- Wild Animal Suffering... and why it matters
- Hard problem of consciousness - Wikipedia
- The Unproven (And Unprovable) Case For Net Wild Animal Suffering: A Reply To Tomasik - by Michael Plant
- Leading Evolution Compassionately - Herbivorize Predators
- David Pearce (philosopher) - Wikipedia
- The AI industry is racing toward a precipice - Machine Intelligence Research Institute (MIRI)
- Nick Bostrom's new views regarding AI/AI safety - Reddit
- AI is poised to remake the world; Help us ensure it benefits all of us - Future of Life Institute
- On being wrong about AI - by Scott Aaronson, on his previous suggestion that it might take "a few thousand years" to reach superhuman AI
- California Institute of Machine Consciousness - organisation founded by Joscha Bach
- Pausing AI is the only safe approach to digital sentience - article by Holly Elmore
- Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers - book by Geoffrey Moore

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Real-life superheroes and troubled institutions, with Tom Ough
31-10-2025 | 40 Min.
Popular movies sometimes feature leagues of superheroes who are ready to defend the Earth against catastrophe. In this episode, we’re going to be discussing some real-life superheroes, as chronicled in the new book by our guest, Tom Ough. The book is entitled “The Anti-Catastrophe League: The Pioneers And Visionaries On A Quest To Save The World”. Some of these heroes are already reasonably well known, but others were new to David, and, he suspects, to many of the book’s readers.

Tom is a London-based journalist. Earlier in his career he worked in newspapers, mostly for the Telegraph, where he was a staff feature-writer and commissioning editor. He is currently a senior editor at UnHerd, where he commissions essays and occasionally writes them. Perhaps one reason why he writes so well is that he has a BA in English Language and Literature from Oxford University, where he was a Casberd scholar.

Selected follow-ups:
- About Tom Ough
- The Anti-Catastrophe League - the book's webpage
- On novel methods of pandemic prevention
- What is effective altruism? (EA)
- Sam Bankman-Fried - Wikipedia (also covers FTX)
- Open Philanthropy
- Conscium
- Here Comes the Sun - book by Bill McKibben
- The 10 Best Beatles Songs (Based on Streams)
- Carrington Event - Wikipedia
- Mirror life - Wikipedia
- Future of Humanity Institute 2005-2024: final report - by Anders Sandberg
- Oxford FHI Global Catastrophic Risks - FHI Conference, 2008
- Forethought
- Review of Nick Bostrom’s Deep Utopia - by Calum
- DeepMind and OpenAI claim gold in International Mathematical Olympiad
- What the Heck is Hubble Tension?
- The Decade Ahead - by Leopold Aschenbrenner
- AI 2027
- Anglofuturism

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Safe superintelligence via a community of AIs and humans, with Craig Kaplan
10-10-2025 | 41 Min.
Craig Kaplan has been thinking about superintelligence longer than most. He bought the URL superintelligence.com back in 2006, and many years before that, in the late 1980s, he co-authored a series of papers with one of the founding fathers of AI, Herbert Simon.

Craig started his career as a scientist with IBM, and later founded and ran a venture-backed company called PredictWallStreet that brought the wisdom of the crowd to Wall Street, and improved the performance of leading hedge funds. He sold that company in 2020, and now spends his time working out how to make the first superintelligence safe. As he puts it, he wants to reduce P(Doom) and increase P(Zoom).

Selected follow-ups:
- iQ Company
- Superintelligence - by iQ Company
- Herbert A. Simon - Wikipedia
- Amara’s Law and Its Place in the Future of Tech - Pohan Lin
- The Society of Mind - book by Marvin Minsky
- AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google - BBC News
- Statement on AI Risk - Center for AI Safety
- I’ve Spent My Life Measuring Risk. AI Rings Every One of My Alarm Bells - Paul Tudor Jones
- Secrets of Software Quality: 40 Innovations from IBM - book by Craig Kaplan
- London Futurists Podcast episode featuring David Brin
- Reason in Human Affairs - book by Herbert Simon
- US and China will intervene to halt ‘suicide race’ of AGI - Max Tegmark
- If Anyone Builds It, Everyone Dies - book by Eliezer Yudkowsky and Nate Soares
- AGI-25 - conference in Reykjavik
- The First Global Brain Workshop - Brussels 2001
- Center for Integrated Cognition
- Paul S. Rosenbloom
- Tatiana Shavrina, Meta
- Henry Minsky launches AI startup inspired by father’s MIT research

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

London Futurists