
Future of Life Institute Podcast

Future of Life Institute
Latest episode

486 episodes

  • How Humans Could Lose Power Without an AI Takeover (with David Duvenaud)

    23-12-2025 | 1 hr 18 min

    David Duvenaud is an associate professor of computer science and statistics at the University of Toronto. He joins the podcast to discuss gradual disempowerment in a post-AGI world. We ask how humans could lose economic and political leverage without a sudden takeover, including how property rights could erode. Duvenaud describes how growth incentives shape culture, why aligning AI to humanity may become unpopular, and what better forecasting and governance might require.

    LINKS:
    David Duvenaud academic homepage
    Gradual Disempowerment
    The Post-AGI Workshop
    Post-AGI Studies Discord

    CHAPTERS:
    (00:00) Episode Preview
    (01:05) Introducing gradual disempowerment
    (06:06) Obsolete labor and UBI
    (14:29) Property, power, and control
    (23:38) Culture shifts toward AIs
    (34:34) States misalign without people
    (44:15) Competition and preservation tradeoffs
    (53:03) Building post-AGI studies
    (01:02:29) Forecasting and coordination tools
    (01:10:26) Human values and futures

    PRODUCED BY:
    https://aipodcast.ing

    SOCIAL LINKS:
    Website: https://podcast.futureoflife.org
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  • Why the AI Race Undermines Safety (with Steven Adler)

    12-12-2025 | 1 hr 28 min

    Steven Adler is a former safety researcher at OpenAI. He joins the podcast to discuss how to govern increasingly capable AI systems. The conversation covers competitive races between AI companies, limits of current testing and alignment, mental health harms from chatbots, economic shifts from AI labor, and what international rules and audits might be needed before training superintelligent models.

    LINKS:
    Steven Adler's Substack: https://stevenadler.substack.com

    CHAPTERS:
    (00:00) Episode Preview
    (01:00) Race Dynamics And Safety
    (18:03) Chatbots And Mental Health
    (30:42) Models Outsmart Safety Tests
    (41:01) AI Swarms And Work
    (54:21) Human Bottlenecks And Oversight
    (01:06:23) Animals And Superintelligence
    (01:19:24) Safety Capabilities And Governance

  • Why OpenAI Is Trying to Silence Its Critics (with Tyler Johnston)

    27-11-2025 | 1 hr 1 min

    Tyler Johnston is Executive Director of the Midas Project. He joins the podcast to discuss AI transparency and accountability. We explore applying animal rights watchdog tactics to AI companies, the OpenAI Files investigation, and OpenAI's subpoenas against nonprofit critics. Tyler discusses why transparency is crucial when technical safety solutions remain elusive and how public pressure can effectively challenge much larger companies.

    LINKS:
    The Midas Project Website
    Tyler Johnston's LinkedIn Profile

    CHAPTERS:
    (00:00) Episode Preview
    (01:06) Introducing the Midas Project
    (05:01) Shining a Light on AI
    (08:36) Industry Lockdown and Transparency
    (13:45) The OpenAI Files
    (20:55) Subpoenaed by OpenAI
    (29:10) Responding to the Subpoena
    (37:41) The Case for Transparency
    (44:30) Pricing Risk and Regulation
    (52:15) Measuring Transparency and Auditing
    (57:50) Hope for the Future

  • We're Not Ready for AGI (with Will MacAskill)

    14-11-2025 | 2 hr 3 min

    William MacAskill is a senior research fellow at Forethought. He joins the podcast to discuss his Better Futures essay series. We explore moral error risks, AI character design, space governance, and persistent path dependence. The conversation also covers risk-averse AI systems, moral trade between value systems, and improving model specifications for ethical reasoning.

    LINKS:
    Better Futures Research Series: https://www.forethought.org/research/better-futures
    William MacAskill Forethought Profile: https://www.forethought.org/people/william-macaskill

    CHAPTERS:
    (00:00) Episode Preview
    (01:03) Improving The Future's Quality
    (09:58) Moral Errors and AI Rights
    (18:24) AI's Impact on Thinking
    (27:17) Utopias and Population Ethics
    (36:41) The Danger of Moral Lock-in
    (44:38) Deals with Misaligned AI
    (57:25) AI and Moral Trade
    (01:08:21) Improving AI Ethical Reasoning
    (01:16:05) The Risk of Path Dependence
    (01:27:41) Avoiding Future Lock-in
    (01:36:22) The Urgency of Space Governance
    (01:46:19) A Future Research Agenda
    (01:57:36) Is Intelligence a Good Bet?

  • What Happens When Insiders Sound the Alarm on AI? (with Karl Koch)

    07-11-2025 | 1 hr 8 min

    Karl Koch is founder of the AI Whistleblower Initiative. He joins the podcast to discuss transparency and protections for AI insiders who spot safety risks. We explore current company policies, legal gaps, how to evaluate disclosure decisions, and whistleblowing as a backstop when oversight fails. The conversation covers practical guidance for potential whistleblowers and the challenges of maintaining transparency as AI development accelerates.

    LINKS:
    About the AI Whistleblower Initiative
    Karl Koch

    CHAPTERS:
    (00:00) Episode Preview
    (00:55) Starting the Whistleblower Initiative
    (05:43) Current State of Protections
    (13:04) Path to Optimal Policies
    (23:28) A Whistleblower's First Steps
    (32:29) Life After Whistleblowing
    (39:24) Evaluating Company Policies
    (48:19) Alternatives to Whistleblowing
    (55:24) High-Stakes Future Scenarios
    (01:02:27) AI and National Security

    DISCLAIMERS:
    - AIWI does not request, encourage, or counsel potential whistleblowers or listeners of this podcast to act unlawfully.
    - This is not legal advice. If you, the listener, find yourself needing legal counsel, please visit https://aiwi.org/contact-hub/ for detailed profiles of the world's leading whistleblower support organizations.

About the Future of Life Institute Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.