Emmett Shear - AGI as "Another Kind of Cell" in the Tissue of Life (Worthy Successor, Episode 11)
This is an interview with Emmett Shear - CEO of SoftMax, co-founder of Twitch, former interim CEO of OpenAI, and one of the few public-facing tech leaders who seems to take both AGI development and AGI alignment seriously.

In this episode, we explore Emmett’s vision of AGI as a kind of living system, not unlike a new kind of cell, joining the tissue of intelligent life. We talk through the limits of our moral vocabulary, the obligations we might owe to future digital minds, and the uncomfortable trade-offs between safety and stagnation.

This interview is our eleventh installment in The Trajectory’s second series, Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.

This episode referred to the following other essay:
-- A Worthy Successor - The Purpose of AGI: https://danfaggella.com/worthy/

Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Watch the full episode on YouTube: https://www.youtube.com/watch?v=cNz25BSZNfM
See the full article from this episode: https://danfaggella.com/shear1

...

There are three main questions we cover here on the Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- YouTube: https://www.youtube.com/@trajectoryai
--------
1:30:42
Joshua Clymer - Where Human Civilization Might Crumble First (Early Experience of AGI - Episode 2)
This is an interview with Joshua Clymer, AI safety researcher at Redwood Research, and former researcher at METR.

Joshua has spent years focused on institutional readiness for AGI, especially the kinds of governance bottlenecks that could become breaking points. His thinking is less about far-off futures and more about near-term institutional failure modes - the brittle places that might shatter first.

In this episode, Joshua and I discuss where AGI pressure might rupture our systems: intelligence agencies, the military, tech labs, and the veil of classification that surrounds them. What struck me most in this conversation with Joshua was his grounded honesty. He doesn’t offer easy predictions - just hard-won insight from years near the edge.

This is the second episode in our new “Early Experience of AGI” series - where we explore the early impacts of AGI on our work and personal lives.

This episode referred to the following other essays:
-- Josh's X thread titled "How AI Might Take Over in 2 Years":
-- Closing the Human Reward Circuit: https://danfaggella.com/reward

Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Watch the full episode on YouTube: https://youtu.be/yMPJKjutm7M
See the full article from this episode: https://danfaggella.com/clymer1

...

There are three main questions we cover here on the Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- YouTube: https://www.youtube.com/@trajectoryai
--------
1:51:37
Peter Singer - Optimizing the Future for Joy, and the Exploration of the Good [Worthy Successor, Episode 10]
This is an interview with Peter Singer, one of the most influential moral philosophers of our time. Singer is best known for his groundbreaking work on animal rights, global poverty, and utilitarian ethics, and his ideas have shaped countless conversations about the moral obligations of individuals, governments, and societies.

This interview is our tenth installment in The Trajectory’s second series, Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.

This episode referred to the following other resources:
-- A Worthy Successor - The Purpose of AGI: https://danfaggella.com/worthy
-- Singer's Podcast "Lives Well Lived": https://podcasts.apple.com/us/podcast/lives-well-lived/id1743702376

Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Watch the full episode on YouTube: https://www.youtube.com/watch?v=IFxFj-WrZS0
See the full article from this episode: https://danfaggella.com/singer1/

...

There are three main questions we cover here on the Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- YouTube: https://www.youtube.com/@trajectoryai
--------
1:25:55
David Duvenaud - What are Humans Even Good For in Five Years? [Early Experience of AGI - Episode 1]
This is an interview with David Duvenaud, Assistant Professor at the University of Toronto, co-author of the Gradual Disempowerment paper, and former researcher at Anthropic.

This is the first episode in our new “Early Experience of AGI” series - where we explore the early impacts of AGI on our work and personal lives.

This episode referred to the following other essays and resources:
-- Closing the Human Reward Circuit: https://danfaggella.com/reward
-- Gradual Disempowerment: http://www.gradual-disempowerment.ai

Watch the full episode on YouTube: https://youtu.be/XPpg89K3ULM
See the full article from this episode: https://danfaggella.com/duvenaud1

...

There are three main questions we cover here on the Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
--------
1:55:59
Kristian Rönn - A Blissful Successor Beyond Darwinian Life [Worthy Successor, Episode 9]
This is an interview with Kristian Rönn, author, successful startup founder, and now CEO of Lucid, an AI hardware governance startup based in SF.

This is an additional installment of our "Worthy Successor" series - where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.

This episode referred to the following other essays and resources:
-- A Worthy Successor - The Purpose of AGI: https://danfaggella.com/worthy
-- Kristian's "Darwinian Trap" book: https://www.amazon.com/Darwinian-Trap-Evolutionary-Explain-Threaten/dp/0593594053

Watch this episode on The Trajectory YouTube channel: https://www.youtube.com/watch?v=fSnYeCc_C6I
See the full article from this episode: https://danfaggella.com/ronn1

...

There are three main questions we cover here on the Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
What should be the trajectory of intelligence beyond humanity?

The Trajectory covers realpolitik on artificial general intelligence and the posthuman transition - by asking tech, policy, and AI research leaders the hard questions about what's after man, and how we should define and create a worthy successor (danfaggella.com/worthy). Hosted by Daniel Faggella.