Dean Xue Lan - A Multi-Pronged Approach to Pre-AGI Coordination (AGI Governance, Episode 10)
Joining us for the tenth episode of our AGI Governance series on The Trajectory is Dean Xue Lan, a longtime scholar of public policy and global governance whose recent work centers on AI safety and international coordination.

In this episode, Xue stresses that AGI governance must evolve as an adaptive network: the UN can set frameworks among nations, but companies, safety institutes, and industry associations also play critical roles. Only by combining these overlapping layers can governance respond to the challenges of an unprecedented technology.

Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Watch the full episode on YouTube: https://youtu.be/-HLjD6FRjug
See the full article from this episode: https://danfaggella.com/lan1

There are three main questions we cover here on the Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- YouTube: https://www.youtube.com/@trajectoryai
--------
37:01
RAND’s Joel Predd - Competitive and Cooperative Dynamics of AGI (US-China AGI Relations, Episode 4)
This is an interview with Joel Predd, a senior engineer at the RAND Corporation and co-author of RAND’s work on “five hard national security problems from AGI.” In this conversation, Joel lays out a sober frame for leaders: treat AGI as technically credible but deeply uncertain; assume it will be transformational if it arrives; and recognize that the pace of progress is outstripping our capacity for governance.

This is the fourth installment of our "US-China AGI Relations" series - where we explore pathways to achieving international AGI cooperation while avoiding conflicts and arms races.

This episode referred to the following other essays and resources:
-- Artificial General Intelligence's Five Hard National Security Problems: https://www.rand.org/pubs/perspectives/PEA3691-4.html
-- Types of AI Disasters – Uniting and Dividing: https://danfaggella.com/disaster/

Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Watch the full episode on YouTube: https://youtu.be/Ojg9l5q-gao
See the full article from this episode: https://danfaggella.com/predd1

There are three main questions we cover here on the Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- YouTube: https://www.youtube.com/@trajectoryai
--------
1:09:41
Drew Cukor - AI Adoption as a National Security Priority (US-China AGI Relations, Episode 3)
USMC Colonel Drew Cukor spent 25 years in uniform and helped spearhead early Department of Defense AI efforts, eventually leading projects including the Pentagon’s Project Maven. After government service, he has led AI initiatives in the private sector, first with JP Morgan and now with TWG Global.

Drew argues that when it comes to the US-China AGI race, the decisive lever isn’t what we block – it’s what we adopt. The nation that most completely fuses people and machines across daily life, industry, and government will set the tempo for everyone else.

This is the third installment of our "US-China AGI Relations" series - where we explore pathways to achieving international AGI cooperation while avoiding conflicts and arms races.

Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Watch the full episode on YouTube: https://youtu.be/GnO4dRHBzKI
See the full article from this episode: https://danfaggella.com/cukor1

There are three main questions we cover here on the Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- YouTube: https://www.youtube.com/@trajectoryai
--------
51:15
Stuart Russell - Avoiding the Cliff of Uncontrollable AI (AGI Governance, Episode 9)
Joining us for the ninth episode of our AGI Governance series on The Trajectory is Stuart Russell, Professor of Computer Science at UC Berkeley and author of Human Compatible. In this episode, Stuart explores why current AI race dynamics resemble a prisoner’s dilemma, why governments must establish enforceable red lines, and how international coordination might begin with consensus principles before tackling more difficult challenges.

This episode referred to the following other resources:
-- IDAIS, co-convened by Stuart: https://idais.ai/

Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Watch the full episode on YouTube: https://youtu.be/w0U5V86TMjo
See the full article from this episode: https://danfaggella.com/russell1

There are three main questions we cover here on the Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- YouTube: https://www.youtube.com/@trajectoryai
--------
1:04:32
Craig Mundie - Co-Evolution with AI: Industry First, Regulators Later (AGI Governance, Episode 8)
Joining us for the eighth episode of our AGI Governance series on The Trajectory is Craig Mundie, former Chief Research and Strategy Officer at Microsoft and longtime advisor on the evolution of digital infrastructure, AI, and national security. In this episode, Craig and I explore how bottom-up governance could emerge from commercial pressures and cross-national enterprise collaboration, and how this pragmatic foundation might lead us into a future of symbiotic co-evolution rather than catastrophic conflict.

Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Watch the full episode on YouTube: https://youtu.be/Utt-Q8hjF5c
See the full article from this episode: https://danfaggella.com/mundie1

There are three main questions we cover here on the Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- YouTube: https://www.youtube.com/@trajectoryai
What should be the trajectory of intelligence beyond humanity?

The Trajectory covers realpolitik on artificial general intelligence and the posthuman transition - by asking tech, policy, and AI research leaders the hard questions about what's after man, and how we should define and create a worthy successor (danfaggella.com/worthy). Hosted by Daniel Faggella.