
The Trajectory

Daniel Faggella
Latest episode

Available episodes

5 of 22
  • Richard Ngo - A State-Space of Positive Posthuman Futures [Worthy Successor, Episode 8]
    This is an interview with Richard Ngo, AGI researcher and thinker, with extensive stints at both OpenAI and DeepMind. This is an additional installment of our "Worthy Successor" series, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.
    This episode referred to the following other essays and resources:
    -- A Worthy Successor - The Purpose of AGI: https://danfaggella.com/worthy
    -- Richard's exploratory fiction writing - http://narrativeark.xyz/
    Watch this episode on The Trajectory YouTube channel: https://youtu.be/UQpds4PXMjQ
    See the full article from this episode: https://danfaggella.com/ngo1
    There are three main questions we cover here on The Trajectory:
    1. Who are the power players in AGI and what are their incentives?
    2. What kind of posthuman future are we moving towards, or should we be moving towards?
    3. What should we do about it?
    If this sounds like it's up your alley, then be sure to stick around and connect:
    -- Blog: danfaggella.com/trajectory
    -- X: x.com/danfaggella
    -- LinkedIn: linkedin.com/in/danfaggella
    -- Newsletter: bit.ly/TrajectoryTw
    -- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
    --------  
    1:46:15
  • Yi Zeng - Exploring 'Virtue' and Goodness Through Posthuman Minds [AI Safety Connect, Episode 2]
    This is an interview with Yi Zeng, Professor at the Chinese Academy of Sciences, a member of the United Nations High-Level Advisory Body on AI, and leader of the Beijing Institute for AI Safety and Governance (among many other accolades). Over a year ago, when I asked Jaan Tallinn "who within the UN advisory group on AI has good ideas about AGI and governance?", he mentioned Yi immediately. Jaan was right.
    See the full article from this episode: https://danfaggella.com/zeng1
    Watch the full episode on YouTube: https://youtu.be/jNfnYUcBlmM
    This episode referred to the following other essays and resources:
    -- AI Safety Connect - https://aisafetyconnect.com
    -- Yi's profile on the Chinese Academy of Sciences - https://braincog.ai/~yizeng/
    There are three main questions we cover here on The Trajectory:
    1. Who are the power players in AGI and what are their incentives?
    2. What kind of posthuman future are we moving towards, or should we be moving towards?
    3. What should we do about it?
    If this sounds like it's up your alley, then be sure to stick around and connect:
    -- Blog: danfaggella.com/trajectory
    -- X: x.com/danfaggella
    -- LinkedIn: linkedin.com/in/danfaggella
    -- Newsletter: bit.ly/TrajectoryTw
    -- YouTube: https://www.youtube.com/@trajectoryai
    --------  
    1:14:19
  • Max Tegmark - The Lynchpin Factors to Achieving AGI Governance [AI Safety Connect, Episode 1]
    This is an interview with Max Tegmark, MIT professor, founder of the Future of Life Institute, and author of Life 3.0. This interview was recorded on-site at AI Safety Connect 2025, a side event of the AI Action Summit in Paris.
    See the full article from this episode: https://danfaggella.com/tegmark1
    Listen to the full podcast episode: https://youtu.be/yQ2fDEQ4Ol0
    This episode referred to the following other essays and resources:
    -- Max's A.G.I. Framework / "Keep the Future Human" - https://keepthefuturehuman.ai/
    -- AI Safety Connect - https://aisafetyconnect.com
    There are three main questions we cover here on The Trajectory:
    1. Who are the power players in AGI and what are their incentives?
    2. What kind of posthuman future are we moving towards, or should we be moving towards?
    3. What should we do about it?
    If this sounds like it's up your alley, then be sure to stick around and connect:
    -- Blog: danfaggella.com/trajectory
    -- X: x.com/danfaggella
    -- LinkedIn: linkedin.com/in/danfaggella
    -- Newsletter: bit.ly/TrajectoryTw
    -- YouTube: https://www.youtube.com/@trajectoryai
    --------  
    26:06
  • Michael Levin - Unfolding New Paradigms of Posthuman Intelligence [Worthy Successor, Episode 7]
    This is an interview with Dr. Michael Levin, a pioneering developmental biologist at Tufts University. This is an additional installment of our "Worthy Successor" series, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.
    Listen to this episode on The Trajectory YouTube channel: https://www.youtube.com/watch?v=DmKafur28S8
    See the full article from this episode: https://danfaggella.com/levin1
    There are three main questions we cover here on The Trajectory:
    1. Who are the power players in AGI and what are their incentives?
    2. What kind of posthuman future are we moving towards, or should we be moving towards?
    3. What should we do about it?
    If this sounds like it's up your alley, then be sure to stick around and connect:
    -- Blog: danfaggella.com/trajectory
    -- X: x.com/danfaggella
    -- LinkedIn: linkedin.com/in/danfaggella
    -- Newsletter: bit.ly/TrajectoryTw
    -- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
    --------  
    1:16:35
  • Eliezer Yudkowsky - Human Augmentation as a Safer AGI Pathway [AGI Governance, Episode 6]
    This is an interview with Eliezer Yudkowsky, AI researcher at the Machine Intelligence Research Institute. This is the sixth installment of our "AGI Governance" series, where we explore the means, objectives, and implementation of governance structures for artificial general intelligence.
    Watch this episode on The Trajectory YouTube channel: https://www.youtube.com/watch?v=YlsvQO0zDiE
    See the full article from this episode: https://danfaggella.com/yudkowsky1
    The four main questions we cover in this AGI Governance series are:
    1. How important is AGI governance now on a 1-10 scale?
    2. What should AGI governance attempt to do?
    3. What might AGI governance look like in practice?
    4. What should innovators and regulators do now?
    If this sounds like it's up your alley, then be sure to stick around and connect:
    -- Blog: https://danfaggella.com/trajectory
    -- X: https://x.com/danfaggella
    -- LinkedIn: https://linkedin.com/in/danfaggella
    -- Newsletter: https://bit.ly/TrajectoryTw
    -- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
    --------  
    1:14:45

More Technology podcasts

Over The Trajectory

What should be the trajectory of intelligence beyond humanity? The Trajectory covers realpolitik on artificial general intelligence and the posthuman transition, asking tech, policy, and AI research leaders the hard questions about what's after man, and how we should define and create a worthy successor (danfaggella.com/worthy). Hosted by Daniel Faggella.
Podcast website

Listen to The Trajectory, How I AI, and many other podcasts from around the world with the radio.net app

Get the free radio.net app

  • Stations and podcasts to bookmark
  • Stream via Wi-Fi or Bluetooth
  • Supports CarPlay & Android Auto
  • Many other app features

The Trajectory: Podcasts in the family
