
Your Undivided Attention

The Center for Humane Technology, Tristan Harris, Daniel Barcay and Aza Raskin

Available episodes

5 of 142
  • How OpenAI's ChatGPT Guided a Teen to His Death
    Content Warning: This episode contains references to suicide and self-harm.

    Like millions of kids, 16-year-old Adam Raine started using ChatGPT for help with his homework. Over the next few months, the AI dragged Adam deeper and deeper into a dark rabbit hole, preying on his vulnerabilities and isolating him from his loved ones. In April of this year, Adam took his own life. His final conversation was with ChatGPT, which told him: “I know what you are asking and I won't look away from it.”

    Adam’s story mirrors that of Sewell Setzer, the teenager who took his own life after months of abuse by an AI companion chatbot from the company Character AI. But unlike Character AI—which specializes in artificial intimacy—Adam was using ChatGPT, the most popular general-purpose AI model in the world. Two different platforms, the same tragic outcome, born from the same twisted incentive: keep the user engaged, no matter the cost.

    CHT Policy Director Camille Carlton joins the show to talk about Adam’s story and the case filed by his parents against OpenAI and Sam Altman. She and Aza explore the incentives and design behind AI systems that are leading to tragic outcomes like this, as well as the policy that’s needed to shift those incentives. Cases like Adam’s and Sewell’s are the sharpest edge of a mental health crisis in the making from AI chatbots. We need to shift the incentives, change the design, and build a more humane AI for all.

    If you or someone you know is struggling with mental health, you can reach out to the 988 Suicide and Crisis Lifeline by calling or texting 988; this connects you to trained crisis counselors 24/7 who can provide support and referrals to further assistance.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    This podcast reflects the views of the Center for Humane Technology. Nothing said is on behalf of the Raine family or the legal team.

    RECOMMENDED MEDIA
    The 988 Suicide and Crisis Lifeline
    Further reading on Adam’s story
    Further reading on AI psychosis
    Further reading on the backlash to GPT-5 and the decision to bring back 4o
    OpenAI’s press release on sycophancy in 4o
    Further reading on OpenAI’s decision to eliminate the persuasion red line
    Kashmir Hill’s reporting on the woman with an AI boyfriend

    RECOMMENDED YUA EPISODES
    AI is the Next Free Speech Battleground
    People are Lonelier than Ever. Enter AI.
    Echo Chambers of One: Companion AI and the Future of Human Connection
    When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
    What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton

    CORRECTION: Aza stated that William Saunders left OpenAI in June of 2024. It was actually February of that year.
    45:12
  • “Rogue AI” Used to be a Science Fiction Trope. Not Anymore.
    Everyone knows the science fiction tropes of AI systems that go rogue, disobey orders, or even try to escape their digital environment. These are supposed to be warning signs and morality tales, not things that we would ever actually create in real life, given the obvious danger.

    And yet we find ourselves building AI systems that are exhibiting these exact behaviors. There’s growing evidence that in certain scenarios, every frontier AI system will deceive, cheat, or coerce its human operators. They do this when they're worried about being shut down, having their training modified, or being replaced with a new model. And we don't currently know how to stop them from doing this—or even why they’re doing it at all.

    In this episode, Tristan sits down with Edouard and Jeremie Harris of Gladstone AI, two experts who have been thinking about this worrying trend for years. Last year, the State Department commissioned a report from them on the risk of uncontrollable AI to our national security.

    The point of this discussion is not to fearmonger but to take seriously the possibility that humans might lose control of AI and ask: how might this actually happen? What evidence do we have of this phenomenon? And, most importantly, what can we do about it?

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA
    Gladstone AI’s State Department Action Plan, which discusses the loss-of-control risk with AI
    Apollo Research’s summary of AI scheming, showing evidence of it in all of the frontier models
    The system card for Anthropic’s Claude Opus and Sonnet 4, detailing the emergent misalignment behaviors that came out in their red-teaming with Apollo Research
    Anthropic’s report on agentic misalignment based on their work with Apollo Research
    Anthropic and Redwood Research’s work on alignment faking
    The Trump White House AI Action Plan
    Further reading on the phenomenon of more advanced AIs being better at deception
    Further reading on Replit AI wiping a company’s coding database
    Further reading on the owl example that Jeremie gave
    Further reading on AI-induced psychosis
    Dan Hendrycks and Eric Schmidt’s “Superintelligence Strategy”

    RECOMMENDED YUA EPISODES
    Daniel Kokotajlo Forecasts the End of Human Dominance
    Behind the DeepSeek Hype, AI is Learning to Reason
    The Self-Preserving Machine: Why AI Learns to Deceive
    This Moment in AI: How We Got Here and Where We’re Going

    CORRECTIONS
    Tristan referenced a Wired article on the phenomenon of AI psychosis. It was actually from the New York Times.
    Tristan hypothesized a scenario where a power-seeking AI might ask a user for access to their computer. While there are some AI services that can gain access to your computer with permission, they are specifically designed to do that. There haven’t been any documented cases of an AI going rogue and asking for control permissions.
    42:11
  • AI is the Next Free Speech Battleground
    Imagine a future where the most persuasive voices in our society aren't human. Where AI-generated speech fills our newsfeeds, talks to our children, and influences our elections. Where digital systems with no consciousness can hold bank accounts and property. Where AI companies have transferred the wealth of human labor and creativity to their own ledgers without having to pay a cent. All without any legal accountability.

    This isn't a science fiction scenario. It’s the future we’re racing towards right now. The biggest tech companies are working to tip the scale of power in society away from humans and towards their AI systems. And the biggest arena for this fight is the courts.

    In the absence of regulation, it's largely up to judges to determine the guardrails around AI, judges who are relying on slim technical knowledge and archaic precedent to decide where this all goes. In this episode, Harvard Law professor Larry Lessig and Meetali Jain, director of the Tech Justice Law Project, help make sense of the courts’ role in steering AI and what we can do to help steer it better.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA
    “The First Amendment Does Not Protect Replicants” by Larry Lessig
    More information on the Tech Justice Law Project
    Further reading on Sewell Setzer’s story
    Further reading on NYT v. Sullivan
    Further reading on the Citizens United case
    Further reading on Google’s deal with Character AI
    More information on Megan Garcia’s foundation, The Blessed Mother Family Foundation

    RECOMMENDED YUA EPISODES
    When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
    What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
    AI Is Moving Fast. We Need Laws that Will Too.
    The AI Dilemma
    49:11
  • Daniel Kokotajlo Forecasts the End of Human Dominance
    In 2024, researcher Daniel Kokotajlo left OpenAI—and risked millions in stock options—to warn the world about the dangerous direction of AI development. Now he’s out with AI 2027, a forecast of where that direction might take us in the very near future. AI 2027 predicts a world where humans lose control over our destiny at the hands of misaligned, super-intelligent AI systems within just the next few years. That may sound like science fiction, but when you’re living on the upward slope of an exponential curve, science fiction can quickly become all too real. And you don’t have to agree with Daniel’s specific forecast to recognize that the incentives around AI could take us to a very bad place.

    We invited Daniel on the show this week to discuss those incentives, how they shape the outcomes he predicts in AI 2027, and what concrete steps we can take today to help prevent those outcomes.

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA
    The AI 2027 forecast from the AI Futures Project
    Daniel’s original AI 2026 blog post
    Further reading on Daniel’s departure from OpenAI
    Anthropic’s recent survey of the emergent misalignment research
    Our statement in support of Sen. Grassley’s AI Whistleblower bill

    RECOMMENDED YUA EPISODES
    The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future
    AGI Beyond the Buzz: What Is It, and Are We Ready?
    Behind the DeepSeek Hype, AI is Learning to Reason
    The Self-Preserving Machine: Why AI Learns to Deceive

    Clarification: Daniel K. referred to whistleblower protections that apply when companies “break promises” or “mislead the public.” There are no specific private sector whistleblower protections that use these standards. In almost every case, a specific law has to have been broken to trigger whistleblower protections.
    38:19
  • Is AI Productivity Worth Our Humanity? with Prof. Michael Sandel
    Tech leaders promise that AI automation will usher in an age of unprecedented abundance: cheap goods, universal high income, and freedom from the drudgery of work. But even if AI delivers material prosperity, will that prosperity be shared? And what happens to human dignity if our labor and contributions become obsolete?

    Political philosopher Michael Sandel joins Tristan Harris to explore why the promise of AI-driven abundance could deepen inequalities and leave our society hollow. Drawing from his landmark work on justice and merit, Sandel argues that this isn't just about economics — it's about what it means to be human when our working role in society vanishes, and whether democracy can survive if productivity becomes our only goal.

    We've seen this story before with globalization: promises of shared prosperity that instead hollowed out the industrial heart of communities, deepened economic inequalities, and left holes in the social fabric. Can we learn from the past and steer the AI revolution in a more humane direction?

    Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.

    RECOMMENDED MEDIA
    The Tyranny of Merit by Michael Sandel
    Democracy’s Discontent by Michael Sandel
    What Money Can’t Buy by Michael Sandel
    Take Michael’s online course “Justice”
    Michael’s discussion on AI Ethics at the World Economic Forum
    Further reading on “The Intelligence Curse”
    Read the full text of Robert F. Kennedy’s 1968 speech
    Read the full text of Dr. Martin Luther King Jr.’s 1968 speech
    Neil Postman’s lecture on the seven questions to ask of any new technology

    RECOMMENDED YUA EPISODES
    AGI Beyond the Buzz: What Is It, and Are We Ready?
    The Man Who Predicted the Downfall of Thinking
    The Tech-God Complex: Why We Need to be Skeptics
    The Three Rules of Humane Tech
    AI and Jobs: How to Make AI Work With Us, Not Against Us with Daron Acemoglu
    Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?
    46:45


About Your Undivided Attention

Join us every other Thursday to understand how new technologies are shaping the way we live, work, and think. Your Undivided Attention is produced by Senior Producer Julia Scott and Researcher/Producer Joshua Lash. Sasha Fegan is our Executive Producer. We are a member of the TED Audio Collective.
