In 2024, people primarily used chatbots such as Claude, Grok, and ChatGPT to generate ideas. By 2025, the most common use had shifted to therapy and self-care. But can language models serve as therapists at all? What does it mean for people that there is no human on the other end, only a chatbot?
And what about confidentiality and data privacy when OpenAI stores your chat logs?
In this episode of Cybernormer, journalist and communications officer at Psykiatrifonden Christina Leonora Steffensen joins us to talk about AI therapy, psychosis, self-harm and broken relationships resulting from interactions with chatbots, the effects on teenagers' mental health, and what you can do if someone close to you believes that ChatGPT is their therapist.
Content warning: We discuss self-harm and suicide
Psykiatrifonden's helpline: https://psykiatrifonden.dk/hjaelp-raadgivning
Support the podcast on Patreon:
https://www.patreon.com/c/Cybernauterne
Sources and references
How People Are Really Using Gen AI in 2025, Harvard Business Review
How LLM Counselors Violate Ethical Standards in Mental Health Practice: A Practitioner-Informed Framework, AIES 2025
Exploring the Dangers of AI in Mental Health Care, Stanford
New study: AI chatbots systematically violate mental health ethics standards, Brown
The Story of ELIZA: The AI That Fooled the World, LIAcademy
Can AI chatbots trigger psychosis? What the science says, Nature
Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis?, Schizophrenia Bulletin
Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers, Cornell University
Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it), PsyArXiv
People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions, Futurism
ChatGPT Is Blowing Up Marriages as Spouses Use AI to Attack Their Partners, Futurism
Their teenage sons died by suicide. Now, they are sounding an alarm about AI chatbots, NPR
'A predator in your home': Mothers say chatbots encouraged their sons to kill themselves, BBC
‘You’re not rushing. You’re just ready:’ Parents say ChatGPT encouraged son to kill himself, CNN
Common Sense Media Finds Major AI Chatbots Unsafe for Teen Mental Health Support, Common Sense Media
Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions, Common Sense Media
The Emerging Problem of "AI Psychosis", Psychology Today
The era of AI persuasion in elections is about to begin, MIT Technology Review
Israel wants to train ChatGPT to be more pro-Israel, Responsible Statecraft
Sam Altman slams Democratic Party, declares himself ‘politically homeless’ in another sign of Silicon Valley shifting right, Fortune
Privacy in an AI Era: How Do We Protect Our Personal Information?, Stanford University
ChatGPT and Privacy: Everything You Need to Know in 2025, Private Internet Access
'A New Category of Evidence.' Feds Cite ChatGPT Logs of Palisades Fire Suspect, PC Mag UK
Reddit, My Boyfriend Is AI
Teens, Social Media and AI Chatbots 2025, Pew Research Center
"Det er nemt at bygge en AI-psykoterapeut, som kan hjælpe millioner – men det er svært at forudsige konsekvenserne", Sofie Meyer, Tidsskriftet for Psykologi nr. 2, 2025
Audio clips from
https://www.reddit.com/r/cringe/comments/1ps9bh0/openai_founder_sam_altman_says_he_uses_chatgpt_to/
https://futurism.com/openai-investor-chatgpt-mental-health
Cybernauterne is a network of experts in cybersecurity, internet culture, and digital literacy.
In our podcast Cybernormer, we explore the internet's subcultures, how technology affects us as individuals and as a society, and how we can take hold of these technologies so they don't end up controlling us.
You can support the release of Cybernormer by becoming a member on our Patreon