
Artificiality: Being with AI

Helen and Dave Edwards

Available episodes

5 of 103
  • Adam Cutler: AI, Design, and the Human Future
    In this conversation, we sit down with Adam Cutler, Distinguished Designer at IBM and pioneer in human-centered AI design, to explore how generative AI is reshaping creativity, reliance, and human experience. Adam reflects on the parallels between today’s AI moment and past technology shifts—from the rise of Web 2.0 to the early days of the internet—and why we may be living through a “mini singularity.” We discuss the risks of over-reliance, the importance of intentional design, and the opportunities for AI to augment curiosity, creativity, and community. As always, a conversation with Adam provides a thoughtful and caring view of possible futures with AI. And it's heartening to spend time with someone so central to the future of AI who consistently thinks about humans first.
    Adam will be speaking (again) at the Artificiality Summit in Bend, Oregon on Oct 23-25, 2025. More info: https://artificialityinstitute.org/summit
    --------  
    43:27
  • Joscha Bach at the Artificiality Summit 2024
    In this episode, we bring you a lecture from the Artificiality Summit in October 2024 given by Joscha Bach. Joscha is a cognitive scientist, AI researcher, and philosopher known for his work on cognitive architectures, artificial intelligence, mental representation, emotion, social modeling, multi-agent systems, and philosophy of mind. His research aims to bridge cognitive science and AI by studying how human intelligence and consciousness can be modeled computationally.
    In his lecture, Joscha explores the nature of intelligence, consciousness, and reality. Drawing from philosophy, neuroscience, and artificial intelligence, Joscha examines how minds emerge, how consciousness functions as the “conductor” of our mental orchestra, and why software and self-organization may hold the key to understanding life itself. He also reflects on animism, the possibility of machine consciousness, and the cultural meaning of large language models. A provocative talk that blends science, philosophy, and speculation on the future of minds—both human and artificial.
    --------  
    26:00
  • Christine Rosen: The Extinction of Experience
    In this conversation, we explore the shifts in human experience with Christine Rosen, senior fellow at the American Enterprise Institute and author of "The Extinction of Experience: Being Human in a Disembodied World." As a member of the "hybrid generation" of Gen X, Christine (like us) brings the perspective of having lived through the transition from an analog to a digital world and witnessed firsthand what we've gained and lost in the process.
    Christine frames our current moment through the lens of what naturalist Robert Michael Pyle called "the extinction of experience"—the idea that when something disappears from our environment, subsequent generations don't even know to mourn its absence. Drawing on over 20 years of studying technology's impact on human behavior, she argues that we're experiencing a mass migration from direct to mediated experience, often without recognizing the qualitative differences between them.
    Key themes we explore:
      • The Archaeology of Lost Skills: How the abandonment of handwriting reveals the broader pattern of discarding embodied cognition—the physical practices that shape how we think, remember, and process the world around us
      • Mediation as Default: Why our increasing reliance on screens to understand experience is fundamentally different from direct engagement, and how this shift affects our ability to read emotions, tolerate friction, and navigate uncomfortable social situations
      • The Machine Logic of Relationships: How technology companies treat our emotions "like the law used to treat wives as property"—as something to be controlled, optimized, and made efficient rather than experienced in their full complexity
      • Embodied Resistance: Why skills like cursive handwriting, face-to-face conversation, and the ability to sit with uncomfortable emotions aren't nostalgic indulgences but essential human capacities that require active preservation
      • The Keyboard Metaphor: How our technological interfaces—with their control buttons, delete keys, and escape commands—are reshaping our expectations for human relationships and emotional experiences
    Christine challenges the Silicon Valley orthodoxy that frames every technological advancement as inevitable progress, instead advocating for what she calls "defending the human." This isn't a Luddite rejection of technology but a call for conscious choice about what we preserve, what we abandon, and what we allow machines to optimize out of existence.
    The conversation reveals how seemingly small decisions—choosing to handwrite a letter, putting phones in the center of the table during dinner, or learning to read cursive—become acts of resistance against a broader cultural shift toward treating humans as inefficient machines in need of optimization. As Christine observes, we're creating a world where the people designing our technological future live with "human nannies and human tutors and human massage therapists" while prescribing AI substitutes for everyone else.
    What emerges is both a warning and a manifesto: preserving human experience requires actively choosing friction, inefficiency, and the irreducible messiness of being embodied creatures in a physical world. Christine's work serves as an essential field guide for navigating the tension between technological capability and human flourishing—showing us how to embrace useful innovations while defending the experiences that make us most fully human.
    About Christine Rosen: Christine Rosen is a senior fellow at the American Enterprise Institute, where she focuses on the intersection of technology, culture, and society. Previously the managing editor of The New Republic and founding editor of The Hedgehog Review, she has written for The Atlantic, The New York Times, The Wall Street Journal, and numerous other publications. "The Extinction of Experience" represents over two decades of research into how digital technologies are reshaping human behavior and social relationships.
    --------  
    55:26
  • Beth Rudden: AI, Trust, and Bast AI
    Join Beth Rudden at the Artificiality Summit in Bend, Oregon—October 23-25, 2025—to imagine a meaningful life with synthetic intelligence for me, we and us. Learn more here: www.artificialityinstitute.org/summit
    In this thought-provoking conversation, we explore the intersection of archaeological thinking and artificial intelligence with Beth Rudden, former IBM Distinguished Engineer and CEO of Bast AI. Beth brings a unique interdisciplinary perspective—combining her training as an archaeologist with over 20 years of enterprise AI experience—to challenge fundamental assumptions about how we build and deploy artificial intelligence systems.
    Beth describes her work as creating "the trust layer for civilization," arguing that current AI systems reflect what Hannah Arendt called the "banality of evil"—not malicious intent, but thoughtlessness embedded at scale. As she puts it, "AI is an excavation tool, not a villain," surfacing patterns and biases that humanity has already normalized in our data and language.
    Key themes we explore:
      • Archaeological AI: How treating AI as an excavation tool reveals embedded human thoughtlessness, and why scraping random internet data fundamentally misunderstands the nature of knowledge and context
      • Ontological Scaffolding: Beth's approach to building AI systems using formal knowledge graphs and ontologies—giving AI the scaffolding to understand context rather than relying on statistical pattern matching divorced from meaning
      • Data Sovereignty in Healthcare: A detailed exploration of Bast AI's platform for explainable healthcare AI, where patients control their data and can trace every decision back to its source—from emergency logistics to clinical communication
      • The Economics of Expertise: Moving beyond the "humans as resources" paradigm to imagine economic models that compete to support and amplify human expertise rather than eliminate it
      • Embodied Knowledge and Community: Why certain forms of knowledge—surgical skill, caregiving, craftsmanship—are irreducibly embodied, and how AI should scale this expertise rather than replace it
      • Hopeful Rage: Beth's vision for reclaiming humanist spaces and community healing as essential infrastructure for navigating technological transformation
    Beth challenges the dominant narrative that AI will simply replace human workers, instead proposing systems designed to "augment and amplify human expertise." Her work at Bast AI demonstrates how explainable AI can maintain full provenance and transparency while reducing cognitive load—allowing healthcare providers to spend more time truly listening to patients rather than wrestling with bureaucratic systems.
    The conversation reveals how archaeological thinking—with its attention to context, layers of meaning, and long-term patterns—offers essential insights for building trustworthy AI systems. As Beth notes, "You can fake reading. You cannot fake swimming"—certain forms of embodied knowledge remain irreplaceable and should be the foundation for human-AI collaboration.
    About Beth Rudden: Beth Rudden is CEO and Chairwoman of Bast AI, building explainable artificial intelligence systems with full provenance and data sovereignty. A former IBM Distinguished Engineer and Chief Data Officer, she's been recognized as one of the 100 most brilliant leaders in AI Ethics. With her background spanning archaeology, cognitive science, and decades of enterprise AI development, Beth offers a grounded perspective on technology that serves human flourishing rather than replacing it.
    This interview was recorded as part of the lead-up to the Artificiality Summit 2025 (October 23-25 in Bend, Oregon), where Beth will be speaking about the future of trustworthy AI.
    --------  
    36:34
  • Steve Sloman: Information to Bits at the Artificiality Summit 2024
    At the Artificiality Summit in October 2024, Steve Sloman, professor at Brown University and author of The Knowledge Illusion and The Cost of Conviction, catalyzed a conversation about how we perceive knowledge in ourselves, others, and now in machines. What happens when our collective knowledge includes a community of machines? Steve challenged us to think about the dynamics of knowledge and understanding in an AI-driven world and about the evolving landscape of narratives, and to ask: can AI make us believe in the ways that humans make us believe? What would it take for AI to construct a compelling ideology and belief system that humans would want to follow?
    Bio: Steven Sloman has taught at Brown since 1992. He studies higher-level cognition. He is a Fellow of the Cognitive Science Society, the Society of Experimental Psychologists, the American Psychological Society, the Eastern Psychological Association, and the Psychonomic Society. Along with scientific papers and editorials, his published work includes a 2005 book, Causal Models: How We Think about the World and Its Alternatives; a 2017 book, The Knowledge Illusion: Why We Never Think Alone, co-authored with Phil Fernbach; and the forthcoming Righteousness: How Humans Decide from MIT Press. He has served as Editor-in-Chief of the journal Cognition and Chair of the Brown University faculty, and he created Brown’s concentration in Behavioral Decision Sciences.
    --------  
    34:59


About Artificiality: Being with AI

Artificiality was founded in 2019 to help people make sense of artificial intelligence. We are artificial philosophers and meta-researchers. We believe that understanding AI requires synthesizing research across disciplines: behavioral economics, cognitive science, complexity science, computer science, decision science, design, neuroscience, philosophy, and psychology. We publish essays, podcasts, and research on AI, including a Pro membership that provides leaders with advanced research, actionable intelligence, and insights for applying AI. Learn more at www.artificiality.world.
