
Design of AI | Build Products that Customers & Businesses Value

Latest episode

52 episodes

  • 52. Claude Bot & Moltbook: When Demos Hijack Reality [Jim Love]

    10-2-2026 | 43 Min.
    Viral agent demos are training product teams to trust spectacle instead of outcomes—and that’s how unsafe automation slips into real workflows. In this episode we welcome Jim Love, one of the most respected voices in technology news, to unpack what “Claude Bot / open claw” and Moltbook-style experiments actually prove, what they exaggerate, and why the hardest problems aren’t capability—they’re control, security, and measurement.

    In this episode we cover:
    Why viral demos distort reality: Hype spotlights novelty, not reliability—so teams miss what breaks when the demo meets real users.

    Local agents raise risk fast: Local access turns assistants into operators—writing, deleting, impersonating, and expanding blast radius.

    “It learns” is overstated: Many stacks “learn” by saving state—easy to inspect, steal, poison, and manipulate.

    Emergence isn’t intelligence: Weird behaviors can emerge at scale without intent—don’t mistake patterns for agency or judgment.

    Outcomes > inputs, always: Great teams define success, measure impact, and kill distractions—even when the tech looks magical.

    You’ll leave with a sharper lens for evaluating agent stacks before they create collateral damage you can’t see or stop.
    Jim Love has spent more than 40 years in technology, working globally as a consultant, leading an international consulting practice, serving as a CIO, and building his own consulting company. He was also CIO and head of content at the iconic publication IT World Canada.
    Today he runs a new publication, Tech Newsday, and hosts two widely followed technology podcasts, Cybersecurity Today and Hashtag Trending. He continues to advise a select group of companies, mostly startups navigating AI.
    Jim is the author of both fiction and non-fiction, including Digital Transformation in the First Person.
    His latest novel, Elisa: A Tale of Quantum Kisses, explores a near-future shaped by artificial intelligence and became an Audible bestseller shortly after release.
    Tech Newsday — Jim Love’s publication covering tech, AI, and security.
    https://technewsday.com/
    Hashtag Trending — the podcast feed for fast tech headlines + commentary.
    https://technewsday.com/podcasts_categories/hashtag-trending/
    Elisa: A Tale of Quantum Kisses — Jim’s near-future AI novel (Amazon listing).
    https://www.amazon.com/Elisa-Quantum-Kisses-Jim-Love/dp/B0DPFZMDGZ

    If this episode helped, follow/subscribe so you don’t miss what’s next. And if you’re listening on Apple Podcasts or Spotify, leave a rating and a review—it’s the simplest way to help more product teams find the show.

    Get the ideas, frameworks, and episode takeaways as a written brief—subscribe to the Design of AI Substack.

    PH1 Research helps product teams improve digital experiences in the AI era—across strategy, benchmarking, and UX evaluations—so you can measure what matters, reduce impact blindness, and ship systems customers actually trust and adopt. Learn more at https://www.ph1.ca/.
  • 51. Agents Will Disrupt Search & Shopping [Devi Parikh, CEO Yutori, ex-Meta]

    02-2-2026 | 42 Min.
    While the world is obsessed with the Moltbot/Clawdbot AI agent, founders like Devi Parikh are laying the foundation for how agents will transform search and shopping—agents that monitor, negotiate, and navigate on behalf of users, securely.
    Search is becoming proactive. Shopping is becoming delegated. And the next interface won’t be a results page—it’ll be agents running quietly in the background, surfacing what matters when it matters.
    How agents turn search into continuous monitoring

    Why shopping shifts from browsing to delegation

    Where value shows up first in real workflows

    What trust requires before agents can transact

    The path from alerts → actions → autonomy

    In this episode, Devi breaks down how Scouts reframes search as “future-facing discovery”: track price drops, in-stock alerts, sales leads, funding news, flights, and local events—then get notified the moment conditions change.
    We also explore what comes next: moving from monitoring to task completion—where agents can execute purchases and bookings with explicit confirmations, hard guardrails, and a deliberate “trust staircase” designed to prevent surprises.

    If you enjoyed this episode, follow the podcast and leave a rating + review—it helps more builders find the show.

    Subscribe to the Design of AI Substack for in-depth AI product strategy resources, operator-grade analysis, and frameworks on what makes AI products succeed (and why they fail).

    This episode is brought to you by PH1 Research—a strategy + research partner for product leaders shipping AI-enabled experiences. We help teams define success metrics that actually matter, validate value before scaling, and reduce trust and adoption risk through AI strategy, UX evaluation, and evidence-driven product decisions.

    Devi Parikh is the co-founder and co-CEO of Yutori, and was previously a Senior Director in Generative AI at Meta and an Associate Professor at Georgia Tech. Her research focuses on human–AI collaboration, generative AI, multimodal AI, and AI for creativity. She holds a Ph.D. from Carnegie Mellon University and has received recognitions including the PAMI Mark Everingham Prize.
    Try Scouts: https://scouts.yutori.com/

    Blog: The Bitter Lesson for Web Agents: https://yutori.com/blog/the-bitter-lesson-for-web-agents
  • 50. Designing AI for 2026: Trust, Cost, Orchestration [Yaddy Arroyo]

    20-1-2026 | 44 Min.
    2026 will reward AI products that get three things right: trust, cost, and orchestration. This episode looks ahead at how those forces are reshaping AI product strategy—and what teams need to pay attention to now.
    Brittany and Arpy are joined by Yaddy Arroyo, who has spent a decade designing multimodal AI systems in financial services, where reliability and governance are table stakes. She has also been a key community builder among designers who lead AI organizations.
    Together, they reflect on what the last two years of AI adoption revealed and how those lessons are directly informing decisions teams are making in 2026.

    Why trust now shapes AI product success
    Orchestration matters more than prompting
    Token costs quietly reshape UX decisions
    When small models outperform large ones
    How AI design roles must evolve in 2026

    Episode chapters
    01:21 Reflecting on Two Years of AI Adoption
    02:52 The Rise of Copilot and AI's Impact on Creativity
    03:37 Challenges and Concerns with AI Safety
    04:24 Designing AI for Human-Centric Use Cases
    04:53 Meta's Investment and Intelligence as a Service
    09:25 Hallucinations and the Reliability of LLMs
    11:14 The Business Value and Limitations of Gen AI
    18:55 Founders and the Rush to Monetize AI
    19:25 Token Optimization and UX Challenges
    21:31 Personalizing AI Interactions
    21:48 Challenges in AI Adoption
    22:27 PH1's AI Solutions
    22:53 The Orchestration Problem
    24:22 AI's Role in Everyday Tasks
    26:08 AI in UX and Design
    27:55 Future of AI and Small Language Models
    30:35 Human in the Loop and UI Generators
    37:35 Accountability and AI's Future
    42:39 Closing Thoughts and Future Directions

    The conversation connects early generative AI optimism with today’s realities—probabilistic systems, rising costs, and scaling pressure—and surfaces where momentum is building, from smaller models to on-device intelligence.
    This episode also marks Episode 50 of Design of AI and two years of conversations with builders, researchers, and leaders shaping AI-powered products—follow the podcast to stay ahead as this next phase unfolds.
    About PH1
    The Design of AI podcast is brought to you by PH1, an AI strategy consultancy. PH1 has worked with the biggest corporations in tech to redefine CX in the era of AI through strategic research, prototyping, and aligning product to power. Visit ph1.ca to ask about your project.

    Go Deeper
    For deeper, unfiltered thinking on AI strategy, governance, and product decisions, our Substack (https://designofai.substack.com) is the best place to follow our work. It’s where we go beyond the episodes—breaking down what’s actually changing, what’s overhyped, and what leaders should do next.

    Connect with the Hosts
    Contact Arpy if you’re navigating AI product strategy, platform architecture, orchestration, or high-stakes system decisions that need to scale.

    Contact Brittany if you need clarity on AI UX, research, service design, or evaluating whether an AI product is actually delivering value for users.
  • 49. AI Was Supposed to Help Humans. What Happened? [Ovetta Sampson]

    02-1-2026 | 48 Min.
    If you’re building your product on private large language models, you are outsourcing control of your business—your data, your roadmap, and your long‑term defensibility—to companies whose incentives do not align with yours.
    Ovetta Sampson is a tech industry leader who has spent more than a decade leading engineers, designers, and researchers across some of the most influential organizations in technology, including Google, Microsoft, IDEO, and Capital One. She has designed and delivered machine learning, artificial intelligence, and enterprise software systems across multiple industries, and in 2023 was named one of Business Insider’s Top 15 People in Enterprise Artificial Intelligence.
    In 2025, Ovetta left her role as Director of AI and Compute Enablement at Google to found Right AI, a consultancy focused on helping organizations minimize the human, organizational, and strategic risks of building and deploying AI.
    In this episode you'll learn about:
    Why LLM‑first architectures undermine control and defensibility

    How enterprise data is unintentionally exposed and reused

    Where “responsible AI” breaks down in practice

    When generative AI is the wrong tool

    What safer, controllable AI systems look like instead

    If this episode challenged how you’re thinking about AI, make sure you’re following Design of AI wherever you listen to podcasts. Rating and reviewing the show helps more founders, product leaders, and designers find these conversations.
    For deeper, unfiltered thinking on AI strategy, governance, and product decisions, our Substack (https://designofai.substack.com) is the best place to follow our work. It’s where we go beyond the episodes—breaking down what’s actually changing, what’s overhyped, and what leaders should do next.
    Ovetta’s work focuses on helping leaders, designers, and organizations reduce human and systemic risk in AI—without defaulting to hype-driven architectures or opaque models.
    Follow Ovetta on LinkedIn: https://www.linkedin.com/in/ovettasampson/

    About Ovetta & her work: https://www.ovetta-sampson.com/

    Join her mailing list: https://www.ovetta-sampson.com/mailing-list-qr-code

    Right AI (consulting & advisory): https://www.rightainow.com/

    Free Mindful AI Playbook (QR Code): https://docs.google.com/presentation/d/1Tzsr25r4o0g0Szz4oOSnUvrrrxAuXfhpqcB08KzdTyA/edit?usp=sharing

    This is episode 49 and was hosted by Arpy Dragffy Guerrero. Follow him on LinkedIn: https://www.linkedin.com/in/adragffy/
    The Design of AI podcast is brought to you by PH1, an AI strategy consultancy. PH1 has worked with the biggest corporations in tech to redefine CX in the era of AI through strategic research, prototyping, and aligning product to power.
  • 48. AI Trap: Hard Truths About the Job Market

    15-12-2025 | 30 Min.
    2025 is almost over, and it’s time to stop pretending everything is fine.
    If you work in design, writing, product, research, or agencies, you’ve felt it: fewer jobs, lower rates, shrinking teams—and an industry telling you AI is here to free you while quietly replacing you.
    In AI Trap, Episode 48, we break down the biggest myths we’ve been sold:
    AI will free creatives to do more meaningful work

    AI will create more jobs than it destroys

    AI will make us smarter and more creative

    Some of these are partially true. That’s what makes them dangerous.
    We look at real data, real job market signals, and what’s already happening inside agencies and tech companies. We talk about why creativity is being commoditized, why value is collapsing for most creatives, and the line too many people are crossing: outsourcing their thinking instead of outsourcing their work.
    ---
    Please help us: we’re running a short survey alongside this episode. If you work in a creative or knowledge role, your input is critical. It takes about three minutes, and it helps us separate hype from reality. https://tally.so/r/Y5D2Q5
    ---
    This is episode 48 of the Design of AI podcast.
    If you found this conversation valuable, please rate and share the show — your support shapes what we explore next.
    For more AI strategy, creative research, and product insight, subscribe to designofai.substack.com

    Hosted by Arpy Dragffy Guerrero & Brittany Hobbs

    ---
    Most AI projects fail—not because the technology is weak, but because they’re not designed to deliver real customer value.
    PH1 Research helps organizations reimagine their customer experience with AI. We pinpoint what customers actually need, prototype and test solutions, and audit AI products before they ship.
    We’ve worked with teams at Microsoft, Spotify, and fast-growing startups. Learn more at ph1.ca, or reach out directly to our host, Arpy Dragffy.


About Design of AI | Build Products that Customers & Businesses Value

We provide a pragmatic and practical deep dive into what AI can do and how it is transforming industries. We help designers, researchers, and product managers excel in a rapidly changing future.
Hosted by Arpy Dragffy Guerrero (https://www.linkedin.com/in/adragffy/) and Brittany Hobbs (https://www.linkedin.com/in/brittanyhobbs/).
Make sure to subscribe to our Substack to never miss an episode and receive more strategic insights and news: https://designofai.substack.com/
Brought to you by PH1 (https://ph1.ca), a strategy consultancy specialized in improving the success of your AI product.