
Product Impact Podcast | Formerly Design of AI

Presented by PH1
Latest episode

56 episodes


3. Win The AI Context Wars — Unlock The Value of Data [Juan Sequeda]

    12-03-2026 | 52 Min.
    Benchmark wars are over. Claude Code just proved it — the AI products winning right now aren't the ones with the best models, they're the ones that know their customers best. Context is the new moat.

    Juan Sequeda has spent 20 years solving the problem most product teams don't even know they have: your AI is only as powerful as your business's ability to predict what a customer wants to do and why. That intelligence isn't in the model — it's buried in your data. And the secret to unlocking it isn't writing better skills files or crafting smarter prompts. It's re-architecting how your business knowledge is structured, connected, and made available to AI. Juan shows you exactly how.

    Product teams who've made this move are seeing accuracy improvements of over 50%, and every new use case they ship compounds on the last. In this episode of the Product Impact Podcast, Juan introduces his three-layer knowledge framework — business metadata, technical metadata, and the mapping layer that connects them — and shows how this foundation transforms what your AI can deliver. You'll leave with a clear starting point, a way to tie your AI investment directly to business outcomes, and a mental model for how the best product teams are pulling ahead.

    In this episode you'll learn:
    ➡️ Why context — not model quality — is now the primary driver of AI product performance
    ➡️ The three-layer knowledge framework that gives AI a shared language across your entire organization
    ➡️ Three concrete first steps to build your context foundation starting tomorrow
    ➡️ How to tie every AI initiative directly to your company's top OKRs and earn lasting executive buy-in
    ➡️ Why knowledge-first teams compound their advantage — each new use case gets faster and more powerful
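The three-layer framework described above (business metadata, technical metadata, and a mapping layer connecting them) can be sketched in miniature. Everything below is illustrative: the concept names, tables, and columns are invented for the example, and the episode does not prescribe any particular schema.

```python
# Hypothetical sketch of a three-layer knowledge framework.
# All names here are illustrative, not taken from the episode.

# Layer 1: business metadata, concepts in the language of the business.
business_metadata = {
    "Customer": "A person or company that has purchased a product",
    "Churn": "A customer who has not renewed within 30 days of contract end",
}

# Layer 2: technical metadata, where the data physically lives.
technical_metadata = {
    "crm.accounts": ["account_id", "name", "contract_end_date"],
    "billing.renewals": ["account_id", "renewed_at"],
}

# Layer 3: the mapping layer, which connects business concepts to technical
# assets, giving an AI system a shared vocabulary grounded in real tables.
mapping_layer = {
    "Customer": ("crm.accounts", "account_id"),
    "Churn": ("billing.renewals", "renewed_at"),
}

def ground_concept(concept: str) -> str:
    """Resolve a business concept to the table/column that backs it."""
    table, column = mapping_layer[concept]
    return f"{concept} -> {table}.{column}"

print(ground_concept("Churn"))  # Churn -> billing.renewals.renewed_at
```

The point of the sketch is the separation of concerns: business definitions and physical storage evolve independently, and only the mapping layer changes when one of them does.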

    Thank you for listening to the Product Impact Podcast (formerly Design of AI) — Prove impact. Improve impact. Scale impact.

    Go to productimpactpod.com to rate the impact of AI products you use at work.

    Hosted by:
Arpy Dragffy Guerrero — https://www.linkedin.com/in/adragffy/

Brittany Hobbs — https://www.linkedin.com/in/brittanyhobbs/

    Support the show: subscribe, share this episode with a product leader, and leave a rating/review—it’s how this podcast reaches the teams building what comes next.
Subscribe for frameworks + AI strategy resources: https://productimpactpod.substack.com
Brought to you by PH1 (https://ph1.ca) — an AI strategy consultancy specialized in improving the measurable success of AI products.

    About our guest
    Juan Sequeda is Principal Scientist and Head of the AI Lab at data.world, now part of ServiceNow. He has spent 20 years at the frontier of knowledge graphs, ontologies, and semantic architecture — focused on one question: how do you give AI a genuine understanding of your business so it can deliver answers you can actually trust?
    His lab's research proved that pairing knowledge graphs with LLMs improves enterprise question-answering accuracy by over 50% — findings that helped define the industry's thinking on context and AI reliability. He co-founded Capsenta (acquired by data.world), coined the concept of "context wars," and recently published his landmark LinkedIn series: "20 Lessons from 20 Years of Building Ontologies and Knowledge Graphs."
    He also co-hosts Catalog & Cocktails, one of the most respected podcasts in the data community, and publishes regularly on LinkedIn and Substack.

    Resources
    ➡️ Juan Sequeda on LinkedIn: https://www.linkedin.com/in/juansequeda
    ➡️ Catalog & Cocktails Podcast: https://data.world/podcasts/catalog-and-cocktails
    ➡️ Juan's Substack: https://juansequeda.substack.com
    ➡️ "20 Lessons from 20 Years of Building Ontologies and Knowledge Graphs" — https://www.linkedin.com/posts/juansequeda_i-finished-posting-my-20-lessons-from-20-activity-7429147437681864704-C7ki/
    ➡️ Software Wasteland — Dave McComb
    ➡️ The Data-Centric Revolution — Dave McComb

    2. Five steps to defend your AI product value

    03-03-2026 | 34 Min.
    AI is entering an abundance era: models get smarter, faster, and cheaper—so capability alone is no longer defensible. Feature cloning accelerates, pricing compresses, and many application-layer products get sampled and abandoned unless they prove measurable outcomes and earn long-term commitment.

    In this episode of the Product Impact Podcast, we break down why defensibility now matters more than capability—and what to do about it. You’ll leave with five actions to take this quarter: run a silent failure audit, map peak cost exposure, stress-test defensibility, fix the missing middle in pricing for power users, and build outcome visibility directly into the product.

In this episode you'll learn:
➡️ The dangerous economics of a capital-rich, value-poor market
➡️ How to master the unit economics of power users
➡️ Why capability is no longer defensible
➡️ Five steps to defend your product's value

In a market where everyone has access to the same models, your moat is not capability. It's customer success, trust, and measurable impact.

    Links & resources
    Read the strategy we reference: https://ph1.ca/blog/strategy-for-measuring-improving-ai-products

    Take the AI Benchmarking Survey (measure your product’s impact): https://bullseyebenchmark.fillout.com/aiproducts


    1. Why Your AI Metrics Are Lying to You - Framework for improving AI product performance

    24-02-2026 | 35 Min.
Microsoft's and OpenAI's CEOs are telling us to panic because white-collar jobs are going to be replaced by AI.

Yet there's endless evidence of the opposite: most companies that implement AI see few gains, with executives at over 80% of companies reporting no productivity gains at all.

In this episode of the Product Impact Podcast we tackle why your AI metrics are lying to you and provide a framework for improving AI product performance. We discuss why evals can't answer the most important questions about your product's impact, and the importance of calibrating your product for success by balancing key pillars.

    In this episode you’ll learn:
    - Agents hide friction from view, creating dangerous impact blindness
    - Balance power, speed, impact & joy to win in the AI era, like F1 cars
    - Success doesn’t equal satisfaction—you must measure both outcomes
    - Measure outcomes and feelings, not just activity logs and checkmarks

    Read the Strategy for Measuring & Improving AI Products we reference in the episode here: https://ph1.ca/blog/strategy-for-measuring-improving-ai-products


    Why Design of AI is becoming the Product Impact Podcast

    23-02-2026 | 16 Min.
We started the Design of AI podcast at the end of 2023, when GenAI was a black box of possibilities. Our focus was to help people working in tech navigate a time of great change and unpack how to experiment with this new technology. We've now moved into the next era of AI: scaling value. Our podcast must adapt too. Where season one focused on explaining AI and how roles will be forced to change, season two will focus on how to measure and scale impact.

    Keep up to date with the new podcast: https://productimpactpod.com

    Our focus is highly strategic and pragmatic. We want business, product, design, and research leaders who can unpack the uncomfortable truths about delivering impact at scale. We are seeking dedicated academics and thought leaders who challenge the status quo on how to measure and improve impact delivery. We want voices that challenge the belief that all impact is positive and who can provide a compass to guide the evolution of tech and industries.

    52. Clawd Bot & Moltbook: When Demos Hijack Reality [Jim Love]

    10-02-2026 | 43 Min.
Viral agent demos are training product teams to trust spectacle instead of outcomes—and that’s how unsafe automation slips into real workflows. In this episode we welcome Jim Love, one of the most respected voices in technology news, to unpack what “Claude Bot / open claw” and Moltbook-style experiments actually prove, what they exaggerate, and why the hardest problems aren’t capability—they’re control, security, and measurement.

    In this episode we cover:
    Why viral demos distort reality: Hype spotlights novelty, not reliability—so teams miss what breaks when the demo meets real users.

    Local agents raise risk fast: Local access turns assistants into operators—writing, deleting, impersonating, and expanding blast radius.

    “It learns” is overstated: Many stacks “learn” by saving state—easy to inspect, steal, poison, and manipulate.

    Emergence isn’t intelligence: Weird behaviors can emerge at scale without intent—don’t mistake patterns for agency or judgment.

    Outcomes > inputs, always: Great teams define success, measure impact, and kill distractions—even when the tech looks magical.

    You’ll leave with a sharper lens for evaluating agent stacks before they create collateral damage you can’t see or stop.
    Jim Love has spent more than 40 years in technology, working globally as a consultant, leading an international consulting practice, serving as a CIO, and building his own consulting company. He was also CIO and head of content at the iconic publication IT World Canada.
Today he runs a new publication, Tech Newsday, and hosts two widely followed technology podcasts, Cybersecurity Today and Hashtag Trending. He continues to advise a select group of companies, mostly startups navigating AI.
    Jim is the author of both fiction and non-fiction, including Digital Transformation in the First Person.
    His latest novel, Elisa: A Tale of Quantum Kisses, explores a near-future shaped by artificial intelligence and became an Audible bestseller shortly after release.
    Tech Newsday — Jim Love’s publication covering tech, AI, and security.
    https://technewsday.com/
    Hashtag Trending — the podcast feed for fast tech headlines + commentary.
    https://technewsday.com/podcasts_categories/hashtag-trending/
    Elisa: A Tale of Quantum Kisses — Jim’s near-future AI novel (Amazon listing).
    https://www.amazon.com/Elisa-Quantum-Kisses-Jim-Love/dp/B0DPFZMDGZ

    If this episode helped, follow/subscribe so you don’t miss what’s next. And if you’re listening on Apple Podcasts or Spotify, leave a rating and a review—it’s the simplest way to help more product teams find the show.

    Get the ideas, frameworks, and episode takeaways as a written brief—subscribe to the Design of AI Substack.

    PH1 Research helps product teams improve digital experiences in the AI era—across strategy, benchmarking, and UX evaluations—so you can measure what matters, reduce impact blindness, and ship systems customers actually trust and adopt. Learn more at https://www.ph1.ca/.


About Product Impact Podcast | Formerly Design of AI

Prove impact. Improve impact. Scale impact. Learn frameworks and strategies to ensure your product is delivering impact to users, teams, businesses, and communities. We investigate enterprise adoption and highlight builders/startups disrupting value creation.

Hosted by:
Arpy Dragffy Guerrero — https://www.linkedin.com/in/adragffy/
Brittany Hobbs — https://www.linkedin.com/in/brittanyhobbs/

Subscribe to https://productimpactpod.substack.com for AI strategy resources.
Brought to you by PH1 (https://ph1.ca), a strategy consultancy specialized in improving the success of your AI product.
