
AI Adoption Playbook

Credal

Available episodes

5 of 5
  • Building deterministic security for multi-agent AI workflows | David Gildea (Druva)
    David Gildea has learned that traditional security models collapse when AI agents start delegating tasks to 50 or 60 other agents in enterprise workflows. As VP of Product for AI at Druva, he's building deterministic security harnesses that solve the authentication nightmare of multi-agent systems while preserving the autonomous capabilities that make AI valuable. David explains why the MCP specification gained faster enterprise adoption than A2A despite having weaker security features, telling Ravin how his team is addressing authentication gaps through integration with existing identity management systems like Okta. He shares Druva's approach to wrapping AI agents in security frameworks that require human approval for high-risk actions while learning from user behavior to reduce approval friction over time. He also covers Druva's evolution from custom RAG systems to AWS Bedrock Knowledge Bases, demonstrating how to build while knowing that components will be replaced by better solutions.
    Topics discussed:
      • Multi-agent workflow security challenges with 50+ agent delegation chains
      • MCP specification adoption advantages over A2A for enterprise authentication
      • Deterministic security harnesses wrapping non-deterministic AI agent behaviors
      • Identity management complexity when agents impersonate human users in enterprise systems
      • Human-in-the-loop scaling problems and supervisor agent solutions for authorization
      • AI-first capability layers replacing traditional API structures for agent interactions
      • Hyper-personalization learning from individual user behavior patterns over time
      • Objective-based chat interfaces eliminating traditional software navigation complexity
      • Building replaceable AI components while maintaining development velocity and learning
    Listen to more episodes: Apple | Spotify | YouTube | Website
    --------  
    33:02
  • Building AI agents that learn from feedback: BigPanda's drag-and-drop system | Alexander Page
    The fastest path to production AI isn't perfect architecture, according to Alexander Page. It's customer validation. In his former role as Principal AI Architect at BigPanda, he transformed an LLM-based prototype into "Biggy," an AI system for critical incident management. BigPanda moved beyond basic semantic search to build agentic integrations with ServiceNow and Jira, creating AI that understands organizational context and learns from incident history while helping with the entire lifecycle from detection through post-incident documentation. Alexander also gives Ravin BigPanda's framework for measuring AI agent performance when traditional accuracy metrics fall short: combine user feedback with visibility into agent decision-making, allowing operators to drag-and-drop incorrect tool calls or sequence errors. He reveals how they encode this feedback into vector databases that influence future agent behavior, creating systems that genuinely improve over time.
    Topics discussed:
      • LLM accessibility compared to traditional ML development barriers
      • Fortune 500 IT incident management across 10-30 monitoring tools
      • Building Biggy, an AI agent for incident analysis and resolution
      • Customer-driven development methodology with real data prototyping
      • Agentic integrations with ServiceNow and Jira for organizational context
      • Moving beyond semantic search to structured system queries
      • AI agent performance evaluation when accuracy is subjective
      • User feedback mechanisms for correcting agent tool calls and sequences
      • Encoding corrections into vector databases for behavior improvement
      • Sensory data requirements for human-level AI reasoning
    --------  
    31:21
  • From 14 to 14,000 patients: How UCHealth scales healthcare with AI | Richard Zane (UCHealth)
    UCHealth's healthcare AI methodology currently enables 1 nurse to monitor 14 fall-risk patients, with plans to scale to 140, then 1,400 through computer vision and predictive analytics. Instead of exhausting pilots, they deploy in phases: test, prove, optimize, then scale. This has created a system that prioritizes force multiplication of current staff rather than replacing them, enabling healthcare professionals to work at the top of their scope. Richard Zane, Chief Innovation Officer, also tells Ravin how their computational linguistics system automatically categorizes thousands of chest X-ray incidental findings into risk levels and manages closed-loop follow-up communication, ensuring critical findings don't fall through administrative cracks. Richard's three-part evaluation framework for technology partners, covering subject matter expertise, technical deep dive, and financial viability, helps them avoid the startup graveyard.
    Topics discussed:
      • UCHealth's phase deployment methodology: test, prove, optimize, scale
      • Force multiplication strategy enabling 1 nurse to monitor 14+ patients
      • Computational linguistics for automating incidental findings
      • Three-part startup evaluation: subject matter, technical, and financial assessment
      • FDA regulatory challenges with learning algorithms in healthcare AI
      • Problem-first approach versus solution-seeking in healthcare AI adoption
      • Cultural alignment and operational cadence in multi-year technology partnerships
    --------  
    31:18
  • Why AI should live in the business unit, not security: Lessons from mobile and cloud transitions | Mandy Andress (Elastic)
    At Elastic, CISO Mandy Andress learned that pragmatic guardrails work better than blanket bans for managing AI adoption across their 3,500-person distributed workforce. She enables AI tools with smart controls rather than blocking them entirely. As both a customer and a provider in the AI ecosystem, Elastic faces unique challenges in AI strategy. Mandy explains how they're applying hard-learned lessons from cloud vendor lock-in to build flexible AI systems that can switch foundation models with minimal engineering effort. She also shares why AI ownership is naturally migrating from security teams to business units as organizations mature their understanding of the technology.
    Topics discussed:
      • Elastic's dual role as vector database provider and AI customer
      • Transitioning AI ownership from security teams to business units
      • Building foundation model flexibility to avoid vendor lock-in
      • Quantifying AI business value through time auditing versus traditional ROI
      • Managing enterprise AI tool procurement floods without innovation barriers
      • Pragmatic AI guardrails versus blanket AI-blocking strategies
      • AI team organizational structures based on technical maturity
      • Focusing AI governance on access controls and API fundamentals
      • Behavioral analytics for credential-based attack detection
    --------  
    35:34
  • How Yext created AI fact sheets to standardize vendor evaluations | Rohit Parchuri (CISO at Yext)
    At Yext, evaluating every AI tool through a security-first lens has produced a comprehensive AI governance framework that protects enterprise data without stifling productivity. Rohit Parchuri, SVP & CISO, explains how they developed "AI fact sheets" for these evaluations, comparing each tool against specific business goals, data protection requirements, and existing capabilities. This process prevents tool duplication while ensuring security standards are met before deployment. But governance is just one piece of Yext's AI strategy. As a company born from AI technology, they've already built their own ML models to filter false positives from security tools, and they have direct experience with AI's data amplification risks — like how incorrect restaurant ingredient data could trigger FDA issues across all client listings. Rohit explores how enterprises can build sustainable AI programs that accelerate business outcomes while maintaining robust security controls.
    Topics discussed:
      • AI's intent recognition versus traditional RPA systems
      • Implementing "AI fact sheets" for vendor evaluation
      • Building security checkpoints
      • Balancing employee productivity with data protection
      • Managing free consumer AI tools like ChatGPT
      • Developing AI acceptable use policies
      • Replacing tier-1 analysts with AI systems
      • Creating feedback loops for vulnerability categories
      • Evaluating AI vendor security frameworks
      • Predicting AI replacement timelines for security roles
    --------  
    45:38
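The "deterministic security harness" Druva describes in the first episode can be pictured as a thin, deterministic wrapper that gates a non-deterministic agent behind a human-approval callback for high-risk actions, and that relaxes over time as approvals accumulate. This is a minimal sketch of the idea, not Druva's implementation; the `SecurityHarness` class and the action names are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical high-risk action names; the episode does not enumerate them.
HIGH_RISK = {"delete_backup", "rotate_credentials", "modify_policy"}

@dataclass
class SecurityHarness:
    approve: Callable[[str, dict], bool]       # human-in-the-loop approval callback
    trusted: set = field(default_factory=set)  # actions learned as safe over time

    def run(self, action: str, args: dict, agent_call: Callable[[dict], str]) -> str:
        # Deterministic gate around a non-deterministic agent: high-risk
        # actions always require human approval until marked as trusted.
        if action in HIGH_RISK and action not in self.trusted:
            if not self.approve(action, args):
                return "denied"
            self.trusted.add(action)  # reduce approval friction on repeat use
        return agent_call(args)
```

The key property is that the gate itself never depends on model output: whatever the agent proposes, the high-risk check and the approval path are fixed code.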
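The feedback loop Alexander describes, where operator corrections are encoded into a vector database and steer future agent behavior, can be sketched as a store of (embedding, corrected tool call) pairs queried by similarity before the agent acts. The `embed` function below is a toy character-frequency stand-in for a real embedding model, and all names are hypothetical:

```python
import math

def embed(text: str) -> list[float]:
    # Toy stand-in for a real embedding model: letter-frequency vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - 97] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class FeedbackStore:
    """Encodes operator corrections so similar future incidents retrieve them."""
    def __init__(self):
        self.corrections = []  # list of (embedding, corrected_tool_call)

    def record(self, incident_text: str, corrected_call: str) -> None:
        self.corrections.append((embed(incident_text), corrected_call))

    def suggest(self, incident_text: str, threshold: float = 0.8):
        # Return the operator-approved tool call from the most similar
        # past correction, if it clears the similarity threshold.
        q = embed(incident_text)
        best = max(self.corrections, key=lambda c: cosine(q, c[0]), default=None)
        if best and cosine(q, best[0]) >= threshold:
            return best[1]
        return None
```

In a production system the store would be a real vector database and the suggestion would be injected into the agent's context rather than returned directly, but the shape of the loop is the same: corrections in, similarity lookup out.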
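The foundation-model flexibility Mandy describes, switching models with minimal engineering effort, typically comes down to coding against a small interface rather than a vendor SDK. A minimal sketch with hypothetical, stubbed adapters (real adapters would make the actual API calls):

```python
from typing import Protocol

class FoundationModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIAdapter:
    def complete(self, prompt: str) -> str:
        # Stub: a real adapter would call the provider's API here.
        return f"[openai] {prompt}"

class BedrockAdapter:
    def complete(self, prompt: str) -> str:
        # Stub: a real adapter would call the provider's API here.
        return f"[bedrock] {prompt}"

def summarize(model: FoundationModel, text: str) -> str:
    # Application code depends only on the interface, so swapping
    # providers becomes a configuration change, not a rewrite.
    return model.complete(f"Summarize: {text}")
```

Prompts, retry logic, and evaluation suites still need per-model tuning in practice, which is why the episode frames this as reducing, not eliminating, switching cost.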
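An "AI fact sheet" of the kind Rohit describes can be modeled as a small structured record checked before a tool is deployed. The fields and the approval rule below are illustrative assumptions, not Yext's actual template:

```python
from dataclasses import dataclass, field

@dataclass
class AIFactSheet:
    # Illustrative fields: business goal, data exposure, and overlap
    # with tools already in the stack, per the evaluation criteria
    # mentioned in the episode summary.
    tool_name: str
    business_goal: str
    data_categories: list = field(default_factory=list)  # e.g. ["logs", "PII"]
    duplicates: list = field(default_factory=list)       # overlapping existing tools
    passed_security_review: bool = False

    def approved(self) -> bool:
        # Deploy only when the tool serves a stated goal, introduces no
        # duplication, and has cleared the security checkpoint.
        return (bool(self.business_goal)
                and not self.duplicates
                and self.passed_security_review)
```

The value of the exercise is less the boolean than the forcing function: every vendor conversation produces the same comparable record.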


About AI Adoption Playbook

Welcome to The AI Adoption Playbook—where we explore real-world AI implementations at leading enterprises. Join host Ravin Thambapillai, CEO of Credal.ai, as he unpacks the technical challenges, architectural decisions, and deployment strategies shaping successful AI adoption. Each episode dives deep into concrete use cases with the engineers and ML platform teams making enterprise AI work at scale. Whether you’re building internal AI tools or leading GenAI initiatives, you’ll find actionable insights for moving from proof-of-concept to production.
Podcast website



v7.23.7 | © 2007-2025 radio.de GmbH