
Weaviate Podcast


Available episodes

5 of 126
  • Sufficient Context with Hailey Joren - Weaviate Podcast #125!
    Hailey Joren is a Ph.D. student at UCSD! Hailey and collaborators at Duke University and Google have recently published "Sufficient Context: A New Lens on Retrieval Augmented Generation Systems" at ICLR 2025! There are so many interesting nuggets in this work! Firstly, it really helped me understand the difference between *relevant* search results and *sufficient* context for answering the question. Armed with this lens for looking at retrieved context, Hailey and collaborators make all sorts of interesting observations about the current state of hallucination in RAG systems. Unfortunately, added RAG context makes models far less likely to abstain when they should, and the existing RAG benchmarks do not account for context sufficiency well enough -- indicated by LLMs outputting correct answers despite insufficient context 35-62% of the time! However, there is reason for optimism! Hailey and team develop an autorater that can detect insufficient context 93% of the time (a minimal sketch of this autorating idea appears after the episode list below)! There are all sorts of interesting ideas around this paper! I really hope you find the podcast useful!
    --------  
    50:53
  • RAG Benchmarks with Nandan Thakur - Weaviate Podcast #124!
    Nandan Thakur is a Ph.D. student at the University of Waterloo! Nandan has worked on many of the most impactful projects in Retrieval-Augmented Generation (RAG) and Information Retrieval. His work ranges from benchmarks such as BEIR, MIRACL, TREC, and FreshStack, to improving the training of embedding models and re-rankers, and more!
    --------  
    1:04:46
  • MUVERA with Rajesh Jayaram and Roberto Esposito - Weaviate Podcast #123!
    Multi-vector retrieval offers richer, more nuanced search, but often comes with a significant cost in storage and computational overhead. How can we harness the power of multi-vector representations without breaking the bank? Rajesh Jayaram, the first author of the groundbreaking MUVERA algorithm from Google, and Roberto Esposito from Weaviate, who spearheaded its implementation, reveal how MUVERA tackles this critical challenge. Dive deep into MUVERA, a novel compression technique specifically designed for multi-vector retrieval. Rajesh and Roberto explain how it leverages contextualized token embeddings and innovative fixed dimensional encodings to dramatically reduce storage requirements while maintaining high retrieval accuracy (a simplified sketch of the fixed dimensional encoding idea appears after the episode list below). Discover the intricacies of quantization within MUVERA, the interpretability benefits of this approach, and how LSH clustering can play a role in topic modeling with these compressed representations. This conversation explores the core mechanics of efficient multi-vector retrieval, the challenges of benchmarking these advanced systems, and the evolving landscape of vector database schemas designed to handle such complex data. Rajesh and Roberto also share their insights on future directions in artificial intelligence where efficient, high-dimensional data representation is paramount. Whether you're an AI researcher grappling with the scalability of vector search, an engineer building advanced retrieval systems, or simply fascinated by the cutting edge of information retrieval and AI frameworks, this episode delivers unparalleled insights directly from the source. You'll gain a fundamental understanding of MUVERA, practical considerations for applying it to make multi-vector retrieval feasible, and a clear view of future directions in AI.
    --------  
    1:13:06
  • Patronus AI with Anand Kannappan - Weaviate Podcast #122!
    AI agents are getting more complex and harder to debug. How do you know what's happening when your agent makes 20+ function calls? What if you have a Multi-Agent System orchestrating several Agents? Anand Kannappan, co-founder of Patronus AI, reveals how their groundbreaking tool Percival transforms agent debugging and evaluation. Percival can instantly analyze complex agent traces, pinpoint failures across 60 different failure modes, and automatically suggest prompt fixes to improve performance. Anand unpacks several of these common failure modes, including the critical challenge of "context explosion," where agents process millions of tokens. He also explains domain adaptation for specific use cases, and the complex challenge of multi-agent orchestration. The paradigm of AI Evals is shifting from static evaluation to dynamic oversight! Also learn how Percival's memory architecture leverages both episodic and semantic knowledge with Weaviate! This conversation explores powerful concepts like process vs. outcome rewards and LLM-as-judge approaches. Anand shares his vision for "agentic supervision," where equally capable AI systems provide oversight for complex agent workflows. Whether you're building AI agents, evaluating LLM systems, or interested in how debugging autonomous systems will evolve, this episode delivers concrete techniques. You'll gain philosophical insights on evaluation and a roadmap for how evaluation must transform to keep pace with increasingly autonomous AI systems.
    --------  
    1:01:06
  • Haize Labs with Leonard Tang - Weaviate Podcast #121!
    How do you ensure your AI systems actually do what you expect them to do? Leonard Tang takes us deep into the revolutionary world of AI evaluation with concrete techniques you can apply today. Learn how Haize Labs is transforming AI testing through "scaling judge-time compute" - stacking weaker models to effectively evaluate stronger ones (a toy sketch of this judge-ensemble pattern appears after the episode list below). Leonard unpacks the game-changing Verdict library that outperforms frontier models by 10-20% while dramatically reducing costs. Discover practical insights on creating contrastive evaluation sets that extract maximum signal from human feedback, implementing debate-based judging systems, and building custom reward models that align with enterprise needs. The conversation reveals powerful nuggets like using randomized agent debates to achieve consensus and lightweight guardrail models that run alongside inference. Whether you're developing AI applications or simply fascinated by how we'll ensure increasingly powerful AI systems perform as expected, this episode delivers immediate value: techniques you can implement right away, philosophical perspectives on AI safety, and a glimpse into the future of evaluation that will fundamentally shape how AI evolves.
    --------  
    54:15
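
To make the "sufficient context" idea from episode #125 concrete, here is a minimal sketch of an autorater that classifies whether retrieved context alone is enough to answer a question before any answer is generated. This is only an illustration of the concept, not the classifier from the paper: the `call_llm` helper, the prompt wording, and the SUFFICIENT/INSUFFICIENT labels are all assumptions.

```python
# Minimal sketch of a "sufficient context" autorater, inspired by the idea
# discussed in episode #125. `call_llm` is a placeholder for any chat-completion
# client; the prompt and labels are illustrative, not the paper's classifier.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text response."""
    raise NotImplementedError("Wire this up to your LLM provider of choice.")


def is_context_sufficient(question: str, retrieved_chunks: list[str]) -> bool:
    """Ask a judge model whether the retrieved chunks alone can answer the question."""
    context = "\n\n".join(retrieved_chunks)
    prompt = (
        "You are grading retrieval quality, not answering the question.\n"
        f"Question: {question}\n"
        f"Retrieved context:\n{context}\n\n"
        "Does the context contain enough information to answer the question?\n"
        "Reply with exactly one word: SUFFICIENT or INSUFFICIENT."
    )
    verdict = call_llm(prompt).strip().upper()
    return verdict.startswith("SUFFICIENT")


def answer_with_abstention(question: str, retrieved_chunks: list[str]) -> str:
    """Only generate an answer when context is judged sufficient; otherwise abstain."""
    if not is_context_sufficient(question, retrieved_chunks):
        return "I don't have enough information to answer that."
    context = "\n\n".join(retrieved_chunks)
    return call_llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
```

One natural use of such a rater, as in `answer_with_abstention` above, is selective generation: abstain or trigger another retrieval round whenever the context is judged insufficient.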
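For the MUVERA conversation in episode #123, the core of a fixed dimensional encoding can be sketched in a few lines: partition the embedding space with random hyperplanes (SimHash-style), aggregate each side's token embeddings per partition, and concatenate the per-partition vectors so that a single dot product approximates multi-vector similarity. The NumPy code below is a simplified illustration under those assumptions, not Weaviate's or Google's implementation; the real algorithm adds repetitions, random projections, and handling for empty partitions.

```python
import numpy as np

# Simplified sketch of MUVERA-style fixed dimensional encodings (FDEs).
# Token embeddings are bucketed with SimHash (random hyperplanes), each
# bucket is aggregated, and the buckets are concatenated so that one dot
# product between FDEs approximates multi-vector (MaxSim-style) scoring.
# Real MUVERA adds repetitions, projections, and empty-bucket handling.

rng = np.random.default_rng(0)


def make_hyperplanes(dim: int, n_bits: int) -> np.ndarray:
    """Random hyperplanes that carve the embedding space into 2**n_bits buckets."""
    return rng.standard_normal((n_bits, dim))


def bucket_ids(token_embs: np.ndarray, planes: np.ndarray) -> np.ndarray:
    """SimHash: the sign pattern against each hyperplane gives a bucket id."""
    bits = (token_embs @ planes.T) > 0  # (n_tokens, n_bits)
    return bits.astype(int) @ (1 << np.arange(planes.shape[0]))


def fde(token_embs: np.ndarray, planes: np.ndarray, reduce: str) -> np.ndarray:
    """One aggregated vector per bucket, concatenated into a single flat vector."""
    n_buckets, dim = 1 << planes.shape[0], token_embs.shape[1]
    out = np.zeros((n_buckets, dim))
    ids = bucket_ids(token_embs, planes)
    for b in range(n_buckets):
        members = token_embs[ids == b]
        if len(members):
            out[b] = members.sum(axis=0) if reduce == "sum" else members.mean(axis=0)
    return out.ravel()


# Usage: queries typically sum per bucket, documents average per bucket,
# and candidate documents are ranked by a single dot product between FDEs.
planes = make_hyperplanes(dim=128, n_bits=4)
query_fde = fde(rng.standard_normal((32, 128)), planes, reduce="sum")
doc_fde = fde(rng.standard_normal((300, 128)), planes, reduce="mean")
score = float(query_fde @ doc_fde)
```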
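Episode #121's notion of "scaling judge-time compute" - stacking weaker judge models instead of relying on a single frontier model - can likewise be illustrated with a toy majority-vote ensemble. The sketch below is not the Verdict library's API; `call_judge`, the prompt, and the PASS/FAIL labels are hypothetical placeholders.

```python
from collections import Counter

# Toy sketch of "scaling judge-time compute": several weaker judge models each
# grade the same (question, answer) pair and a majority vote decides. This is
# an illustration of the pattern from episode #121, not the Verdict library.

def call_judge(model: str, prompt: str) -> str:
    """Placeholder: ask `model` to grade and return 'PASS' or 'FAIL'."""
    raise NotImplementedError("Connect this to your model provider.")


def ensemble_judgement(question: str, answer: str, judges: list[str]) -> str:
    """Majority vote across a stack of inexpensive judge models."""
    prompt = (
        "Grade the answer for correctness and faithfulness to the question.\n"
        f"Question: {question}\nAnswer: {answer}\n"
        "Reply with exactly one word: PASS or FAIL."
    )
    votes = Counter(call_judge(model, prompt).strip().upper() for model in judges)
    return votes.most_common(1)[0][0]


# Usage: three cheap judges instead of one frontier model.
# verdict = ensemble_judgement(question, answer,
#                              ["small-judge-a", "small-judge-b", "small-judge-c"])
```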


About Weaviate Podcast

Join Connor Shorten as he interviews machine learning experts and explores Weaviate use cases from users and customers.
