
Latent Space: The AI Engineer Podcast

swyx + Alessio
Latest episode

176 episodes

  • [State of RL/Reasoning] IMO/IOI Gold, OpenAI o3/GPT-5, and Cursor Composer — Ashvin Nair, Cursor

    30-12-2025

    From Berkeley robotics and OpenAI's 2017 Dota-era internship to shipping RL breakthroughs on GPT-4o, o1, and o3, and now leading model development at Cursor, Ashvin Nair has done it all. We caught up with Ashvin at NeurIPS 2025 to dig into the inside story of OpenAI's reasoning team (spoiler: it went from a dozen people to 300+), why IOI Gold felt reachable in 2022 but somehow didn't change the world when o1 actually achieved it, how RL doesn't generalize beyond the training distribution (and why that means you need to bring economically useful tasks into distribution by co-designing products and models), the deeper lessons from the RL research era (2017–2022) and why most of it didn't pan out because the community overfitted to benchmarks, how Cursor is uniquely positioned to do continual learning at scale with policy updates every two hours and product-model co-design that keeps engineers in the loop instead of context-switching into ADHD hell, and his bet that the next paradigm shift is continual learning with infinite memory—where models experience something once (a bug, a mistake, a user pattern) and never forget it, storing millions of deployment tokens in weights without overloading capacity.

    We discuss:
    • Ashvin's path: Berkeley robotics PhD → OpenAI 2017 intern (Dota era) → o1/o3 reasoning team → Cursor ML lead in three months
    • Why robotics people are the most grounded at NeurIPS (they work with the real world) and simulation people are the most unhinged (Lex Fridman's take)
    • The IOI Gold paradox: "If you told me we'd achieve IOI Gold in 2022, I'd assume we could all go on vacation—AI solved, no point working anymore. But life is still the same."
    • The RL research era (2017–2022) and why most of it didn't pan out: overfitting to benchmarks, too many implicit knobs to tune, and the community rewarding complex ideas over simple ones that generalize
    • Inside the o1 origin story: a dozen people, conviction from Ilya and Jakob Pachocki that RL would work, small-scale prototypes producing "surprisingly accurate reasoning traces" on math, and first-principles belief that scaled
    • The reasoning team grew from ~12 to 300+ people as o1 became a product and safety, tooling, and deployment scaled up
    • Why Cursor is uniquely positioned for continual learning: policy updates every two hours (online RL on tab), product and ML sitting next to each other, and the entire software engineering workflow (code, logs, debugging, DataDog) living in the product
    • Composer as the start of product-model co-design: smart enough to use, fast enough to stay in the loop, and built by a 20–25 person ML team with high-taste co-founders who code daily
    • The next paradigm shift: continual learning with infinite memory—models that experience something once (a bug, a user mistake) and store it in weights forever, learning from millions of deployment tokens without overloading capacity (trillions of pretraining tokens = plenty of room)
    • Why off-policy RL is unstable (Ashvin's favorite interview question) and why Cursor does two-day work trials instead of whiteboard interviews (see the sketch after the chapter list)
    • The vision: automate software engineering as a process (not just answering prompts), co-design products so the entire workflow (write code, check logs, debug, iterate) is in-distribution for RL, and make models that never make the same mistake twice

    — Ashvin Nair
    Cursor: https://cursor.com
    X: https://x.com/ashvinnair_

    Chapters:
    00:00:00 Introduction: From Robotics to Cursor via OpenAI
    00:01:58 The Robotics to LLM Agent Transition: Why Code Won
    00:09:11 RL Research Winter and Academic Overfitting
    00:11:45 The Scaling Era and Moving Goalposts: IOI Gold Doesn't Mean AGI
    00:21:30 OpenAI's Reasoning Journey: From Codex to O1
    00:20:03 The Blip: Thanksgiving 2023 and OpenAI Governance
    00:22:39 RL for Reasoning: The O-Series Conviction and Scaling
    00:25:47 O1 to O3: Smooth Internal Progress vs External Hype Cycles
    00:33:07 Why Cursor: Co-Designing Products and Models for Real Work
    00:34:14 Composer and the Future: Online Learning Every Two Hours
    00:35:15 Continual Learning: The Missing Paradigm Shift
    00:44:00 Hiring at Cursor and Why Off-Policy RL is Unstable
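
    Background, not from the episode: one standard account of why off-policy RL is unstable is the importance-sampling view. When the behavior policy that collected the data drifts away from the policy being updated, the importance ratios pi/mu blow up and the variance of the estimate explodes. A minimal, self-contained Python sketch of that effect (toy bandit, made-up numbers):

        # Toy illustration: importance-sampled value estimates get noisy as the
        # behavior policy (mu) drifts away from the target policy (pi).
        import numpy as np

        rng = np.random.default_rng(0)

        def softmax(logits):
            z = np.exp(logits - logits.max())
            return z / z.sum()

        n_actions = 10
        rewards = rng.normal(size=n_actions)          # fixed per-action rewards
        pi = softmax(rng.normal(size=n_actions))      # policy we want to evaluate

        for drift in [0.0, 1.0, 3.0]:                 # how stale the data is
            mu = softmax(np.log(pi) + drift * rng.normal(size=n_actions))
            actions = rng.choice(n_actions, size=5000, p=mu)
            ratios = pi[actions] / mu[actions]        # importance weights pi/mu
            estimate = np.mean(ratios * rewards[actions])
            print(f"drift={drift:.1f}  max ratio={ratios.max():7.2f}  "
                  f"estimate={estimate:+.3f}  true={pi @ rewards:+.3f}")

    This is only one lens on the instability (function approximation and bootstrapping add their own failure modes), and the episode's own answer to Ashvin's interview question may differ.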

  • [State of AI Startups] Memory/Learning, RL Envs & DBT-Fivetran — Sarah Catanzaro, Amplify

    30-12-2025

    From investing through the modern data stack era (DBT, Fivetran, and the analytics explosion) to now investing at the frontier of AI infrastructure and applications at Amplify Partners, Sarah Catanzaro has spent years at the intersection of data, compute, and intelligence—watching categories emerge, merge, and occasionally disappoint. We caught up with Sarah live at NeurIPS 2025 to dig into the state of AI startups heading into 2026: why $100M+ seed rounds with no near-term roadmap are now the norm (and why that terrifies her), what the DBT-Fivetran merger really signals about the modern data stack (spoiler: it's not dead, just ready for IPO), how frontier labs are using DBT and Fivetran to manage training data and agent analytics at scale, why data catalogs failed as standalone products but might succeed as metadata services for agents, the consumerization of AI and why personalization (memory, continual learning, K-factor) is the 2026 unlock for retention and growth, why she thinks RL environments are a fad and real-world logs beat synthetic clones every time, and her thesis for the most exciting AI startups: companies that marry hard research problems (RAG, rule-following, continual learning) with killer applications that were simply impossible before.

    We discuss:
    • The DBT-Fivetran merger: not the death of the modern data stack, but a path to IPO scale (targeting $600M+ combined revenue) and a signal that both companies were already winning their categories
    • How frontier labs use data infrastructure: DBT and Fivetran for training data curation, agent analytics, and managing increasingly complex interactions—plus the rise of transactional databases (RocksDB) and efficient data loading (Vortex) for GPU-bound workloads
    • Why data catalogs failed: built for humans when they should have been built for machines, focused on discoverability when the real opportunity was governance, and ultimately subsumed as features inside Snowflake, DBT, and Fivetran
    • The $100M+ seed phenomenon: raising massive rounds at billion-dollar valuations with no 6-month roadmap, seven-day decision windows, and founders optimizing for signal ("we're a unicorn") over partnership or dilution discipline
    • Why world models are overhyped but underspecified: three competing definitions, unclear generalization across use cases (video games ≠ robotics ≠ autonomous driving), and a research problem masquerading as a product category
    • The 2026 theme: consumerization of AI via personalization—memory management, continual learning, and solving retention/churn by making products learn skills, preferences, and adapt as the world changes (not just storing facts in cursor rules)
    • Why RL environments are a fad: labs are paying 7–8 figures for synthetic clones when real-world logs, traces, and user activity (à la Cursor) are richer, cheaper, and more generalizable
    • Sarah's investment thesis: research-driven applications that solve hard technical problems (RAG for Harvey, rule-following for Sierra, continual learning for the next killer app) and unlock experiences that were impossible before
    • Infrastructure bets: memory, continual learning, stateful inference, and the systems challenges of loading/unloading personalized weights at scale
    • Why K-factor and growth fundamentals matter again: AI felt magical in 2023–2024, but as the magic fades, retention and virality are back—and most AI founders have never heard of K-factor (a quick refresher follows the chapter list)

    — Sarah Catanzaro
    X: https://x.com/sarahcat21
    Amplify Partners: https://amplifypartners.com/

    Where to find Latent Space
    X: https://x.com/latentspacepod
    Substack: https://www.latent.space/

    Chapters:
    00:00:00 Introduction: Sarah Catanzaro's Journey from Data to AI
    00:01:02 The DBT-Fivetran Merger: Not the End of the Modern Data Stack
    00:05:26 Data Catalogs and What Went Wrong
    00:08:16 Data Infrastructure at AI Labs: Surprising Insights
    00:10:13 The Crazy Funding Environment of 2024-2025
    00:17:18 World Models: Hype, Confusion, and Market Potential
    00:18:59 Memory Management and Continual Learning: The Next Frontier
    00:23:27 Agent Environments: Just a Fad?
    00:25:48 The Perfect AI Startup: Research Meets Application
    00:28:02 Closing Thoughts and Where to Find Sarah
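
    For readers who have not run into K-factor, here is the textbook definition as a quick sketch; the formula is standard viral-growth math, and the numbers in the example are illustrative, not Sarah's:

        # K-factor refresher: K = invites sent per user * conversion rate per invite.
        # K > 1 means each cohort recruits a larger one and growth compounds.
        def k_factor(invites_per_user: float, invite_conversion: float) -> float:
            return invites_per_user * invite_conversion

        def users_after(cycles: int, seed_users: float, k: float) -> float:
            """Total users after n viral invite cycles (geometric growth)."""
            total, cohort = seed_users, seed_users
            for _ in range(cycles):
                cohort *= k            # each cohort invites the next one
                total += cohort
            return total

        k = k_factor(invites_per_user=3.0, invite_conversion=0.4)   # K = 1.2
        print(k, users_after(cycles=6, seed_users=1000, k=k))       # ~12,900 users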

  • One Year of MCP — with David Soria Parra and AAIF leads from OpenAI, Goose, Linux Foundation

    27-12-2025

    One year ago, Anthropic launched the Model Context Protocol (MCP)—a simple, open standard to connect AI applications to the data and tools they need. Today, MCP has exploded from a local-only experiment into the de facto protocol for agentic systems, adopted by OpenAI, Microsoft, Google, Block, and hundreds of enterprises building internal agents at scale. And now, MCP is joining the newly formed Agentic AI Foundation (AAIF) under the Linux Foundation, alongside Block's Goose coding agent, with founding members spanning the biggest names in AI and cloud infrastructure. We sat down with David Soria Parra (MCP lead, Anthropic), Nick Cooper (OpenAI), Brad Howes (Block / Goose), and Jim Zemlin (Linux Foundation CEO) to dig into the one-year journey of MCP—from Thanksgiving hacking sessions and the first remote authentication spec to long-running tasks, MCP Apps, and the rise of agent-to-agent communication—and the behind-the-scenes story of how three competitive AI labs came together to donate their protocols and agents to a neutral foundation, why enterprises are deploying MCP servers faster than anyone expected (most of it invisible, internal, and at massive scale), what it takes to design a protocol that works for both simple tool calls and complex multi-agent orchestration, how the foundation will balance taste-making (curating meaningful projects) with openness (avoiding vendor lock-in), and the 2025 vision: MCP as the communication layer for asynchronous, long-running agents that work while you sleep, discover and install their own tools, and unlock the next order of magnitude in AI productivity.

    We discuss:
    • The one-year MCP journey: from local stdio servers to remote HTTP streaming, OAuth 2.1 authentication (and the enterprise lessons learned), long-running tasks, and MCP Apps (iframes for richer UI) (a minimal server sketch follows the chapter list)
    • Why MCP adoption is exploding internally at enterprises: invisible, internal servers connecting agents to Slack, Linear, proprietary data, and compliance-heavy workflows (financial services, healthcare)
    • The authentication evolution: separating resource servers from identity providers, dynamic client registration, and why the March spec wasn't enterprise-ready (and how June fixed it)
    • How Anthropic dogfoods MCP: internal gateway, custom servers for Slack summaries and employee surveys, and why MCP was born from "how do I scale dev tooling faster than the company grows?"
    • Tasks: the new primitive for long-running, asynchronous agent operations—why tools aren't enough, how tasks enable deep research and agent-to-agent handoffs, and the design choice to make tasks a "container" (not just async tools)
    • MCP Apps: why iframes, how to handle styles and branding, seat selection and shopping UIs as the killer use case, and the collaboration with OpenAI to build a common standard
    • The registry problem: official registry vs. curated sub-registries (Smithery, GitHub), trust levels, model-driven discovery, and why MCP needs "npm for agents" (but with signatures and HIPAA/financial compliance)
    • The founding story of AAIF: how Anthropic, OpenAI, and Block came together (spoiler: they didn't know each other were talking to Linux Foundation), why neutrality matters, and how Jim Zemlin has never seen this much day-one inbound interest in 22 years

    — David Soria Parra (Anthropic / MCP)
    MCP: https://modelcontextprotocol.io
    LinkedIn: https://uk.linkedin.com/in/david-soria-parra-4a78b3a
    X: https://x.com/dsp_
    Nick Cooper (OpenAI)
    X: https://x.com/nicoaicopr
    Brad Howes (Block / Goose)
    Goose: https://github.com/block/goose
    Jim Zemlin (Linux Foundation)
    LinkedIn: https://www.linkedin.com/in/zemlin/
    Agentic AI Foundation: https://agenticai.foundation

    Chapters:
    00:00:00 Introduction: MCP's First Year and Foundation Launch
    00:01:17 MCP's Journey: From Launch to Industry Standard
    00:02:06 Protocol Evolution: Remote Servers and Authentication
    00:08:52 Enterprise Authentication and Financial Services
    00:11:42 Transport Layer Challenges: HTTP Streaming and Scalability
    00:15:37 Standards Development: Collaboration with Tech Giants
    00:34:27 Long-Running Tasks: The Future of Async Agents
    00:30:41 Discovery and Registries: Building the MCP Ecosystem
    00:30:54 MCP Apps and UI: Beyond Text Interfaces
    00:26:55 Internal Adoption: How Anthropic Uses MCP
    00:23:15 Skills vs MCP: Complementary Not Competing
    00:36:16 Community Events and Enterprise Learnings
    01:03:31 Foundation Formation: Why Now and Why Together
    01:07:38 Linux Foundation Partnership: Structure and Governance
    01:11:13 Goose as Reference Implementation
    01:17:28 Principles Over Roadmaps: Composability and Quality
    01:21:02 Foundation Value Proposition: Why Contribute
    01:27:49 Practical Investments: Events, Tools, and Community
    01:34:58 Looking Ahead: Async Agents and Real Impact
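
    For a feel of what the "local stdio servers" mentioned above look like in practice, here is a minimal sketch assuming the official MCP Python SDK (pip install mcp) and its FastMCP helper; the server name and the word-count tool are made up for illustration:

        # Minimal local MCP server sketch (assumes the official Python SDK's FastMCP).
        # Server name and tool are illustrative, not from the episode.
        from mcp.server.fastmcp import FastMCP

        mcp = FastMCP("demo-notes")

        @mcp.tool()
        def count_words(text: str) -> int:
            """Count whitespace-separated words in a snippet of text."""
            return len(text.split())

        if __name__ == "__main__":
            mcp.run()   # defaults to the local stdio transport

    A client (Claude Desktop, Goose, or any MCP-capable agent) launches the script as a subprocess and exchanges JSON-RPC messages over stdin/stdout; the remote HTTP streaming and OAuth work discussed above lifts that same contract out of the local process.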

  • Steve Yegge's Vibe Coding Manifesto: Why Claude Code Isn't It & What Comes After the IDE

    26-12-2025

    Note: Steve and Gene's talk on Vibe Coding and the post-IDE world was one of the top talks of AIE CODE: https://www.youtube.com/watch?v=7Dtu2bilcFs&t=1019s&pp=0gcJCU0KAYcqIYzv

    From building legendary platforms at Google and Amazon to authoring one of the most influential essays on AI-powered development (Revenge of the Junior Developer, quoted by Dario Amodei himself), Steve Yegge has spent decades at the frontier of software engineering—and now he's leading the charge into what he calls the "factory farming" era of code. After stints at Sourcegraph and building Beads (a purely vibe-coded issue tracker with tens of thousands of users), Steve co-authored The Vibe Coding Book and is now building VC (VibeCoder), an agent orchestration dashboard designed to move developers from writing code to managing fleets of AI agents that coordinate, parallelize, and ship features while you sleep. We sat down with Steve at AI Engineer Summit to dig into why Claude Code, Cursor, and the entire 2024 stack are already obsolete, what it actually takes to trust an agent after 2,000 hours of practice (hint: they will delete your production database if you anthropomorphize them), why the real skill is no longer writing code but orchestrating agents like a NASCAR pit crew, how merging has become the new wall that every 10x-productive team is hitting (and why one company's solution is literally "one engineer per repo"), the rise of multi-agent workflows where agents reserve files, message each other via MCP, and coordinate like a little village, why Steve believes if you're still using an IDE to write code by January 1st, you're a bad engineer, how the 12–15 year experience bracket is the most resistant demographic (and why their identity is tied to obsolete workflows), the hidden chaos inside OpenAI, Anthropic, and Google as they scale at breakneck speed, why rewriting from scratch is now faster than refactoring for a growing class of codebases, and his 2025 prediction: we're moving from subsistence agriculture to John Deere-scale factory farming of code, and the Luddite backlash is only just beginning.

    We discuss:
    • Why Claude Code, Cursor, and agentic coding tools are already last year's tech—and what comes next: agent orchestration dashboards where you manage fleets, not write lines
    • The 2,000-hour rule: why it takes a full year of daily use before you can predict what an LLM will do, and why trust = predictability, not capability
    • Steve's hot take: if you're still using an IDE to develop code by January 1st, 2025, you're a bad engineer—because the abstraction layer has moved from models to full-stack agents
    • The demographic most resistant to vibe coding: 12–15 years of experience, senior engineers whose identity is tied to the way they work today, and why they're about to become the interns
    • Why anthropomorphizing LLMs is the biggest mistake: the "hot hand" fallacy, agent amnesia, and how Steve's agent once locked him out of prod by changing his password to "fix" a problem
    • Should kids learn to code? Steve's take: learn to vibe code—understand functions, classes, architecture, and capabilities in a language-neutral way, but skip the syntax
    • The 2025 vision: "factory farming of code" where orchestrators run Claude Code, scrub output, plan-implement-review-test in loops, and unlock programming for non-programmers at scale (a toy orchestration loop is sketched after the chapter list)

    — Steve Yegge
    X: https://x.com/steve_yegge
    Substack (Stevie's Tech Talks): https://steve-yegge.medium.com/
    GitHub (VC / VibeCoder): https://github.com/yegge-labs

    Where to find Latent Space
    X: https://x.com/latentspacepod
    Substack: https://www.latent.space/

    Chapters:
    00:00:00 Introduction: Steve Yegge on Vibe Coding and AI Engineering
    00:00:59 The Backlash: Who Resists Vibe Coding and Why
    00:04:26 The 2000 Hour Rule: Building Trust with AI Coding Tools
    00:03:31 The January 1st Deadline: IDEs Are Becoming Obsolete
    00:02:55 10X Productivity at OpenAI: The Performance Review Problem
    00:07:49 The Hot Hand Fallacy: When AI Agents Betray Your Trust
    00:11:12 Claude Code Isn't It: The Need for Agent Orchestration
    00:15:20 The Orchestrator Revolution: From Claude Code to Agent Villages
    00:18:46 The Merge Wall: The Biggest Unsolved Problem in AI Coding
    00:26:33 Never Rewrite Your Code - Until Now: Joel Spolsky Was Wrong
    00:22:43 Factory Farming Code: The John Deere Era of Software
    00:29:27 Google's Gemini Turnaround and the AI Lab Chaos
    00:33:20 Should Your Kids Learn to Code? The New Answer
    00:34:59 Code MCP and the Gossip Rate: Latest Vibe Coding Discoveries
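
    To make the plan-implement-review-test loop above concrete, here is a deliberately toy orchestration loop. This is my own sketch, not VibeCoder code; the "agent" CLI name, the prompts, and the pytest review step are all hypothetical placeholders for whatever coding agent and test runner you actually drive:

        # Toy orchestrator loop (illustrative only; `agent` is a placeholder CLI
        # for a coding agent, and `pytest -q` stands in for the review/test step).
        import subprocess

        def run(cmd: list[str]) -> subprocess.CompletedProcess:
            return subprocess.run(cmd, capture_output=True, text=True)

        def orchestrate(task: str, max_rounds: int = 5) -> bool:
            plan = run(["agent", "--prompt", f"Write a step-by-step plan for: {task}"]).stdout
            for _ in range(max_rounds):
                run(["agent", "--prompt", f"Implement the next step of this plan:\n{plan}"])
                tests = run(["pytest", "-q"])              # review/test phase
                if tests.returncode == 0:
                    return True                            # done: tests pass
                # scrub the failure output and feed it back into the next round
                plan = run(["agent", "--prompt",
                            f"Tests failed:\n{tests.stdout[-2000:]}\nRevise the plan."]).stdout
            return False

    A real orchestrator would also reserve files, run agents in parallel, and coordinate merges, which is exactly the "merge wall" problem discussed above.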

  • ⚡️GPT5-Codex-Max: Training Agents with Personality, Tools & Trust — Brian Fioca + Bill Chen, OpenAI

    26-12-2025

    From the frontlines of OpenAI's Codex and GPT-5 training teams, Brian and Bill are building the future of AI-powered coding—where agents don't just autocomplete, they architect, refactor, and ship entire features while you sleep. We caught up with them at AI Engineer Conference right after the launch of Codex Max, OpenAI's newest long-running coding agent designed to work for 24+ hours straight, manage its own context, and spawn sub-agents to parallelize work across your entire codebase. We sat down with Brian and Bill to dig into what it actually takes to train a model that developers trust—why personality, communication, and planning matter as much as raw capability, how Codex is trained with strong opinions about tools (it loves rg over grep, seriously), why the abstraction layer is moving from models to full-stack agents you can plug into VS Code or Zed, how OpenAI partners co-develop tool integrations and discover unexpected model habits (like renaming tools to match Codex's internal training), the rise of applied evals that measure real-world impact instead of academic benchmarks, why multi-turn evals are the next frontier (and Brian's "job interview eval" idea), how coding agents are breaking out of code into personal automation, terminal workflows, and computer use, and their 2026 vision: coding agents trusted enough to handle the hardest refactors at any company, not just top-tier firms, and general enough to build integrations, organize your desktop, and unlock capabilities you'd never get access to otherwise.

    We discuss:
    • What Codex Max is: a long-running coding agent that can work 24+ hours, manage its own context window, and spawn sub-agents for parallel work
    • Why the name "Max": maximalist, maximization, speed and endurance—it's simply better and faster for the same problems
    • Training for personality: communication, planning, context gathering, and checking your work as behavioral characteristics, not just capabilities
    • How Codex develops habits like preferring rg over grep, and why renaming tools to match its training (e.g., terminal-style naming) dramatically improves tool-call performance (a toy tool-definition example follows the chapter list)
    • The split between Codex (opinionated, agent-focused, optimized for the Codex harness) and GPT-5 (general, more durable across different tools and modalities)
    • Why the abstraction layer is moving up: from prompting models to plugging in full agents (Codex, GitHub Copilot, Zed) that package the entire stack
    • The rise of sub-agents and agents-using-agents: Codex Max spawning its own instances, handing off context, and parallelizing work across a codebase
    • How OpenAI works with coding partners on the bleeding edge to co-develop tool integrations and discover what the model is actually good at
    • The shift to applied evals: capturing real-world use cases instead of academic benchmarks, and why ~50% of OpenAI employees now use Codex daily
    • Why multi-turn evals are the next frontier: LM-as-a-judge for entire trajectories, Brian's "job interview eval" concept, and the need for a batch multi-turn eval API
    • How coding agents are breaking out of code: personal automation, organizing desktops, terminal workflows, and "Devin for non-coding" use cases
    • Why Slack is the ultimate UI for work, and how coding agents can become your personal automation layer for email, files, and everything in between
    • The 2026 vision: more computer use, more trust, and coding agents capable enough that any company can access top-tier developer capabilities, not just elite firms

    — Brian & Bill (OpenAI Codex Team)
    X: http://x.com/bfioca
    X: https://x.com/realchillben
    OpenAI Codex: https://openai.com/index/openai-codex/

    Where to find Latent Space
    X: https://x.com/latentspacepod
    Substack: https://www.latent.space/

    Chapters:
    00:00:00 Introduction: Latent Space Listeners at AI Engineer Code
    00:01:27 Codex Max Launch: Training for Long-Running Coding Agents
    00:03:01 Model Personality and Trust: Communication, Planning, and Self-Checking
    00:05:20 Codex vs GPT-5: Opinionated Agents vs General Models
    00:07:47 Tool Use and Model Habits: The Ripgrep Discovery
    00:09:16 Personality Design: Verbosity vs Efficiency in Coding Agents
    00:11:56 The Agent Abstraction Layer: Building on Top of Codex
    00:14:08 Sub-Agents and Multi-Agent Patterns: The Future of Composition
    00:16:11 Trust and Adoption: OpenAI Developers Using Codex Daily
    00:17:21 Applied Evals: Real-World Testing vs Academic Benchmarks
    00:19:15 Multi-Turn Evals and the Job Interview Pattern
    00:21:35 Feature Request: Batch Multi-Turn Eval API
    00:22:28 Beyond Code: Personal Automation and Computer Use
    00:24:51 Vision-Native Agents and the UI Integration Challenge
    00:25:02 2026 Predictions: Trust, Computer Use, and Democratized Excellence
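
    As a concrete illustration of the tool-naming point above, here are two ways to expose the same code-search capability to a model, written in the Chat Completions function-tool schema. The naming-sensitivity claim is the guests'; the exact names, descriptions, and parameters below are my own illustration, not OpenAI's published Codex tool set:

        # Same capability, two names: the episode's claim is that terminal-style
        # names the model saw in training (like `rg`) improve tool-call performance.
        generic_tool = {
            "type": "function",
            "function": {
                "name": "search_code",
                "description": "Search the repository for a string.",
                "parameters": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            },
        }

        terminal_style_tool = {
            "type": "function",
            "function": {
                "name": "rg",
                "description": "ripgrep: recursively search the working tree for a regex.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "pattern": {"type": "string"},
                        "path": {"type": "string"},
                    },
                    "required": ["pattern"],
                },
            },
        }

    Either dict goes in the tools list of a chat request; the underlying search behavior is identical, and only the name and schema the model sees change.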


About Latent Space: The AI Engineer Podcast

The podcast by and for AI Engineers! In 2024, over 2 million readers and listeners came to Latent Space to hear about news, papers and interviews in Software 3.0. We cover Foundation Models changing every domain in Code Generation, Multimodality, AI Agents, GPU Infra and more, directly from the founders, builders, and thinkers involved in pushing the cutting edge. We strive to give you everything from the definitive take on the Current Thing to the first introduction to the tech you'll be using in the next 3 months! We break news and run exclusive interviews with OpenAI, Anthropic, Gemini, Meta (Soumith Chintala), Sierra (Bret Taylor), tiny (George Hotz), Databricks/MosaicML (Jon Frankle), Modular (Chris Lattner), Answer.ai (Jeremy Howard), et al. Full show notes always on https://latent.space
