
The Neuron: AI Explained

Latest episode

58 episodes


    This AI Agent Builds Better Code Than Most Developers (Factory AI)

    27-1-2026 | 56 Min.
    Autonomous coding agents are moving from demos to real production workflows. In this episode, Factory AI co-founder and CTO Eno Reyes explains what "Droids" really are—fully autonomous agents that can take tickets, modify real codebases, run tests, and work inside existing dev workflows.

    We dig into Factory's context compression research (which outperformed both OpenAI and Anthropic), what makes a codebase "agent-ready," and why Stanford research found that the ONLY predictor of AI success was codebase quality—not adoption rates or token usage.

    Whether you're a developer curious about autonomous coding tools or just want to understand where AI engineering is headed, this episode is packed with practical insights.

    🔗 Try Factory AI: https://factory.ai

    📰 Subscribe to The Neuron newsletter: https://theneuron.ai

    📖 Resources mentioned:
    • Factory's compression research: https://factory.ai/news/evaluating-compression

    OpenAI Researcher Explains How AI Hides Its Thinking (w/ OpenAI’s Bowen Baker)

    23-1-2026 | 55 Min.
    AI reasoning models don’t just give answers — they plan, deliberate, and sometimes try to cheat.

    In this episode of The Neuron, we’re joined by Bowen Baker, Research Scientist at OpenAI, to explore whether we can monitor AI reasoning before things go wrong — and why that transparency may not last forever.

    Bowen walks us through real examples of AI reward hacking, explains why monitoring chain-of-thought is often more effective than checking outputs, and introduces the idea of a “monitorability tax” — trading raw performance for safety and transparency.

    We also cover:
    • Why smaller models thinking longer can be safer than bigger models
    • How AI systems learn to hide misbehavior
    • Why suppressing “bad thoughts” can backfire
    • The limits of chain-of-thought monitoring
    • Bowen’s personal view on open-source AI and safety risks

    If you care about how AI actually works — and what could go wrong — this conversation is essential.

    Resources:
    • Evaluating chain-of-thought monitorability | OpenAI: https://openai.com/index/evaluating-chain-of-thought-monitorability/
    • Understanding neural networks through sparse circuits | OpenAI: https://openai.com/index/understanding-neural-networks-through-sparse-circuits/
    • OpenAI's alignment blog: https://alignment.openai.com/
    👉 Subscribe for more interviews with the people building AI
    👉 Join the newsletter at https://theneuron.ai

    The Hidden Cost of AI Agents No One Talks About

    20-1-2026 | 1 hr.
    Everyone is rushing to build AI agents — but most companies are setting themselves up for failure.

    In this episode of The Neuron, Darin Patterson, VP of Market Strategy at Make, explains why agentic AI only works if your automation foundation is solid first. We break down when to use deterministic workflows vs AI agents, how to avoid fragile automation sprawl, and why visibility into your entire automation landscape is now mission-critical.

    You’ll see real examples of building agents in Make, how Model Context Protocol (MCP) fits into modern workflows, and why orchestration — not hype — is the real unlock for scaling AI safely inside organizations.

    Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai

    Why IBM Wants AI to Be Boring

    13-1-2026 | 53 Min.
    IBM just released Granite 4.0, a new family of open language models designed to be fast, memory-efficient, and enterprise-ready — and it represents a very different philosophy from today’s frontier AI race.

    In this episode of The Neuron, IBM Research’s David Cox joins us to unpack why IBM treats AI models as tools rather than entities, how hybrid architectures dramatically reduce memory and cost, and why openness, transparency, and external audits matter more than ever for real-world deployment.

    We dive into long-context efficiency, agent safety, LoRA adapters, on-device AI, voice interfaces, and why the future of AI may look a lot more boring — in the best possible way.

    If you’re building AI systems for production, agents, or enterprise workflows, this conversation is required listening.

    Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai

    This AI Grows a Brain During Training (Pathway’s AI w/ Zuzanna Stamirowska)

    06-1-2026 | 48 Min.
    Imagine an AI that doesn’t just output answers — it remembers, adapts, and reasons over time like a living system. In this episode of The Neuron, Corey Noles and Grant Harvey sit down with Zuzanna Stamirowska, CEO & Cofounder of Pathway, to break down the world’s first post-Transformer frontier model: BDH — the Dragon Hatchling architecture.

    Zuzanna explains why current language models are stuck in a “Groundhog Day” loop — waking up with no memory — and how Pathway’s architecture introduces true temporal reasoning and continual learning.

    We explore:
    • Why Transformers lack real memory and time awareness
    • How BDH uses brain-like neurons, synapses, and emergent structure
    • How models can “get bored,” adapt, and strengthen connections
    • Why Pathway sees reasoning — not language — as the core of intelligence
    • How BDH enables infinite context, live learning, and interpretability
    • Why gluing two trained models together actually works in BDH
    • The path to AGI through generalization, not scaling
    • Real-world early adopters (Formula 1, NATO, French Postal Service)
    • Safety, reversibility, checkpointing, and building predictable behavior
    • Why this architecture could power the next era of scientific innovation

    From brain-inspired message passing to emergent neural structures that literally appear during training, this is one of the most ambitious rethinks of AI architecture since Transformers themselves.

    If you want a window into what comes after LLMs, this interview is essential.

    Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai


About The Neuron: AI Explained

The Neuron covers the latest AI developments, trends and research, hosted by Grant Harvey and Corey Noles. Digestible, informative and authoritative takes on AI that get you up to speed and help you become an authority in your own circles. Available every Tuesday on all podcasting platforms and YouTube. Subscribe to our newsletter: https://www.theneurondaily.com/subscribe


