The AI Fundamentalists

Dr. Andrew Clark & Sid Mangalik
Latest episode

45 episodes

  • Metaphysics and modern AI: What is causality?

    27-01-2026 | 36 min.
    In this episode of our series on metaphysics and modern AI, we break causality down to first principles and explain how to tell real causal mechanisms from convincing correlations. From gold-standard randomized controlled trials (RCTs) to natural experiments and counterfactuals, we map the tools that build trustworthy models and safer AI.
    Defining causes, effects, and common causal structures
    Gestalt theory: Why correlation misleads and how pattern-seeking tricks us
    Statistical association vs. causal explanation (see the sketch after this list)
    RCTs and why randomization matters
    Natural experiments as ethical, scalable alternatives
    Judea Pearl’s do-calculus, counterfactuals, and first-principles models
    Limits of causality, sample size, and inference
    Building resilient AI with causal grounding and governance
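
    The distinction between association and intervention is easy to demonstrate in simulation. Below is a minimal Python sketch (our illustration, not material from the episode): a hidden confounder Z drives both a "treatment" X and an outcome Y, so X and Y correlate strongly even though X has no effect on Y. Randomly assigning X, as an RCT or Pearl's do-operator would, makes the association vanish.

    import numpy as np

    # Toy structural model: Z -> X and Z -> Y, but no arrow X -> Y.
    rng = np.random.default_rng(0)
    n = 100_000
    z = rng.normal(size=n)
    x_obs = z + rng.normal(scale=0.5, size=n)      # X is driven by Z
    y = 2 * z + rng.normal(scale=0.5, size=n)      # Y is driven by Z
    print("observational corr(X, Y):", round(np.corrcoef(x_obs, y)[0, 1], 2))

    # Intervention do(X): assign X at random, severing the Z -> X arrow.
    x_rct = rng.normal(size=n)
    print("randomized corr(X, Y): ", round(np.corrcoef(x_rct, y)[0, 1], 2))

    The first correlation comes out near 0.87; the second is essentially zero, even though nothing about Y's mechanism changed.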

    This is the fourth episode in our metaphysics series. Each topic in the series leads to the fundamental question, "Should AI try to think?"
    Check out previous episodes:
    Series Intro
    What is reality?
    What is space and time?
    If conversations like this sharpen your curiosity and help you think more clearly about complex systems, then step away from your keyboard and enjoy this journey with us.

    What did you think? Let us know.
    Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
    LinkedIn - Episode summaries, shares of cited articles, and more.
    YouTube - Was it something that we said? Good. Share your favorite quotes.
    Visit our page - see past episodes and submit your feedback! Your input continues to inspire future episodes.
  • Why validity beats scale when building multi‑step AI systems

    06-01-2026 | 40 min.
    In this episode, Dr. Sebastian (Seb) Benthall joins us to discuss research from his and Andrew's paper, "Validity Is What You Need," on agentic AI that actually works in the real world.
    Our discussion connects systems engineering, mechanism design, and requirements engineering to multi‑step AI that delivers measurable enterprise outcomes.
    Defining agentic AI beyond LLM hype
    Limits of scale and the need for multi‑step control
    Tool use, compounding errors, and guardrails (see the sketch after this list)
    Systems engineering patterns for AI reliability
    Principal–agent framing for governance
    Mechanism design for multi‑stakeholder alignment
    Requirements engineering as the crux of validity
    Hybrid stacks: LLM interface, deterministic solvers
    Regression testing through model swaps and drift
    Moving from universal copilots to fit‑for‑purpose agents
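
    The compounding-errors point in the list above is, at bottom, multiplication. A back-of-the-envelope sketch in Python (our numbers, not the paper's): if each step of a multi-step agent succeeds independently with probability p, end-to-end reliability decays exponentially with the number of chained steps, which is one reason guardrails and deterministic components matter.

    # End-to-end success of a k-step pipeline with per-step success p,
    # under the simplifying assumption that steps fail independently.
    for p in (0.99, 0.95, 0.90):
        for k in (1, 5, 10, 20):
            print(f"p={p:.2f}, steps={k:2d} -> end-to-end success {p ** k:6.1%}")

    At 95% per step, a 20-step chain completes correctly only about 36% of the time.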
    You can also catch more of Seb's research on our podcast. Tune in to Contextual integrity and differential privacy: Theory versus application.

  • 2025 AI review: Why LLMs stalled and the outlook for 2026

    22-12-2025 | 42 min.
    Here it is! We review the year when scaling large AI models hit a ceiling, Google reclaimed momentum with efficient vertical integration, and the market shifted from hype to viability.
    Join us as we talk about why human-in-the-loop is failing, why having generative AI agents validate other agents compounds errors, and how small, expert-curated datasets quietly beat the big models.

    • Google’s resurgence with Gemini 3.0 and TPU-driven efficiency
    • Monetization pressures and ads in co-pilot assistants
    • Diminishing returns from LLM scaling
    • Human-in-the-loop pitfalls and incentives
    • Agents vs validation and compounding error
    • Small, high-quality data outperforming synthetic
    • Expert systems, causality, and interpretability
    • Research trends return toward statistical rigor
    • 2026 outlook for ROI, governance, and trust

    We remain focused on the responsible use of AI. And while the market continues to adjust expectations for return on investment from AI, we're excited to see companies exploring "return on purpose" as a new lens on transformative AI systems for their business.

    What are you excited about for AI in 2026? 

  • Big data, small data, and AI oversight with David Sandberg

    09-12-2025 | 49 min.
    In this episode, we look at the actuarial principles that make models safer: parallel modeling, small data with provenance, and real-time human supervision. To help us, long-time insurtech and startup advisor David Sandberg, FSA, MAAA, CERA, joins us to share more about his actuarial expertise in data management and AI.

    We also challenge the hype around AI by reframing it as a prediction machine and putting human judgment at the beginning, middle, and end. By the end, you might think about “human-in-the-loop” in a whole new way.

    • Actuarial valuation debates and why parallel models win
    • AI’s real value: enhancing and accelerating the growth of human capital
    • Transparency, accountability, and enforceable standards
    • Prediction versus decision and learning from actual-to-expected analysis (see the sketch after this list)
    • Small data as interpretable, traceable fuel for insight
    • Drift, regime shifts, and limits of regression and LLMs
    • Mapping decisions, setting risk appetite, and enterprise risk management (ERM) for AI
    • Where humans belong: the beginning, middle, and end of the system
    • Agentic AI complexity versus validated end-to-end systems
    • Training judgment with tools that force critique and citation
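
    For readers new to the actuarial toolkit, here is a minimal Python sketch of the actual-to-expected (A/E) loop referenced above (toy numbers, not David's data): compare what the model expected against what actually happened, segment by segment, and investigate ratios that drift far from 1.0.

    # Hypothetical expected vs. actual outcomes (e.g., claim counts) by segment.
    expected = {"segment_a": 120.0, "segment_b": 80.0, "segment_c": 45.0}
    actual = {"segment_a": 118.0, "segment_b": 104.0, "segment_c": 44.0}

    for seg, exp in expected.items():
        ratio = actual[seg] / exp
        flag = "  <-- investigate" if abs(ratio - 1.0) > 0.10 else ""
        print(f"{seg}: A/E = {ratio:.2f}{flag}")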

    Cultural references:
    Foundation, AppleTV
    The Feeling of Power, Isaac Asimov
    Player Piano, Kurt Vonnegut
    For more information, see Actuarial and data science: Bridging the gap.

  • Metaphysics and modern AI: What is space and time?

    11-11-2025 | 38 min.
    We explore how space and time form a single fabric, testing our daily beliefs through questions about free-fall, black holes, speed, and momentum to reveal what models get right and where they break. 
    To help us, we’re excited to have our friend David Theriault, a science and sci-fi aficionado, and our resident astrophysicist, Rachel Losacco, to talk about practical exploration of space and time. They'll even unpack a few concerns they have about how space and time were depicted in the movie Interstellar (2014).
    Highlights:
    • Introduction: Why fundamentals beat shortcuts in science and AI
    • Time as experience versus physical parameter
    • Plato’s ideals versus Aristotle’s change as framing tools
    • Free-fall, G-forces, and what we actually feel
    • Gravity wells, curvature, and moving through space-time
    • Black holes, tidal forces, and spaghettification
    • Momentum and speed: Laser probe, photon momentum, and braking limits
    • Doppler shifts, time dilation, and length contraction (see the sketch after this list)
    • Why light’s speed stays constant across frames
    • Modeling causality and preparing for the next paradigm
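
    For the time dilation and length contraction items above, one formula does most of the work. A quick numerical sketch (our illustration, not the episode's): the Lorentz factor gamma = 1 / sqrt(1 - (v/c)^2) tells you how much slower a moving clock ticks relative to a stationary observer, and by the same factor how much a moving object contracts along its direction of travel.

    import math

    for frac in (0.1, 0.5, 0.9, 0.99):  # speed as a fraction of c
        gamma = 1.0 / math.sqrt(1.0 - frac ** 2)
        print(f"v = {frac:.2f}c -> moving clocks run {gamma:.2f}x slower")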

    This episode about space and time is the third in our series about metaphysics and modern AI. Each topic in the series leads to the fundamental question, "Should AI try to think?"
    Step away from your keyboard and enjoy this journey with us.

    Previous episodes:
    Introduction: Metaphysics and modern AI
    What is reality?



About The AI Fundamentalists

A podcast about the fundamentals of safe and resilient modeling systems behind the AI that impacts our lives and our businesses.
Podcast website
