
Machine Learning Street Talk (MLST)

Latest episodes

242 episodes

  • Machine Learning Street Talk (MLST)

    Your Brain is Running a Simulation Right Now [Max Bennett]

    30-12-2025 | 3 hr 17 min

    Tim sits down with Max Bennett to explore how our brains evolved over 600 million years—and what that means for understanding both human intelligence and AI. Max isn't a neuroscientist by training. He's a tech entrepreneur who got curious, started reading, and ended up weaving together three fields that rarely talk to each other: comparative psychology (what different animals can actually do), evolutionary neuroscience (how brains changed over time), and AI (what actually works in practice).

    *Your Brain Is a Guessing Machine*
    You don't actually "see" the world. Your brain builds a simulation of what it *thinks* is out there and just uses your eyes to check if it's right. That's why optical illusions work—your brain is filling in a triangle that isn't there, or can't decide if it's looking at a duck or a rabbit. (A toy numerical sketch of this perception-as-inference loop follows the references below.)

    *Rats Have Regrets*
    *Chimps Are Machiavellian*
    *Language Is the Human Superpower*
    *Does ChatGPT Think?*
    (truncated description, more on Rescript)

    Understanding how the brain evolved isn't just about the past. It gives us clues about:
    - What's actually different between human intelligence and AI
    - Why we're so easily fooled by status games and tribal thinking
    - What features we might want to build into—or leave out of—future AI systems

    Get Max's book:
    https://www.amazon.com/Brief-History-Intelligence-Humans-Breakthroughs/dp/0063286343

    Rescript:
    https://app.rescript.info/public/share/R234b7AXyDXZusqQ_43KMGsUSvJ2TpSz2I3emnI6j9A

    ---
    TIMESTAMPS:
    00:00:00 Introduction: Outsider's Advantage & Neocortex Theories
    00:11:34 Perception as Inference: The Filling-In Machine
    00:19:11 Understanding, Recognition & Generative Models
    00:36:39 How Mice Plan: Vicarious Trial & Error
    00:46:15 Evolution of Self: The Layer 4 Mystery
    00:58:31 Ancient Minds & The Social Brain: Machiavellian Apes
    01:19:36 AI Alignment, Instrumental Convergence & Status Games
    01:33:07 Metacognition & The IQ Paradox
    01:48:40 Does GPT Have Theory of Mind?
    02:00:40 Memes, Language Singularity & Brain Size Myths
    02:16:44 Communication, Language & The Cyborg Future
    02:44:25 Shared Fictions, World Models & The Reality Gap

    ---
    REFERENCES:

    Person:
    [00:00:05] Karl Friston (UCL)
    https://www.youtube.com/watch?v=PNYWi996Beg
    [00:00:06] Jeff Hawkins
    https://www.youtube.com/watch?v=6VQILbDqaI4
    [00:12:19] Hermann von Helmholtz
    https://plato.stanford.edu/entries/hermann-helmholtz/
    [00:38:34] David Redish (U. Minnesota)
    https://redishlab.umn.edu/
    [01:10:19] Robin Dunbar
    https://www.psy.ox.ac.uk/people/robin-dunbar
    [01:15:04] Emil Menzel
    https://www.sciencedirect.com/bookseries/behavior-of-nonhuman-primates/vol/5/suppl/C
    [01:19:49] Nick Bostrom
    https://nickbostrom.com/superintelligentwill.pdf

    Concept/Framework:
    [00:05:04] Active Inference
    https://www.youtube.com/watch?v=KkR24ieh5Ow

    Paper:
    [00:35:59] Predictions not commands [Rick A Adams]
    https://pubmed.ncbi.nlm.nih.gov/23129312/

    Book:
    [01:28:27] The Status Game
    https://www.amazon.com/Status-Game-Human-Life-Play/dp/000835
    [01:25:42] The Elephant in the Brain
    https://www.amazon.com/Elephant-Brain-Hidden-Motives-Everyday/dp/0190495995
    [02:00:40] The Selfish Gene
    https://amazon.com/dp/0198788606
    [03:09:37] The Three-Body Problem
    https://amazon.com/dp/0765377063
    [02:14:25] The Language Game
    https://www.amazon.com/Language-Game-Improvisation-Created-C
    [02:54:40] The Evolution of Language
    https://www.amazon.com/Evolution-Language-Approaches/dp/0521
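    The "guessing machine" idea above is the perception-as-inference picture associated with Helmholtz and with Friston's active inference (both referenced above). As a loose, illustrative sketch only (not from the episode), here is a toy Python loop in which an internal estimate never receives the true value directly and is only corrected by prediction errors on noisy observations; all names and numbers are made up for the example.

    ```python
    import random

    # Toy "perception as inference": the agent never sees the hidden value directly.
    # It keeps an internal guess and nudges it by the prediction error on each
    # noisy observation. All quantities here are arbitrary illustration values.
    true_brightness = 5.0      # hidden state of the world
    estimate = 0.0             # the brain's current "simulation"
    learning_rate = 0.2        # how strongly each prediction error corrects the guess

    for step in range(50):
        observation = true_brightness + random.gauss(0, 1.0)  # noisy sensory sample
        prediction_error = observation - estimate             # the "surprise" signal
        estimate += learning_rate * prediction_error          # update the internal model

    print(f"final estimate: {estimate:.2f} (true value: {true_brightness})")
    ```

    The point of the sketch is only that what gets reported at the end is the internal estimate, not the raw input; the noisy samples merely correct it, which is one way to read "your eyes just check whether the simulation is right".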

  • Machine Learning Street Talk (MLST)

    The 3 Laws of Knowledge [César Hidalgo]

    27-12-2025 | 1 hr 37 min

    César Hidalgo has spent years trying to answer a deceptively simple question: What is knowledge, and why is it so hard to move around? We all have this intuition that knowledge is just... information. Write it down in a book, upload it to GitHub, train an AI on it—done. But César argues that's completely wrong. Knowledge isn't a thing you can copy and paste. It's more like a living organism that needs the right environment, the right people, and constant exercise to survive.

    Guest: César Hidalgo, Director of the Center for Collective Learning

    1. Knowledge Follows Laws (Like Physics) (a toy learning-curve sketch follows the references below)
    2. You Can't Download Expertise
    3. Why Big Companies Fail to Adapt
    4. The "Infinite Alphabet" of Economies

    If you think AI can just "copy" human knowledge, or that development is just about throwing money at poor countries, or that writing things down preserves them forever—this conversation will change your mind. Knowledge is fragile, specific, and collective. It decays fast if you don't use it.

    The Infinite Alphabet [César A. Hidalgo]
    https://www.penguin.co.uk/books/458054/the-infinite-alphabet-by-hidalgo-cesar-a/9780241655672
    https://x.com/cesifoti

    Rescript link:
    https://app.rescript.info/public/share/eaBHbEo9xamwbwpxzcVVm4NQjMh7lsOQKeWwNxmw0JQ

    ---
    TIMESTAMPS:
    00:00:00 The Three Laws of Knowledge
    00:02:28 Rival vs. Non-Rival: The Economics of Ideas
    00:05:43 Why You Can't Just 'Download' Knowledge
    00:08:11 The Detective Novel Analogy
    00:11:54 Collective Learning & Organizational Networks
    00:16:27 Architectural Innovation: Amazon vs. Barnes & Noble
    00:19:15 The First Law: Learning Curves
    00:23:05 The Samuel Slater Story: Treason & Memory
    00:28:31 Physics of Knowledge: Joule's Cannon
    00:32:33 Extensive vs. Intensive Properties
    00:35:45 Knowledge Decay: Ise Temple & Polaroid
    00:41:20 Absorptive Capacity: Sony & Donetsk
    00:47:08 Disruptive Innovation & S-Curves
    00:51:23 Team Size & The Cost of Innovation
    00:57:13 Geography of Knowledge: Vespa's Origin
    01:04:34 Migration, Diversity & 'Planet China'
    01:12:02 Institutions vs. Knowledge: The China Story
    01:21:27 Economic Complexity & The Infinite Alphabet
    01:32:27 Do LLMs Have Knowledge?

    ---
    REFERENCES:

    Book:
    [00:47:45] The Innovator's Dilemma (Christensen)
    https://www.amazon.com/Innovators-Dilemma-Revolutionary-Change-Business/dp/0062060244
    [00:55:15] Why Greatness Cannot Be Planned
    https://amazon.com/dp/3319155237
    [01:35:00] Why Information Grows
    https://amazon.com/dp/0465048994

    Paper:
    [00:03:15] Endogenous Technological Change (Romer, 1990)
    https://web.stanford.edu/~klenow/Romer_1990.pdf
    [00:03:30] A Model of Growth Through Creative Destruction (Aghion & Howitt, 1992)
    https://dash.harvard.edu/server/api/core/bitstreams/7312037d-2b2d-6bd4-e053-0100007fdf3b/content
    [00:14:55] Organizational Learning: From Experience to Knowledge (Argote & Miron-Spektor, 2011)
    https://www.researchgate.net/publication/228754233_Organizational_Learning_From_Experience_to_Knowledge
    [00:17:05] Architectural Innovation (Henderson & Clark, 1990)
    https://www.researchgate.net/publication/200465578_Architectural_Innovation_The_Reconfiguration_of_Existing_Product_Technologies_and_the_Failure_of_Established_Firms
    [00:19:45] The Learning Curve Equation (Thurstone, 1916)
    https://dn790007.ca.archive.org/0/items/learningcurveequ00thurrich/learningcurveequ00thurrich.pdf
    [00:21:30] Factors Affecting the Cost of Airplanes (Wright, 1936)
    https://pdodds.w3.uvm.edu/research/papers/others/1936/wright1936a.pdf
    [00:52:45] Are Ideas Getting Harder to Find? (Bloom et al.)
    https://web.stanford.edu/~chadj/IdeaPF.pdf
    [01:33:00] LLMs / Emergence
    https://arxiv.org/abs/2506.11135

    Person:
    [00:25:30] Samuel Slater
    https://en.wikipedia.org/wiki/Samuel_Slater
    [00:42:05] Masaru Ibuka (Sony)
    https://www.sony.com/en/SonyInfo/CorporateInfo/History/SonyHistory/1-02.html
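    For the learning curves discussed around [00:19:15] (and in the Wright, 1936 paper above), the usual statement is a power law: unit cost falls by a fixed fraction every time cumulative production doubles. A minimal sketch, with an invented function name and illustrative numbers only:

    ```python
    import math

    def wright_unit_cost(first_unit_cost: float, n: int, progress_ratio: float) -> float:
        """Wright's-law cost of the n-th unit produced.

        progress_ratio is the fraction of cost remaining after each doubling of
        cumulative output, e.g. 0.8 means a 20% cost reduction per doubling.
        """
        b = math.log(progress_ratio, 2)   # negative exponent for ratios below 1
        return first_unit_cost * n ** b

    # Illustrative numbers: $100 first unit, 20% learning rate per doubling.
    for n in (1, 2, 4, 8, 16):
        print(n, round(wright_unit_cost(100.0, n, 0.8), 2))
    # prints 100.0, 80.0, 64.0, 51.2, 40.96
    ```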

  • Machine Learning Street Talk (MLST)

    "I Desperately Want To Live In The Matrix" - Dr. Mike Israetel

    24-12-2025 | 2 hr 55 min

    This is a lively, no-holds-barred debate about whether AI can truly be intelligent, conscious, or understand anything at all — and what happens when (or if) machines become smarter than us.

    Dr. Mike Israetel is a sports scientist, entrepreneur, and co-founder of RP Strength (a fitness company). He describes himself as a "dilettante" in AI but brings a fascinating outsider's perspective. He is joined by Jared Feather (IFBB Pro bodybuilder and exercise physiologist).

    The Big Questions:
    1. When is superintelligence coming?
    2. Does AI actually understand anything?
    3. The Simulation Debate (The Spiciest Part)
    4. Will AI kill us all? (The Doomer Debate)
    5. What happens to human jobs and purpose?
    6. Do we need suffering?

    Mike's channel: https://www.youtube.com/channel/UCfQgsKhHjSyRLOp9mnffqVg

    RESCRIPT INTERACTIVE PLAYER:
    https://app.rescript.info/public/share/GVMUXHCqctPkXH8WcYtufFG7FQcdJew_RL_MLgMKU1U

    ---
    TIMESTAMPS:
    00:00:00 Introduction & Workout Demo
    00:04:15 ASI Timelines & Definitions
    00:10:24 The Embodiment Debate
    00:18:28 Neutrinos & Abstract Knowledge
    00:25:56 Can AI Learn From YouTube?
    00:31:25 Diversity of Intelligence
    00:36:00 AI Slop & Understanding
    00:45:18 The Simulation Argument: Fire & Water
    00:58:36 Consciousness & Zombies
    01:04:30 Do Reasoning Models Actually Reason?
    01:12:00 The Live Learning Problem
    01:19:15 Superintelligence & Benevolence
    01:28:59 What is True Agency?
    01:37:20 Game Theory & The "Kill All Humans" Fallacy
    01:48:05 Regulation & The China Factor
    01:55:52 Mind Uploading & The Future of Love
    02:04:41 Economics of ASI: Will We Be Useless?
    02:13:35 The Matrix & The Value of Suffering
    02:17:30 Transhumanism & Inequality
    02:21:28 Debrief: AI Medical Advice & Final Thoughts

    ---
    REFERENCES:

    Paper:
    [00:10:45] Alchemy and Artificial Intelligence (Dreyfus)
    https://www.rand.org/content/dam/rand/pubs/papers/2006/P3244.pdf
    [00:10:55] The Chinese Room Argument (John Searle)
    https://home.csulb.edu/~cwallis/382/readings/482/searle.minds.brains.programs.bbs.1980.pdf
    [00:11:05] The Symbol Grounding Problem (Stevan Harnad)
    https://arxiv.org/html/cs/9906002
    [00:23:00] Attention Is All You Need
    https://arxiv.org/abs/1706.03762
    [00:45:00] GPT-4 Technical Report
    https://arxiv.org/abs/2303.08774
    [01:45:00] Anthropic Agentic Misalignment Paper
    https://www.anthropic.com/research/agentic-misalignment
    [02:17:45] Retatrutide
    https://pubmed.ncbi.nlm.nih.gov/37366315/

    Organization:
    [00:15:50] CERN
    https://home.cern/
    [01:05:00] METR Long Horizon Evaluations
    https://evaluations.metr.org/

    MLST Episode:
    [00:23:10] MLST: Llion Jones - Inventors' Remorse
    https://www.youtube.com/watch?v=DtePicx_kFY
    [00:50:30] MLST: Blaise Agüera y Arcas Interview
    https://www.youtube.com/watch?v=rMSEqJ_4EBk
    [01:10:00] MLST: David Krakauer
    https://www.youtube.com/watch?v=dY46YsGWMIc

    Event:
    [00:23:40] ARC Prize/Challenge
    https://arcprize.org/

    Book:
    [00:24:45] The Brain Abstracted
    https://www.amazon.com/Brain-Abstracted-Simplification-Philosophy-Neuroscience/dp/0262548046
    [00:47:55] Machines Who Think (Pamela McCorduck)
    https://www.amazon.com/Machines-Who-Think-Artificial-Intelligence/dp/1568812051
    [01:23:15] The Singularity Is Nearer (Ray Kurzweil)
    https://www.amazon.com/Singularity-Nearer-Ray-Kurzweil-ebook/dp/B08Y6FYJVY
    [01:27:35] A Fire Upon The Deep (Vernor Vinge)
    https://www.amazon.com/Fire-Upon-Deep-S-F-MASTERWORKS-ebook/dp/B00AVUMIZE/
    [02:04:50] Deep Utopia (Nick Bostrom)
    https://www.amazon.com/Deep-Utopia-Meaning-Solved-World/dp/1646871642
    [02:05:00] Technofeudalism (Yanis Varoufakis)
    https://www.amazon.com/Technofeudalism-Killed-Capitalism-Yanis-Varoufakis/dp/1685891241

    Visual Context Needed:
    [00:29:40] AT-AT Walker (Star Wars)
    https://starwars.fandom.com/wiki/All_Terrain_Armored_Transport

    Person:
    [00:33:15] Andrej Karpathy
    https://karpathy.ai/

    Video:
    [01:40:00] Mike Israetel vs Liron Shapira AI Doom Debate
    https://www.youtube.com/watch?v=RaDWSPMdM4o

    Company:
    [02:26:30] Examine.com
    https://examine.com/

  • Machine Learning Street Talk (MLST)

    Making deep learning perform real algorithms with Category Theory (Andrew Dudzik, Petar Veličković, Taco Cohen, Bruno Gavranović, Paul Lessard)

    22-12-2025 | 43 min

    We often think of Large Language Models (LLMs) as all-knowing, but as the team reveals, they still struggle with the logic of a second-grader. Why can't ChatGPT reliably add large numbers? Why does it "hallucinate" the laws of physics? The answer lies in the architecture. This episode explores how *Category Theory*—an ultra-abstract branch of mathematics—could provide the "Periodic Table" for neural networks, turning the "alchemy" of modern AI into a rigorous science.

    In this deep-dive exploration, *Andrew Dudzik*, *Petar Veličković*, *Taco Cohen*, *Bruno Gavranović*, and *Paul Lessard* join host *Tim Scarfe* to discuss the fundamental limitations of today's AI and the radical mathematical framework that might fix them.

    TRANSCRIPT:
    https://app.rescript.info/public/share/LMreunA-BUpgP-2AkuEvxA7BAFuA-VJNAp2Ut4MkMWk

    ---
    Key Insights in This Episode:
    * *The "Addition" Problem:* *Andrew Dudzik* explains why LLMs don't actually "know" math—they just recognize patterns. When you change a single digit in a long string of numbers, the pattern breaks because the model lacks the internal "machinery" to perform a simple carry operation. (A toy version of that carry machinery is sketched after the references below.)
    * *Beyond Alchemy:* Deep learning is currently in its "alchemy" phase—we have powerful results, but we lack a unifying theory. Category Theory is proposed as the framework to move AI from trial-and-error to principled engineering. [00:13:49]
    * *Algebra with Colors:* To make Category Theory accessible, the guests use brilliant analogies—like thinking of matrices as *magnets with colors* that only snap together when the types match. This "partial compositionality" is the secret to building more complex internal reasoning. [00:09:17]
    * *Synthetic vs. Analytic Math:* *Paul Lessard* breaks down the philosophical shift needed in AI research: moving from "Analytic" math (what things are made of) to "Synthetic" math (how things behave and compose). [00:23:41]

    ---
    Why This Matters for AGI:
    If we want AI to solve the world's hardest scientific problems, it can't just be a "stochastic parrot." It needs to internalize the rules of logic and computation. By imbuing neural networks with categorical priors, researchers are attempting to build a future where AI doesn't just predict the next word—it understands the underlying structure of the universe.

    ---
    TIMESTAMPS:
    00:00:00 The Failure of LLM Addition & Physics
    00:01:26 Tool Use vs Intrinsic Model Quality
    00:03:07 Efficiency Gains via Internalization
    00:04:28 Geometric Deep Learning & Equivariance
    00:07:05 Limitations of Group Theory
    00:09:17 Category Theory: Algebra with Colors
    00:11:25 The Systematic Guide of Lego-like Math
    00:13:49 The Alchemy Analogy & Unifying Theory
    00:15:33 Information Destruction & Reasoning
    00:18:00 Pathfinding & Monoids in Computation
    00:20:15 System 2 Reasoning & Error Awareness
    00:23:31 Analytic vs Synthetic Mathematics
    00:25:52 Morphisms & Weight Tying Basics
    00:26:48 2-Categories & Weight Sharing Theory
    00:28:55 Higher Categories & Emergence
    00:31:41 Compositionality & Recursive Folds
    00:34:05 Syntax vs Semantics in Network Design
    00:36:14 Homomorphisms & Multi-Sorted Syntax
    00:39:30 The Carrying Problem & Hopf Fibrations

    Guests:
    Petar Veličković (GDM)
    https://petar-v.com/
    Paul Lessard
    https://www.linkedin.com/in/paul-roy-lessard/
    Bruno Gavranović
    https://www.brunogavranovic.com/
    Andrew Dudzik (GDM)
    https://www.linkedin.com/in/andrew-dudzik-222789142/

    ---
    REFERENCES:

    Model:
    [00:01:05] Veo
    https://deepmind.google/models/veo/
    [00:01:10] Genie
    https://deepmind.google/blog/genie-3-a-new-frontier-for-world-models/

    Paper:
    [00:04:30] Geometric Deep Learning Blueprint
    https://arxiv.org/abs/2104.13478
    https://www.youtube.com/watch?v=bIZB1hIJ4u8
    [00:16:45] AlphaGeometry
    https://arxiv.org/abs/2401.08312
    [00:16:55] AlphaCode
    https://arxiv.org/abs/2203.07814
    [00:17:05] FunSearch
    https://www.nature.com/articles/s41586-023-06924-6
    [00:37:00] Attention Is All You Need
    https://arxiv.org/abs/1706.03762
    [00:43:00] Categorical Deep Learning
    https://arxiv.org/abs/2402.15332
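    To make the "addition" and "carrying" discussion above concrete ([00:00:00], [00:39:30]), here is a hand-written digit-by-digit adder in Python. It is only an illustration of the explicit carry state and the fold-like structure (scanning a sequence while threading a small piece of state) that the guests argue pattern-matching models lack internally; the function name is invented and the code is not from the Categorical Deep Learning paper.

    ```python
    def add_decimal_strings(a: str, b: str) -> str:
        """Add two non-negative integers given as decimal strings."""
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)
        digits, carry = [], 0
        # Fold from the least-significant column, threading the carry state along.
        for da, db in zip(reversed(a), reversed(b)):
            total = int(da) + int(db) + carry
            digits.append(str(total % 10))   # digit written in this column
            carry = total // 10              # state handed to the next column
        if carry:
            digits.append(str(carry))
        return "".join(reversed(digits))

    print(add_decimal_strings("999999999999999999", "1"))  # 1000000000000000000
    ```

    Because the carry is computed rather than memorised, changing any single input digit changes the output in a fully determined way; that is the kind of internal machinery the episode says today's transformers approximate statistically rather than implement.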

  • Machine Learning Street Talk (MLST)

    Are AI Benchmarks Telling The Full Story? [SPONSORED] (Andrew Gordon and Nora Petrova - Prolific)

    20-12-2025 | 16 min

    Is a car that wins a Formula 1 race the best choice for your morning commute? Probably not. In this sponsored deep dive with Prolific, we explore why the same logic applies to Artificial Intelligence. While models are currently shattering records on technical exams, they often fail the most important test of all: **the human experience.**

    Why High Benchmark Scores Don't Mean Better AI
    Joining us are **Andrew Gordon** (Staff Researcher in Behavioral Science) and **Nora Petrova** (AI Researcher) from **Prolific**. They reveal the hidden flaws in how we currently rank AI and introduce a more rigorous, "humane" way to measure whether these models are actually helpful, safe, and relatable for real people.

    ---
    Key Insights in This Episode:
    * *The F1 Car Analogy:* Andrew explains why a model that excels at "Humanity's Last Exam" might be a nightmare for daily use. Technical benchmarks often ignore the nuances of human communication and adaptability.
    * *The "Wild West" of AI Safety:* As users turn to AI for sensitive topics like mental health, Nora highlights the alarming lack of oversight and the "thin veneer" of safety training—citing recent controversial incidents like Grok-3's "Mecha Hitler."
    * *Fixing the "Leaderboard Illusion":* The team critiques current popular rankings like Chatbot Arena, discussing how anonymous, unstratified voting can lead to biased results and how companies can "game" the system.
    * *The Xbox Secret to AI Ranking:* Discover how Prolific uses *TrueSkill*—the same algorithm Microsoft developed for Xbox Live matchmaking—to create a fairer, more statistically sound leaderboard for LLMs. (A toy illustration of TrueSkill-style ranking follows the references below.)
    * *The Personality Gap:* Early data from the **HUMAINE Leaderboard** suggests that while AI is getting smarter, it is actually performing *worse* on metrics like personality, culture, and "sycophancy" (the tendency for models to become annoying "people-pleasers").

    ---
    About the HUMAINE Leaderboard
    Moving beyond simple "A vs. B" testing, the researchers discuss their new framework that samples participants based on *census data* (Age, Ethnicity, Political Alignment). By using a representative sample of the general public rather than just tech enthusiasts, they are building a standard that reflects the values of the real world.

    *Are we building models for benchmarks, or are we building them for humans? It's time to change the scoreboard.*

    Rescript link:
    https://app.rescript.info/public/share/IDqwjY9Q43S22qSgL5EkWGFymJwZ3SVxvrfpgHZLXQc

    ---
    TIMESTAMPS:
    00:00:00 Introduction & The Benchmarking Problem
    00:01:58 The Fractured State of AI Evaluation
    00:03:54 AI Safety & Interpretability
    00:05:45 Bias in Chatbot Arena
    00:06:45 Prolific's Three Pillars Approach
    00:09:01 TrueSkill Ranking & Efficient Sampling
    00:12:04 Census-Based Representative Sampling
    00:13:00 Key Findings: Culture, Personality & Sycophancy

    ---
    REFERENCES:

    Paper:
    [00:00:15] MMLU
    https://arxiv.org/abs/2009.03300
    [00:05:10] Constitutional AI
    https://arxiv.org/abs/2212.08073
    [00:06:45] The Leaderboard Illusion
    https://arxiv.org/abs/2504.20879
    [00:09:41] HUMAINE Framework Paper
    https://huggingface.co/blog/ProlificAI/humaine-framework

    Company:
    [00:00:30] Prolific
    https://www.prolific.com
    [00:01:45] Chatbot Arena
    https://lmarena.ai/

    Person:
    [00:00:35] Andrew Gordon
    https://www.linkedin.com/in/andrew-gordon-03879919a/
    [00:00:45] Nora Petrova
    https://www.linkedin.com/in/nora-petrova/

    Algorithm:
    [00:09:01] Microsoft TrueSkill
    https://www.microsoft.com/en-us/research/project/trueskill-ranking-system/

    Leaderboard:
    [00:09:21] Prolific HUMAINE Leaderboard
    https://www.prolific.com/humaine
    [00:09:31] HUMAINE HuggingFace Space
    https://huggingface.co/spaces/ProlificAI/humaine-leaderboard
    [00:10:21] Prolific AI Leaderboard Portal
    https://www.prolific.com/leaderboard

    Dataset:
    [00:09:51] Prolific Social Reasoning RLHF Dataset
    https://huggingface.co/datasets/ProlificAI/social-reasoning-rlhf

    Organization:
    [00:10:31] MLCommons
    https://mlcommons.org/
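    For the TrueSkill ranking at [00:09:01]: the idea is to treat each human A-vs-B preference as a "match" and keep a Gaussian skill belief (mu, sigma) per model, updating it after every comparison. A minimal sketch using the open-source Python `trueskill` package, with invented model names and vote data; this is illustrative only, not Prolific's actual pipeline:

    ```python
    # pip install trueskill
    import trueskill

    # One Rating (Gaussian skill belief) per model on the leaderboard.
    ratings = {name: trueskill.Rating() for name in ("model_a", "model_b", "model_c")}

    # Hypothetical pairwise preference votes from human raters: (winner, loser).
    votes = [("model_a", "model_b"), ("model_c", "model_b"), ("model_a", "model_c"),
             ("model_a", "model_b"), ("model_b", "model_c")]

    for winner, loser in votes:
        # rate_1vs1 returns the updated (winner, loser) ratings after one comparison.
        ratings[winner], ratings[loser] = trueskill.rate_1vs1(ratings[winner], ratings[loser])

    # Rank by a conservative score (mu - 3*sigma), a common TrueSkill convention.
    for name, r in sorted(ratings.items(), key=lambda kv: kv[1].mu - 3 * kv[1].sigma, reverse=True):
        print(f"{name}: mu={r.mu:.2f} sigma={r.sigma:.2f}")
    ```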


About Machine Learning Street Talk (MLST)

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from MIT Doctor of Philosophy Keith Duggar (https://www.linkedin.com/in/dr-keith-duggar/).
Podcast website
