Eye On A.I.

Craig S. Smith
Latest episode

340 episodes

  • Eye On A.I.

    #339 Eamonn Maguire: Your Child Has a Data Profile Before They're Born

    28-04-2026 | 45 min.
    What if your child already has a data profile, and they haven't even been born yet?
    In this episode of Eye on AI, Craig Smith sits down with Eamonn Maguire, Director of Engineering for AI and ML at Proton, to explore one of the most urgent and underappreciated questions in the age of AI: who owns your data, who is building a profile on you, and what can actually be done about it?
    Eamonn brings a rare combination of depth and range to this conversation. With a PhD from Oxford, a postdoc at CERN, and years at Facebook engineering ML systems to detect internal and external threats, he now leads Proton's AI efforts, including Lumo, their end-to-end encrypted alternative to ChatGPT. He makes a compelling case that the surveillance economy is not just a privacy problem but a behavioral one, where the systems profiling you are not only observing who you are but actively shaping who you become.
    We get into how just three data points are enough for advertisers to infer your age, political leanings, religion, and spending habits. We discuss why trusting mainstream AI platforms with sensitive data is a structural problem, not just a policy one, and why the AI labs with the best models got there by acquiring the most data, often with little regard for copyright law. Eamonn also breaks down the difference between truly open models and open washing, and explains how Proton builds AI that is genuinely private by design, with local indexing, encrypted memory, and user-controlled data sharing.
    Then there is Born Private, Proton's initiative to give children a private digital identity from birth. It sounds simple on the surface, but the conversation it opens up is anything but. Data collection on your child begins before they are born, the moment a parent emails a gynecologist or a fertility clinic. Eamonn argues that until we start thinking about privacy the way we think about other rights, from the very beginning, the surveillance machine will always have a head start.
    Subscribe for more conversations with the people building the future of AI and emerging technology.
     
    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on AI on X: https://x.com/EyeOn_AI
     
    Timestamps:
    (00:00) Introduction and Meet Eamonn Maguire
    (00:38) From Bioinformatics to CERN to Facebook: Eamonn's Career Arc
    (05:23) How Proton Started in the CERN Cafeteria
    (09:23) What Mainstream AI Platforms Actually Do With Your Data
    (13:00) Copyright, Training Data, and Why Big Labs Can't Be Trusted
    (15:10) Open Models vs Open Washing: What Truly Open AI Looks Like
    (24:22) How Lumo Works: Encrypted Memory and No Data Leakage
    (31:18) Born Private: Reserving a Private Email Address at Birth
    (33:00) How Data Profiling Starts Before Your Child Is Born
    (34:26) How Three Data Points Become a Complete Profile
    (39:07) Molly Russell and the Consequences of Algorithmic Profiling
    (53:55) The Full Proton Ecosystem: Mail, VPN, Drive, Lumo, and Workspace
  • Eye On A.I.

    #338 Amith Singhee: Can India Catch Up in AI? IBM's Amith Singhee on What It Will Take

    24-04-2026 | 46 min.
    What if the country that trains the world's engineers finally built the infrastructure to match its talent?
    In this episode of Eye on AI, Craig Smith sits down with Amith Singhee, Director of IBM Research India and CTO of IBM India and South Asia, to explore where India actually stands in the global AI race and what it will take to close the gap.
    Amith gives an honest, ground-level assessment of why India has been slow to compete. The talent has always been there. But until recently, the investment, the compute infrastructure, and the institutional intent hadn't come together in a sustained, coordinated way. That's changing, and Amith explains exactly what's different now.
    He walks through IBM Research India's 27-year presence in the country, the research it's doing on foundation models, hybrid cloud AI deployment, agentic systems, and quantum computing. He also explains why building AI from India doesn't just help India. Working with less data, less compute, and more linguistic diversity forces better engineering and makes IBM's models more generalizable for the entire world.
    We also get deep into the technical frontier. Why catastrophic forgetting is one of the key unsolved problems standing between current AI and anything more capable. How IBM is already shipping continual learning in practice through its COBOL modernization tools, helping enterprises decode decades of legacy code before the engineers who wrote it are gone. And why agentic AI, for all the hype, still has a mountain of unglamorous enterprise engineering left to climb before it becomes truly reliable.
    Plus, what Amith would tell an 18-year-old engineer in India today about what skills will actually matter in an AI-driven world.
    Subscribe for more conversations with the people shaping the future of AI and emerging technology.
     
    Stay Updated: 
    Craig Smith on X: https://x.com/craigss 
    Eye on A.I. on X: https://x.com/EyeOn_AI
     
    (00:00) Introduction and Amith Singhee's Background 
    (06:26) Why IBM Set Up Research in India 
    (11:45) Can India Compete in AI 
    (15:18) How IBM Collaborates With Indian Universities 
    (19:25) Why India Has Been Slow in AI 
    (24:50) IBM's Hybrid Cloud AI Research Focus 
    (27:34) How Data Scarcity in India Makes Better AI 
    (31:18) Fine-Tuning Models Without Losing General Knowledge 
    (35:03) Continual Learning and Catastrophic Forgetting 
    (38:25) COBOL and Legacy Code Modernization 
    (42:11) Agentic AI Hype vs Enterprise Reality 
    (48:09) What Young Engineers Should Study Today
  • Eye On A.I.

    #337 Debdas Sen: Why AI Without ROI Will Die (Again)

    23-04-2026 | 51 min.
    What does it actually take to prove that AI delivers real value in the industries that keep the world running?
    In this episode of Eye on AI, Craig Smith sits down with Debdas Sen, CEO of TCG Digital and Joint Managing Director of Lummus Digital, to explore what serious enterprise AI looks like when it is applied to some of the most complex, high-stakes problems on the planet. Problems like compressing years of catalyst research into weeks, predicting refinery failures before they happen, and accelerating drug development timelines that could determine how long a life-saving medicine takes to reach patients.
    Debdas has spent nearly 30 years in data and AI, living through every hype cycle from the data warehousing era of 1997 to today's agentic revolution. He makes a compelling case that the AI community has one defining job right now: prove the ROI, or risk another AI winter.
    We also get into what makes TCG Digital's platform mcube™ different. It is not a horizontal tool. It is a domain-first, agentic AI ecosystem built for the kinds of massive, multi-variable problems that horizontal platforms cannot touch. Debdas breaks down how mcube™ bridges legacy enterprise infrastructure with cutting-edge agentic systems, why hybrid modeling beats pure AI in energy and life sciences, and how the platform keeps private enterprise data protected while still drawing on the best of what public LLMs have to offer.
    Finally, Debdas shares where he sees the industry heading next, a future where agents from different providers can reason together in a neutral space, where inference and reasoning keep improving, and where the companies that go deepest into domain will pull furthest ahead.
    Subscribe for more conversations with the people building the future of AI and emerging technology.
     
    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on AI on X: https://x.com/EyeOn_AI
    TCG Digital Website: https://www.tcgdigital.com/
    TCG Digital on LinkedIn: https://www.linkedin.com/company/tcgdigital/ 
     
     
    (00:00) Introduction and Meet Debdas Sen
    (01:30) 30 Years in Data and AI: From Data Warehousing to Agentic Systems
    (03:02) What TCG Digital Actually Does
    (04:32) Inside mcube™: How the Platform Works
    (10:06) Domain vs Horizontal: Why Specificity Wins in Enterprise AI
    (18:29) Catalyst R&D: Collapsing 12 Months of Research Into One
    (30:38) Predicting Plant Failures Before They Happen
    (36:51) Solving the Trust and Hallucination Problem in Enterprise AI
    (44:51) The Six-Layer Architecture of mcube™
    (47:05) What Is Genuinely New About Agentic AI
    (49:22) What Young People Should Study to Work in Serious AI
    (53:14) Velocity to Value: Why ROI Must Be Tracked From Day One
  • Eye On A.I.

    #336 Professor Mausam: Why India Is Losing the AI Race and What It Will Take to Catch Up

    20-04-2026 | 1 hr.
    What if the country that produces the world's top AI talent finally figured out how to keep it?
    In this episode of Eye on AI, Craig Smith sits down with Professor Mausam, one of India's leading AI researchers, AAAI Fellow, and founding head of the Yardi School of Artificial Intelligence at IIT Delhi, to get an honest and unflinching diagnosis of why India has fallen so far behind the US and China in artificial intelligence and what it will actually take to close that gap.
    Mausam breaks down the structural story behind India's deficit. A pipeline of world-class students that gets exported abroad the moment it graduates. A professor shortage so severe that IIT Delhi's entire School of AI has hired only five new faculty members in five years. A government AI mission with the right instincts but not enough speed or boldness. And a brain drain made worse by the very thing India is proud of, its English fluency, which makes its talent the easiest in the world to absorb and the hardest to bring back.
    Mausam walks through the full picture. How China built its research dominance not through students but through aggressively repatriating senior researchers with real salaries, real lab resources, and real authority to build research cultures from scratch. Why the AlexNet moment in 2012 was actually an equalizer that gave China's fledgling ecosystem a surprise advantage over more established Western research groups. How India's JEE coaching culture and IIT bottleneck are symptoms of a scarcity of quality institutions rather than a broken exam. What the government's AI mission is getting right on compute, data, and sectoral focus, and where the critical gaps remain. And why Mausam believes that bringing one hundred top professors back to India would do more for the country's AI future than any single government program or funding initiative.
    We also get into the harder questions. Whether AI degrees belong at the undergraduate level or should sit on top of a computer science foundation. Why Mausam no longer holds an optimistic view on AI's impact on software jobs and why he thinks Geoff Hinton's point about plumbers has merit. And what it would actually take for a democracy of 1.4 billion people to stop training the world's AI leaders and start keeping them.
    Subscribe for more conversations with the researchers, builders, and policymakers shaping the future of artificial intelligence.
    Stay Updated: 
    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI
     
    (00:00) Introduction: India's AI Gap and Professor Mausam's Background
    (02:30) Building the Yardi School of AI at IIT Delhi
    (07:44) How Far China Has Pulled Ahead in AI Research
    (12:55) Why India Could Not Follow China's Playbook
    (29:18) The JEE System, Coaching Culture, and the IIT Bottleneck
    (30:37) AI Degrees, Job Market Realities, and the Future of Work
    (44:18) The Real Problem Is Professors, Not Students
    (48:07) Big Tech Labs in India: Helpful but Not at Scale
    (51:46) The Government AI Mission: Progress and Gaps
    (55:20) The Compute and Data Infrastructure Problem
    (59:54) Can India Close the Gap Before It Is Too Late
  • Eye On A.I.

    #335 Sriram Raghavan: Why IBM Is Betting Everything on Small AI Models

    19-04-2026 | 1 hr.
    In this episode of Eye on AI, Craig Smith sits down with Sriram Raghavan, Vice President of AI at IBM Research, to explore one of the most important debates in enterprise AI right now. Do you actually need a massive model to get world class results? IBM's answer is no, and Sriram breaks down exactly why.
    Sriram explains why IBM chose to train its Granite models directly using reinforcement learning rather than distilling from larger models like most of the industry. The reason goes beyond performance. It comes down to data lineage, safety alignment, and a belief that small, efficient models are the only sustainable path for enterprises running AI across hybrid cloud environments.
    We get into the full technical stack behind that bet. How data quality has replaced model size as the real competitive advantage. Why parameter count is becoming the wrong metric entirely. How IBM's inference time scaling techniques allow an 8 billion parameter model to match the performance of GPT-4o and Claude 3.5 on code and math benchmarks. And why IBM is pioneering a new concept called Generative Computing, which treats AI models not as prompt receivers but as programmable computing elements with runtimes, modular LoRA adapters, and proper programming abstractions.
    Sriram also shares where IBM Research is headed next, including breakthroughs in continuous learning, agent orchestration, and making unstructured enterprise data actually usable at scale.
     
    Subscribe for more conversations with the people building the future of AI and emerging technology.
     
    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI
     
    (00:00) Why IBM Skips Distillation and Trains Small Models Directly 
    (04:50) Did We Even Need Giant AI Models in the First Place?
    (08:12) How Data Quality Became the New Competitive Moat
    (11:54) Why Parameter Count Is the Wrong Way to Measure a Model
    (15:36) Reinforcement Learning Without Losing Broad Capabilities
    (22:05) Inference Time Scaling: Getting Big Model Results From Small Models
    (28:12) Generative Computing: Treating AI as a Programming Element
    (36:40) Why IBM Open Sources and How Small Models Make It Sustainable
    (41:25) The Path to Continuous Learning Without Rewriting Weights
    (51:00) IBM's Full Roadmap: Models, Data, and Agents


About Eye On A.I.

Eye on A.I. is a biweekly podcast, hosted by longtime New York Times correspondent Craig S. Smith. In each episode, Craig will talk to people making a difference in artificial intelligence. The podcast aims to put incremental advances into a broader context and consider the global implications of the developing technology. AI is about to change your world, so pay attention.
