Semi Doped

Vikram Sekar and Austin Lyons
Latest episode

19 episodes

  • Semi Doped

    NVIDIA's Marvell Strategy, Is Memory Different This Time?, Intel's Ireland Fab

    03-04-2026 | 42 min
    In this episode, Austin and Vik analyze NVIDIA's $2 billion investment in Marvell NVLink Fusion, exploring its implications for AI infrastructure, interconnect protocols, and the broader chip ecosystem. They also discuss the current memory market surge, DRAM pricing, and Intel's strategic fab buyback, providing deep insights into industry trends and future directions.

    On Substack
    Vik: https://www.viksnewsletter.com/
    Austin: https://www.chipstrat.com/

    Chapters

    00:00 NVIDIA's $2 Billion Investment in Marvell
    20:11 The Memory Market Crisis
    20:16 The Future of Memory Pricing and Consumer Impact
    22:55 The Cycle of Supply and Demand in Memory
    27:23 AI's Impact on Memory Demand
    31:46 Long-Term Agreements and Market Stability
    35:07 Intel's Strategic Fab Buyback
    40:44 Monopoly Analogy: Intel's Market Strategy
  • Semi Doped

    ARM AGI CPU has entered the chat, TurboQuant thrashes memory stocks

    27-03-2026 | 52 min
    In this episode, Austin and Vik analyze recent developments in GloFo patent lawsuits, the impact of TurboQuant on AI inference, and ARM's strategic move into silicon for agentic AI workloads.

    Read Vik's substack: https://www.viksnewsletter.com
    Read Austin's substack: https://www.chipstrat.com

    Chapters

    00:00 Patent Wars in Semiconductor Industry
    07:14 Understanding TurboQuant and Its Implications
    24:42 Innovations in Memory Management
    28:00 The Rise of ARM AGI CPUs
    32:56 Agentic AI and CPU Compatibility
    39:54 Performance Metrics in Agentic AI
    44:52 ARM's Market Timing and Challenges
  • Semi Doped

    MicroLEDs Ain’t Dead, Micron Snags Vera Rubin

    20-03-2026 | 43 min
    Austin and Vik break down a packed week in semiconductors, covering GTC, OFC, and Micron earnings. The conversation kicks off with Jensen Huang's bold claim that engineers should spend $250K/year on AI tokens, and whether companies will buy tokens or token generators (i.e., on-prem hardware like the Dell Pro Max with GB300). They dig into the CapEx vs OpEx tradeoffs, data security concerns, and how sharing GPU resources might end up looking a lot like the old EDA license model.

    Next up: Micron crushed earnings and appears to be designed into Vera Rubin for HBM4 — despite months of rumors saying otherwise. Austin and Vik unpack the nuance around HBM pin speeds, memory node base dies, and what Micron's massive new fab investments in Taiwan, Singapore, Idaho, and New York mean for the memory cycle.

    The back half of the episode dives into optical interconnects for AI scale-up. A new industry consortium (OCI-MSA) has formed with Meta, Broadcom, NVIDIA, and OpenAI to standardize optical components. Vik explains why traditional indium phosphide lasers might be overkill for short-reach scale-up, and makes the case for micro LEDs — a "slow but wide" approach that could fill the gap between copper and conventional optics. They also touch on Credo's expanding product portfolio (and the infamous purple-to-orange cable saga), plus Lumentum's new VCSEL work for scale-up.

    Vik - https://www.viksnewsletter.com/
    Austin - https://www.chipstrat.com/

    Chapters
    0:00 Intro & GTC/OFC Conference Overload
    2:09 Jensen's $250K Token Budget Per Engineer
    5:08 On-Prem Inference vs. Cloud Token Spending (Dell Pro Max, CapEx vs OpEx)
    6:44 Sharing GPU Resources Like EDA Licenses
    8:16 Data Security & On-Prem Privacy Concerns
    9:53 Matthew Berman's Fine-Tuned Open Claw Agent
    10:35 Vik Sets Up Open Claw on a Home Server
    11:53 Always Be Clauden (ABC) – Managing Agents from Your Phone
    13:34 Micron Earnings & HBM4 in Vera Rubin
    16:39 HBM Pin Speeds & the Micron Design-In Debate
    20:17 Micron's New Fab Investments & Memory Cycle Fears
    23:49 Why AI Drives a Step Change in Memory Demand
    26:30 Optical Compute Interconnect MSA (OCI-MSA)
    29:48 Scale-Up Optics: Do We Need New Technology?
    30:58 Micro LEDs – The "Slow but Wide" Approach
    35:45 Micro LEDs vs. Copper vs. Traditional Optics
    36:55 Credo's Product Spectrum & the Purple Cable Story
    39:31 VCSELs & Lumentum's 1060nm Scale-Up Play
  • Semi Doped

    Quick Takes: Nvidia Keynote at GTC

    17-03-2026 | 58 min
    Vik and Austin unpack the Nvidia GTC keynote with fresh, top-of-mind takes, breaking down the key announcements: what matters and what doesn't. They discuss Groq's LPX, optics plus copper for scale-up, new CPU requirements, CPO for networking, what agents mean for software, and much more.

    Check out Austin's substack: https://www.chipstrat.com
    Check out Vik's substack: https://www.viksnewsletter.com

    Chapters

    00:00 Introduction and Keynote Context
    03:18 Keynote Highlights and Gaming Innovations
    06:18 Generative AI: The Three Eras
    09:28 Inference: The New Revenue Generator
    12:21 NVIDIA's Tiered Approach to AI Models
    15:30 The Groq Chip and Its Role
    18:35 Vera Rubin System: A Full Data Center
    21:18 CPU Demand and Performance
    24:31 Networking Innovations and Future Directions
    32:32 Innovations in PCB Technology
    34:06 Scaling GPU Systems
    36:57 Understanding the STX Rack and AI Storage
    38:23 The Rosa CPU and Its Significance
    40:07 Digital Twin Platforms and AI Factories
    43:53 NVIDIA's New Software Innovations
    47:09 The Future of Token Budgets in AI
    54:15 Balancing CapEx and OpEx in AI Deployments
  • Semi Doped

    Meta's Inference Accelerator & Applied Optoelectronics (AAOI)

    13-03-2026 | 1 hr 1 min
    Austin recaps moderating an agentic AI panel at Synopsys Converge, then gives an in-depth technical breakdown of Meta's MTIA custom silicon. Why they're building it, how chiplets let them ship a new chip every 6 months, and how the roadmap is shifting toward gen AI inference. Vik digs into Applied Optoelectronics (AAOI), the vertically integrated Texas laser shop whose stock went from $1.48 to $100+, and whether history is about to rhyme.                     

    Austin Lyons: https://www.chipstrat.com
    Vik Sekar: https://www.viksnewsletter.com/
                                                                                                                                      
    Topics covered:
    • Agentic AI in chip design — how it changes roles for junior and senior engineers
    • Optical circuit switching and what it means for Arista's business model
    • Meta's ad-serving pipeline: Andromeda, Lattice, and the GEM foundation model
    • Why custom silicon (MTIA) makes sense at Meta's scale
    • MTIA chiplet strategy — 4 generations in 2 years
    • AAOI's vertical integration, Amazon's $4B warrant deal, and the 2017 parallel

    Chapters:
    0:00 Intro
    1:26 Synopsys Converge — Agentic AI Panel
    9:44 Vik's Article: Optical Circuit Switching & Arista
    14:43 Meta MTIA — A New Chip Every 6 Months
    21:32 Why Custom Silicon Makes Sense for Meta
    27:22 MTIA Chiplet Strategy & Roadmap
    33:56 Gen AI Fits Meta's Business Model
    36:31 How Meta Ships Chips So Fast
    40:30 Applied Optoelectronics (AAOI) Deep Dive
    45:02 Amazon's $4B Warrant Deal
    48:54 Can AAOI's Lasers Compete with Lumentum?
    53:16 AAOI's Aggressive Capacity Buildout
    55:35 History Rhymes: AAOI's 2017 Boom & Bust
    1:00:55 Wrap-Up

    #semiconductors #chips #tech #meta #MTIA #AAOI #optics #inference #AI


About Semi Doped

The business and technology of semiconductors. Alpha for engineers and investors alike.
