
Voices of Video

NETINT Technologies
Latest episode

Available episodes

5 of 57
  • Your Sports Car Is Cool, But The Taxi Wins On Power Bills
    What if your heaviest video jobs spun up in seconds, sipped power, and scaled wherever your viewers are? In this episode, we run NETINT VPUs inside Akamai Cloud and push them across live and just-in-time workflows, with multi-codec ABR (AV1/H.264/HEVC), synchronization, DRM, and low-latency packaging included.
    We start with deployment trade-offs: on-prem cards (control, like a tuned sports car), cloud resources (on demand, like a taxi), portable containers, and Kubernetes for orchestration and autoscaling. With VPUs available in-region on Akamai, you cut CPU burn, lower watts per stream, and keep compute close to contribution or audience: ideal for local ingest, regional ad splicing, anti-piracy, and edge turnarounds.
    Then we get hands-on. Scalstrm launches a live channel with a single API call (multicast in, three profiles out, catch-up enabled) in a couple of seconds. Advanced toggles cover time-shift TV, HLS/DASH, low latency, trick play, I-frame playlists, DRM, and ad insertion. Robust monitoring and analytics surface sync issues early to avoid blind troubleshooting. For VOD, we flip to just-in-time: store the top profile, regenerate lower rungs on demand, and save roughly 50–60% of storage, while enabling instant ad asset playout.
    For builders, we walk the Kubernetes path: provision a cluster in Frankfurt, label nodes for NETINT VPUs, deploy drivers from Git, wire up object storage, and run a pod that watches a bucket and invokes FFmpeg with hardware acceleration. We generate Apple's ABR ladder across AV1/H.264/HEVC and finish a 5.5-minute asset in under four minutes, setup included, while power draw rises smoothly from idle without spikes.
    If you care about power efficiency, global scale, and faster launches, this is a blueprint you can reuse today. Share it with the teammate who lives in FFmpeg, and tell us which part you want open-sourced next.
    Key Takeaways
      • Deployment models: on-prem, cloud, containers, Kubernetes, and when each makes sense
      • Why VPUs: higher density, lower power per stream, sustainability benefits
      • Akamai reach: edge and cloud tightly coupled for minimal latency
      • Scalstrm live demo: API setup → multicast in → three profiles out → ready in seconds
      • Advanced features: sync, time-shift TV, DRM, low latency, trick play, I-frame playlists, ad insertion
      • Observability: monitoring/analytics to reduce tickets and speed root-cause analysis
      • Just-in-time VOD: keep the highest profile, regenerate lower rungs on demand (~50–60% storage savings)
      • Kubernetes workflow: drivers, node labels, buckets, FFmpeg with NETINT acceleration
      • Performance proof: multi-codec ABR in minutes, end to end
    📄 Download the presentation → 💡 Get $500 credit to test on Akamai →
    Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.
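The bucket-watching pod described in the episode (poll object storage, transcode new assets into an ABR ladder) can be sketched as a small Python loop. Everything here is illustrative: the bucket is assumed to be mounted as a local directory (e.g. via s3fs), and `h264_ni_quadra_enc` is a placeholder for whatever hardware encoder name your NETINT-enabled FFmpeg build actually exposes.

```python
# Sketch of the bucket-watcher pod from the Kubernetes walk-through.
# Assumptions (not from the episode): the bucket is mounted as a local
# directory, and the NETINT-enabled FFmpeg build provides an encoder
# named "h264_ni_quadra_enc" -- the real name depends on your SDK version.
import subprocess
import time
from pathlib import Path

# Illustrative subset of an ABR ladder: (output height, video bitrate).
LADDER = [(1080, "6000k"), (720, "3000k"), (360, "800k")]

def ffmpeg_cmd(src: Path, out_dir: Path,
               encoder: str = "h264_ni_quadra_enc") -> list[str]:
    """Build one FFmpeg invocation that emits every ladder rung."""
    cmd = ["ffmpeg", "-y", "-i", str(src)]
    for height, bitrate in LADDER:
        # Per-output options apply to the output file that follows them.
        cmd += ["-vf", f"scale=-2:{height}", "-c:v", encoder,
                "-b:v", bitrate, str(out_dir / f"{src.stem}_{height}p.mp4")]
    return cmd

def watch(bucket: Path, out_dir: Path, poll_seconds: int = 5) -> None:
    """Poll the bucket directory and transcode any asset not yet seen."""
    seen: set[Path] = set()
    while True:
        for src in bucket.glob("*.mp4"):
            if src not in seen:
                seen.add(src)
                subprocess.run(ffmpeg_cmd(src, out_dir), check=True)
        time.sleep(poll_seconds)
```

In a cluster, `watch()` would run as the pod's entrypoint, with the node selected via the VPU label mentioned in the episode and the bucket mounted into the container.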
    --------  
    22:40
  • Cloud Bills Made You Cry? Gamers Already Fixed That
    Ever notice how interactive video feels great one moment and laggy the next? We dig into why, and what it takes to make streams feel as immediate and fair as a top-tier multiplayer game. Coming from a gaming-first background, we talk candidly about round-trip latency, jitter, and why 30 ms one way is the magic threshold for experiences where people don't just watch, but participate.
    We walk through the hard lessons of early cloud gaming, from capex-heavy builds to routing realities, and show how those same insights are now reshaping streaming:
      • Low-latency global networks with real-time visibility
      • DDoS resilience without five-layer ticket gauntlets
      • Predictable transport and proximity that let teams deploy their own edge stacks and own performance
    The result is a model in which encoding density, session stability, and viewer happiness are measurable and repeatable, without runaway cloud costs. We also unpack a practical hybrid strategy: keep always-on, latency-sensitive workloads on dedicated infrastructure (where you can tune kernel, NICs, and accelerators), and use the cloud for bursts or experiments. AI adds another dimension: inference near the session, VPUs for real-time AV1/HEVC, GPUs for rendering, and the ability to attach the right accelerator in the right region on demand.
    As streaming and gaming continue to merge (think reward-enabled streams, Discord watch-togethers, or VR rendered in the cloud), the lesson is clear: be where your users are, keep round trips tight, and control your own cost and quality.
    We cover:
      • Gaming-born low-latency infrastructure for streaming
      • Lessons from early cloud gaming and unit economics
      • Why round-trip latency and jitter define interactive QoE
      • DDoS resilience and transparent incident response
      • CDN roles vs. building on low-latency IaaS
      • Hybrid strategy for cost control and sovereignty
      • VPUs/GPUs for encoding, cloud gaming, and AI inference
      • Streaming–gaming convergence across Twitch, Discord, and VR
      • How to test and scale with on-demand regional hardware
    If you're exploring next-gen video encoding or interactive streaming, check out NETINT's VPU lineup, built for real-time video at scale. If this resonates, subscribe, share with a teammate who owns QoE, and leave a quick review to help others find the show. Got a use case or question? Reach out, and let's dig in together.
    Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.
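The 30 ms one-way threshold cited in the episode is easiest to reason about as a per-leg budget. A toy Python check, where the individual leg times are purely illustrative assumptions (only the 30 ms budget comes from the episode):

```python
# Latency-budget check against the episode's "30 ms one way" threshold.
# The per-leg times below are illustrative assumptions, not measurements.
ONE_WAY_BUDGET_MS = 30  # interactive threshold cited in the episode

# Hypothetical breakdown of one direction of an interactive session:
legs_ms = {
    "capture + encode": 8,
    "first-mile network": 6,
    "edge processing": 4,
    "last-mile network": 7,
    "decode + render": 5,
}

total = sum(legs_ms.values())
verdict = "within" if total <= ONE_WAY_BUDGET_MS else "over"
print(f"one-way total: {total} ms ({verdict} the {ONE_WAY_BUDGET_MS} ms "
      f"budget); round trip: {2 * total} ms")
```

The point of the exercise: every leg you cannot shrink (physics of the last mile, decode time on the client) tightens what is left for encoding and routing, which is why proximity and purpose-built encoders matter.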
    --------  
    19:33
  • From Campbell to Codensity: A Practical Hero’s Journey in Video Encoding
    What if a hardware roadmap could read like a myth? We take Joseph Campbell's Hero's Journey and map it to a concrete engineering pivot, from life in the ordinary world of CPU/GPU encoding to a high-density, power-efficient future with NETINT's Codensity G5-based VPUs. We talk through the initial reluctance to touch specialized hardware, the mentors and SDKs that changed our minds, and the exact moment we crossed the threshold by installing drivers, testing real inputs, and pushing the cards into live workflows.
    From there, the plot thickens: allies like Norsk Video, Supermicro, Gigabyte, and Akamai helped us scale, while enemies showed up as driver quirks, 4:2:0 vs. 4:2:2 trade-offs, and new mental models that don't behave like CPUs or GPUs. The dragon's den wasn't a competitor; it was public procurement. Tenders forced us to design for variability, not one-size-fits-all. That pressure shaped the treasure we brought back: four NETINT form factors that express the same transcoding engine in different ways.
    We break down where each fits:
      • PCIe T1A: broad compatibility
      • T2A: dual-ASIC throughput
      • U.2 T1U: extreme density when vendor policies allow
      • M.2 T1M: tiny blade for edge and contribution with PoE, low power, and surprising capacity
    We share the software split that actually works in production: NORSK for live and live-to-file pipelines, FFmpeg for VOD encoding, plus how a composable media architecture runs both on-prem and in the cloud. With Akamai's NETINT-enabled compute options, hybrid deployments become practical, not aspirational.
    The story lands with a proof point: G&L deploying at scale for the European Parliament (30 concurrent sessions, 32 audio tracks each) across Brussels, Strasbourg, and German cloud regions, with Linode as the control plane.
    DOWNLOAD PRESENTATION: https://info.netint.com/hubfs/downloads/IBC25-GnL-Hero-with-a-thousand-faces.pdf
    If you're weighing density, power budgets, or vendor constraints, this journey offers a clear map, hard-won lessons, and a toolkit you can adapt. Subscribe, share with your team, and leave a review: what's your dragon, and which form factor would you choose first?
    Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.
    --------  
    13:46
  • Hyperscale for Video | Stop Asking GPUs to Be Everything at Once
    What if video finally got its own processor, and your streaming costs dropped while quality and features went up? In this episode, we dig into the rise of the Video Processing Unit (VPU), silicon built entirely for video, and explore how it's transforming everything from edge contribution to multi-view sports. Instead of paying for general-purpose compute and GPU graphics overhead, VPUs put every square millimeter of the die to work on encoding, scaling, and compositing. The result is surprising gains in density, power efficiency, and cost.
    We look at where GPUs fall short for large-scale streaming and why CPUs hit a wall on cost per channel. Then we follow encoding as it moves into the network: building ABR ladders directly at venues, pushing streams straight to the CDN, and cutting both latency and egress costs. You'll hear real numbers from cost-normalized tests, including a VPU-powered instance delivering six HEVC ladders for about the cost of one CPU ladder, plus a side-by-side look at AWS VT1/U30 and current VPU options.
    The discussion also covers multi-layer AV1 for dynamic overlays and interactive ad units, and how compact edge servers with SDI capture bring premium live workflows into portable, power-efficient form factors. We break down practical deployment choices such as U.2 form factors that slide into NVMe bays, mini servers designed for the edge, and PCIe cards for dense racks. Integration remains familiar, with FFmpeg and GStreamer plugins, robust APIs, and a simple application layer for large-scale configuration.
    The message is clear: when video runs on purpose-built silicon, you unlock hyperscale streaming capabilities (multi-view, AV1 interactivity, UHD ladders) at a cost that finally makes business sense. If you're rethinking your pipeline or planning your next live event, this is your field guide to the new streaming stack. If this episode gives you new ideas for your workflow, follow the show, share it with your team, and leave a quick review so others can find it.
    Key topics
      • GPUs, CPUs, and VPUs: why video needs purpose-built silicon
      • What 100% video-dedicated silicon enables for density and power
      • Encoding inside the network to cut latency and egress
      • Multi-layer AV1 for interactive ads and overlays
      • Multi-view sports made affordable and reliable
      • Edge contribution from venues using compact servers
      • Product lineup: U.2, mini, and PCIe form factors
      • Benchmarks comparing CPU, VPU, and AWS VT1/U30
      • Cloud options with Akamai and i3D, including egress math
      • Integration with FFmpeg, GStreamer, SDKs, and Bitstreams
    Download presentation: https://info.netint.com/hubfs/downloads/IBC25-VPU-Introduction.pdf
    Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.
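The cost-normalized comparison above (six HEVC ladders for about the cost of one CPU ladder) reduces to a per-ladder division. A small Python sketch, with placeholder hourly prices since the episode quotes no specific rates:

```python
# Cost-per-ladder normalization for the "six HEVC ladders for the cost
# of one CPU ladder" claim. The hourly prices are hypothetical
# placeholders, NOT figures from the episode -- plug in your own rates.
CPU_INSTANCE_PER_HOUR = 1.20   # assumed: CPU instance handling 1 ladder
VPU_INSTANCE_PER_HOUR = 1.20   # assumed: VPU instance handling 6 ladders

def cost_per_ladder(hourly_price: float, ladders: int) -> float:
    """Normalize an instance's hourly price to a per-ladder cost."""
    return hourly_price / ladders

cpu = cost_per_ladder(CPU_INSTANCE_PER_HOUR, 1)
vpu = cost_per_ladder(VPU_INSTANCE_PER_HOUR, 6)
print(f"CPU: ${cpu:.3f}/ladder-hour, VPU: ${vpu:.3f}/ladder-hour "
      f"({cpu / vpu:.0f}x cheaper per ladder)")
```

With equal instance prices, the density claim alone yields a 6x per-ladder advantage; real rates shift the ratio, which is why the episode stresses cost-normalized testing.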
    --------  
    18:42
  • Energy Is the New Bottleneck in Live Video | Why VPUs Beat GPUs for Low-Res ABR at 4–6x Energy Savings
    Live video is exploding, power budgets are shrinking, and the old "throw more GPU at it" mindset is breaking. We dig into the real constraint behind streaming at scale, energy, and share new data showing how VPUs can deliver 4–6x better efficiency than top-tier GPUs while holding quality where viewers notice it most. From the early days of CPU-only encoding to a modern, hybrid stack, we walk through the architecture that lets us stream more for less power without cutting corners on quality.
    We break down head-to-head tests run in the cloud comparing an RTX 4080 (NVENC and CUDA) with a NETINT Quadra T1U across AV1, HEVC, and H.264. You'll hear how watts per stream changes the math for ABR ladders, why low-resolution rungs dominate real-world viewing, and where GPUs still win: premium HD and UHD tiers, complex filters, and specialized compute. Then we zoom out to the big picture: if live traffic is already the majority of internet bandwidth and is set to triple by 2030, scaling responsibly means optimizing for both density and sustainability. The numbers at 1,000 concurrent streams are stark: roughly 22.6 kW on a GPU path versus around 5 kW on a VPU path, with similar throughput and better low-res quality in many cases.
    Our takeaway is a simple, pragmatic strategy: use GPUs where their strengths shine and use VPUs for the heavy lifting across low and mid ABR rungs. That hybrid approach cuts operational cost, reduces carbon impact, and increases resilience under peak load. Along the way, we share how our live control, encoder, origin, and editor fit together across on-prem and cloud, and why energy-aware orchestration is now a core feature, not an afterthought.
    Want the full benchmarks, VMAF curves, and methodology? Grab the white paper and put the data to work in your roadmap. Enjoy the conversation, then help us spread the word: subscribe, rate, and share with a teammate who's planning next year's streaming capacity. Your feedback keeps these deep dives sharp and useful.
    Download presentation: https://info.netint.com/hubfs/downloads/IBC-Peak-performance-minimal-footprint-Cires21.pdf
    Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.
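The fleet-level figures quoted above reduce to a watts-per-stream division. A quick Python check, using only the numbers cited in the episode:

```python
# Back-of-envelope check of the episode's headline numbers: at 1,000
# concurrent streams, ~22.6 kW on the GPU path vs ~5 kW on the VPU path.
STREAMS = 1_000
GPU_TOTAL_KW = 22.6   # figure quoted in the episode
VPU_TOTAL_KW = 5.0    # figure quoted in the episode

def watts_per_stream(total_kw: float, streams: int) -> float:
    """Convert a fleet-level power draw into watts per concurrent stream."""
    return total_kw * 1000 / streams

gpu_w = watts_per_stream(GPU_TOTAL_KW, STREAMS)
vpu_w = watts_per_stream(VPU_TOTAL_KW, STREAMS)
print(f"GPU: {gpu_w:.1f} W/stream, VPU: {vpu_w:.1f} W/stream, "
      f"ratio: {gpu_w / vpu_w:.2f}x")  # ~4.5x, inside the quoted 4-6x range
```

At these figures the ratio is about 4.5x, consistent with the 4–6x efficiency range the episode claims for low and mid ABR rungs.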
    --------  
    10:09

More News podcasts

About Voices of Video

Explore the inner workings of video technology with Voices of Video: Inside the Tech. This podcast gathers industry experts and innovators to examine every facet of video technology, from decoding and encoding processes to the latest advancements in hardware versus software processing and codecs. Alongside these technical insights, we dive into practical techniques, emerging trends, and industry-shaping facts that define the future of video. Ideal for engineers, developers, and tech enthusiasts, each episode offers hands-on advice and the in-depth knowledge you need to excel in today’s fast-evolving video landscape. Join us to master the tools, technologies, and trends driving the future of digital video.
Podcast website

Listen to Voices of Video, De Dag, and many other podcasts from around the world with the radio.net app

Get the free radio.net app

  • Bookmark stations and podcasts
  • Stream via Wi-Fi or Bluetooth
  • Supports CarPlay & Android Auto
  • Many other app features
v7.23.11 | © 2007-2025 radio.de GmbH