
Voices of Video

NETINT Technologies

Available episodes

5 of 58
  • Synchronizing 20 Perspectives: The Future Of Multi-View Esports
    Cameras miss moments; fans don’t. We wanted every decisive peek, every clutch revive, and every chaotic final ring in Apex Legends to be watchable from any team’s perspective - live, synchronized, and affordable. That meant rethinking how we transcode and distribute dozens of POV streams at once without drowning in startup lag or compute spend.
    We walk through how Scalstrm integrated NETINT VPUs at a low level to pack up to 20 live channels onto a single card, slashing both costs and boot times for event-based streaming. Instead of relying on generic wrappers, they tapped direct APIs to tune buffer behavior, rate control, and ABR ladders for fast-motion gameplay.
    Partnering in the Akamai Cloud lets them spin up encoders only when needed, bring them online in seconds, and tear them down post-show - no idle fleets, no waste. For VOD, just-in-time transcoding stores a single high-bitrate master and generates renditions only when requested, keeping catalogs lean while preserving quality (see the sketch after this entry).
    Znipe Esports takes the spotlight with a multi-POV esports product that delivers 20+ synchronized streams plus the main event feed. To keep every angle aligned, they apply AI and image analysis to lock onto in-game clocks, then validate with operators for frame-accurate sync across teams. Telemetry from damage and kill events fuels real-time overlays and instant highlights, so fans can jump to the best moments or follow their favorite squad without missing context.
    The payoff is dramatic: 25% lower transcoding cost, 70% faster startup, and a 75% reduction in high-quality transcoding cost - exactly where esports audiences are most demanding.
    We also share a war story: going live in 30 minutes only to find GPU capacity swallowed by AI training. VPUs gave us a dedicated path for video, restoring predictability when it mattered most.
    If you care about multi-view control, synchronized angles, and high frame-rate streams that don’t blow up your budget, this breakdown shows how to get there.
    Listen now: https://netint.biz/podcast
    Download the presentation: https://info.netint.com/hubfs/downloads/VPUs-on-Akamai-cloud.pdf
    Test drive NETINT VPUs on Akamai Cloud and get $500 credit: https://netint.biz/akamai_500
    Episode highlights:
    • Scalstrm’s origins in packaging, origin, and analytics for operators and broadcasters
    • Why low-level VPU APIs beat generic wrappers for live density and efficiency
    • Instant provisioning for event-based transcoding on cloud partners
    • Just-in-time transcoding for VOD to cut storage and compute
    • Znipe’s multi-POV product for Apex Legends with 20+ team feeds
    • AI and image processing for frame-accurate sync on in-game clocks
    • Ingesting telemetry to render stats and auto-generate highlights
    • Cost wins: 25% lower normal transcoding, 70% faster startup, 75% lower high-quality costs
    • Avoiding GPU shortages by shifting to VPUs for predictable capacity
    • Higher resolutions and frame rates that match esports viewer expectations
    Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.
    --------  
    17:43
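    A rough way to picture the just-in-time VOD idea from this episode: keep one high-bitrate master per title and build a lower rendition only when a player first asks for it, then cache the result. The Python sketch below is an illustration under assumptions, not Scalstrm’s implementation - the directory paths, URL scheme, and three-rung ladder are hypothetical, and it shells out to stock FFmpeg with the software x264 encoder where a VPU deployment would use its hardware encoder.

        # Minimal just-in-time rendition generator (illustrative sketch, not Scalstrm's code).
        # One high-bitrate master per title sits on disk; lower renditions are produced
        # on first request with FFmpeg and cached for subsequent viewers.
        import os
        import subprocess
        from http.server import BaseHTTPRequestHandler, HTTPServer

        MASTER_DIR = "/data/masters"      # hypothetical location of mezzanine files
        CACHE_DIR = "/data/renditions"    # hypothetical cache for generated renditions
        LADDER = {"720": "3000k", "480": "1500k", "360": "800k"}  # hypothetical ABR rungs

        def ensure_rendition(title: str, height: str) -> str:
            """Return the path of the requested rendition, transcoding it if not cached."""
            out_path = os.path.join(CACHE_DIR, f"{title}_{height}p.mp4")
            if not os.path.exists(out_path):
                master = os.path.join(MASTER_DIR, f"{title}.mp4")
                subprocess.run([
                    "ffmpeg", "-y", "-i", master,
                    "-vf", f"scale=-2:{height}",                # downscale from the single master
                    "-c:v", "libx264", "-b:v", LADDER[height],  # software encoder stand-in
                    "-c:a", "copy",
                    out_path,
                ], check=True)
            return out_path

        class JITHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # Expected request form: /<title>/<height>, e.g. /match42/720 (hypothetical URL scheme)
                parts = self.path.strip("/").split("/")
                if len(parts) != 2 or parts[1] not in LADDER:
                    self.send_error(404)
                    return
                path = ensure_rendition(parts[0], parts[1])
                self.send_response(200)
                self.send_header("Content-Type", "video/mp4")
                self.end_headers()
                with open(path, "rb") as f:
                    self.wfile.write(f.read())

        if __name__ == "__main__":
            os.makedirs(CACHE_DIR, exist_ok=True)
            HTTPServer(("0.0.0.0", 8080), JITHandler).serve_forever()

    The same pattern extends to HLS/DASH segments; the point is simply that only the master is stored ahead of time and everything else is generated on demand.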
  • Your Sports Car Is Cool, But The Taxi Wins On Power Bills
    What if your heaviest video jobs spun up in seconds, sipped power, and scaled wherever your viewers are?
    In this episode, we run NETINT VPUs inside Akamai Cloud and push them across live and just-in-time workflows - multi-codec ABR (AV1/H.264/HEVC), synchronization, DRM, and low-latency packaging included.
    We start with deployment trade-offs: on-prem cards (control like a tuned sports car), cloud resources (on-demand like a taxi), portable containers, and Kubernetes for orchestration and autoscaling. With VPUs available in-region on Akamai, you cut CPU burn, lower watts per stream, and keep compute close to contribution or audience - ideal for local ingest, regional ad splicing, anti-piracy, and edge turnarounds.
    Then we get hands-on. Scalstrm launches a live channel with a single API call - multicast in, three profiles out, catch-up enabled - in a couple of seconds. Advanced toggles cover time-shift TV, HLS/DASH, low-latency, trick play, iframe playlists, DRM, and ad insertion. Robust monitoring and analytics surface sync issues early to avoid blind troubleshooting. For VOD, we flip to just-in-time: store the top profile, regenerate lower rungs on demand, and save ~50–60% storage - while enabling instant ad asset playout.
    For builders, we walk the Kubernetes path: provision a cluster in Frankfurt, label nodes for NETINT VPUs, deploy drivers from Git, wire up object storage, and run a pod that watches a bucket and invokes FFmpeg with hardware acceleration (see the sketch after this entry). We generate Apple’s ABR ladder across AV1/H.264/HEVC and finish a 5.5-minute asset in under four minutes - setup included - while power draw rises smoothly from idle without spikes.
    If you care about power efficiency, global scale, and faster launches, this is a blueprint you can reuse today. Share it with the teammate who lives in FFmpeg, and tell us which part you want open-sourced next.
    Key Takeaways
    • Deployment models: on-prem, cloud, containers, Kubernetes - when each makes sense
    • Why VPUs: higher density, lower power per stream, sustainability benefits
    • Akamai reach: edge and cloud tightly coupled for minimal latency
    • Scalstrm live demo: API setup → multicast in → three profiles out → ready in seconds
    • Advanced features: sync, time-shift TV, DRM, low-latency, trick play, iframe playlists, ad insertion
    • Observability: monitoring/analytics to reduce tickets and speed root-cause
    • Just-in-time VOD: keep highest profile, regenerate lower rungs on demand (~50–60% storage savings)
    • Kubernetes workflow: drivers, node labels, buckets, FFmpeg with NETINT acceleration
    • Performance proof: multi-codec ABR in minutes, end-to-end
    📄 Download the presentation →
    💡 Get $500 credit to test on Akamai →
    Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.
    --------  
    22:40
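    A minimal version of the bucket-watching worker from the Kubernetes walkthrough above could look like the Python sketch below. It is an illustration under assumptions, not the code shown in the episode: the endpoint, bucket name, and single 1080p rung are hypothetical; it polls rather than reacting to notifications; and it calls the software libx265 encoder where a NETINT-enabled FFmpeg build would select its own hardware encoder. In a real cluster the same script would run in a pod pinned to VPU-labeled nodes with a nodeSelector.

        # Sketch of a bucket-watching transcode worker (illustrative only; endpoint,
        # bucket, and credentials are hypothetical and read from the environment).
        import os
        import subprocess
        import time

        import boto3  # pip install boto3; Linode Object Storage speaks the S3 API

        ENDPOINT = "https://eu-central-1.linodeobjects.com"  # hypothetical region endpoint
        BUCKET = "incoming-masters"                          # hypothetical bucket name
        ENCODER = "libx265"  # software stand-in; a NETINT-enabled FFmpeg build exposes hardware encoders

        s3 = boto3.client("s3", endpoint_url=ENDPOINT)
        seen = set()

        def transcode(local_in: str, local_out: str) -> None:
            """Produce one 1080p HEVC rendition; a real ladder would add more rungs."""
            subprocess.run([
                "ffmpeg", "-y", "-i", local_in,
                "-vf", "scale=-2:1080",
                "-c:v", ENCODER, "-b:v", "5000k",
                "-c:a", "aac", "-b:a", "128k",
                local_out,
            ], check=True)

        while True:
            for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
                key = obj["Key"]
                if key in seen or key.startswith("renditions/"):
                    continue  # skip already-processed sources and our own outputs
                local_in = os.path.join("/tmp", os.path.basename(key))
                s3.download_file(BUCKET, key, local_in)
                local_out = local_in.rsplit(".", 1)[0] + "_1080p.mp4"
                transcode(local_in, local_out)
                s3.upload_file(local_out, BUCKET, "renditions/" + os.path.basename(local_out))
                seen.add(key)
            time.sleep(10)  # simple polling; production code would use bucket notifications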
  • Cloud Bills Made You Cry? Gamers Already Fixed That
    Ever notice how interactive video feels great one moment and laggy the next? We dig into why - and what it takes to make streams feel as immediate and fair as a top-tier multiplayer game.
    Coming from a gaming-first background, we talk candidly about round-trip latency, jitter, and why 30 ms one way is the magic threshold for experiences where people don’t just watch, but participate (see the sketch after this entry).
    We walk through the hard lessons of early cloud gaming, from capex-heavy builds to routing realities, and show how those same insights are now reshaping streaming:
    • Low-latency global networks with real-time visibility
    • DDoS resilience without five-layer ticket gauntlets
    • Predictable transport and proximity that let teams deploy their own edge stacks and own performance
    The result is a model in which encoding density, session stability, and viewer happiness are measurable and repeatable, without runaway cloud costs.
    We also unpack a practical hybrid strategy: keep always-on, latency-sensitive workloads on dedicated infrastructure (where you can tune kernel, NICs, and accelerators), and use the cloud for bursts or experiments.
    AI adds another dimension - inference near the session, VPUs for real-time AV1/HEVC, GPUs for rendering, and the ability to attach the right accelerator in the right region on demand.
    As streaming and gaming continue to merge - think reward-enabled streams, Discord watch-togethers, or VR rendered in the cloud - the lesson is clear: be where your users are, keep round trips tight, and control your own cost and quality.
    We cover:
    • Gaming-born low-latency infrastructure for streaming
    • Lessons from early cloud gaming and unit economics
    • Why round-trip latency and jitter define interactive QoE
    • DDoS resilience and transparent incident response
    • CDN roles vs. building on low-latency IaaS
    • Hybrid strategy for cost control and sovereignty
    • VPUs/GPUs for encoding, cloud gaming, and AI inference
    • Streaming–gaming convergence across Twitch, Discord, and VR
    • How to test and scale with on-demand regional hardware
    If you’re exploring next-gen video encoding or interactive streaming, check out NETINT’s VPU lineup - built for real-time video at scale.
    If this resonates, subscribe, share with a teammate who owns QoE, and leave a quick review to help others find the show. Got a use case or question? Reach out - let’s dig in together.
    Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.
    --------  
    19:33
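    As a back-of-the-envelope companion to the 30 ms discussion above, the Python sketch below times a handful of TCP handshakes to estimate round-trip latency and inter-sample jitter. It is a rough probe under assumptions - the target host and the 60 ms round-trip ceiling (twice the one-way figure quoted in the episode) are placeholders - and not the measurement methodology the guests use in production.

        # Quick-and-dirty round-trip and jitter probe (illustrative; host/port are hypothetical).
        # Rule of thumb from the episode: ~30 ms one way, i.e. roughly 60 ms round trip,
        # is the ceiling for experiences where viewers participate rather than just watch.
        import socket
        import statistics
        import time

        HOST, PORT = "example.com", 443   # hypothetical target near your edge region
        SAMPLES = 20

        rtts = []
        for _ in range(SAMPLES):
            start = time.perf_counter()
            with socket.create_connection((HOST, PORT), timeout=2):
                pass                      # handshake completion approximates one round trip
            rtts.append((time.perf_counter() - start) * 1000.0)
            time.sleep(0.2)

        jitter = statistics.mean(abs(a - b) for a, b in zip(rtts, rtts[1:]))
        print(f"median RTT: {statistics.median(rtts):.1f} ms")
        print(f"jitter (mean delta between consecutive samples): {jitter:.1f} ms")
        print("interactive-grade" if statistics.median(rtts) <= 60 else "too slow for interactivity")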
  • From Campbell to Codensity: A Practical Hero’s Journey in Video Encoding
    What if a hardware roadmap could read like a myth? We take Joseph Campbell’s Hero’s Journey and map it to a concrete engineering pivot - from life in the ordinary world of CPU/GPU encoding to a high-density, power-efficient future with NETINT’s Codensity G5-based VPUs. We talk through the initial reluctance to touch specialized hardware, the mentors and SDKs that changed our minds, and the exact moment we crossed the threshold by installing drivers, testing real inputs, and pushing the cards into live workflows.
    From there, the plot thickens: allies like Norsk Video, Supermicro, Gigabyte, and Akamai helped us scale, while enemies showed up as driver quirks, 4:2:0 vs. 4:2:2 trade-offs, and new mental models that don’t behave like CPUs or GPUs. The dragon’s den wasn’t a competitor - it was public procurement. Tenders forced us to design for variability, not one-size-fits-all. That pressure shaped the treasure we brought back: four NETINT form factors that express the same transcoding engine in different ways.
    We break down where each fits:
    • PCIe T1A - broad compatibility
    • T2A - dual-ASIC throughput
    • U.2 T1U - extreme density when vendor policies allow
    • M.2 T1M - tiny blade for edge and contribution with PoE, low power, and surprising capacity
    We share the software split that actually works in production: NORSK for live and live-to-file pipelines, FFmpeg for VOD encoding - plus how a composable media architecture runs both on-prem and in the cloud. With Akamai’s NETINT-enabled compute options, hybrid deployments become practical, not aspirational.
    The story lands with a proof point: G&L deploying at scale for the European Parliament - 30 concurrent sessions, 32 audio tracks each - across Brussels, Strasbourg, and German cloud regions, with Linode as the control plane.
    DOWNLOAD PRESENTATION: https://info.netint.com/hubfs/downloads/IBC25-GnL-Hero-with-a-thousand-faces.pdf
    If you’re weighing density, power budgets, or vendor constraints, this journey offers a clear map, hard-won lessons, and a toolkit you can adapt. Subscribe, share with your team, and leave a review - what’s your dragon, and which form factor would you choose first?
    Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.
    --------  
    13:46
  • Hyperscale for Video | Stop Asking GPUs to Be Everything at Once
    What if video finally got its own processor, and your streaming costs dropped while quality and features went up?
    In this episode, we dig into the rise of the Video Processing Unit (VPU) - silicon built entirely for video - and explore how it’s transforming everything from edge contribution to multi-view sports. Instead of paying for general-purpose compute and GPU graphics overhead, VPUs put every square millimeter of the die to work on encoding, scaling, and compositing. The result is surprising gains in density, power efficiency, and cost.
    We look at where GPUs fall short for large-scale streaming and why CPUs hit a wall on cost per channel. Then we follow encoding as it moves into the network, building ABR ladders directly at venues, pushing streams straight to the CDN, and cutting both latency and egress costs. You’ll hear real numbers from cost-normalized tests, including a VPU-powered instance delivering six HEVC ladders for about the cost of one CPU ladder (see the cost sketch after this entry), plus a side-by-side look at AWS VT1/U30 and current VPU options.
    The discussion also covers multi-layer AV1 for dynamic overlays and interactive ad units, and how compact edge servers with SDI capture bring premium live workflows into portable, power-efficient form factors.
    We break down practical deployment choices such as U.2 form factors that slide into NVMe bays, mini servers designed for the edge, and PCIe cards for dense racks. Integration remains familiar with FFmpeg and GStreamer plugins, robust APIs, and a simple application layer for large-scale configuration.
    The message is clear: when video runs on purpose-built silicon, you unlock hyperscale streaming capabilities - multi-view, AV1 interactivity, UHD ladders - at a cost that finally makes business sense. If you’re rethinking your pipeline or planning your next live event, this is your field guide to the new streaming stack.
    If this episode gives you new ideas for your workflow, follow the show, share it with your team, and leave a quick review so others can find it.
    Key topics
    • GPUs, CPUs, and VPUs - why video needs purpose-built silicon
    • What 100% video-dedicated silicon enables for density and power
    • Encoding inside the network to cut latency and egress
    • Multi-layer AV1 for interactive ads and overlays
    • Multi-view sports made affordable and reliable
    • Edge contribution from venues using compact servers
    • Product lineup: U.2, mini, and PCIe form factors
    • Benchmarks comparing CPU, VPU, and AWS VT1/U30
    • Cloud options with Akamai and i3D, including egress math
    • Integration with FFmpeg, GStreamer, SDKs, and Bitstreams
    Download presentation: https://info.netint.com/hubfs/downloads/IBC25-VPU-Introduction.pdf
    Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.
    --------  
    18:42
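    To make the “cost-normalized” comparison above concrete, the short Python sketch below shows the arithmetic: divide each instance’s hourly price by the number of full ABR ladders it sustains and compare cost per ladder. The prices and channel counts in it are placeholders, not the benchmark figures from the episode.

        # Cost-per-ladder normalization (placeholder numbers, not measured benchmark data).
        # Idea: an instance is only as cheap as its hourly price divided by how many
        # full ABR ladders it can sustain concurrently.
        instances = {
            # name: (hourly_price_usd, concurrent_hevc_ladders) -- both hypothetical
            "cpu-only": (1.00, 1),
            "vpu-accelerated": (1.20, 6),
        }

        for name, (price, ladders) in instances.items():
            print(f"{name:16s} ${price / ladders:.3f} per HEVC ladder-hour")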

More News podcasts

About Voices of Video

Explore the inner workings of video technology with Voices of Video: Inside the Tech. This podcast gathers industry experts and innovators to examine every facet of video technology, from decoding and encoding processes to the latest advancements in hardware versus software processing and codecs. Alongside these technical insights, we dive into practical techniques, emerging trends, and industry-shaping facts that define the future of video. Ideal for engineers, developers, and tech enthusiasts, each episode offers hands-on advice and the in-depth knowledge you need to excel in today’s fast-evolving video landscape. Join us to master the tools, technologies, and trends driving the future of digital video.
Podcast website

Listen to Voices of Video, De Spindoctors, and many other podcasts from around the world with the radio.net app

Get the free radio.net app

  • Bookmark stations and podcasts
  • Stream via Wi-Fi or Bluetooth
  • Supports CarPlay & Android Auto
  • Many other app features