Voices of Video

NETINT Technologies

Available episodes

5 of 61
  • Scaling Video at the Edge: A Practical Roadmap
    Viewers won’t wait for your pipeline to catch up. This episode breaks down a practical roadmap for scaling video at the edge - where power limits, bandwidth costs, and live latency collide. With Advantech’s rugged, modular platforms (https://www.advantech.com/) paired with NETINT’s low-power, high-density ASIC encoders (https://netint.com/), we show how to move the heavy lifting closer to the camera and away from cloud bottlenecks - without compromising quality or operational control. Based on insights shared in Voices of Video Episode 72.

    We detail the true cost of pushing raw feeds to the cloud for encoding and how 4K demand multiplies those bills. Then we shift to a hybrid edge model that cuts backhaul, stabilizes egress, and reduces end-to-end delay for live events. At the center is the Quadra Mini Server (https://netint.com/products/quadra/): a compact half-rack unit built for OB vans and remote rooms that can deliver up to twenty 1080p streams from a low-power, edge-ready footprint. It’s fast to deploy, easy to replicate, and engineered for environments where space, power, and uptime are non-negotiable.

    What makes this approach different is co-design - aligning hardware, firmware, and partner integrations with real workflows. We explain how Advantech collaborates across CPU, memory, and ASIC vendors to create platforms with long lifecycles, predictable performance, and clean integration paths. The payoff: multi-4K delivery without extra racks, greener operations through lower watts per channel, and workflows that scale with audience demand. If you’re battling OPEX creep, unpredictable latency, or integration friction, this episode maps a clear, actionable path forward.

    Subscribe for more deep dives into edge video architecture, share with a teammate planning the next remote production, and leave a review to tell us where your pipeline hurts most.

    We lay out a clear plan to scale video at the edge using low-power ASIC encoders inside rugged, modular servers. From cutting bandwidth and latency to deploying the Quadra Mini in OB vans, we show how to grow without building new data centers.

    • why video demand continues to outpace infrastructure
    • bandwidth and cloud encoding cost pressures
    • latency risks for live and interactive viewing
    • ASIC-based encoding for power and density gains
    • Quadra Mini Server capabilities and use cases
    • hybrid edge workflows for predictable OPEX
    • co-design with partners for longevity and fit
    • greener operations through lower watts per channel
    • next steps and where to learn more

    Learn More:
    • Download Advantech’s Presentation → https://info.netint.com/hubfs/downloads/Enabling-Video-at-The-Edge.pdf
    • NETINT Case Studies → https://netint.com/resources/case-studies/
    • Advantech Edge Platforms → https://www.advantech.com/en/servers
    • NETINT VPU Technology Overview → https://netint.com/technology/

    Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.
    --------  
    7:54
  • The New Economics of Transcoding: How VPUs Unlock FAST, AVOD & Back-Catalog Revenue
    What if transcoding stopped being the constraint and became the engine behind your content strategy? In this episode, Arcadian’s Joe Waltzer and Josh Pesigan explain how Video Processing Units (VPUs) are transforming the economics and timing of video workflows - and why the real win isn’t just lower cost, but the freedom to experiment, iterate, and ship smarter.

    We start with a reality everyone in streaming understands: massive back catalogs sit on shelves because cloud transcoding costs erase the margin. With VPUs, that equation flips. Suddenly multilingual versions, refreshed ABR ladders, and FAST-ready packaging become inexpensive enough to try - letting teams test formats, revive dormant titles, and capitalize on “Suits-effect” surges without committing huge budgets up front.

    Then we dive into one of the industry’s biggest operational friction points: ad insertion. Traditional pipelines force teams to lock ad breaks early, long before anyone has performance data. Any change means re-encoding, delays, and cross-team stress. VPUs change that. Encoding becomes fast, cheap, and local to your workflow, so business teams can make placement decisions later - aligned with launch timing, audience insights, and real analytics. The result: higher fill, better yield, more experimentation, and far fewer internal fire drills.

    The best part? None of this requires new tooling. FFmpeg runs on VPUs without new APIs or retraining, and deployments work in the cloud or on-prem depending on workload and economics. (A hedged sketch of what such a back-catalog FFmpeg job can look like follows this episode entry.)

    If you’re building FAST channels, expanding AVOD, or trying to extract more value from your catalog, this conversation gives you a practical new mindset: use compute efficiency to buy strategic flexibility.

    Links & Resources:
    ⬇️ Download Arcadian Presentation: https://info.netint.com/hubfs/downloads/Optimizing-Video-Workflows-with-VPUs.pdf
    🎧 Listen to more Voices of Video episodes: https://netint.biz/podcast
    🚀 Test NETINT VPUs on Akamai Cloud (+$500 credit): https://netint.biz/akamai_500
    🖥 Learn more about NETINT VPUs: https://netint.com/products

    Key Takeaways:
    • how VPUs dramatically lower transcoding cost and energy use
    • why back catalogs become profitable again
    • the Suits effect as proof of latent demand
    • shifting ad decisions downstream for smarter AVOD/FAST
    • removing cross-team friction in ad planning
    • using FFmpeg on VPUs with zero workflow changes
    • cloud and on-prem deployment paths
    • replacing rigid pipelines with rapid experimentation
    • the operational gains that matter more than raw cost savings

    This episode of Voices of Video is brought to you by NETINT Technologies. Explore NETINT’s encoding solutions at netint.com.

    Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.
    --------  
    9:50
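    A minimal, hedged sketch of the back-catalog idea from the episode above: the folder layout, ladder rungs, and encoder name are illustrative assumptions (a stock software encoder stands in for whichever VPU-backed encoder your NETINT-enabled FFmpeg build exposes). The point is that the FFmpeg command line itself stays the same.

      # Hedged sketch (not NETINT's code): batch re-encode a back catalog into a
      # simple ABR ladder by shelling out to FFmpeg. Encoder name, ladder values,
      # and folder layout are illustrative assumptions; with a NETINT-enabled
      # FFmpeg build you would swap ENCODER for the VPU-backed encoder it exposes.
      import subprocess
      from pathlib import Path

      LADDER = [(1080, "5000k"), (720, "3000k"), (480, "1500k")]  # (height, bitrate)
      ENCODER = "libx264"  # software stand-in; replace per your FFmpeg build

      def transcode(master: Path, out_dir: Path) -> None:
          """Create one rendition per ladder rung from a single master file."""
          out_dir.mkdir(parents=True, exist_ok=True)
          for height, bitrate in LADDER:
              out = out_dir / f"{master.stem}_{height}p.mp4"
              subprocess.run([
                  "ffmpeg", "-y", "-i", str(master),
                  "-vf", f"scale=-2:{height}",       # keep aspect ratio, set height
                  "-c:v", ENCODER, "-b:v", bitrate,  # video codec + target bitrate
                  "-c:a", "aac", "-b:a", "128k",
                  str(out),
              ], check=True)

      if __name__ == "__main__":
          for title in sorted(Path("back_catalog").glob("*.mp4")):
              transcode(title, Path("renditions") / title.stem)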
  • So You Want A VPU? Here’s The No-Drama Way To Plug, Play, And Push To Your CDN
    Tired of choosing between ripping out your video stack or standing still? In this episode, Kenneth Robinson, Director of Field Application Engineers at NETINT, walks through a practical playbook for deploying Video Processing Units (VPUs) at any stage of growth - from retrofitting a live server to scaling across edge, cloud, and hybrid environments.

    We start with hardware options that match real-world use cases:
    • T1U hot-swap modules that slide into existing servers - no downtime required.
    • Prebuilt 1RU systems with up to ten T1Us and your choice of ARM or x86 CPUs.
    • Compact mini servers built for SDI ingest and edge workflows.
    • Cloud-based VPU instances via partners like Akamai, CDN77, and i3D.net - including test credits for quick starts.
    • Hybrid configurations that keep steady-state on-prem and burst to the cloud for overflow or redundancy.

    Then we cover software integration. Choose your path: a native API for granular control, or FFmpeg and GStreamer for faster deployment. Kenneth explains how NETINT’s SDK is being upstreamed into both projects - simplifying maintenance and keeping features current without custom patches.

    Next, we dive into two advanced capabilities that redefine efficiency:
    • Multi-layer AV1 encoding for personalized overlays or targeted ads inside a single bitstream.
    • Multiview encoding that lets the player dynamically stitch camera feeds without re-encoding - perfect for multi-angle sports or live events.

    Finally, not every team has a full dev bench - so meet Bitstreams, NETINT’s no-code interface for managing transcoding workflows. Build templates, monitor load and health, convert captions to WebVTT, and push to multiple origins with RTMP, SRT, HLS, or DASH.

    Kenneth closes with a preview of the customer-driven roadmap: WHIP/WHEP contribution, RTP, SMPTE 2110, audio-level control, RIST, and NDI, all prioritized based on real-world feedback.

    If you’re exploring AV1, chasing lower latency, or planning hybrid expansion, this walkthrough gives you concrete choices and clear next steps - from card to cloud.

    Download the presentation: https://info.netint.com/hubfs/downloads/VPU-Deployment-Options.pdf

    Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.
    --------  
    8:59
  • Synchronizing 20 Perspectives: The Future Of Multi-View Esports
    Cameras miss moments; fans don’t. We wanted every decisive peek, every clutch revive, and every chaotic final ring in Apex Legends to be watchable from any team’s perspective - live, synchronized, and affordable. That meant rethinking how we transcode and distribute dozens of POV streams at once without drowning in startup lag or compute spend.

    We walk through how Scalstrm integrated NETINT VPUs at a low level to pack up to 20 live channels onto a single card, slashing both costs and boot times for event-based streaming. Instead of relying on generic wrappers, they tapped direct APIs to tune buffer behavior, rate control, and ABR ladders for fast-motion gameplay.

    Partnering in the Akamai Cloud lets them spin up encoders only when needed, bring them online in seconds, and tear them down post-show - no idle fleets, no waste. For VOD, just-in-time transcoding stores a single high-bitrate master and generates renditions only when requested, keeping catalogs lean while preserving quality. (A hedged sketch of this just-in-time pattern follows this episode entry.)

    Znipe Esports takes the spotlight with a multi-POV esports product that delivers 20+ synchronized streams plus the main event feed. To keep every angle aligned, they apply AI and image analysis to lock onto in-game clocks, then validate with operators for frame-accurate sync across teams. Telemetry from damage and kill events fuels real-time overlays and instant highlights, so fans can jump to the best moments or follow their favorite squad without missing context.

    The payoff is dramatic: 25% lower transcoding cost, 70% faster startup, and a 75% reduction in high-quality transcoding cost - exactly where esports audiences are most demanding.

    We also share a war story: going live in 30 minutes only to find GPU capacity swallowed by AI training. VPUs gave us a dedicated path for video, restoring predictability when it mattered most.

    If you care about multi-view control, synchronized angles, and high frame-rate streams that don’t blow up your budget, this breakdown shows how to get there.

    Listen now: https://netint.biz/podcast
    Download the presentation: https://info.netint.com/hubfs/downloads/VPUs-on-Akamai-cloud.pdf
    Test drive NETINT VPUs on Akamai Cloud and get $500 credit: https://netint.biz/akamai_500

    Episode highlights:
    • Scalstrm’s origins in packaging, origin, and analytics for operators and broadcasters
    • Why low-level VPU APIs beat generic wrappers for live density and efficiency
    • Instant provisioning for event-based transcoding on cloud partners
    • Just-in-time transcoding for VOD to cut storage and compute
    • Znipe’s multi-POV product for Apex Legends with 20+ team feeds
    • AI and image processing for frame-accurate sync on in-game clocks
    • Ingesting telemetry to render stats and auto-generate highlights
    • Cost wins: 25% lower normal transcoding, 70% faster startup, 75% lower high-quality costs
    • Avoiding GPU shortages by shifting to VPUs for predictable capacity
    • Higher resolutions and frame rates that match esports viewer expectations

    Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.
    --------  
    17:43
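    To make the just-in-time idea above concrete, here is a minimal sketch under stated assumptions: one master per title, a local rendition cache, and FFmpeg with a software encoder standing in for the VPU. It only illustrates the store-one-master, transcode-on-first-request pattern, not Scalstrm’s or NETINT’s actual implementation.

      # Hedged sketch of just-in-time transcoding: keep one high-bitrate master per
      # title and build a lower rung only the first time a player asks for it.
      # Paths, rung names, and the software encoder are illustrative assumptions.
      import subprocess
      from pathlib import Path

      MASTERS = Path("masters")   # one high-quality mezzanine file per title
      CACHE = Path("jit_cache")   # renditions created on demand land here
      RUNGS = {"720p": ("-2:720", "3000k"), "480p": ("-2:480", "1500k")}

      def get_rendition(title: str, rung: str) -> Path:
          """Return a cached rendition, transcoding it on the first request."""
          out = CACHE / f"{title}_{rung}.mp4"
          if out.exists():
              return out  # cache hit: nothing to transcode
          scale, bitrate = RUNGS[rung]
          CACHE.mkdir(exist_ok=True)
          subprocess.run([
              "ffmpeg", "-y", "-i", str(MASTERS / f"{title}.mp4"),
              "-vf", f"scale={scale}",
              "-c:v", "libx264", "-b:v", bitrate,  # swap for a VPU encoder in practice
              "-c:a", "copy",                      # audio is passed through unchanged
              str(out),
          ], check=True)
          return out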
  • Your Sports Car Is Cool, But The Taxi Wins On Power Bills
    What if your heaviest video jobs spun up in seconds, sipped power, and scaled wherever your viewers are? In this episode, we run NETINT VPUs inside Akamai Cloud and push them across live and just-in-time workflows - multi-codec ABR (AV1/H.264/HEVC), synchronization, DRM, and low-latency packaging included.

    We start with deployment trade-offs: on-prem cards (control like a tuned sports car), cloud resources (on-demand like a taxi), portable containers, and Kubernetes for orchestration and autoscaling. With VPUs available in-region on Akamai, you cut CPU burn, lower watts per stream, and keep compute close to contribution or audience - ideal for local ingest, regional ad splicing, anti-piracy, and edge turnarounds.

    Then we get hands-on. Scalstrm launches a live channel with a single API call - multicast in, three profiles out, catch-up enabled - in a couple of seconds. Advanced toggles cover time-shift TV, HLS/DASH, low-latency, trick play, iframe playlists, DRM, and ad insertion. Robust monitoring and analytics surface sync issues early to avoid blind troubleshooting. For VOD, we flip to just-in-time: store the top profile, regenerate lower rungs on demand, and save ~50–60% storage while enabling instant ad asset playout.

    For builders, we walk the Kubernetes path: provision a cluster in Frankfurt, label nodes for NETINT VPUs, deploy drivers from Git, wire up object storage, and run a pod that watches a bucket and invokes FFmpeg with hardware acceleration. We generate Apple’s ABR ladder across AV1/H.264/HEVC and finish a 5.5-minute asset in under four minutes - setup included - while power draw rises smoothly from idle without spikes. (A hedged sketch of the bucket-watching step follows this episode entry.)

    If you care about power efficiency, global scale, and faster launches, this is a blueprint you can reuse today. Share it with the teammate who lives in FFmpeg, and tell us which part you want open-sourced next.

    Key Takeaways:
    • Deployment models: on-prem, cloud, containers, Kubernetes - when each makes sense
    • Why VPUs: higher density, lower power per stream, sustainability benefits
    • Akamai reach: edge and cloud tightly coupled for minimal latency
    • Scalstrm live demo: API setup → multicast in → three profiles out → ready in seconds
    • Advanced features: sync, time-shift TV, DRM, low-latency, trick play, iframe playlists, ad insertion
    • Observability: monitoring/analytics to reduce tickets and speed root-cause
    • Just-in-time VOD: keep highest profile, regenerate lower rungs on demand (~50–60% storage savings)
    • Kubernetes workflow: drivers, node labels, buckets, FFmpeg with NETINT acceleration
    • Performance proof: multi-codec ABR in minutes, end-to-end

    📄 Download the presentation →
    💡 Get $500 credit to test on Akamai →

    Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.
    --------  
    22:40
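    As a rough illustration of the Kubernetes step described above, here is a hedged sketch of what the bucket-watching pod could run. It assumes an S3-compatible object store reached through boto3, a hypothetical endpoint and bucket names, and a software encoder in place of the hardware-accelerated one; driver installation and node labelling are out of scope.

      # Hedged sketch of the "pod that watches a bucket and invokes FFmpeg" step,
      # assuming an S3-compatible object store reached through boto3 and a software
      # encoder standing in for the hardware-accelerated one. Endpoint, bucket
      # names, and the polling interval are illustrative assumptions only.
      import subprocess
      import tempfile
      import time
      from pathlib import Path

      import boto3

      s3 = boto3.client("s3", endpoint_url="https://objects.example.com")  # assumption
      IN_BUCKET, OUT_BUCKET = "ingest", "renditions"  # hypothetical bucket names
      seen = set()

      def transcode_and_upload(key: str) -> None:
          """Download a source, build a small ABR ladder, upload the renditions."""
          with tempfile.TemporaryDirectory() as tmp:
              src = Path(tmp) / Path(key).name
              s3.download_file(IN_BUCKET, key, str(src))
              for height, bitrate in [(1080, "6000k"), (720, "3000k"), (480, "1500k")]:
                  dst = Path(tmp) / f"{src.stem}_{height}p.mp4"
                  subprocess.run([
                      "ffmpeg", "-y", "-i", str(src),
                      "-vf", f"scale=-2:{height}",
                      "-c:v", "libx264", "-b:v", bitrate,  # replace with the VPU encoder
                      "-c:a", "aac", "-b:a", "128k",
                      str(dst),
                  ], check=True)
                  s3.upload_file(str(dst), OUT_BUCKET, dst.name)

      while True:  # simple polling loop; the pod just keeps watching the bucket
          for obj in s3.list_objects_v2(Bucket=IN_BUCKET).get("Contents", []):
              if obj["Key"] not in seen:
                  transcode_and_upload(obj["Key"])
                  seen.add(obj["Key"])
          time.sleep(10)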


About Voices of Video

Explore the inner workings of video technology with Voices of Video: Inside the Tech. This podcast gathers industry experts and innovators to examine every facet of video technology, from decoding and encoding processes to the latest advancements in hardware versus software processing and codecs. Alongside these technical insights, we dive into practical techniques, emerging trends, and industry-shaping facts that define the future of video. Ideal for engineers, developers, and tech enthusiasts, each episode offers hands-on advice and the in-depth knowledge you need to excel in today’s fast-evolving video landscape. Join us to master the tools, technologies, and trends driving the future of digital video.
Podcast website
