What does it take to run hundreds of live events without chaos? In this episode, we open up the architecture behind G&L’s Playout Hub: a hybrid publishing engine designed for broadcasters, public institutions, and distributed editorial teams that need broadcast precision at scale. Built on decades of systems integration experience at G&L Geißendörfer & Leschinsky GmbH, the platform mixes live inputs, VOD interstitials, graphics, and multi-target outputs into a unified, dependable workflow.
We trace the evolution from bespoke integrations to a productized, composable platform, grounded in three pillars: custom work, productized components, and full products.
Inputs span SDI, SRT, RTMP, MPEG-TS, and VOD, with ST 2110 on the horizon. All sources feed into a playlist-driven orchestration layer, where editorial teams trigger transitions, mix live feeds with pre-produced clips, and overlay graphics in real time. Outputs include HLS, DASH, RTMP, SRT, and CMAF, enabling consistent, simultaneous publishing to OTT platforms, social media, and syndication partners.
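To make the playlist idea concrete, here is a minimal sketch of what a playlist-driven channel could look like in code. The entry types, field names, and output targets are illustrative assumptions, not the actual Playout Hub data model or API.

```python
from dataclasses import dataclass, field

# Illustrative playlist model: live inputs, VOD interstitials, and overlay
# triggers are sequenced by editors, then fanned out to multiple targets.
@dataclass
class Entry:
    kind: str          # "live", "vod", or "overlay" (hypothetical types)
    source: str        # e.g. an SRT ingest URL or a clip filename
    duration_s: float | None = None  # None = play until manually switched

@dataclass
class Channel:
    name: str
    playlist: list[Entry] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)  # HLS, DASH, RTMP targets

channel = Channel(
    name="parliament-room-3",
    playlist=[
        Entry("vod", "opening_loop.mp4", duration_s=30),
        Entry("live", "srt://ingest.example:9000?streamid=room3"),
        Entry("overlay", "lower_third:speaker_name"),
    ],
    outputs=["hls://cdn.example/room3/master.m3u8", "rtmp://social.example/live/room3"],
)

for entry in channel.playlist:
    print(f"[{channel.name}] play {entry.kind}: {entry.source}")
```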
At scale, repeatability becomes everything. A channel manager with powerful templates, parameters, and reusable configurations lets operators spin up channels quickly while maintaining standards across hundreds of events and dozens of concurrent streams, such as the European Parliament’s 30 parallel events or ARTE’s 600 concerts per year.
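As a rough illustration of how templates and parameters keep dozens of concurrent channels consistent, here is a hypothetical sketch; the field names and values are assumptions, not the product's actual configuration schema.

```python
# Hypothetical channel template: operators fill in a few parameters,
# everything else (codecs, outputs, branding) comes from the template.
TEMPLATE = {
    "video_codec": "h264",
    "outputs": ["hls", "dash", "srt"],
    "branding": "default_package",
}

def instantiate(template: dict, *, name: str, ingest_url: str) -> dict:
    """Create a concrete channel config from a reusable template."""
    return {**template, "name": name, "ingest": ingest_url}

# Spin up many concurrent channels from the same template.
channels = [
    instantiate(TEMPLATE, name=f"event-{i:02d}",
                ingest_url=f"srt://ingest.example:{9000 + i}")
    for i in range(30)
]
print(len(channels), "channels created from one template")
```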
Download the full presentation: https://info.netint.com/hubfs/downloads/GnL-Beyond-live.pdf
Governance is treated as a first-class requirement. G&L’s independent access manager delivers SSO and granular role-based access control, down to individual actions like source switching or overlay triggering. This clean separation of concerns allows engineers to define codecs and I/O while editors manage timing, rundowns, and branding—preventing workflow collisions in large production teams.
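A toy sketch of the action-level access control described above; the role names and action strings are illustrative, not the actual permission model of G&L's access manager.

```python
# Illustrative role -> permitted-action mapping, down to single actions
# like switching a source or triggering an overlay.
ROLES = {
    "engineer": {"define_codecs", "configure_io"},
    "editor":   {"switch_source", "trigger_overlay", "edit_rundown"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role may perform the given action."""
    return action in ROLES.get(role, set())

assert is_allowed("editor", "trigger_overlay")
assert not is_allowed("editor", "configure_io")   # editors never touch I/O
```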
The architecture is hybrid by design, deployable on Kubernetes or k3s across cloud and on-prem environments, and integrates cleanly with external encoders, CDNs, and players. A built-in studio module supports lower-thirds, logos, and rundown-based overlays, while still allowing integration with external tools like Singular Live.
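For a sense of how a channel might be expressed declaratively in a Kubernetes or k3s environment, here is a hedged sketch of a custom resource rendered as a Python dictionary; the resource kind, API group, and fields are invented for illustration and will differ from the platform's real manifests.

```python
import json

# Hypothetical Kubernetes custom resource describing one playout channel.
channel_manifest = {
    "apiVersion": "playout.example.com/v1",
    "kind": "Channel",
    "metadata": {"name": "concert-stream-07", "namespace": "playout"},
    "spec": {
        "template": "concert-default",
        "ingest": {"protocol": "srt", "port": 9007},
        "outputs": ["hls", "dash"],
        "placement": "on-prem",   # or "cloud" in a hybrid deployment
    },
}

# In practice this would be applied via kubectl or the Kubernetes API;
# here we just print it so the shape of the resource is visible.
print(json.dumps(channel_manifest, indent=2))
```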
Under the hood, the platform uses hardware acceleration wherever possible—including NETINT VPUs (https://netint.com/products/) for efficient high-density encoding, while also supporting GPU and CPU environments. For teams handling hundreds of events, this efficiency is not optional; it’s the difference between smooth operation and system overload.
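To make the density point concrete, here is a hedged sketch of how an orchestrator might choose an encoding backend, preferring VPUs and falling back to GPU or CPU. The probe functions are placeholders for illustration, not real NETINT SDK or driver calls.

```python
from typing import Callable

# Placeholder capability probes; a real system would query the NETINT
# driver, NVML, etc. Here they simply simulate availability.
def vpu_available() -> bool: return True
def gpu_available() -> bool: return False

BACKENDS: list[tuple[str, Callable[[], bool]]] = [
    ("netint-vpu", vpu_available),   # highest encoding density per watt
    ("gpu",        gpu_available),
    ("cpu",        lambda: True),    # always works, lowest density
]

def pick_backend() -> str:
    """Return the first available encoding backend in preference order."""
    for name, available in BACKENDS:
        if available():
            return name
    raise RuntimeError("no encoder backend available")

print("encoding on:", pick_backend())
```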
• The three pillars: custom work → productized components → full products
• Hybrid inputs across SDI, SRT, RTMP, MPEG-TS, VOD, and future ST 2110
• Playlist-based orchestration with real-time graphics overlays
• Multi-target outputs: HLS, DASH, RTMP, SRT, CMAF
• Scaling challenges across hundreds of events and 30+ concurrent channels
• Channel manager with templates, parameters, reusability
• Hardware acceleration with NETINT VPUs, plus GPU and CPU support
• RBAC with SSO and granular, action-level permissions
• Separation of concerns for engineers vs. editors
• Kubernetes-based composable architecture for cloud + on-prem
• Lifecycle flow: reservation → templates → policies → scheduling → monitoring
• Studio module for overlays, rundowns + optional external graphics tools (e.g., Singular Live)
Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.