Broadcast Hub: Cloud Video Routing for Live Production
A broadcast hub ingests, processes, and distributes live video across protocols and regions. This article covers how cloud broadcast hubs are replacing legacy teleport infrastructure.
What a Broadcast Hub Actually Is
A broadcast hub is the piece of infrastructure that sits in the middle of a live video operation and handles everything messy about getting feeds from contribution to distribution. Sources come in: cameras, hardware encoders, remote production trucks, mobile units, web contributors. Destinations go out: broadcasters, OTT origins, recording servers, regional affiliates. The hub is the software that turns one into the other.
If you’ve worked in a broadcast truck, you already know the role. Video router, distribution amplifier, format converter, backup switcher — that whole stack, compressed into a single application and moved out of the rack into a datacenter. Where a traditional router patches SDI cables inside a building, a cloud broadcast hub patches IP feeds across continents over the public internet without dropping frames.
The phrase “broadcast hub” used to mean a physical place: a telco building like the BT Tower in London, a teleport outside Frankfurt, a broadcast service provider’s data hall in Virginia. That meaning hasn’t disappeared, but the weight has shifted. Today, when a broadcaster says “our broadcast hub,” they’re increasingly pointing at a managed cloud instance running on dedicated hardware in several cities at once. Same job. New operational model.
Why Live Broadcast Needs a Hub
Live video operations are never clean one-to-one relationships. They look like this:
- One source, many takers. An international football match feeds 30 broadcasters worldwide, each with their own decoder brand, bitrate ceiling, and audio channel map.
- Many sources, one program. A multi-site news production pulls contributions from five cities into a single program output, switching on cue.
- Format mismatch, every time. The encoder outputs HEVC 1080i50 at 15 Mbps. Half the takers want H.264 720p60 at 6 Mbps. Somebody has to transcode.
- Reliability, not best-effort. When the primary feed dies mid-match, the system has to switch to backup inside 50 ms without anyone touching a keyboard.
- Geographic spread. Takers in Sydney, Mumbai, and São Paulo each want their feed delivered from a server close to them, not from a single London endpoint routed through transoceanic fiber.
A broadcast hub solves all of these in one configuration, not a folder of scripts. You define ingests, processing rules, outputs — once — and the hub executes it every time the event goes live. Without a hub, you’re running a production on shell scripts and hope.
The Three Jobs Every Broadcast Hub Does
Whether it’s on-premise hardware from Evertz, a cloud deployment on AWS, or a managed platform like Vajracast, every broadcast hub does three things. Ingest, process, distribute. Get any of them wrong and the whole operation fails.
Ingest
The hub accepts incoming live feeds from whatever the contribution side is running. A serious broadcast hub speaks the full protocol zoo:
- SRT in both listener and caller modes, with AES-128/256 encryption and configurable latency
- RTMP for the older encoder fleet that hasn’t moved yet
- MPEG-TS over UDP for broadcast-grade contribution links, with multicast support where the network allows it
- RTSP for IP cameras and surveillance integrations
- HLS pull for restreaming feeds that are already packaged
- NDI for studio floors where the hub sits next to the production switcher
- WHIP for WebRTC contributors (guests, remote reporters)
Each ingest gets its own parameters: passphrase, latency buffer, packet loss tolerance, audio configuration, metadata expectations. A production broadcast hub can run dozens of these at once without the operator having to babysit them. For deep protocol details, see the SRT streaming gateway guide.
Process
Once a feed is inside the hub, the real work starts. This is where most cheap “streaming servers” fall over and serious broadcast hubs earn their keep.
- Hardware transcoding. Resolution, bitrate, framerate, codec — all changeable on the fly, using GPU acceleration so the server doesn’t melt under load. Software transcoding pretends to work until the tenth stream comes in. See hardware transcoding for why this matters at scale.
- Input failover. Primary and backup feeds monitored continuously, automatic switchover under 50 ms, zero operator intervention. The multi-input failover model is non-negotiable for anything sponsored.
- Audio matrix routing. Map input channels to output channels, split 7.1 into stereo pairs, pass multi-PID audio through for international broadcasts with separate language tracks. The audio matrix doc covers the common cases.
- Hot reconfiguration. Change routes, swap sources, add takers mid-broadcast without taking the service down. This is what hot management means in practice.
- Metadata preservation. SCTE-35 ad markers, timecodes, closed captions, program IDs — all of it has to survive the processing chain. Losing captions is a regulatory problem in most jurisdictions.
- Field-aware handling. Interlaced HEVC is still a thing in 2026, and hubs that don’t handle it correctly produce outputs that look like someone smeared Vaseline on the lens.
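The audio matrix point is easiest to see with a toy example. The sketch below groups eight mono commentary channels into four stereo pairs; channel names are invented for illustration, and a real hub does this as PID and channel routing inside the transport stream, not in Python:

```python
def stereo_pairs(channels: list[str]) -> list[tuple[str, str]]:
    """Group an even number of mono channels into (left, right) stereo pairs."""
    if len(channels) % 2:
        raise ValueError("need an even channel count to form stereo pairs")
    # Interleave: even indices become left channels, odd indices right.
    return list(zip(channels[0::2], channels[1::2]))

commentary = ["eng-L", "eng-R", "spa-L", "spa-R",
              "fra-L", "fra-R", "deu-L", "deu-R"]
pairs = stereo_pairs(commentary)  # 4 stereo pairs, one per language
```

The same mapping logic generalizes to downmixing 7.1 beds or passing multi-PID language tracks through untouched.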
Distribute
The processed stream fans out to every destination simultaneously, in whatever protocol each destination needs.
- SRT push to professional decoders at broadcaster facilities and downstream hubs
- RTMP push to social platforms and legacy CDN ingest
- HLS output for web players and OTT origin servers
- MPEG-TS over UDP to IRDs and satellite uplink gear
- Recording to local disk, S3-compatible object storage, or network-attached storage
- Passthrough to third-party CDNs without re-encoding
A single ingest can fan out to twenty outputs, each with its own protocol, bitrate profile, and destination. The zero-copy distribution model keeps CPU overhead low enough that bandwidth — not compute — becomes the limit. For the routing logic behind this, the live stream routing pillar covers the full picture.
On-Premise vs Cloud Broadcast Hub
For thirty years, broadcast hubs lived in physical buildings. You bought rack-mount gear from Evertz, Imagine, Nevion, or Grass Valley. You leased fiber. You hired engineers to maintain all of it. For some workloads it still makes sense. For most, the economics have turned.
| Aspect | On-Premise Broadcast Hub | Cloud Broadcast Hub |
|---|---|---|
| Setup time | Weeks to months | Minutes to hours |
| Geographic reach | Single site per build | Multi-region by default |
| Capex | Six to seven figures | Zero |
| Opex | Predictable but rigid | Scales with usage |
| Maintenance burden | Your engineering team | Provider handles it |
| Redundancy | You design and pay for it | Built into the architecture |
| Upgrade path | Hardware refresh cycle | Continuous rollout |
| Good fit for | Fixed facilities, strict compliance | Events, scaling ops, global reach |
The old tradeoff was control: on-premise gave you root on every box and a cable you could physically unplug. That still matters in regulated environments. But modern cloud broadcast hubs expose their internals through web admin interfaces and a REST API that give you the same operational control without responsibility for the server room. The question is no longer whether to move to cloud — it’s which cloud model to pick.
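To illustrate the kind of declarative control a hub's REST API gives you, here is a sketch of a route definition payload. The field names and shape are invented for illustration, not any vendor's actual schema; the point is that an entire ingest-failover-distribute chain reduces to one JSON document you can version-control and POST:

```python
import json

# Hypothetical route definition -- field names are illustrative only.
route = {
    "name": "world-feed-main",
    "ingest": {"protocol": "srt", "mode": "listener", "port": 9000,
               "passphrase": "REDACTED", "latency_ms": 200},
    "failover": {"backup_port": 9001, "switchover_ms": 50},
    "outputs": [
        {"protocol": "srt", "mode": "caller",
         "host": "decoder1.example.com", "port": 7001},
        {"protocol": "rtmp", "url": "rtmp://cdn.example.com/live/key"},
    ],
}
body = json.dumps(route)  # ready to POST to the hub's management API
```

Because the route is plain data, adding a taker mid-broadcast is an API call that appends one entry to `outputs`, which is what hot reconfiguration looks like from the operator's side.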
Managed vs Self-Hosted Cloud Hub
Once you’ve decided on cloud, you have a second fork in the road.
Self-hosted cloud hub. You rent VMs from AWS, GCP, or Azure, install broadcast software, and run it yourself. You own the OS, kernel tuning, firewall rules, monitoring, upgrade cycle, and the 3 AM pages when something dies mid-event. This works if you already have a platform engineering team. It’s the same job as on-premise, minus the physical hardware.
Managed cloud hub. A specialized provider runs the servers, network, monitoring, and on-call rotation. You get admin access to the application layer — routes, ingests, outputs, failover rules — through a web interface and an API. You handle production operations. They handle the plumbing.
For most broadcasters, managed is the right call. Your team’s value is in the live production, not in keeping kernels patched. Check the Wowza alternative comparison and Haivision SRT Gateway comparison for how the commercial options stack up.
Vajracast as a Managed Broadcast Hub
Vajracast is a managed cloud broadcast hub purpose-built for live broadcast distribution. It runs on dedicated physical servers — not multi-tenant VMs — across Paris, London, Frankfurt, Helsinki, New York, Virginia, Los Angeles, and Singapore, with extended reach through partner infrastructure in Tokyo, Hong Kong, Beijing, and Sydney. Eight owned regions, four partner regions, one control plane.
Each instance ships with:
- 1 Gbps unmetered bandwidth, dedicated — no shared pipe, no per-GB egress invoice surprises
- Hardware GPU transcoding for HEVC, H.264, multi-PID audio, interlaced field reconstruction
- Dual-path failover with sub-50 ms switchover between primary and backup sources
- Multi-protocol ingest and output: SRT, RTMP, RTSP, HLS, NDI, MPEG-TS over UDP
- Real-time metrics through the web dashboard and a documented REST API for integration with your monitoring stack
- Crash recovery — full state restoration after any process or host incident, so an event doesn’t die because a watchdog fired
- Web admin interface on your own subdomain (yourname.vajracast.com), with role-based access for operators and engineers
You configure the routes. We keep the servers alive. That’s the deal. For the full product surface, the broadcast streaming software pillar covers the application layer in detail.
Production Architecture: An International Sports Tournament
Concrete numbers beat abstract diagrams. Here’s what a real international sports tournament broadcast hub looks like, based on a current Vajracast deployment replacing a legacy BT Tower aggregation path.
Sources (venue side)
- Primary encoder: HEVC 1080i50 at 15 Mbps, SRT listener, AES-256
- Backup encoder: H.264 1080i50 at 12 Mbps, SRT listener, separate network path
- Two additional camera angles for hot-switching at 8 Mbps each
- Graphics-only monitoring feed at 2 Mbps
Hub processing (Frankfurt instance)
- Primary/backup failover with sub-50 ms switchover
- HEVC-to-H.264 transcoding for takers whose decoders predate the HEVC rollout
- Multi-PID audio split: eight commentary channels output as four stereo pairs
- Field reconstruction for the interlaced HEVC primary
- SCTE-35 passthrough for regional ad insertion
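The primary/backup decision in that chain can be sketched as a tiny policy function. Real hubs key this off continuous packet-arrival health metrics rather than booleans, but the preference order is the whole idea:

```python
def pick_active(primary_ok: bool, backup_ok: bool, current: str) -> str:
    """Two-input failover policy: prefer primary, fall back, never go dark."""
    if primary_ok:
        return "primary"       # fail back as soon as primary recovers
    if backup_ok:
        return "backup"        # automatic switchover, no operator action
    return current             # neither healthy: hold the last route
```

Evaluating this on every health-check tick, with ticks far shorter than the 50 ms budget, is what makes the switchover invisible to takers.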
Distribution
- World feed main: 30 international broadcasters, SRT push, ~20 Mbps each
- World feed backup: same 30 broadcasters on a second Vajracast instance in London for regional redundancy
- Regional feeds for India, Australasia, and Middle East takers
- Watermarked HLS feed for browser-based quality monitoring
- S3 archive recording for post-match review
The bandwidth math. Thirty takers at 20 Mbps is 600 Mbps outbound. Add retransmissions and metadata, call it 720 Mbps sustained. On a 1 Gbps unmetered instance that leaves ~280 Mbps in reserve. Add a second instance in London, split takers geographically, and you get both more capacity and automatic regional failover. Ten more takers is just more bandwidth, not more engineering effort.
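That arithmetic is simple enough to sanity-check directly. The 20% overhead factor is the assumption from the paragraph above, covering retransmissions and metadata:

```python
takers = 30
per_taker_mbps = 20

base = takers * per_taker_mbps      # 600 Mbps of clean payload
sustained = base * 1.2              # ~20% overhead assumption -> 720 Mbps
headroom = 1000 - sustained         # reserve on a 1 Gbps unmetered pipe
```

Running the same numbers with ten extra takers (840 Mbps base plus overhead) overflows a single instance, which is exactly when the second London instance earns its keep.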
This is the workload that used to require a satellite truck outside the stadium and a telco teleport contract to aggregate feeds at BT Tower. Today it runs on two cloud instances, provisioned in an afternoon, operated by one broadcast engineer through a web browser. The live broadcast use cases page has more deployment examples.
When You Actually Need a Broadcast Hub
You need a broadcast hub the moment any of these is true:
- You distribute one live feed to more than one professional taker
- You need protocol conversion between the contribution format and at least one distribution format
- You require automatic failover between primary and backup sources
- You have takers in multiple geographic regions and care about latency or routing cost
- You need per-stream monitoring with alerts, not just “the light is green”
- Your downtime budget is measured in seconds, not minutes
- You cannot justify the capex and headcount to run on-premise broadcast infrastructure
If you’re sending a single encoder to a single CDN ingest point and you’re happy to restart it manually when something goes wrong, you don’t need a hub. Point-to-point SRT is enough, and anything more is overhead. But the moment a second destination shows up, or a second source, or a contract that mentions SLAs, a broadcast hub is the correct architecture.
How to Choose a Broadcast Hub
The evaluation criteria that matter, ordered by how often they bite people in production:
- Protocol coverage. Does it speak every protocol you currently use, every protocol your takers use, and every protocol you might need in 18 months?
- Failover behavior. How fast does it switch? Automatic or operator-gated? See the video stream failover pillar for the full taxonomy.
- Hardware transcoding. Software transcoding is fine for five streams and a disaster at fifty. Insist on GPU acceleration.
- Geographic footprint. Where are the servers physically? Latency from a London hub to Singapore is not the same conversation as from a Singapore hub to Singapore.
- Bandwidth model. Unmetered, metered, burstable, capped? Metered bandwidth on a live sports event produces invoices that end careers.
- Operational model. Self-hosted, managed, hybrid? Who gets paged at 2 AM?
- Pricing model. Make sure the math works at peak load, not just at average.
- Engineering access. When something breaks, can you reach the engineers who wrote the code, or are you routed through three tiers of scripted support?
A broadcast hub is a long-term operational commitment. Evaluate it accordingly.
Next Steps
If you’re building out a new broadcast hub or migrating off on-premise infrastructure, the fastest way to get concrete answers is to run a real ingest through a real instance.
- Start a free Vajracast trial — dedicated hardware, provisioned in minutes, no credit card required
- Configure your first SRT ingest and a couple of outputs through the web admin
- Point your actual production encoders at it and see how the failover, transcoding, and monitoring behave under your real load
- Talk to the engineering team about your specific routing, redundancy, and compliance needs
For the deeper technical context around what a broadcast hub does, the SRT streaming gateway, live stream routing, and video stream failover pillars cover the core mechanics. If you want to see how Vajracast compares to the incumbents, start with the Haivision SRT Gateway comparison. Protocol fundamentals are documented at the SRT Alliance and broadcast transport standards at SMPTE.
A broadcast hub used to mean a building. Now it means a configuration. The job is the same. The weight is gone.
Frequently Asked Questions
What is a broadcast hub?
A broadcast hub is the central routing point in a live video operation. It receives contribution feeds from cameras, encoders, and remote production sites, processes them (transcoding, failover, audio routing, metadata handling), and distributes the result to multiple downstream destinations — broadcasters, OTT origins, CDNs, recording systems. In one system it replaces the video router, protocol converter, and distribution amplifier of a traditional broadcast chain.
What is the difference between a broadcast hub and a CDN?
A CDN delivers pre-packaged content to thousands or millions of end viewers. A broadcast hub handles contribution and distribution between professional endpoints: a dozen encoders coming in, thirty or forty broadcasters taking the feed out. The hub cares about frame accuracy, sub-second latency, failover, and protocol conversion. The CDN cares about cache hit ratios. Different jobs, different tools — and a broadcast hub typically feeds a CDN rather than replacing one.
Do I still need a broadcast hub if I already use SRT?
SRT is a transport protocol. It moves bytes from point A to point B reliably. A broadcast hub is the application layer that orchestrates dozens of those A-to-B connections, adds failover between them, transcodes when formats don't match, and monitors everything in one place. Without a hub you end up managing SRT connections from a spreadsheet, which works fine until it doesn't.
On-premise or cloud broadcast hub?
Cloud broadcast hubs win on almost every axis: faster deployment, multi-region reach, no capex, built-in redundancy, and someone else keeping the lights on. On-premise only makes sense when you have hard compliance constraints, ultra-low-latency requirements measured in single-digit milliseconds, or an existing physical facility you already maintain.
What protocols should a modern broadcast hub support?
At minimum: SRT (listener and caller, with AES-256), RTMP for legacy encoder compatibility, MPEG-TS over UDP for broadcast-grade links, HLS for web and OTT ingest, RTSP for IP cameras, and NDI for studio environments. Bonus points for WHIP and WHEP for WebRTC. The hub should convert between any of these in either direction without breaking the audio or the closed captions.
How many takers can a 1 Gbps broadcast hub serve?
At 20 Mbps per taker in passthrough mode, a dedicated 1 Gbps unmetered instance serves roughly 40-50 simultaneous takers with headroom for retransmissions. Drop the bitrate to 10 Mbps and you get 80-90. Enable hardware transcoding and each transcoded variant costs GPU cycles but barely affects the bandwidth math. For events needing hundreds of takers, you deploy multiple hubs across regions and route takers to the closest one.
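The lower end of that range follows from one line of arithmetic, assuming roughly 20% of the link is reserved for retransmissions and headroom (tune the factor to your measured loss):

```python
link_mbps = 1000
per_taker_mbps = 20
overhead_factor = 1.2  # assumed ~20% retransmission/headroom reserve

max_takers = int(link_mbps / (per_taker_mbps * overhead_factor))
# lands at the conservative end of the 40-50 range quoted above
```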
When is a broadcast hub overkill?
If you send one encoder to one destination and you never need a backup path, point-to-point SRT is enough. The moment you add a second destination, a backup encoder, a protocol conversion, or a requirement for monitoring, the spreadsheet-and-CLI approach starts costing more than a hub.
Can a broadcast hub replace a teleport or BT Tower-style aggregation point?
Yes — and it already is. Cloud broadcast hubs now carry international sports feeds, news contributions, and multi-camera productions that used to terminate in telco buildings. The advantage is that a cloud hub can be provisioned in minutes, runs in multiple cities simultaneously, and costs a fraction of satellite uplink time.