The RTMP Legacy Problem

RTMP (Real-Time Messaging Protocol) has been the standard protocol for live streaming ingest for over fifteen years. Virtually every encoder, every streaming platform, and every broadcast workflow speaks RTMP. It works, it is understood, and an enormous installed base of hardware and software depends on it.

But RTMP was designed for a different era. It uses TCP, which means any packet loss triggers head-of-line blocking and stream stalling. It has no built-in encryption (RTMPS adds a TLS wrapper, but not all devices support it). It provides no transport-level diagnostics. And Adobe, the protocol’s steward, has effectively abandoned it.

Meanwhile, SRT offers everything RTMP lacks: native AES encryption, UDP-based transport with selective retransmission, configurable latency buffers, and rich real-time statistics. The problem is that thousands of encoders, cameras, and workflow tools still only speak RTMP.

The solution is a protocol bridge.

How Protocol Bridging Works

A protocol bridge accepts a stream in one protocol and re-publishes it in another. For RTMP-to-SRT conversion, the bridge:

  1. Receives the incoming RTMP stream (TCP, port 1935)
  2. Demuxes the FLV-packaged media carried by RTMP to extract the raw H.264/HEVC video and AAC/Opus audio elementary streams
  3. Remuxes those streams into MPEG-TS (the container format conventionally carried over SRT)
  4. Transmits via SRT with encryption, latency settings, and error correction

This is a passthrough operation. The video and audio are not re-encoded, so there is no quality loss and CPU usage stays minimal. The bridge changes only the transport and container format.
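
To make the passthrough concrete, here is a minimal sketch of the same operation as a standalone ffmpeg command, independent of Vajra Cast (the hostnames, stream key, and passphrase are placeholders). The -c copy flag copies the elementary streams without re-encoding, -f mpegts remuxes them into MPEG-TS, and the SRT URL enables AES-256 encryption. Note that ffmpeg’s srt protocol expresses latency in microseconds, so 200000 here means 200 ms:

  # Pull RTMP, remux to MPEG-TS, and push over encrypted SRT (no re-encoding)
  ffmpeg -i rtmp://localhost/live/mystream \
         -c copy \
         -f mpegts \
         "srt://downstream.example.com:9000?mode=caller&latency=200000&passphrase=YourLongPassphrase&pbkeylen=32"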

Vajra Cast as an RTMP-to-SRT Bridge

Vajra Cast has native RTMP ingest through its built-in nginx-rtmp integration. This means you can receive RTMP streams directly without any additional software.

Setting Up RTMP Ingest

  1. In the Vajra Cast web interface, create a new route
  2. Add an RTMP input with a stream key (e.g., live/mystream)
  3. Vajra Cast provisions the nginx-rtmp endpoint automatically
  4. Point your encoder to rtmp://your-server/live/mystream
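
To verify the ingest without a hardware encoder, you can make a quick test push with ffmpeg (assuming a local file containing RTMP-compatible H.264/AAC; the filename is a placeholder):

  # Push a local test file to the new ingest at its native frame rate, without re-encoding
  ffmpeg -re -i test-clip.mp4 -c copy -f flv rtmp://your-server/live/mystream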

Vajra Cast handles the nginx-rtmp configuration dynamically. When you create an RTMP ingest, it creates the corresponding nginx application and stream key. When you delete it, the configuration is cleaned up. No manual nginx.conf editing required.

Routing RTMP to SRT Output

Once the RTMP stream is ingested, you can add any number of SRT outputs:

  1. On the same route, add an SRT output (caller mode)
  2. Set the destination address and port
  3. Configure encryption (passphrase, AES-256)
  4. Set the latency for the target network conditions

The stream is now bridged: your legacy RTMP encoder sends to Vajra Cast, which converts and pushes via SRT to the downstream server with full encryption and error correction.
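
For reference, the caller-side settings from steps 1-4 map onto libsrt’s standard URI syntax (as accepted by srt-live-transmit and most SRT tools) roughly as follows; the hostname and passphrase are placeholders:

  srt://production-server:9000?mode=caller&latency=200&passphrase=YourLongPassphrase&pbkeylen=32

Here latency is in milliseconds, pbkeylen=32 selects AES-256, and libsrt requires passphrases of 10 to 79 characters. Be aware that units vary between tools: ffmpeg’s srt protocol, for example, takes the same latency parameter in microseconds.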

Configuration Example

A typical RTMP-to-SRT bridging route in Vajra Cast:

Route: "Studio Camera Bridge"

Input:
  Protocol: RTMP
  Stream Key: live/studio-cam-1
  Endpoint: rtmp://gateway.example.com/live/studio-cam-1

Output 1:
  Protocol: SRT (Caller)
  Destination: srt://production-server:9000
  Passphrase: [AES-256 passphrase]
  Latency: 200ms

Output 2:
  Protocol: SRT (Caller)
  Destination: srt://backup-server:9001
  Passphrase: [AES-256 passphrase]
  Latency: 500ms

In this example, a single RTMP input is fanned out to two SRT destinations (primary and backup) with independent encryption and latency settings. Thanks to Vajra Cast’s zero-copy distribution, the second output incurs no additional CPU overhead.

SRT-to-RTMP: The Reverse Bridge

The conversion works in the other direction too. Many streaming platforms (YouTube Live, Twitch, Facebook Live) still require RTMP ingest. With Vajra Cast, you can:

  1. Receive SRT from your production network (encrypted, resilient)
  2. Output RTMP to each streaming platform

This gives you the benefits of SRT for your internal transport while maintaining compatibility with every major platform.

Route: "Social Distribution"

Input:
  Protocol: SRT (Listener)
  Port: 9000
  Encryption: AES-256

Output 1:
  Protocol: RTMP
  Destination: rtmp://a.rtmp.youtube.com/live2/[stream-key]

Output 2:
  Protocol: RTMP
  Destination: rtmp://live.twitch.tv/app/[stream-key]

Output 3:
  Protocol: RTMP
  Destination: rtmps://live-api-s.facebook.com:443/rtmp/[stream-key]
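
Outside of Vajra Cast, the reverse bridge for a single platform can be sketched with ffmpeg as follows (the listener port, passphrase, and stream key are placeholders; the multi-platform fan-out above is handled by the route itself):

  # Listen for the encrypted SRT feed and republish it to YouTube over RTMP, without re-encoding
  ffmpeg -i "srt://0.0.0.0:9000?mode=listener&passphrase=YourLongPassphrase&pbkeylen=32" \
         -c copy \
         -f flv "rtmp://a.rtmp.youtube.com/live2/[stream-key]"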

nginx-rtmp Integration Details

Vajra Cast’s RTMP ingest is powered by nginx-rtmp, a well-proven module that handles RTMP connections at scale. The integration provides:

  • Dynamic application creation: new RTMP endpoints are provisioned automatically when you create routes in the UI or via the REST API
  • Stream key management: each RTMP ingest has a unique stream key that acts as both an identifier and a basic authentication mechanism
  • Reliable reconnect: if the RTMP encoder disconnects and reconnects (common with OBS, vMix, and Wirecast), Vajra Cast picks up the new connection automatically
  • Multiple simultaneous streams: each RTMP ingest is independent. You can run dozens of RTMP inputs concurrently
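
Vajra Cast writes this configuration for you, but for readers familiar with nginx-rtmp, the provisioned ingest corresponds roughly to a hand-written application block like the one below (illustrative only; the generated configuration may differ):

  rtmp {
      server {
          listen 1935;

          application live {
              live on;        # accept live publishing to this application
              record off;     # do not write incoming streams to disk

              # Optional: restrict who may publish
              # allow publish 203.0.113.10;
              # deny publish all;
          }
      }
  }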

Securing RTMP Ingest

RTMP itself has no encryption (the protocol predates widespread TLS adoption). To secure your RTMP ingest:

  1. Use RTMPS where supported. Some encoders support RTMP over TLS. Vajra Cast can terminate RTMPS connections.
  2. Use unique stream keys. Treat stream keys as passwords. Rotate them between events.
  3. Restrict by IP. If your encoder has a static IP, configure firewall rules to accept RTMP only from known sources (a minimal example follows this list).
  4. Bridge to SRT immediately. The most secure approach is to keep RTMP local (same network or localhost) and use SRT with AES-256 for any transport over the internet.
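
As a concrete sketch of point 3, on a Linux host filtering with iptables and an encoder at the placeholder address 203.0.113.10, the rules might look like:

  # Accept RTMP (TCP 1935) only from the known encoder; drop everything else
  iptables -A INPUT -p tcp --dport 1935 -s 203.0.113.10 -j ACCEPT
  iptables -A INPUT -p tcp --dport 1935 -j DROP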

Why Not Just Use RTMP Everywhere?

If RTMP works for your encoder, why add the complexity of protocol conversion? Several reasons:

Network Resilience

RTMP uses TCP. When a packet is lost, TCP must retransmit it and deliver it in order before any later data can reach the application, so new packets queue behind the missing one (head-of-line blocking). On a lossy network, this creates visible stuttering and buffering. SRT’s UDP-based selective retransmission recovers only the missing packet without blocking the rest of the stream.

Encryption

Native RTMP is unencrypted. Anyone with access to the network path can capture and view your stream. RTMPS (TLS) adds encryption, but not all hardware encoders support it. SRT’s AES-256 encryption is built into the protocol and supported by every SRT-capable device.

Diagnostics

RTMP provides almost no transport-level metrics. If your stream is having issues, you are debugging blind. SRT exposes RTT, jitter, packet loss rate, retransmission count, and bandwidth estimates in real time. Vajra Cast surfaces all of these in the dashboard and via Prometheus metrics.

Firewall Traversal

RTMP uses TCP port 1935, which some corporate firewalls block. SRT uses UDP on configurable ports and, in caller mode, initiates outbound connections that work with most NAT configurations.

Migration Strategy: RTMP to SRT

For organizations with significant RTMP infrastructure, the migration to SRT does not need to happen overnight. A phased approach:

Phase 1: Bridge at the Gateway

Deploy Vajra Cast as an RTMP-to-SRT bridge. Keep your existing encoders sending RTMP. Vajra Cast converts to SRT for all downstream transport. This requires zero changes to your encoding hardware.

Phase 2: Upgrade Encoders Incrementally

As encoders are replaced or updated, configure new ones to send SRT directly. Vajra Cast accepts both protocols simultaneously, so you can mix RTMP and SRT inputs on the same route with automatic failover.

Phase 3: SRT End-to-End

Once all encoders support SRT, you can phase out RTMP ingest entirely. Keep RTMP outputs for platform compatibility (YouTube, Twitch), but your entire contribution and distribution backbone runs on SRT.

Next Steps