FFmpeg SRT Streaming: The Complete Command Reference

FFmpeg SRT Support

FFmpeg has supported SRT since version 4.0 (released 2018), but SRT support has improved substantially in every release since. You want FFmpeg 5.0 or newer for reliable SRT operation, and FFmpeg 6.0+ for the best latency and statistics handling.

Check if your FFmpeg build includes SRT:

ffmpeg -protocols 2>/dev/null | grep srt

You should see srt listed under both input and output protocols. If it is missing, your build was compiled without --enable-libsrt. On Ubuntu/Debian:

sudo apt install libsrt-openssl-dev

Then rebuild FFmpeg with --enable-libsrt, or install a build that includes it. On macOS with Homebrew, brew install ffmpeg includes SRT by default.

Confirm the build was configured with SRT:

ffmpeg -hide_banner -buildconf 2>/dev/null | grep srt

This shows the --enable-libsrt configure flag, but FFmpeg does not print the libsrt version itself. To check the library version, ask your package manager or pkg-config (libsrt installs an srt.pc file): pkg-config --modversion srt. SRT 1.5.0 or newer is recommended. Older versions lack connection bonding, improved statistics, and certain socket options.

SRT Listener Mode

In listener mode, FFmpeg opens a UDP port and waits for an incoming SRT connection. This is useful when FFmpeg acts as the receiving end, for example ingesting a stream from a remote encoder.

Receive and save to file:

ffmpeg -i "srt://:9000?mode=listener&latency=500000" \
  -c copy output.ts

Receive and play (pipe to ffplay):

ffplay "srt://:9000?mode=listener&latency=300000"

Receive and re-stream as RTMP:

ffmpeg -i "srt://:9000?mode=listener&latency=500000" \
  -c copy -f flv rtmp://localhost/live/stream1

Key points: the empty host before the colon means “bind to all interfaces.” The latency parameter is in microseconds in FFmpeg’s SRT implementation (500000 = 500ms). If you need to bind to a specific interface, specify its IP: srt://192.168.1.10:9000?mode=listener.

SRT Caller Mode

Caller mode is the more common FFmpeg SRT use case. FFmpeg initiates a connection to a remote SRT listener, either to push a stream or to pull one.

Push a file to a remote SRT listener:

ffmpeg -re -i input.mp4 \
  -c:v libx264 -b:v 5000k -g 60 -keyint_min 60 \
  -c:a aac -b:a 128k \
  -f mpegts "srt://remote-server:9000?mode=caller&latency=500000"

Pull a stream from a remote SRT listener:

ffmpeg -i "srt://remote-server:9000?mode=caller&latency=500000" \
  -c copy output.ts

The -re flag is critical when reading from a file. Without it, FFmpeg reads the file as fast as possible, flooding the SRT connection. With -re, it reads at the native frame rate. Do not use -re when reading from a live source (capture card, another stream) because the source already produces frames in real time.

The de facto container for SRT is MPEG-TS (-f mpegts). SRT itself is payload-agnostic, but virtually every SRT receiver expects MPEG-TS rather than FLV or MP4. If you forget -f mpegts on the output, FFmpeg may guess the format incorrectly and the receiver will get garbage.

Encryption

SRT supports AES encryption with 128, 192, or 256-bit keys. Both sides of the connection must use the same passphrase and key length.

Push with AES-256 encryption:

ffmpeg -re -i input.mp4 \
  -c copy -f mpegts \
  "srt://remote-server:9000?mode=caller&latency=500000&passphrase=MyStr0ngP4ssphr4se&pbkeylen=32"

Listen with encryption:

ffmpeg -i "srt://:9000?mode=listener&latency=500000&passphrase=MyStr0ngP4ssphr4se&pbkeylen=32" \
  -c copy output.ts

The pbkeylen values: 16 for AES-128, 24 for AES-192, 32 for AES-256. The passphrase must be between 10 and 79 characters. If the passphrases do not match, the connection fails silently in most SRT versions, so double-check both sides.
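SRT's 10-79 character passphrase window is easy to trip over with generated secrets. A minimal sketch, assuming a Linux/macOS shell with /dev/urandom, that produces a 32-character passphrase restricted to alphanumerics so the URL query string needs no escaping:

```shell
# Generate a random 32-character passphrase -- inside SRT's 10-79 char
# limit, and alphanumeric-only so it needs no escaping in the URL.
passphrase=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32)
echo "passphrase=${passphrase}&pbkeylen=32"
```

Generate it once and paste the same value on both sides, rather than running the generator twice.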

For a deeper look at SRT encryption, key derivation, and passphrase management, see the SRT encryption setup guide.

Transcoding with SRT

FFmpeg’s real power is combining SRT transport with transcoding. Receive an SRT stream, transcode it, and push it out over SRT again:

ffmpeg -i "srt://:9000?mode=listener&latency=500000" \
  -c:v libx264 -preset fast -b:v 4000k -g 60 \
  -c:a aac -b:a 128k \
  -f mpegts "srt://output-server:9001?mode=caller&latency=500000"

Hardware-accelerated transcode using Intel QSV:

ffmpeg -hwaccel qsv -i "srt://:9000?mode=listener&latency=500000" \
  -c:v h264_qsv -preset faster -b:v 4000k -g 60 \
  -c:a aac -b:a 128k \
  -f mpegts "srt://output-server:9001?mode=caller&latency=500000"

NVIDIA NVENC transcode:

ffmpeg -hwaccel cuda -i "srt://:9000?mode=listener&latency=500000" \
  -c:v h264_nvenc -preset p4 -b:v 4000k -g 60 \
  -c:a aac -b:a 128k \
  -f mpegts "srt://output-server:9001?mode=caller&latency=500000"

When transcoding SRT streams, keep the keyframe interval consistent (-g 60 for 2-second GOP at 30fps). Irregular keyframes cause problems downstream, especially for HLS segmentation and platform ingest.
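The keyframe arithmetic above generalizes beyond 30fps. A small sketch deriving the -g value from the frame rate and a target GOP length in seconds:

```shell
# gop_frames = fps * gop_seconds; pass the result to both -g and
# -keyint_min for a fixed keyframe interval (2-second GOP here).
fps=30
gop_seconds=2
gop_frames=$(( fps * gop_seconds ))
echo "-g ${gop_frames} -keyint_min ${gop_frames}"   # -g 60 -keyint_min 60
```

At 60fps the same 2-second GOP would give -g 120.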

RTMP to SRT Relay

A common scenario: you receive an RTMP feed from a legacy encoder or a platform and need to relay it as SRT. FFmpeg handles this directly.

RTMP input to SRT output:

ffmpeg -i rtmp://localhost/live/stream1 \
  -c copy -f mpegts \
  "srt://remote-server:9000?mode=caller&latency=500000&passphrase=MyPassphrase123&pbkeylen=32"

SRT input to RTMP output:

ffmpeg -i "srt://:9000?mode=listener&latency=500000" \
  -c copy -f flv rtmp://a.rtmp.youtube.com/live2/your-stream-key

Note the container change: SRT carries MPEG-TS, RTMP carries FLV. When using -c copy (no transcode), FFmpeg remuxes the elementary streams between containers. This works reliably as long as the video codec is H.264 and the audio codec is AAC, which covers most production scenarios.

For a full walkthrough on migrating contribution feeds from RTMP to SRT, see the RTMP to SRT migration guide.

SRT to HLS

Converting an SRT ingest to HLS for CDN distribution is straightforward with FFmpeg:

ffmpeg -i "srt://:9000?mode=listener&latency=500000" \
  -c copy \
  -f hls \
  -hls_time 4 \
  -hls_list_size 5 \
  -hls_flags delete_segments \
  -hls_segment_filename "/var/www/hls/stream_%03d.ts" \
  /var/www/hls/stream.m3u8

For adaptive bitrate HLS, transcode into multiple renditions:

ffmpeg -i "srt://:9000?mode=listener&latency=500000" \
  -filter_complex "[0:v]split=2[v1][v2];[v1]scale=1920:1080[v1out];[v2]scale=1280:720[v2out]" \
  -map "[v1out]" -c:v libx264 -b:v 5000k -g 60 -map 0:a -c:a aac -b:a 128k \
    -f hls -hls_time 4 -hls_list_size 5 -hls_segment_filename "/var/www/hls/1080p_%03d.ts" /var/www/hls/1080p.m3u8 \
  -map "[v2out]" -c:v libx264 -b:v 2500k -g 60 -map 0:a -c:a aac -b:a 96k \
    -f hls -hls_time 4 -hls_list_size 5 -hls_segment_filename "/var/www/hls/720p_%03d.ts" /var/www/hls/720p.m3u8

This generates two HLS renditions from a single SRT ingest. In production, you would add a master playlist and potentially more renditions. A dedicated gateway like Vajra Cast handles this automatically, but the FFmpeg approach works well for simple setups.
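The master playlist the example omits can be written by hand. A sketch, assuming it sits in the same directory as the rendition playlists; the BANDWIDTH values are illustrative peak figures (video bitrate plus audio plus some TS overhead), not measured numbers:

```shell
# Minimal HLS master playlist for the two renditions above. BANDWIDTH is
# the peak total rate in bits/sec, per the HLS spec.
cat > master.m3u8 <<'EOF'
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=5700000,RESOLUTION=1920x1080
1080p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2900000,RESOLUTION=1280x720
720p.m3u8
EOF
```

Players then fetch master.m3u8 and pick a rendition based on measured throughput.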

Multi-Output with Tee

FFmpeg’s tee muxer lets you send one input to multiple outputs simultaneously. This is useful for sending an SRT stream to several destinations without running multiple FFmpeg instances.

ffmpeg -i "srt://:9000?mode=listener&latency=500000" \
  -c copy -f tee \
  "[f=mpegts]srt://server-a:9000?mode=caller&latency=500000|\
[f=mpegts]srt://server-b:9001?mode=caller&latency=500000|\
[f=flv]rtmp://a.rtmp.youtube.com/live2/your-key"

Each output inside the tee can have its own format specifier. This example sends the same stream to two SRT destinations and one RTMP destination from a single input connection, with no re-encoding. Be aware that the tee muxer does not handle backpressure independently per output: if one destination stalls, it can affect the others. For production multi-destination routing, a purpose-built gateway is more reliable.

Latency Tuning Parameters

SRT exposes several parameters that control buffering, bandwidth allocation, and recovery behavior. These go in the SRT URL query string.

latency

The receiver delay in microseconds: how long SRT holds packets to leave time for retransmission before delivering them to the application. This is the main tuning parameter. Set it to at least 4x your RTT.

# 500ms latency buffer
"srt://server:9000?latency=500000"

rcvbuf and sndbuf

Receive and send buffer sizes in bytes. The defaults work for most cases, but for very high bitrate streams (50+ Mbps) you may need to increase them.

# 12MB receive buffer for high-bitrate 4K
"srt://server:9000?latency=500000&rcvbuf=12582912&sndbuf=12582912"

maxbw

Maximum total bandwidth in bytes per second, including overhead and retransmissions. Set to 0 for automatic (recommended) or specify a value.

# Cap total bandwidth at 10 Mbps (1250000 bytes/sec)
"srt://server:9000?maxbw=1250000"

# Automatic bandwidth management (default, recommended)
"srt://server:9000?maxbw=0"

Setting maxbw too low starves retransmissions and causes packet loss. If you set it manually, use at least 1.5x your stream bitrate to leave room for ARQ overhead.
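The 1.5x rule is easy to get wrong because maxbw is in bytes per second while stream bitrates are quoted in bits. A sketch of the conversion:

```shell
# maxbw (bytes/sec) = bitrate (bits/sec) * 1.5 headroom / 8 bits-per-byte
bitrate_bps=10000000            # a 10 Mbps stream
maxbw=$(( bitrate_bps * 3 / 2 / 8 ))
echo "srt://server:9000?maxbw=${maxbw}"   # maxbw=1875000 bytes/sec
```

If in doubt, leave maxbw=0 and let SRT manage bandwidth relative to the input rate.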

oheadbw

Overhead bandwidth as a percentage. Default is 25%. On lossy networks, increase it to give SRT more room for retransmissions.

# 50% overhead for a lossy cellular link
"srt://server:9000?latency=2000000&oheadbw=50"

For a comprehensive breakdown of latency parameters and how they interact, see the SRT latency tuning guide.

Monitoring with ffprobe

You can inspect an SRT stream without decoding it using ffprobe:

ffprobe -v quiet -print_format json -show_streams \
  "srt://remote-server:9000?mode=caller&latency=500000"

This returns codec information, resolution, frame rate, and other stream metadata. For continuous monitoring, use the -show_entries flag to extract specific fields:

ffprobe -v quiet -print_format json \
  -show_entries stream=codec_name,width,height,r_frame_rate,bit_rate \
  "srt://remote-server:9000?mode=caller&latency=500000"

FFmpeg also logs SRT statistics to stderr when run with -v verbose or -v debug:

ffmpeg -v verbose -i "srt://remote-server:9000?mode=caller&latency=500000" \
  -c copy -f null -

This outputs real-time metrics including RTT, packet loss, retransmissions, and bandwidth. Redirect stderr to parse these programmatically:

ffmpeg -v verbose -i "srt://remote-server:9000?mode=caller&latency=500000" \
  -c copy -f null - 2>srt_stats.log
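Parsing that log is version-dependent: the exact wording of the statistics lines differs between FFmpeg and libsrt releases, so the pattern below (a numeric value following "rtt:") is an assumption you should adjust to match what your build actually prints. A sketch of a reusable extractor:

```shell
# extract_rtt LOGFILE -- print every RTT figure found in the log.
# ASSUMPTION: stats lines contain "rtt:<number>"; check your build's
# real output and adapt the grep pattern accordingly.
extract_rtt() {
  grep -o 'rtt:[0-9.]*' "$1" | cut -d: -f2
}
```

Given that log format, `extract_rtt srt_stats.log | tail -1` would print the most recent RTT sample.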

Connecting FFmpeg to Vajra Cast

Vajra Cast acts as an SRT gateway, accepting SRT, RTMP, and SRTLA inputs and routing them to multiple outputs. FFmpeg connects to Vajra Cast the same way it connects to any SRT endpoint.

Push from FFmpeg to Vajra Cast (Vajra Cast as listener):

ffmpeg -re -i input.mp4 \
  -c:v libx264 -b:v 5000k -g 60 -keyint_min 60 \
  -c:a aac -b:a 128k \
  -f mpegts \
  "srt://vajracast-server:9000?mode=caller&latency=500000&passphrase=YourPassphrase&pbkeylen=32"

Pull from Vajra Cast (Vajra Cast as caller, FFmpeg as listener):

ffmpeg -i "srt://:9001?mode=listener&latency=500000&passphrase=YourPassphrase&pbkeylen=32" \
  -c copy output.ts

The advantage of routing through a gateway rather than daisy-chaining FFmpeg instances is centralized monitoring, failover, and multi-destination routing. Vajra Cast shows per-stream SRT statistics (RTT, packet loss, jitter, retransmissions) in its web dashboard, which is far easier to monitor than parsing FFmpeg’s stderr output. FFmpeg remains the right tool for edge encoding, transcoding, and protocol conversion, but the gateway handles the routing and observability layer.

For an overview of SRT protocol capabilities, see the SRT protocol page.

Troubleshooting Common Errors

“Connection rejected” or “Connection setup failure”

Cause: the listener is not running, the port is wrong, or a firewall is blocking UDP traffic.

Fix: verify the listener is active. Test UDP connectivity with netcat:

# On the listener side
nc -u -l 9000

# On the caller side
echo "test" | nc -u remote-server 9000

If the packet does not arrive, the problem is network-level, not SRT.

“Authentication failed” with encryption enabled

Cause: passphrase mismatch between caller and listener. SRT does not tell you which side is wrong.

Fix: copy-paste the exact passphrase to both sides. Watch for trailing spaces, encoding differences, or special characters that your shell might interpret. Wrap the URL in quotes.

Stream connects but video is broken or black

Cause: missing -f mpegts on the sending side, causing FFmpeg to send the wrong container format. Another common cause is a codec mismatch where the receiver expects H.264 but gets HEVC.

Fix: always specify -f mpegts explicitly. Verify codecs with ffprobe on the receiving end.

High latency or buffering

Cause: the latency value is too low for your network conditions. SRT cannot recover packets within the buffer window, so it drops them, and the decoder stalls waiting for complete data.

Fix: measure your RTT (visible in SRT stats) and set latency to at least 4x that value. For a 50ms RTT link, start with latency=200000 (200ms) and increase if you see drops.

“Protocol not found” error

Cause: FFmpeg was compiled without libsrt.

Fix: install the libsrt development package (libsrt-openssl-dev on Debian/Ubuntu) and rebuild FFmpeg with --enable-libsrt, or switch to a package that includes it. On most Linux distributions, the ffmpeg package from the default repository does not include SRT. Use a PPA, snap, or static build from ffmpeg.org.

FFmpeg hangs on SRT listener with no connection

This is expected behavior. When FFmpeg opens an SRT listener, it blocks waiting for a connection. It will sit indefinitely until a caller connects. Use timeout in the SRT URL to set a maximum wait:

ffmpeg -i "srt://:9000?mode=listener&latency=500000&timeout=10000000" \
  -c copy output.ts

The timeout value is in microseconds. The example above waits 10 seconds before FFmpeg gives up.

Putting It All Together

FFmpeg is the Swiss Army knife for SRT workflows. It handles encoding, transcoding, protocol conversion, and basic routing. For a single stream or a simple pipeline, FFmpeg alone can get the job done. For multi-stream environments where you need centralized monitoring, automatic failover, and a web interface, pair FFmpeg at the edges with an SRT gateway at the core.

The commands in this guide are production-tested. Start with the basic caller/listener examples, add encryption, then layer in transcoding and multi-output as your workflow demands. For a broader SRT setup walkthrough, the SRT streaming setup guide covers architecture decisions and network planning beyond the FFmpeg command line.