How to Set Up a Linux Streaming Server with SRT and RTMP
Why Linux for Streaming Servers?
Linux dominates broadcast infrastructure for good reasons: it runs headless with no GUI consuming resources, its kernel-level networking is tunable, and you get direct access to hardware (GPUs, NICs) without extra abstraction layers. Virtually every major streaming platform, CDN, and broadcast facility runs Linux on its ingest servers. If you are building a streaming server that needs to run 24/7 with predictable performance, Linux is the right choice.
This guide covers four approaches to building a Linux streaming server, from turnkey solutions to building from source. Each option has different trade-offs in setup time, features, and operational overhead.
Hardware Requirements
Before choosing software, size your hardware to match your workload.
CPU
For passthrough routing (no transcoding), streaming is not CPU-bound. A 4-core Intel or AMD processor handles 20+ simultaneous SRT streams without breaking a sweat. If you plan to transcode, requirements increase dramatically. See the broadcast server requirements guide for detailed sizing tables.
| Workload | CPU Recommendation |
|---|---|
| 1-10 passthrough streams | 4 cores, any modern CPU |
| 10-30 passthrough streams | 4-8 cores |
| 1-3 software transcodes (1080p) | 8+ cores (Intel i7 / Ryzen 7) |
| Hardware transcoding (QSV) | Intel with iGPU (avoid F-series) |
RAM
Allocate 8 GB minimum. Each SRT stream consumes 2-8 MB of buffer space depending on latency settings. Transcoding adds 50-150 MB per stream for frame buffers. For production, 16 GB gives comfortable headroom.
Network
Streaming servers need symmetric bandwidth. Calculate total throughput as the sum of all input and output bitrates plus 25% SRT overhead. A 1 Gbps NIC covers most small to mid-size deployments. For 20+ output streams at high bitrate, move to 10 Gbps.
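A quick worked example (the stream counts and bitrates are illustrative):
# 4 SRT inputs and 6 outputs, all at 10 Mbps
# Media bitrate: (4 + 6) * 10 = 100 Mbps
# With 25% SRT overhead: 100 * 1.25 = 125 Mbps, well within a 1 Gbps NIC
echo "(4 + 6) * 10 * 1.25" | bc   # prints 125.00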
GPU
Only needed if you are transcoding. Intel integrated graphics (QSV) handles 4-8 simultaneous 1080p transcodes without a discrete GPU. NVIDIA NVENC requires a separate card but offers higher throughput on datacenter GPUs. See the hardware transcoding guide for setup details.
OS Choice: Ubuntu 22.04 or 24.04 LTS
Ubuntu LTS is the recommended base for a Linux streaming server. Reasons:
- Kernel 5.15+ (22.04) or 6.8+ (24.04) with modern SRT/UDP socket improvements
- Intel media driver packages available directly from apt repositories
- Docker and containerd supported out of the box
- 5 years of security updates on LTS releases
Debian 12 is a solid alternative if you prefer minimal installs. RHEL/Rocky/Alma work fine but require extra steps for media driver installation.
Start with a minimal server install (no desktop environment):
sudo apt update && sudo apt upgrade -y
sudo apt install -y build-essential git cmake pkg-config \
libssl-dev net-tools iperf3 htop
Option 1: Vajra Cast (Docker, 5-Minute Setup)
Vajra Cast is a purpose-built SRT streaming gateway that runs as a Docker container on Linux. It handles ingest, transcoding, failover, multi-destination output, and monitoring through a web UI.
Installation
# Install Docker if not present
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
newgrp docker
# Pull and run Vajra Cast
docker run -d \
--name vajracast \
--network host \
--device /dev/dri:/dev/dri \
-v vajracast-data:/data \
-v /recordings:/recordings \
--restart unless-stopped \
vajracast/vajracast:latest
The --network host flag is required for SRT performance (UDP does not work well through Docker bridge networking). The --device /dev/dri:/dev/dri flag passes Intel GPU access for hardware transcoding.
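If you manage containers with Docker Compose, an equivalent service definition looks roughly like this; it mirrors the docker run flags above and assumes the Compose v2 plugin is installed:
cat <<'EOF' > docker-compose.yml
services:
  vajracast:
    image: vajracast/vajracast:latest
    network_mode: host            # required for SRT/UDP performance
    devices:
      - /dev/dri:/dev/dri         # Intel GPU access for hardware transcoding
    volumes:
      - vajracast-data:/data
      - /recordings:/recordings
    restart: unless-stopped
volumes:
  vajracast-data:
EOF
docker compose up -d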
Configuration
Open the web UI at http://your-server:8080. From there:
- Create an SRT Ingest on port 9000 (listener mode, set latency based on your network)
- Add outputs to RTMP destinations (YouTube, Twitch) or SRT endpoints
- Enable transcoding if you need codec or resolution conversion
- Configure failover with a backup input source
That is the entire setup. No config files to edit, no FFmpeg pipelines to build. SRT encryption, monitoring metrics, and automatic reconnection are handled by the application.
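Before pointing a production encoder at the ingest, you can push a test stream from any machine with an SRT-enabled FFmpeg build (test.mp4 and the latency value are placeholders; port 9000 matches the ingest created above):
# Push a local file to the SRT ingest in caller mode
ffmpeg -re -i test.mp4 -c copy -f mpegts "srt://your-server:9000?mode=caller&latency=200"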
Advantages
- SRT-native architecture with full protocol support (listener, caller, rendezvous)
- Automatic failover between primary and backup inputs
- Intel QSV hardware transcoding with VAAPI fallback
- Real-time monitoring dashboard (bitrate, packet loss, RTT, jitter)
- Web UI for configuration, no CLI required for day-to-day operation
- Docker and Kubernetes deployment support
Limitations
- Commercial software (free tier available)
- Not open source
Option 2: Nginx-RTMP (Build from Source)
Nginx-RTMP is the traditional open-source option for RTMP ingest. It works as an Nginx module that adds RTMP server capabilities. Note: it does not natively support SRT.
Installation
# Install dependencies
sudo apt install -y libpcre3-dev zlib1g-dev libssl-dev
# Download source
cd /usr/local/src
sudo git clone https://github.com/arut/nginx-rtmp-module.git
sudo wget http://nginx.org/download/nginx-1.26.2.tar.gz
sudo tar xzf nginx-1.26.2.tar.gz
cd nginx-1.26.2
# Compile with RTMP module
sudo ./configure \
--add-module=/usr/local/src/nginx-rtmp-module \
--with-http_ssl_module \
--with-http_v2_module
sudo make -j$(nproc)
sudo make install
Configuration
Edit /usr/local/nginx/conf/nginx.conf. Keep the existing events block and merge in the rtmp and http sections below:
rtmp {
    server {
        listen 1935;
        chunk_size 4096;

        application live {
            live on;
            record off;

            # Push to multiple destinations
            push rtmp://a.rtmp.youtube.com/live2/YOUR_STREAM_KEY;
            push rtmp://live.twitch.tv/app/YOUR_STREAM_KEY;

            # Generate HLS
            hls on;
            hls_path /var/www/hls;
            hls_fragment 4;
            hls_playlist_length 60;
        }
    }
}

http {
    server {
        listen 8080;

        location /hls {
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
            root /var/www;
            add_header Cache-Control no-cache;
        }
    }
}
sudo /usr/local/nginx/sbin/nginx
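The source build does not install a service unit, so nginx will not come back after a reboot. A minimal systemd unit sketch matching the /usr/local/nginx prefix used above (assumes no distro nginx package is installed; stop any manually started instance before enabling it):
cat <<'EOF' | sudo tee /etc/systemd/system/nginx.service
[Unit]
Description=Nginx with RTMP module
After=network.target

[Service]
Type=forking
ExecStartPre=/usr/local/nginx/sbin/nginx -t
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s quit
PIDFile=/usr/local/nginx/logs/nginx.pid

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now nginx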
Limitations
- RTMP only. No SRT support. If you need SRT, you must run a separate SRT relay (such as srt-live-transmit) in front of Nginx-RTMP and pipe streams in via FFmpeg, which adds complexity and failure points; see the FFmpeg bridge sketch at the end of this section.
- No transcoding. You need to chain FFmpeg for any codec conversion.
- No failover. If your input drops, the output drops. Period.
- No web UI. Everything is config file driven.
- Minimal monitoring. The stat module provides basic connection info but nothing like SRT-level metrics.
Nginx-RTMP is appropriate if you only need RTMP ingest with HLS output and your sources are reliable. For SRT workflows, look at the other options. For a deeper comparison of SRT and RTMP, see SRT vs RTMP: Protocol Comparison.
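If you do need to feed Nginx-RTMP from an SRT source, the bridge is an FFmpeg process on the same host; a minimal sketch, assuming an FFmpeg build with libsrt and the live application from the config above (the trailing stream key is a placeholder):
# Listen for SRT on UDP 9000 and repackage the stream into the local RTMP application
ffmpeg -i "srt://:9000?mode=listener&latency=200" \
  -c copy -f flv rtmp://localhost:1935/live/stream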
Option 3: SRT Live Server (srt-live-transmit)
The SRT Alliance provides open-source tools for SRT transport. srt-live-transmit is a command-line utility that relays SRT streams between endpoints.
Installation
# Build libsrt from source
cd /usr/local/src
sudo git clone https://github.com/Haivision/srt.git
cd srt
sudo mkdir build && cd build
sudo cmake .. -DCMAKE_INSTALL_PREFIX=/usr/local
sudo make -j$(nproc)
sudo make install
sudo ldconfig
Usage
Relay an SRT listener to an SRT caller:
srt-live-transmit "srt://:9000?mode=listener&latency=200" \
"srt://destination:9001?mode=caller&latency=200" -v
Receive SRT and output to RTMP (requires FFmpeg):
ffmpeg -i "srt://:9000?mode=listener&latency=200" \
-c copy -f flv rtmp://a.rtmp.youtube.com/live2/YOUR_KEY
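Because srt-live-transmit runs in the foreground, the usual way to keep a relay alive across crashes and reboots is one systemd unit per relay. A minimal sketch for the listener-to-caller example above (unit name, ports, and destination are placeholders):
cat <<'EOF' | sudo tee /etc/systemd/system/srt-relay.service
[Unit]
Description=SRT relay (port 9000 to destination:9001)
After=network-online.target

[Service]
ExecStart=/usr/local/bin/srt-live-transmit "srt://:9000?mode=listener&latency=200" "srt://destination:9001?mode=caller&latency=200"
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable --now srt-relay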
Limitations
- CLI only. Each relay is a separate process. Managing 10+ streams means 10+ processes, 10+ systemd units, and custom monitoring.
- No transcoding. Passthrough only (unless you chain FFmpeg).
- No failover. If the source disconnects, the relay stops.
- No dashboard. You get SRT stats via the API or log output, but no aggregated view.
SRT Live Server is useful for single-stream point-to-point relays or for integrating SRT into custom pipelines. It is not a complete streaming server solution. For a detailed SRT configuration walkthrough, see the SRT Streaming Setup Guide.
Option 4: MediaMTX
MediaMTX (formerly rtsp-simple-server) is a multi-protocol media server supporting RTMP, SRT, RTSP, HLS, and WebRTC. It is written in Go and distributed as a single binary.
Installation
# Download the latest linux_amd64 release from https://github.com/bluenviron/mediamtx/releases
# (archive names are versioned, e.g. mediamtx_vX.Y.Z_linux_amd64.tar.gz -- substitute the current version)
wget https://github.com/bluenviron/mediamtx/releases/download/vX.Y.Z/mediamtx_vX.Y.Z_linux_amd64.tar.gz
tar xzf mediamtx_vX.Y.Z_linux_amd64.tar.gz
sudo mv mediamtx /usr/local/bin/
sudo mv mediamtx.yml /usr/local/etc/
Configuration
Edit /usr/local/etc/mediamtx.yml:
# Enable protocols
rtmp: yes
rtmpAddress: :1935
srt: yes
srtAddress: :8890
hls: yes
hlsAddress: :8888
webrtc: yes
webrtcAddress: :8889

paths:
  live:
    source: publisher
mediamtx /usr/local/etc/mediamtx.yml
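To publish into the live path over SRT, point an SRT-enabled FFmpeg at the SRT port using MediaMTX's streamid convention (the publish:<path> form shown here follows the MediaMTX README; double-check it against your version, and treat test.mp4 as a placeholder):
# Publish a test stream to the "live" path over SRT
ffmpeg -re -i test.mp4 -c copy -f mpegts "srt://your-server:8890?streamid=publish:live"
# Playback over HLS is then typically available at:
# http://your-server:8888/live/index.m3u8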
Advantages
- Multi-protocol in a single binary (SRT, RTMP, RTSP, HLS, WebRTC)
- Simple YAML configuration
- Active open-source development
- Low resource usage
Limitations
- No transcoding. Passthrough only.
- No failover. Single source per path.
- Basic monitoring. API available but no built-in dashboard.
- SRT support is functional but basic. Limited SRT parameter tuning compared to native SRT implementations.
- No multi-destination push. Clients must pull streams; no push to external RTMP/SRT endpoints without external tooling.
MediaMTX is a solid choice for protocol conversion (accepting SRT and serving HLS/WebRTC to viewers) but lacks the operational features needed for production broadcast workflows.
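If you do need multi-destination push from MediaMTX, the usual workaround is exactly the external tooling mentioned above: a small FFmpeg process that pulls the path locally and re-publishes it. A sketch, assuming the default RTMP read URL format and the live path from the config above (the stream key is a placeholder):
# Pull the "live" path from MediaMTX over RTMP and push it to an external service
ffmpeg -i rtmp://localhost:1935/live -c copy -f flv rtmp://a.rtmp.youtube.com/live2/YOUR_STREAM_KEY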
Comparison Table
| Feature | Vajra Cast | Nginx-RTMP | SRT Live Server | MediaMTX |
|---|---|---|---|---|
| SRT Support | Native | No | Native | Basic |
| RTMP Support | Yes | Yes | Via FFmpeg | Yes |
| HLS Output | Yes | Yes | Via FFmpeg | Yes |
| Hardware Transcoding | QSV, VAAPI, NVENC | No | No | No |
| Automatic Failover | Yes | No | No | No |
| Web UI | Yes | No | No | No |
| Multi-destination Push | Yes | Yes (config) | Manual | No |
| SRT Encryption | AES-128/256 | N/A | AES-128/256 | Basic |
| Monitoring Dashboard | Yes | Basic stats | CLI only | API only |
| Docker Support | Official image | Manual | Manual | Official image |
| License | Commercial | BSD | MPL 2.0 | MIT |
| Setup Time | 5 minutes | 30-60 minutes | 15-30 minutes | 10 minutes |
Network Configuration
Regardless of which software you choose, the network configuration is the same.
Firewall Rules
# SRT (UDP) - adjust port range to match your configuration
sudo ufw allow 9000:9100/udp comment "SRT ingest"
# RTMP (TCP)
sudo ufw allow 1935/tcp comment "RTMP ingest"
# Web UI / HLS (TCP)
sudo ufw allow 8080/tcp comment "Management UI"
# Enable firewall
sudo ufw enable
UDP Buffer Sizes
SRT performance depends on adequate UDP socket buffers. The kernel defaults are too small for production streaming:
# Check current values
sysctl net.core.rmem_max
sysctl net.core.wmem_max
# Set for production SRT workloads
sudo sysctl -w net.core.rmem_max=67108864
sudo sysctl -w net.core.wmem_max=67108864
sudo sysctl -w net.core.rmem_default=4194304
sudo sysctl -w net.core.wmem_default=4194304
Make persistent in /etc/sysctl.d/99-streaming.conf:
cat <<'EOF' | sudo tee /etc/sysctl.d/99-streaming.conf
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.core.rmem_default = 4194304
net.core.wmem_default = 4194304
net.core.netdev_max_backlog = 5000
net.ipv4.udp_mem = 4096 87380 67108864
EOF
sudo sysctl --system
MTU Configuration
For local network SRT traffic, enable jumbo frames to reduce per-packet CPU overhead:
# Set MTU to 9000 on your media interface
sudo ip link set eth0 mtu 9000
All switches and endpoints on the path must support the same MTU. For internet-facing SRT, leave the default 1500 MTU.
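You can confirm that every hop actually passes jumbo frames with a do-not-fragment ping: 8972 bytes of ICMP payload plus 28 bytes of headers equals 9000 (the address is a placeholder for a host on the media network):
# Succeeds only if the entire path supports MTU 9000
ping -M do -s 8972 -c 3 192.168.1.20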
Performance Tuning
CPU Affinity and IRQ Pinning
On multi-core servers handling high-throughput streaming, pin the NIC interrupt handlers to dedicated cores:
# Find your NIC's IRQ numbers
cat /proc/interrupts | grep eth0
# Pin IRQ to specific core (example: IRQ 48 to core 2)
echo 2 | sudo tee /proc/irq/48/smp_affinity_list
# Pin the streaming process to separate cores
taskset -cp 4-7 $(pidof vajracast)
This prevents the streaming application and NIC interrupt processing from competing for the same CPU cores.
Kernel Parameters for Streaming
Additional kernel tuning for high-throughput servers:
# Increase connection tracking table (if using conntrack/iptables)
sudo sysctl -w net.netfilter.nf_conntrack_max=262144
# Increase max open files (many concurrent streams = many sockets)
ulimit -n 65535
Add to /etc/security/limits.conf:
* soft nofile 65535
* hard nofile 65535
NUMA Awareness
On dual-socket servers, ensure your streaming application and its NIC are on the same NUMA node:
# Check which NUMA node your NIC is on
cat /sys/class/net/eth0/device/numa_node
# Run the process on the same NUMA node
numactl --cpunodebind=0 --membind=0 ./your-streaming-app
Monitoring
Every production streaming server needs monitoring. At minimum, track:
- Input stream health: bitrate, packet loss, RTT (SRT stats)
- Output delivery: connection status, dropped frames
- System resources: CPU, RAM, GPU utilization, network throughput
- Disk I/O: if recording streams
Vajra Cast exposes a Prometheus /metrics endpoint out of the box. For the other options, export metrics via node_exporter for system stats and custom scripts for stream-level metrics.
A basic Prometheus + Grafana stack:
docker run -d --name prometheus --network host \
-v /etc/prometheus:/etc/prometheus \
prom/prometheus
docker run -d --name grafana --network host grafana/grafana
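The Prometheus container above reads its configuration from the mounted /etc/prometheus directory. A minimal scrape config sketch, assuming node_exporter on its default port 9100 and a streaming metrics endpoint whose port you substitute yourself:
sudo mkdir -p /etc/prometheus
cat <<'EOF' | sudo tee /etc/prometheus/prometheus.yml
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['localhost:9100']         # node_exporter system metrics
  - job_name: streaming
    static_configs:
      - targets: ['localhost:METRICS_PORT'] # replace with your streaming server's metrics port
EOF
docker restart prometheus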
Security
SRT Encryption
SRT supports AES-128 and AES-256 encryption natively. Always enable it for streams traversing untrusted networks:
srt://server:9000?passphrase=YourLongPassphrase&pbkeylen=32
Both ends must share the same passphrase, which must be between 10 and 79 characters. Use AES-256 (pbkeylen=32) for production. For a deeper look at how encryption support differs across platforms, see SRT vs RTMP and the Wowza alternative comparison.
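For example, the srt-live-transmit relay from Option 3 would carry the same passphrase and key length on both legs (the passphrase and hosts are placeholders):
srt-live-transmit \
  "srt://:9000?mode=listener&passphrase=YourLongPassphrase&pbkeylen=32" \
  "srt://destination:9001?mode=caller&passphrase=YourLongPassphrase&pbkeylen=32"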
HTTPS for Management
If your streaming server exposes a web UI or API, put it behind HTTPS:
sudo apt install -y certbot
sudo certbot certonly --standalone -d stream.yourdomain.com
Then configure your reverse proxy (Nginx, Caddy) to terminate TLS in front of the management port.
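A minimal sketch of the Nginx side, assuming a distro nginx package sits in front of the management port (not the custom /usr/local build from Option 2) and the certificate lives at certbot's default path:
cat <<'EOF' | sudo tee /etc/nginx/conf.d/stream-ui.conf
server {
    listen 443 ssl;
    server_name stream.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/stream.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/stream.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
sudo nginx -t && sudo systemctl reload nginx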
Firewall Hardening
Restrict management access to known IPs. Only SRT/RTMP ingest ports should be open to the internet:
# Allow management only from your IP
sudo ufw allow from 203.0.113.10 to any port 8080 proto tcp
# Allow SRT from anywhere (or restrict to known encoder IPs)
sudo ufw allow 9000:9100/udp
Wrapping Up
A Linux streaming server can be as simple as a single Docker command (Vajra Cast) or as involved as compiling Nginx from source with custom modules. The right choice depends on your protocol requirements, operational budget, and tolerance for manual configuration.
For SRT-centric workflows with failover, transcoding, and monitoring needs, Vajra Cast provides the most complete solution with minimal setup. For RTMP-only ingest with HLS output, Nginx-RTMP remains proven. For lightweight protocol bridging, MediaMTX covers the basics. For raw SRT relay between two points, srt-live-transmit does the job.
Regardless of which path you take, invest time in network tuning (UDP buffers, MTU, firewall rules) and monitoring. These are the foundations that determine whether your streaming server handles production load reliably or falls over during the first real event.