Main principles
Secure Reliable Transport (SRT) is an open-source streaming protocol that addresses some of the limitations of RTMP delivery. In contrast to RTMP/RTMPS, SRT is a UDP-based protocol that provides low-latency streaming over unpredictable networks. SRT is also required if you want to use the H.265/HEVC codec. How it is used:
- Used for receiving an origin stream from your encoder, not for playback.
- Default port: SRT 5001 (UDP).
- Must be configured on your encoder with the parameters provided in the Input Parameters and Codecs section.
SRT PUSH
Use SRT PUSH when the encoder itself can establish an outbound connection to our Streaming Platform. This is the best choice when:
- you use software encoders like OBS, ffmpeg, etc.,
- you use hardware encoders like Elemental, Haivision, etc.,
- you control the encoder, want the simplest workflow, and don’t want to maintain your own always-on origin server.
A push URL looks like this:
srt://vp-push-ed1-srt.gvideo.co:5001?streamid=12345#aaabbbcccddd
and contains:
- protocol: `srt://`
- the name of a specific server located in a specific geographic location: `vp-push-ed1-srt` is located in Europe
- port: `:5001`
- the exact stream key: `?streamid=12345#aaabbbcccddd`
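For example, if you push with ffmpeg, a minimal invocation could look like the sketch below. The input file, codec settings, and bitrates are placeholders for illustration; the push URL is the one shown above (keep it quoted so the shell does not interpret ? and #), and your ffmpeg build must include SRT support.

```bash
# Minimal SRT push sketch (placeholder input and bitrates).
ffmpeg -re -i input.mp4 \
  -c:v libx264 -b:v 4000k -preset veryfast -g 60 \
  -c:a aac -b:a 128k \
  -f mpegts \
  "srt://vp-push-ed1-srt.gvideo.co:5001?streamid=12345#aaabbbcccddd"
```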
If the stream is experiencing transcoding or playback problems, changing the ingest server to another location may solve the problem. Read more in the Geo Distributed Ingest Points section, and contact our support team to change the ingest server location.
Obtain the server URLs
There are two ways to obtain the SRT server URLs: via the Gcore Customer Portal or via the API.
Via UI
- In the Gcore Customer Portal, navigate to Streaming > Live Streaming.

- Click on the stream you want to push to. This will open the Live Stream Settings.

- Ensure that the Ingest type is set to Push.
- Ensure that the protocol is set to SRT in the URLs for encoder section.
- Copy the server URL from the Push URL SRT field.

Via API
You can also obtain the URL and stream key via the Gcore API. The endpoint returns the complete URLs for the default and backup ingest points, as well as the stream key. Example of the API request:
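A request could look like the following sketch. The endpoint path, stream ID placeholder, and authorization header are illustrative assumptions; check the Gcore API reference for the exact values.

```bash
# Sketch: fetch a stream and read its push URLs and key (illustrative endpoint and auth).
curl -s \
  -H "Authorization: APIKey <your_api_token>" \
  "https://api.gcore.com/streaming/streams/<stream_id>"
```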
SRT PULL
Gcore Video Streaming can PULL video data from your origin. Main rules of pulling:
- The URL of the stream to pull from must be publicly available and return data for all requests.
- If you need to set an allowlist for access to the stream, please contact support to get an up-to-date list of networks.
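Before configuring a pull stream, it can help to confirm that the origin URL is publicly reachable. One way, assuming the origin exposes an SRT endpoint and your ffprobe build includes SRT support, is a quick probe like the sketch below; the origin URL is a placeholder.

```bash
# Sketch: verify the origin stream is publicly reachable (placeholder URL).
ffprobe -v error -show_streams \
  "srt://origin.example.com:9000?streamid=my-origin-stream"
```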
Setting up a PULL stream
There are two ways to set up a pull stream: via the Gcore Customer Portal or via the API.
Via UI
- In the Gcore Customer Portal, navigate to Streaming > Live Streaming.

- Click on the stream you want to pull from. This will open the Live Stream Settings.

- Ensure that the Ingest type is set to Pull.
- In the URL field, insert a link to the stream from your media server.
- Click the Save changes button on the top right.

Via API
You can also set up a pull stream via the Gcore API. The endpoint accepts the URL of the stream to pull from. Example of the API request:
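A request could look like the sketch below. The endpoint path, authorization header, and field names are illustrative assumptions; check the Gcore API reference for the exact schema.

```bash
# Sketch: switch a stream to pull mode and set the origin URL
# (illustrative endpoint, auth scheme, and field names).
curl -s -X PATCH \
  -H "Authorization: APIKey <your_api_token>" \
  -H "Content-Type: application/json" \
  -d '{"pull": true, "uri": "srt://origin.example.com:9000?streamid=my-origin-stream"}' \
  "https://api.gcore.com/streaming/streams/<stream_id>"
```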
Primary, Backup, and Global Ingest Points
In most cases, one primary ingest point in the default ingest region is enough for streaming. For cases that need more resilience, the Streaming Platform also offers a backup ingest point and the option to specify another explicit ingest region, for example, if you are streaming from Asia and the latency seems too high or unstable. For more information, see Ingest & Backup.
Ingest Limits
Only one ingest protocol type can be used for a live stream at a time.
For example, if you start pushing SRT and then try to push another protocol like WebRTC WHIP to the same stream simultaneously, the transcoding process will fail.
Do not set the latency parameter in your SRT encoder to more than 2000 ms. With a higher value, your SRT encoder fills its buffer for more than 2 seconds before sending data to the ingester, and our ingester drops a connection as inactive if it receives no data for more than 2 seconds. As a solution, set the latency to a value below 2000 ms.
SRT Latency
Playback problems with SRT
SRT is a low-latency transport protocol by design, but real-world networks are not always stable, and the path from a venue to our ingest point may traverse long and unpredictable routes. For this reason, the latency parameter must be tuned to actual network conditions:
- Too small latency → insufficient buffer for retransmissions under jitter or loss → packet drops and playback errors.
- Too large latency → excessive buffering → unnecessary end-to-end delay.
- Manual tuning is recommended rather than relying on defaults, since each environment behaves differently.
An incorrect or too-low latency value is one of the most common reasons for packet loss, frame loss, and a bad picture.
Recommended Latency Ranges
Recommended latency ranges for SRT ingest based on network conditions:

| Network Conditions | Typical RTT (ms) | Jitter/Loss | Recommended Latency (ms) |
|---|---|---|---|
| Same datacenter | <20 | Very low | 150–400 |
| Same country, stable uplink | 20–80 | Low (<0.5% loss) | 600–1000 |
| Cross-region, higher variability | 80–200 | Moderate (<1%) | 1000–1500 |
| Long-haul international | 200–350+ | High (>1% bursts) | 1500–2000 |
Best Practices for Configuring SRT Latency
Practical setup notes:
- Rule of thumb: set latency ≥ 3 × RTT + jitter margin (e.g., RTT 100 ms → latency ~500 ms).
- Parameter location: latency must be set on the sender side (caller or listener mode); the receiver's latency should be equal or higher.
- Default values are often too low (e.g., 120 ms) for WAN paths, which can cause instability.
- Monitoring: Check SRT statistics for retransmissions, buffer usage, and packet drops. If drops occur, increase latency in steps of 100–200 ms.
- Units: read your encoder docs carefully to see what the latency attribute is called and what units it uses: s, ms, or µs. In most cases it will be ms (milliseconds), but ffmpeg, for example, expects the latency value in µs (microseconds); see the sketch after this list.
- Encoder settings:
- Enable tsbpd=1 (timestamp-based delivery).
- Cap stream bitrate to ~70–80% of available uplink.
- Redundancy: Configure failover to a backup ingest point or secondary SRT target.
- RTT and jitter: round-trip time, variability. Drives latency sizing.
- Loss and retransmissions: `pkt*Loss`, `pkt*Retrans`, `pkt*Drop`. Rising values → increase latency or reduce bitrate.
- Throughput/bandwidth: current/peak send rate vs configured caps.
- Buffer occupancy: send/receive buffer fill vs negotiated latency; approaching 0 under loss means too low latency.
- Flight/window size, MSS/packet size: help detect fragmentation or congestion.
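To illustrate the rule of thumb and the unit caveat above, the sketch below pushes with ffmpeg and sets the SRT latency explicitly. The input, bitrates, and push URL are placeholders; the value assumes an RTT of roughly 100 ms, giving about 500 ms of latency, which ffmpeg expects in microseconds.

```bash
# Sketch: SRT push with explicit latency (placeholder input, URL, and bitrates).
# Rule of thumb: latency >= 3 x RTT + jitter margin; RTT ~100 ms -> ~500 ms.
# ffmpeg's SRT latency option is given in microseconds, so 500 ms = 500000.
ffmpeg -re -i input.mp4 \
  -c:v libx264 -b:v 4000k -g 60 \
  -c:a aac -b:a 128k \
  -f mpegts \
  "srt://vp-push-ed1-srt.gvideo.co:5001?latency=500000&streamid=12345#aaabbbcccddd"
```

If the SRT statistics still show drops or rising retransmissions, increase the value in steps of 100–200 ms as described above.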