
How to add a VOD uploading feature to your iOS app in 15 minutes

  • By Gcore
  • March 30, 2023
  • 13 min read


This is a step-by-step guide to adding a new VOD feature to your iOS application in 15 minutes using Gcore’s solution. The feature allows users to record videos on their phone, upload them to storage, and play them back in a player inside the app.

Here is what the result will look like:

This is part of a series of guides about adding new video features to an iOS application. In other articles, we show you how to create a mobile streaming app on iOS, and how to add video call and smooth scrolling VOD features to an existing app.

What functions you can add with the help of this guide

The solution includes the following:

  • Recording: Local video recording directly from the device’s camera; gaining access to the camera and saving raw video to internal storage.
  • Uploading to the server: Uploading the recorded video to cloud video hosting via TUSClient, with asynchronous uploading and a link to the processed video.
  • List of videos: A list of uploaded videos with screenshot covers and text descriptions.
  • Player: Playback of the selected video in AVPlayer, with caching, adaptive-bitrate HLS playback, rewinding, and more (a minimal playback sketch follows this list).
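
The player step isn’t covered in detail in this guide, but little code is needed: AVPlayer plays HLS (.m3u8) streams with adaptive bitrate out of the box. Here is a minimal sketch, assuming the HLS URL comes from the VOD model shown later; the function name and presentation style are ours, not part of the demo project:

import AVKit
import UIKit

// Minimal sketch: AVPlayer handles adaptive-bitrate HLS natively when
// given an .m3u8 URL (for example, the VOD model's hls field).
func playVOD(hlsURL: URL, from presenter: UIViewController) {
    let player = AVPlayer(url: hlsURL)
    let playerController = AVPlayerViewController()
    playerController.player = player
    presenter.present(playerController, animated: true) {
        player.play()
    }
}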

How to add the VOD feature

Step 1: Permissions

The project uses additional access rights, which must be declared in the app’s Info.plist:

  • NSMicrophoneUsageDescription (Privacy: Microphone Usage Description)
  • NSCameraUsageDescription (Privacy: Camera Usage Description)
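
Beyond the Info.plist entries, it’s worth checking or requesting capture access at runtime before starting the camera session. A minimal sketch, with a helper name of our own choosing (not from the demo project):

import AVFoundation

// The system shows the permission prompts (using the Info.plist
// descriptions) the first time access is requested.
func requestCaptureAccess(completion: @escaping (Bool) -> Void) {
    AVCaptureDevice.requestAccess(for: .video) { videoGranted in
        AVCaptureDevice.requestAccess(for: .audio) { audioGranted in
            DispatchQueue.main.async { completion(videoGranted && audioGranted) }
        }
    }
}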

Step 2: Authorization

You’ll need a Gcore account, which can be created in just 1 minute at gcore.com. You won’t need to pay anything; you can test the solution with a free plan.

To use Gcore services, you’ll need an access token, which comes in the server’s response to the authentication request. Here’s how to get it:

1. Create a model for the data that comes back from the server.

struct Tokens: Decodable {
    let refresh: String
    let access: String
}

2. Create a common protocol for your requests.

protocol DataRequest {
    associatedtype Response

    var url: String { get }
    var method: HTTPMethod { get }
    var headers: [String: String] { get }
    var queryItems: [String: String] { get }
    var body: Data? { get }
    var contentType: String { get }

    func decode(_ data: Data) throws -> Response
}

extension DataRequest where Response: Decodable {
    func decode(_ data: Data) throws -> Response {
        let decoder = JSONDecoder()
        return try decoder.decode(Response.self, from: data)
    }
}

extension DataRequest {
    var contentType: String { "application/json" }
    var headers: [String: String] { [:] }
    var queryItems: [String: String] { [:] }
    var body: Data? { nil }
}

3. Create an authentication request.

struct AuthenticationRequest: DataRequest {
    typealias Response = Tokens

    let username: String
    let password: String

    var url: String { GcoreAPI.authorization.rawValue }
    var method: HTTPMethod { .post }

    var body: Data? {
        try? JSONEncoder().encode([
            "password": password,
            "username": username,
        ])
    }
}

4. Then you can use the request in any part of the application, with whatever networking approach you prefer. For example:

func signOn(username: String, password: String) {
    let request = AuthenticationRequest(username: username, password: password)
    let communicator = HTTPCommunicator()

    communicator.request(request) { [weak self] result in
        switch result {
        case .success(let tokens):
            Settings.shared.refreshToken = tokens.refresh
            Settings.shared.accessToken = tokens.access
            Settings.shared.username = username
            Settings.shared.userPassword = password
            DispatchQueue.main.async {
                self?.view.window?.rootViewController = MainController()
            }
        case .failure(let error):
            self?.errorHandle(error)
        }
    }
}
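
The snippets above also reference a few types the article doesn’t list: HTTPMethod, GcoreAPI, and HTTPCommunicator, plus Settings (token storage) and MainController from the demo project. Here is a minimal sketch of the first three, assuming a URLSession-based communicator; the endpoint strings are placeholders, so substitute the actual Gcore API URLs:

import Foundation

// Assumed HTTP verb enum used by DataRequest
enum HTTPMethod: String {
    case get = "GET"
    case post = "POST"
}

// Assumed endpoint list; replace the placeholder URLs with the real Gcore API endpoints
enum GcoreAPI: String {
    case authorization = "https://example.com/auth/jwt/login"
    case videos = "https://example.com/videos"
}

// Minimal URLSession-based communicator matching how the article uses it
final class HTTPCommunicator {
    func request<R: DataRequest>(_ request: R, completion: @escaping (Result<R.Response, Error>) -> Void) {
        guard var components = URLComponents(string: request.url) else { return }
        if !request.queryItems.isEmpty {
            components.queryItems = request.queryItems.map { URLQueryItem(name: $0.key, value: $0.value) }
        }
        guard let url = components.url else { return }

        var urlRequest = URLRequest(url: url)
        urlRequest.httpMethod = request.method.rawValue
        urlRequest.httpBody = request.body
        urlRequest.setValue(request.contentType, forHTTPHeaderField: "Content-Type")
        request.headers.forEach { urlRequest.setValue($0.value, forHTTPHeaderField: $0.key) }

        URLSession.shared.dataTask(with: urlRequest) { data, _, error in
            if let error = error {
                completion(.failure(error))
                return
            }
            guard let data = data else { return }
            do {
                completion(.success(try request.decode(data)))
            } catch {
                completion(.failure(error))
            }
        }.resume()
    }
}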

Step 3: Setting up the camera session

On iOS, the AVFoundation framework is used to work with the camera. Let’s create a class that works with the camera at a lower level.

1. Create a protocol for sending the path and duration of each recorded fragment to the controller, along with an enumeration of the errors that can occur during initialization. The most common error is that the user hasn’t granted camera access.

import Foundation
import AVFoundation

enum CameraSetupError: Error {
    case accessDevices, initializeCameraInputs
}

protocol CameraDelegate: AnyObject {
    func addRecordedMovie(url: URL, time: CMTime)
    // Called once per second while recording. This method appears in the
    // demo's controller later in the article but was omitted from this
    // listing; adding it here keeps the code compilable.
    func updateCurrentRecordedTime(_ time: CMTime)
}

2. Create the camera class with properties and initializer.

final class Camera: NSObject {
    var movieOutput: AVCaptureMovieFileOutput!

    weak var delegate: CameraDelegate?

    private var videoDeviceInput: AVCaptureDeviceInput!
    private var rearCameraInput: AVCaptureDeviceInput!
    private var frontCameraInput: AVCaptureDeviceInput!
    private let captureSession: AVCaptureSession
    // Timer that reports the elapsed recording time (used by startRecording below;
    // this property was omitted from the article's listing)
    private var timer: Timer?

    // There may be errors during initialization; if so, the initializer throws an error to the controller
    init(captureSession: AVCaptureSession) throws {
        self.captureSession = captureSession

        // Check access to the devices and set them up
        guard let rearCamera = AVCaptureDevice.default(for: .video),
              let frontCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
              let audioDevice = AVCaptureDevice.default(for: .audio)
        else {
            throw CameraSetupError.accessDevices
        }

        do {
            let rearCameraInput = try AVCaptureDeviceInput(device: rearCamera)
            let frontCameraInput = try AVCaptureDeviceInput(device: frontCamera)
            let audioInput = try AVCaptureDeviceInput(device: audioDevice)
            let movieOutput = AVCaptureMovieFileOutput()

            if captureSession.canAddInput(rearCameraInput), captureSession.canAddInput(audioInput),
               captureSession.canAddInput(frontCameraInput), captureSession.canAddOutput(movieOutput) {

                captureSession.addInput(rearCameraInput)
                captureSession.addInput(audioInput)
                self.videoDeviceInput = rearCameraInput
                self.rearCameraInput = rearCameraInput
                self.frontCameraInput = frontCameraInput
                captureSession.addOutput(movieOutput)
                self.movieOutput = movieOutput
            }
        } catch {
            throw CameraSetupError.initializeCameraInputs
        }
    }

3. Create the methods. Depending on the user’s actions in the UI layer, the controller will call the corresponding method.

    func flipCamera() {
        guard let rearCameraIn = rearCameraInput, let frontCameraIn = frontCameraInput else { return }
        if captureSession.inputs.contains(rearCameraIn) {
            captureSession.removeInput(rearCameraIn)
            captureSession.addInput(frontCameraIn)
        } else {
            captureSession.removeInput(frontCameraIn)
            captureSession.addInput(rearCameraIn)
        }
    }

    func stopRecording() {
        if movieOutput.isRecording {
            movieOutput.stopRecording()
        }
        timer?.invalidate()
        timer = nil
    }

    func startRecording() {
        if movieOutput.isRecording == false {
            guard let outputURL = temporaryURL() else { return }
            movieOutput.startRecording(to: outputURL, recordingDelegate: self)
            DispatchQueue.main.asyncAfter(deadline: .now() + 0.1) { [weak self] in
                guard let self = self else { return }
                self.timer = Timer.scheduledTimer(timeInterval: 1, target: self, selector: #selector(self.updateTime), userInfo: nil, repeats: true)
                self.timer?.fire()
            }
        } else {
            stopRecording()
        }
    }

    // Reports the elapsed time of the current clip to the delegate once per
    // second. (Referenced by the timer above but omitted from the article's
    // listing; this implementation is our assumption.)
    @objc private func updateTime() {
        delegate?.updateCurrentRecordedTime(movieOutput.recordedDuration)
    }

4. To save a video fragment to storage, you will need a path for it. This method returns that path:

    // Creates temporary storage for the recorded video fragment
    private func temporaryURL() -> URL? {
        let directory = NSTemporaryDirectory() as NSString

        if directory != "" {
            let path = directory.appendingPathComponent(UUID().uuidString + ".mov")
            return URL(fileURLWithPath: path)
        }

        return nil
    }
}

5. Conform to the AVCaptureFileOutputRecordingDelegate protocol so the camera can send the file path to the controller.

// MARK: - AVCaptureFileOutputRecordingDelegate
// When the shooting of one clip ends, the output sends a link to the file to the delegate
extension Camera: AVCaptureFileOutputRecordingDelegate {
    func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
        if let error = error {
            print("Error recording movie: \(error.localizedDescription)")
        } else {
            delegate?.addRecordedMovie(url: outputFileURL, time: output.recordedDuration)
        }
    }
}

Step 4: Layout for the camera

Create a class that controls the camera at the UI level. The user issues commands through this class, and it uses its delegate to pass the appropriate commands to the Camera class created above.

Note: You will need to add your own icons or use existing ones in iOS.
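
If you don’t have custom assets handy, the built-in SF Symbols can stand in for the icons used below (this substitution is our suggestion, not part of the demo project; symbol availability varies by iOS version):

import UIKit

// SF Symbols as stand-ins for the demo's custom icons
let playImage = UIImage(systemName: "play.circle")
let pauseImage = UIImage(systemName: "pause.circle")
let flipImage = UIImage(systemName: "arrow.triangle.2.circlepath.camera")
let uploadImage = UIImage(systemName: "square.and.arrow.up")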

1. Create a protocol so that your view can inform the controller about user actions.

protocol CameraViewDelegate: AnyObject {
    func tappedRecord(isRecord: Bool)
    func tappedFlipCamera()
    func tappedUpload()
    func tappedDeleteClip()
    func shouldRecord() -> Bool
}

2. Create the camera view class and initialize the necessary properties.

// Maximum allowed recording length in seconds. The demo limits recording time
// (see the note in Step 5); the exact value here is our assumption.
let maxRecordTime: Double = 60

final class CameraView: UIView {
    var isRecord = false {
        didSet {
            if isRecord {
                recordButton.setImage(UIImage(named: "pause.icon"), for: .normal)
            } else {
                recordButton.setImage(UIImage(named: "play.icon"), for: .normal)
            }
        }
    }

    var previewLayer: AVCaptureVideoPreviewLayer?
    weak var delegate: CameraViewDelegate?

    // The buttons are declared lazy so that `self` is valid as the target;
    // in a non-lazy stored property initializer the target would silently
    // resolve to the closure, and the taps would never fire.
    lazy var recordButton: UIButton = {
        let button = UIButton()
        button.setImage(UIImage(named: "play.icon"), for: .normal)
        button.imageView?.contentMode = .scaleAspectFit
        button.addTarget(self, action: #selector(tapRecord), for: .touchUpInside)
        button.translatesAutoresizingMaskIntoConstraints = false

        return button
    }()

    lazy var flipCameraButton: UIButton = {
        let button = UIButton()
        button.setImage(UIImage(named: "flip.icon"), for: .normal)
        button.imageView?.contentMode = .scaleAspectFit
        button.addTarget(self, action: #selector(tapFlip), for: .touchUpInside)
        button.translatesAutoresizingMaskIntoConstraints = false

        return button
    }()

    lazy var uploadButton: UIButton = {
        let button = UIButton()
        button.setImage(UIImage(named: "upload.icon"), for: .normal)
        button.imageView?.contentMode = .scaleAspectFit
        button.addTarget(self, action: #selector(tapUpload), for: .touchUpInside)
        button.translatesAutoresizingMaskIntoConstraints = false

        return button
    }()

    let clipsLabel: UILabel = {
        let label = UILabel()
        label.textColor = .white
        label.font = .systemFont(ofSize: 14)
        label.textAlignment = .left
        label.text = "Clips: 0"

        return label
    }()

    // Button is a small UIButton subclass from the demo project
    lazy var deleteLastClipButton: Button = {
        let button = Button()
        button.setTitle("", for: .normal)
        button.setImage(UIImage(named: "delete.left.fill"), for: .normal)
        button.addTarget(self, action: #selector(tapDeleteClip), for: .touchUpInside)

        return button
    }()

    let recordedTimeLabel: UILabel = {
        let label = UILabel()
        label.text = "0s / \(maxRecordTime)s"
        label.font = .systemFont(ofSize: 14)
        label.textColor = .white
        label.textAlignment = .left

        return label
    }()
}

3. Since the view will show the image from the device’s camera, you need to link it to the session and configure it.

    func setupLivePreview(session: AVCaptureSession) {
        let previewLayer = AVCaptureVideoPreviewLayer(session: session)
        self.previewLayer = previewLayer
        previewLayer.videoGravity = .resizeAspectFill
        previewLayer.connection?.videoOrientation = .portrait
        layer.addSublayer(previewLayer)
        session.startRunning()
        backgroundColor = .black
    }

    // When the size of the view is calculated, we transfer this size to the image from the camera
    override func layoutSubviews() {
        super.layoutSubviews()
        previewLayer?.frame = bounds
    }

4. Create a layout for UI elements.

    private func initLayout() {
        [clipsLabel, deleteLastClipButton, recordedTimeLabel].forEach {
            $0.translatesAutoresizingMaskIntoConstraints = false
            addSubview($0)
        }

        NSLayoutConstraint.activate([
            flipCameraButton.topAnchor.constraint(equalTo: topAnchor, constant: 10),
            flipCameraButton.rightAnchor.constraint(equalTo: rightAnchor, constant: -10),
            flipCameraButton.widthAnchor.constraint(equalToConstant: 30),
            flipCameraButton.heightAnchor.constraint(equalToConstant: 30),

            recordButton.centerXAnchor.constraint(equalTo: centerXAnchor),
            recordButton.bottomAnchor.constraint(equalTo: bottomAnchor, constant: -5),
            recordButton.widthAnchor.constraint(equalToConstant: 30),
            recordButton.heightAnchor.constraint(equalToConstant: 30),

            uploadButton.leftAnchor.constraint(equalTo: recordButton.rightAnchor, constant: 20),
            uploadButton.bottomAnchor.constraint(equalTo: bottomAnchor, constant: -5),
            uploadButton.widthAnchor.constraint(equalToConstant: 30),
            uploadButton.heightAnchor.constraint(equalToConstant: 30),

            clipsLabel.leftAnchor.constraint(equalTo: leftAnchor, constant: 5),
            clipsLabel.centerYAnchor.constraint(equalTo: uploadButton.centerYAnchor),

            deleteLastClipButton.centerYAnchor.constraint(equalTo: clipsLabel.centerYAnchor),
            deleteLastClipButton.rightAnchor.constraint(equalTo: recordButton.leftAnchor, constant: -15),
            deleteLastClipButton.widthAnchor.constraint(equalToConstant: 30),
            deleteLastClipButton.heightAnchor.constraint(equalToConstant: 30),

            recordedTimeLabel.topAnchor.constraint(equalTo: layoutMarginsGuide.topAnchor),
            recordedTimeLabel.leftAnchor.constraint(equalTo: leftAnchor, constant: 5)
        ])
    }

The result of the layout will look like this:

5. Add the initializer. The controller will pass in the session so the view can access the camera image:

    convenience init(session: AVCaptureSession) {
        self.init(frame: .zero)
        setupLivePreview(session: session)
        addSubview(recordButton)
        addSubview(flipCameraButton)
        addSubview(uploadButton)
        initLayout()
    }

6. Create the methods that run when the user taps the buttons.

    @objc func tapRecord() {
        guard delegate?.shouldRecord() == true else { return }
        isRecord = !isRecord
        delegate?.tappedRecord(isRecord: isRecord)
    }

    @objc func tapFlip() {
        delegate?.tappedFlipCamera()
    }

    @objc func tapUpload() {
        delegate?.tappedUpload()
    }

    @objc func tapDeleteClip() {
        delegate?.tappedDeleteClip()
    }
}

Step 5: Interaction with recorded fragments

On an iPhone, the camera records video in fragments. When the user decides to upload the video, you need to collect the fragments into one file and send it to the server. Create another class that handles this task.

Note: When assembling the video, an additional file is created that combines all the fragments, while the fragments themselves remain in memory until the merge is complete. In the worst case, this can exhaust memory and crash the application. To avoid this, we recommend limiting the allowed recording time.

import Foundation
import AVFoundation

final class VideoCompositionWriter: NSObject {
    private func merge(recordedVideos: [AVAsset]) -> AVMutableComposition {
        // Create an empty composition with empty video and audio tracks
        let mainComposition = AVMutableComposition()
        let compositionVideoTrack = mainComposition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
        let compositionAudioTrack = mainComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)

        // Correct the video orientation
        compositionVideoTrack?.preferredTransform = CGAffineTransform(rotationAngle: .pi / 2)

        // Add the video and audio tracks from each asset to the composition tracks
        var insertTime = CMTime.zero
        for i in recordedVideos.indices {
            let video = recordedVideos[i]
            let duration = video.duration
            let timeRangeVideo = CMTimeRangeMake(start: CMTime.zero, duration: duration)
            let trackVideo = video.tracks(withMediaType: .video)[0]
            let trackAudio = video.tracks(withMediaType: .audio)[0]

            try! compositionVideoTrack?.insertTimeRange(timeRangeVideo, of: trackVideo, at: insertTime)
            try! compositionAudioTrack?.insertTimeRange(timeRangeVideo, of: trackAudio, at: insertTime)

            insertTime = CMTimeAdd(insertTime, duration)
        }
        return mainComposition
    }

    /// Combines all recorded clips into one file
    func mergeVideo(_ documentDirectory: URL, filename: String, clips: [URL], completion: @escaping (Bool, URL?) -> Void) {
        var assets: [AVAsset] = []
        var totalDuration = CMTime.zero

        for clip in clips {
            let asset = AVAsset(url: clip)
            assets.append(asset)
            totalDuration = CMTimeAdd(totalDuration, asset.duration)
        }

        let mixComposition = merge(recordedVideos: assets)

        let url = documentDirectory.appendingPathComponent("link_\(filename)")
        guard let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) else { return }
        exporter.outputURL = url
        exporter.outputFileType = .mp4
        exporter.shouldOptimizeForNetworkUse = true

        exporter.exportAsynchronously {
            DispatchQueue.main.async {
                if exporter.status == .completed {
                    completion(true, exporter.outputURL)
                } else {
                    completion(false, nil)
                }
            }
        }
    }
}

Step 6: Metadata for the videos

There is a specific set of actions for video uploading:

  1. Recording a video
  2. Using your token and the name of the future video, creating a request to the server to create a container for the video file
  3. Getting the usual VOD data in the response
  4. Sending a request for metadata using the token and the VOD ID
  5. Getting metadata in the response
  6. Uploading the video via TUSKit using metadata

Create the requests and their models. You will use Apple’s Decodable protocol with a CodingKeys enumeration for easier data parsing.

1. Create a model for VOD, which will contain the data that you need.

struct VOD: Decodable {
    let name: String
    let id: Int
    let screenshot: URL?
    let hls: URL?

    enum CodingKeys: String, CodingKey {
        case name, id, screenshot
        case hls = "hls_url"
    }
}

2. Create a CreateVideoRequest in order to create an empty container for the video on the server. The VOD model comes back in the response.

struct CreateVideoRequest: DataRequest {
    typealias Response = VOD

    let token: String
    let videoName: String

    var url: String { GcoreAPI.videos.rawValue }
    var method: HTTPMethod { .post }

    var headers: [String: String] {
        ["Authorization": "Bearer \(token)"]
    }

    var body: Data? {
        try? JSONEncoder().encode([
            "name": videoName
        ])
    }
}

3. Create a VideoMetadata model that will contain data for uploading videos from the device to the server and the corresponding request for it.

struct VideoMetadata: Decodable {
    struct Server: Decodable {
        let hostname: String
    }

    struct Video: Decodable {
        let name: String
        let id: Int
        let clientID: Int

        enum CodingKeys: String, CodingKey {
            case name, id
            case clientID = "client_id"
        }
    }

    let servers: [Server]
    let video: Video
    let token: String

    var uploadURLString: String {
        "https://" + (servers.first?.hostname ?? "") + "/upload"
    }
}

// MARK: Request
struct VideoMetadataRequest: DataRequest {
    typealias Response = VideoMetadata

    let token: String
    let videoId: Int

    var url: String { GcoreAPI.videos.rawValue + "/\(videoId)/" + "upload" }
    var method: HTTPMethod { .get }

    var headers: [String: String] {
        ["Authorization": "Bearer \(token)"]
    }
}
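
For reference, here is how a metadata response decodes with this model. The JSON below is illustrative only, not a verbatim API response:

import Foundation

// Sample payload with the fields the model expects (values are made up)
let sampleJSON = Data("""
{
  "servers": [{ "hostname": "tus-upload.example.com" }],
  "video": { "name": "my_clip", "id": 12345, "client_id": 678 },
  "token": "upload-token"
}
""".utf8)

do {
    let metadata = try JSONDecoder().decode(VideoMetadata.self, from: sampleJSON)
    print(metadata.uploadURLString) // "https://tus-upload.example.com/upload"
} catch {
    print("Decoding failed: \(error)")
}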

Step 7: Putting the pieces together

We’ve used the code from our demo application as an example. The controller class described here, together with a custom view, links the camera and the UI, and takes responsibility for creating the requests that obtain the metadata and then upload the video to the server.

Create the view controller. It will display the camera view and a text field for the video title. The screen has several states (upload, error, common).

MainView

First, create the view.

1. Create a delegate protocol to handle changing the name of the video.

protocol UploadMainViewDelegate: AnyObject {
    func videoNameDidUpdate(_ name: String)
}

2. Create the view class and declare its states and delegate. The camera view will be added by the controller.

final class UploadMainView: UIView {
    enum State {
        case upload, error, common
    }

    var cameraView: CameraView? {
        didSet { initLayoutForCameraView() }
    }

    var state: State = .common {
        didSet {
            switch state {
            case .upload: showUploadState()
            case .error: showErrorState()
            case .common: showCommonState()
            }
        }
    }

    weak var delegate: UploadMainViewDelegate?
}

3. Initialize the remaining UI elements.

    // TextField is a small UITextField subclass from the demo project
    let videoNameTextField = TextField(placeholder: "Enter the video name")

    let accessCaptureFailLabel: UILabel = {
        let label = UILabel()
        label.text = NSLocalizedString("Error!\nUnable to access capture devices.", comment: "")
        label.textColor = .black
        label.numberOfLines = 2
        label.isHidden = true
        label.textAlignment = .center
        return label
    }()

    let uploadIndicator: UIActivityIndicatorView = {
        let indicator = UIActivityIndicatorView(style: .gray)
        indicator.transform = CGAffineTransform(scaleX: 2, y: 2)
        return indicator
    }()

    let videoIsUploadingLabel: UILabel = {
        let label = UILabel()
        label.text = NSLocalizedString("video is uploading", comment: "")
        label.font = UIFont.systemFont(ofSize: 16)
        label.textColor = .gray
        label.isHidden = true
        return label
    }()

4. Create a layout for the elements. Since the camera view is added later, its layout goes in a separate method.

    private func initLayoutForCameraView() {
        guard let cameraView = cameraView else { return }
        cameraView.translatesAutoresizingMaskIntoConstraints = false
        insertSubview(cameraView, at: 0)

        NSLayoutConstraint.activate([
            cameraView.leftAnchor.constraint(equalTo: leftAnchor),
            cameraView.topAnchor.constraint(equalTo: topAnchor),
            cameraView.rightAnchor.constraint(equalTo: rightAnchor),
            cameraView.bottomAnchor.constraint(equalTo: videoNameTextField.topAnchor),
        ])
    }

    private func initLayout() {
        let views = [videoNameTextField, accessCaptureFailLabel, uploadIndicator, videoIsUploadingLabel]
        views.forEach {
            $0.translatesAutoresizingMaskIntoConstraints = false
            addSubview($0)
        }

        let keyboardBottomConstraint = videoNameTextField.bottomAnchor.constraint(equalTo: layoutMarginsGuide.bottomAnchor)
        self.keyboardBottomConstraint = keyboardBottomConstraint

        NSLayoutConstraint.activate([
            keyboardBottomConstraint,
            videoNameTextField.heightAnchor.constraint(equalToConstant: videoNameTextField.intrinsicContentSize.height + 20),
            videoNameTextField.leftAnchor.constraint(equalTo: leftAnchor),
            videoNameTextField.rightAnchor.constraint(equalTo: rightAnchor),

            accessCaptureFailLabel.centerYAnchor.constraint(equalTo: centerYAnchor),
            accessCaptureFailLabel.centerXAnchor.constraint(equalTo: centerXAnchor),

            uploadIndicator.centerYAnchor.constraint(equalTo: centerYAnchor),
            uploadIndicator.centerXAnchor.constraint(equalTo: centerXAnchor),

            videoIsUploadingLabel.centerXAnchor.constraint(equalTo: centerXAnchor),
            videoIsUploadingLabel.topAnchor.constraint(equalTo: uploadIndicator.bottomAnchor, constant: 20)
        ])
    }

5. Create the methods responsible for showing the different states.

    private func showUploadState() {
        videoNameTextField.isHidden = true
        uploadIndicator.startAnimating()
        videoIsUploadingLabel.isHidden = false
        accessCaptureFailLabel.isHidden = true
        cameraView?.recordButton.setImage(UIImage(named: "play.icon"), for: .normal)
        cameraView?.isHidden = true
    }

    private func showErrorState() {
        accessCaptureFailLabel.isHidden = false
        videoNameTextField.isHidden = true
        uploadIndicator.stopAnimating()
        videoIsUploadingLabel.isHidden = true
        cameraView?.isHidden = true
    }

    private func showCommonState() {
        videoNameTextField.isHidden = false
        uploadIndicator.stopAnimating()
        videoIsUploadingLabel.isHidden = true
        accessCaptureFailLabel.isHidden = true
        cameraView?.isHidden = false
    }

6. Add methods and a variable to handle the keyboard correctly. The video title input field must always remain visible.

    private var keyboardBottomConstraint: NSLayoutConstraint?

    private func addObserver() {
        [UIResponder.keyboardWillShowNotification, UIResponder.keyboardWillHideNotification].forEach {
            NotificationCenter.default.addObserver(
                self,
                selector: #selector(keyboardChange),
                name: $0,
                object: nil
            )
        }
    }

    @objc private func keyboardChange(notification: Notification) {
        guard let keyboardFrame = notification.userInfo?[UIResponder.keyboardFrameEndUserInfoKey] as? NSValue,
              let duration = notification.userInfo?[UIResponder.keyboardAnimationDurationUserInfoKey] as? Double
        else {
            return
        }

        let keyboardHeight = keyboardFrame.cgRectValue.height - safeAreaInsets.bottom

        if notification.name == UIResponder.keyboardWillShowNotification {
            self.keyboardBottomConstraint?.constant -= keyboardHeight
            UIView.animate(withDuration: duration) {
                self.layoutIfNeeded()
            }
        } else {
            self.keyboardBottomConstraint?.constant += keyboardHeight
            UIView.animate(withDuration: duration) {
                self.layoutIfNeeded()
            }
        }
    }

7. Rewrite the initializers. In deinit, unsubscribe from the keyboard notifications.

    override init(frame: CGRect) {
        super.init(frame: frame)
        initLayout()
        backgroundColor = .white
        videoNameTextField.delegate = self
        addObserver()
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        initLayout()
        backgroundColor = .white
        videoNameTextField.delegate = self
        addObserver()
    }

    deinit {
        NotificationCenter.default.removeObserver(self)
    }

8. Conform the view to UITextFieldDelegate to intercept the relevant text field actions.

extension UploadMainView: UITextFieldDelegate {
    func textFieldShouldReturn(_ textField: UITextField) -> Bool {
        delegate?.videoNameDidUpdate(textField.text ?? "")
        return textField.resignFirstResponder()
    }

    // Limits the video name to 20 characters
    func textField(_ textField: UITextField, shouldChangeCharactersIn range: NSRange, replacementString string: String) -> Bool {
        guard let text = textField.text, text.count < 21 else { return false }
        return true
    }
}

Controller

Create ViewController.

1. Specify the necessary variables and configure the controller.

final class UploadController: BaseViewController {
    private let mainView = UploadMainView()

    private var camera: Camera?
    private var captureSession = AVCaptureSession()
    private var filename = ""
    private var writingVideoURL: URL!

    private var clips: [(URL, CMTime)] = [] {
        didSet { mainView.cameraView?.clipsLabel.text = "Clips: \(clips.count)" }
    }

    private var isUploading = false {
        didSet { mainView.state = isUploading ? .upload : .common }
    }

    // Recording-time bookkeeping used by the delegate methods below.
    // (These properties are referenced later in the article but were omitted
    // from this listing; the definitions here are our assumption.)
    private var currentRecordedTime: Double = 0
    private var lastRecordedTime: Double = 0
    private var totalRecordedTime: Double { lastRecordedTime + currentRecordedTime }

    // Replace the default view with ours
    override func loadView() {
        mainView.delegate = self
        view = mainView
    }

    // Initialize the camera and the camera view
    override func viewDidLoad() {
        super.viewDidLoad()
        do {
            camera = try Camera(captureSession: captureSession)
            camera?.delegate = self
            mainView.cameraView = CameraView(session: captureSession)
            mainView.cameraView?.delegate = self
        } catch {
            debugPrint((error as NSError).description)
            mainView.state = .error
        }
    }
}

2. Add methods that respond to taps on the upload button in the view. These merge the small fragments into a full video, create an empty container on the server, fetch the metadata, and then upload the video.

    // Called when the user taps the upload button.
    // (createAlert(), refreshToken(), and ErrorResponse are helpers defined
    // in the demo project.)
    private func mergeSegmentsAndUpload() {
        guard !isUploading, let camera = camera else { return }
        isUploading = true
        camera.stopRecording()

        if let directoryURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first {
            let clips = clips.map { $0.0 }
            // Create a full video from the clips
            VideoCompositionWriter().mergeVideo(directoryURL, filename: "\(filename).mp4", clips: clips) { [weak self] success, outURL in
                guard let self = self else { return }

                if success, let outURL = outURL {
                    clips.forEach { try? FileManager.default.removeItem(at: $0) }
                    self.clips = []

                    let videoData = try! Data(contentsOf: outURL)
                    let writingURL = FileManager.default.temporaryDirectory.appendingPathComponent(outURL.lastPathComponent)
                    try! videoData.write(to: writingURL)
                    self.writingVideoURL = writingURL
                    self.createVideoPlaceholderOnServer()
                } else {
                    self.isUploading = false
                    self.mainView.state = .common
                    self.present(self.createAlert(), animated: true)
                }
            }
        }
    }

    // Sends the CreateVideoRequest
    private func createVideoPlaceholderOnServer() {
        guard let token = Settings.shared.accessToken else {
            refreshToken()
            return
        }

        let http = HTTPCommunicator()
        let request = CreateVideoRequest(token: token, videoName: filename)

        http.request(request) { [weak self] result in
            guard let self = self else { return }

            switch result {
            case .success(let vod):
                self.loadMetadataFor(vod: vod)
            case .failure(let error):
                if let error = error as? ErrorResponse, error == .invalidToken {
                    Settings.shared.accessToken = nil
                    self.refreshToken()
                } else {
                    self.errorHandle(error)
                }
            }
        }
    }

    // Requests the upload metadata from the server
    func loadMetadataFor(vod: VOD) {
        guard let token = Settings.shared.accessToken else {
            refreshToken()
            return
        }

        let http = HTTPCommunicator()
        let request = VideoMetadataRequest(token: token, videoId: vod.id)
        http.request(request) { [weak self] result in
            guard let self = self else { return }

            switch result {
            case .success(let metadata):
                self.uploadVideo(with: metadata)
            case .failure(let error):
                if let error = error as? ErrorResponse, error == .invalidToken {
                    Settings.shared.accessToken = nil
                    self.refreshToken()
                } else {
                    self.errorHandle(error)
                }
            }
        }
    }

    // Uploads the video to the server via TUSKit
    func uploadVideo(with metadata: VideoMetadata) {
        var config = TUSConfig(withUploadURLString: metadata.uploadURLString)
        config.logLevel = .All

        TUSClient.setup(with: config)
        TUSClient.shared.delegate = self

        let upload = TUSUpload(withId: filename,
                               andFilePathURL: writingVideoURL,
                               andFileType: ".mp4")
        upload.metadata = [
            "filename": filename,
            "client_id": String(metadata.video.clientID),
            "video_id": String(metadata.video.id),
            "token": metadata.token
        ]

        TUSClient.shared.createOrResume(forUpload: upload)
    }

3. Subscribe to the TUSDelegate protocol to track errors and successful uploads. It can also be used to display upload progress.

// MARK: - TUSDelegate
extension UploadController: TUSDelegate {

    func TUSProgress(bytesUploaded uploaded: Int, bytesRemaining remaining: Int) { }

    func TUSProgress(forUpload upload: TUSUpload, bytesUploaded uploaded: Int, bytesRemaining remaining: Int) { }

    func TUSFailure(forUpload upload: TUSUpload?, withResponse response: TUSResponse?, andError error: Error?) {
        if let error = error {
            print((error as NSError).description)
        }
        present(createAlert(), animated: true)
        mainView.state = .common
    }

    func TUSSuccess(forUpload upload: TUSUpload) {
        let alert = createAlert(title: "Upload success")
        present(alert, animated: true)
        mainView.state = .common
    }
}

4. Subscribe to the protocols of the main view, the camera, and the camera view to wire the whole module together correctly.

// MARK: - CameraViewDelegate, CameraDelegate
extension UploadController: CameraViewDelegate, CameraDelegate {
    func updateCurrentRecordedTime(_ time: CMTime) {
        currentRecordedTime = time.seconds
    }

    func tappedDeleteClip() {
        guard let lastClip = clips.last else { return }
        lastRecordedTime -= lastClip.1.seconds
        clips.removeLast()
    }

    func addRecordedMovie(url: URL, time: CMTime) {
        lastRecordedTime += time.seconds
        clips += [(url, time)]
    }

    func shouldRecord() -> Bool {
        totalRecordedTime < maxRecordTime
    }

    func tappedRecord(isRecord: Bool) {
        isRecord ? camera?.startRecording() : camera?.stopRecording()
    }

    func tappedUpload() {
        guard !clips.isEmpty && filename != "" else { return }
        mergeSegmentsAndUpload()
    }

    func tappedFlipCamera() {
        camera?.flipCamera()
    }
}

extension UploadController: UploadMainViewDelegate {
    // Called when the user changes the video name in the view
    func videoNameDidUpdate(_ name: String) {
        filename = name
    }
}

This was the last step; the job is done! The new feature has been added to your app and configured.

Result

Now you have a full-fledged module for recording and uploading videos.

Conclusion

Through this guide, you’ve learned how to add a VOD uploading feature to your iOS application. We hope this solution will satisfy your needs and delight your users with new options.

Also, we invite you to take a look at our demo application to see the result of setting up VOD viewing in an iOS project.


Related articles

How to choose the right CDN provider in a turbulent marketplace

In a CDN marketplace marked by provider shutdowns, price hikes, and shifting priorities, reliability is survival. If your current provider folds, you're not just facing downtime—you're losing revenue and customer trust. For the world’s top 2,000 companies, the total annual downtime cost is $400 billion, eroding 9% of profits. Choosing the right CDN partner isn’t just about performance, it’s about protecting your business from disruption.In this guide, we show you how to identify early warning signs, evaluate providers, and switch before your business takes the hit.Red flags: signs that it’s time to consider a new CDN providerIf you’re experiencing any of the following issues with your current CDN provider, it might be time to reconsider your current setup.Slower load times: If you’ve noticed lagging performance, your CDN provider may be running on outdated infrastructure or not investing in upgrades.Rising costs: Increasing prices without additional value? A higher bill and the same service is a major red flag.Uncertainty about long-term service: Look for clear communication and a demonstrated commitment to infrastructure investment, essential a market where providers frequently consolidate and shift focus.Your CDN should scale with you, not hold you back. Prioritize a partner who can evolve with your needs and support your long-term success.5 must-haves when choosing a CDN partnerNot all CDNs are created equal. Before switching, compare providers across these five key factors.Performance: Check real-world performance benchmarks and global coverage maps to understand how a CDN will serve your audience in key regions. Throughput (the amount of data that can be successfully delivered from a server to an end user over a specific period of time) and low latency are non-negotiable when choosing a CDN provider.Pricing: Before signing up, it’s essential to know what is and isn’t included in the price in case there are hidden fees. Look for predictable billing, volume-based tiers, and transparent overage charges to avoid surprise costs. Avoid vendors who lure you in with low rates, then add hidden overage fees.Security: Choose a CDN that offers built-in protection out of the box: DDoS mitigation, TLS, WAF, and API security. Bonus points for customizable policies that fit your stack. Strong security features should be standard for CDNs to combat advanced cyber threats.Edge computing: When it comes to Edge computing, understanding the power of this strategic CDN add-on can give you a significant advantage. Look for CDN providers that offer flexible edge compute capabilities, so you can process data closer to users, reduce latency, and improve response times.Future-proofing: The CDN market’s volatility makes partnering with providers with long-term stability vital. Pick a provider that’s financially solid, tech-forward, and committed to innovation—not just sticking around to get acquired.Choosing a new provider may feel like a challenge, but the long-term payoff—improved performance, lower risk, and a future-ready infrastructure—makes it well worth it. By picking a CDN partner that meets your needs now and for the future, you’ll receive fast, personalized, and secure experiences that truly stand out.What makes Gcore CDN different?Gcore CDN isn’t just another CDN, we’re your long-term performance partner. 
Here’s what we offer:Global scale, blazing speed: Our network spans 180+ edge locations across 6 continents, optimized for low-latency delivery no matter where your users are.Transparent, flexible pricing: No hidden fees. No lock-in. Just fair, flexible pricing models designed to scale with your growth.A stable partner in a shaky market: While others pivot or fold, Gcore is doubling down. We’re investing in infrastructure, expanding globally, and building for the next wave of content and edge use cases.If you’re ready to make the switch, we’re here to help. Get in touch for a free consultation to discuss your specific needs and tailor a transition plan that suits your business. For more insights about choosing the right CDN for your business, download our free CDN buyer's guide for a more in-depth look at the CDN landscape.Get your free CDN buyers guide now

How gaming studios can use technology to safeguard players

Online gaming can be an enjoyable and rewarding pastime, providing a sense of community and even improving cognitive skills. During the pandemic, for example, online gaming was proven to boost many players’ mental health and provided a vital social outlet at a time of great isolation. However, despite the overall benefits of gaming, there are two factors that can seriously spoil the gaming experience for players: toxic behavior and cyber attacks.Both toxic behavior and cyberattacks can lead to players abandoning games in order to protect themselves. While it’s impossible to eradicate harmful behaviors completely, robust technology can swiftly detect and ban bullies as well as defend against targeted cyberattacks that can ruin the gaming experience.This article explores how gaming studios can leverage technology to detect toxic behavior, defend against cyber threats, and deliver a safer, more engaging experience for players.Moderating toxic behavior with AI-driven technologyToxic behavior—including harassment, abusive messages, and cheating—has long been a problem in the world of gaming. Toxic behavior not only affects players emotionally but can also damage a studio’s reputation, drive churn, and generate negative reviews.The online disinhibition effect leads some players to behave in ways they may not in real life. But even when it takes place in a virtual world, this negative behavior has real long-term detrimental effects on its targets.While you can’t control how players behave, you can control how quickly you respond.Gaming studios can implement technology that makes dealing with toxic incidents easier and makes gaming a safer environment for everyone. While in the past it may have taken days to verify a complaint about a player’s behavior, today, with AI-driven security and content moderation, toxic behavior can be detected in real time, and automated bans can be enforced. The tool can detect inappropriate images and content and includes speech recognition to detect derogatory or hateful language.In gaming, AI content moderation analyzes player interactions in real time to detect toxic behavior, harmful content, and policy violations. Machine learning models assess chat, voice, and in-game media against predefined rules, flagging or blocking inappropriate content. For example, let’s say a player is struggling with in-game harassment and cheating. With AI-powered moderation tools, chat logs and gameplay behavior are analyzed in real time, identifying toxic players for automated bans. This results in healthier in-game communities, improved player retention, and a more pleasant user experience.Stopping cybercriminals from ruining the gaming experienceAnother factor negatively impacting the gaming experience on a larger scale is cyberattacks. Our recent Radar Report showed that the gaming industry experienced the highest number of DDoS attacks in the last quarter of 2024. The sector is also vulnerable to bot abuse, API attacks, data theft, and account hijacking.Prolonged downtime damages a studio’s reputation—something hackers know all too well. As a result, gaming platforms are prime targets for ransomware, extortion, and data breaches. Cybercriminals target both servers and individual players’ personal information. 
This naturally leads to a drop in player engagement and widespread frustration.Luckily, security solutions can be put in place to protect gamers from this kind of intrusion:DDoS protection shields game servers from volumetric and targeted attacks, guaranteeing uptime even during high-profile launches. In the event of an attack, malicious traffic is mitigated in real-time, preventing zero downtime and guaranteeing seamless player experiences.WAAP secures game APIs and web services from bot abuse, credential stuffing, and data breaches. It protects against in-game fraud, exploits, and API vulnerabilities.Edge security solutions reduce latency, protecting players without affecting game performance. The Gcore security stack helps ensure fair play, protecting revenue and player retention.Take the first steps to protecting your customersGaming should be a positive and fun experience, not fraught with harassment, bullying, and the threat of cybercrime. Harmful and disruptive behaviors can make it feel unsafe for everyone to play as they wish. That’s why gaming studios should consider how to implement the right technology to help players feel protected.Gcore was founded in 2014 with a focus on the gaming industry. Over the years, we have thwarted many large DDoS attacks and continue to offer robust protection for companies such as Nitrado, Saber, and Wargaming. Our gaming specialization has also led us to develop game-specific countermeasures. If you’d like to learn more about how our cybersecurity solutions for gaming can help you, get in touch.Speak to our gaming solutions experts today

How to choose the right technology tools to combat digital piracy

One of the biggest challenges facing the media and entertainment industry is digital piracy, where stolen content is redistributed without authorization. This issue causes significant revenue and reputational losses for media companies. Consumers who use these unregulated services also face potential threats from malware and other security risks.Governments, regulatory bodies, and private organizations are increasingly taking the ramifications of digital piracy seriously. In the US, new legislation has been proposed that would significantly crack down on this type of activity, while in Europe, cloud providers are being held liable by the courts for enabling piracy. Interpol and authorities in South Korea have also teamed up to stop piracy in its tracks.In the meantime, you can use technology to help stop digital piracy and safeguard your company’s assets. This article explains anti-piracy technology tools that can help content providers, streaming services, and website owners safeguard their proprietary media: geo-blocking, digital rights management (DRM), secure tokens, and referrer validation.Geo-blockingGeo-blocking (or country access policy) restricts access to content based on a user’s geographic location, preventing unauthorized access and limiting content distribution to specific regions. It involves setting rules to allow or deny access based on the user’s IP address and location in order to comply with regional laws or licensing agreements.Pros:Controls access by region so that content is only available in authorized marketsHelps comply with licensing agreementsCons:Can be bypassed with VPNs or proxiesRequires additional security measures to be fully effectiveTypical use cases: Geo-blocking is used by streaming platforms to restrict access to content, such as sports events or film premieres, based on location and licensing agreements. It’s also helpful for blocking services in high-risk areas but should be used alongside other anti-piracy tools for better and more comprehensive protection.Referrer validationReferrer validation is a technique that checks where a content request is coming from and prevents unauthorized websites from directly linking to and using content. It works by checking the “referrer” header sent by the browser to determine the source of the request. If the referrer is from an unauthorized domain, the request is blocked or redirected. This allows only trusted sources to access your content.Pros:Protects bandwidth by preventing unauthorized access and misuse of resourcesGuarantees content is only accessed by trusted sources, preventing piracy or abuseCons:Can accidentally block legitimate requests if referrer headers are not correctly sentMay not work as intended if users access content via privacy-focused methods that strip referrer data, leading to false positivesTypical use cases: Content providers commonly use referrer validation to prevent unauthorized streaming or hotlinking, which involves linking to media from another website or server without the owner’s permission. It’s especially useful for streamers who want to make sure their content is only accessed through their official platforms. However, it should be combined with other security measures for more substantial protection.Secure tokensSecure tokens and protected temporary links provide enhanced security by granting temporary access to specific resources so only authorized users can access sensitive content. 
Secure tokens are unique identifiers that, when linked to a user’s account, allow them to access protected resources for a limited time. Protected temporary links further restrict access by setting expiration dates, meaning the link becomes invalid after a set time.Pros:Provides a high level of security by allowing only authorized users to access contentTokens are time-sensitive, which prevents unauthorized access after they expireHarder to circumvent compared to traditional password protection methodsCons:Risk of token theft if they’re not managed or stored securelyRequires ongoing management and rotation of tokens, adding complexityCan be challenging to implement properly, especially in high-traffic environmentsTypical use cases: Streaming platforms use secure tokens and protected temporary links so only authenticated users can access premium content, like movies or live streams. They are also useful for secure file downloads or limiting access to exclusive resources, making them effective for protecting digital content and preventing unauthorized sharing or piracy.Digital rights managementDigital rights management (DRM) refers to a set of technologies designed to protect digital content from unauthorized use so that only authorized users can access, copy, or share it, according to licensing agreements. DRM uses encryption, licensing, and authentication mechanisms to control access to digital resources so that only authorized users can view or interact with the content. While DRM offers strong protection against piracy, it comes with higher complexity and setup costs than other security methods.Pros:Robust protection against unauthorized copying, sharing, and piracyHelps safeguard intellectual property and revenue streamsEnforces compliance with licensing agreementsCons:Can be complex and expensive to implementMay cause inconvenience for users, such as limiting playback on unauthorized devices or restricting sharingPotential system vulnerabilities or compatibility issuesTypical use cases: DRM is commonly used by streaming services to protect movies, TV shows, and music from piracy. It can also be used for e-books, software, and video games, ensuring that content is only used by licensed users according to the terms of the agreement. DRM solutions can vary, from software-based solutions for media files to hardware-based or cloud-based DRM for more secure distribution.Protect your content from digital piracy with GcoreDigital piracy remains a significant challenge for the media and entertainment industry as it poses risks in terms of both revenue and security. To combat this, partnering with a cloud provider that can actively monitor and protect your digital assets through advanced multi-layer security measures is essential.At Gcore, our CDN and streaming solutions give rights holders peace of mind that their assets are protected, offering the features mentioned in this article and many more besides. We also offer advanced cybersecurity tools, including WAAP (web application and API protection) and DDoS protection, which further integrate with and enhance these security measures. We provide trial limitations for streamers to curb piracy attempts and respond swiftly to takedown requests from rights holders and authorities, so you can rest assured that your assets are in safe hands.Get in touch to learn more about combatting digital piracy

The latest updates for Gcore Video Streaming: lower latency, smarter AI, and seamless scaling

At Gcore, we’re committed to continuous innovation in video streaming. This month, we’re introducing significant advancements in low-latency streaming, AI-driven enhancements, and infrastructure upgrades, helping you deliver seamless, high-quality content at scale.Game-changing low-latency streamingOur latest low-latency live streaming solutions are now fully available in production, delivering real-time engagement with unmatched precision:WebRTC to HLS/DASH: Now in production, enabling real-time transcoding and delivery for WebRTC streams using HTTP-based LL-HLS and LL-DASH.LL-DASH with two-second latency: Optimized for ultra-fast content delivery via our global CDN, enabling minimal delay for seamless streaming.LL-HLS with three-second latency: Designed to deliver an uninterrupted and near-real-time live streaming experience.Gcore’s live streaming dashboard with OBS Studio integration, enabling real-time transcoding and delivery with low-latency HLS/DASHWhat this means for youWith glass-to-glass latency as low as 2–3 seconds, these advancements unlock new possibilities for real-time engagement. Whether you’re hosting live auctions, powering interactive gaming experiences, or enabling seamless live shopping, Gcore Video Streaming’s low-latency options keep your viewers connected without delay.Our solution integrates effortlessly with hls.js, dash.js, native Safari support, and our HTML web player, guaranteeing smooth playback across devices. Backed by our global CDN infrastructure, you can count on reliable, high-performance streaming at scale, no matter where your audience is.Exciting enhancements: AI and live streaming featuresWe’re making live streaming smarter with cutting-edge AI capabilities:Live stream recording with overlays: Record live streams while adding dynamic overlays such as webcam pop-ups, chat, alerts, advertisement banners, and time or weather widgets. This feature allows you to create professional, branded content without post-production delays. Whether you’re broadcasting events, tutorials, or live commerce streams, overlays help maintain a polished and engaging viewer experience.AI-powered VOD subtitles: Advanced AI automatically generates and translates subtitles into more than 100 languages, helping you expand your content’s reach to global audiences. This ensures accessibility while improving engagement across different regions.Deliver seamless live experiences with GcoreOur commitment to innovation continues, bringing advancements to enhance performance, efficiency, and streaming quality. Stay tuned for even lower latency and more AI-driven enhancements coming soon!Gcore Video Streaming empowers you to deliver seamless live experiences for auctions, gaming, live shopping, and other real-time applications. Get reliable, high-performance content delivery—whether you’re scaling to reach global audiences or delivering unique experiences to niche communities.Try Gcore Video Streaming today

How we optimized our CDN infrastructure for paid and free plans

At Gcore, we’re dedicated to delivering top-tier performance and reliability. To further enhance performance for all our customers, we recently made a significant change: we moved our CDN free-tier customers to a separate, physically isolated infrastructure. By isolating free-tier traffic, customers on paid plans receive uninterrupted, premium-grade service, while free users benefit from an environment tailored to their needs.

Why we’ve separated free and paid plan infrastructure

This optimization has been driven by three key factors: performance, stability and scalability, and improved reporting.

Providing optimal performance

Free-tier users are essential to our ecosystem, helping to stress-test our systems and extend our reach. However, their traffic can be unpredictable. By isolating free traffic, we provide premium customers with consistently high performance, minimizing disruption risks.

Enhancing stability and scalability

With separate infrastructures, we can better manage traffic spikes and load balancing without impacting premium services. This improves overall platform stability and scalability, guaranteeing that both customer groups will enjoy a reliable experience.

Improving reporting and performance insights

Alongside infrastructure enhancements, we’ve upgraded our reports page to offer clearer visibility into traffic and performance:

- New 95th percentile bandwidth graph: Helps users analyze traffic patterns more effectively (see the sketch at the end of this article for how this percentile is typically calculated).
- Improved aggregated bandwidth view: Makes it easier to assess usage trends at a glance.

These tools empower you to make more informed decisions with accurate and accessible data.

95th percentile bandwidth usage over the last three months, highlighting a significant increase in January 2025

Strengthening content delivery with query string forwarding

We’ve also introduced a standardized query string forwarding feature to boost content delivery stability. By replacing our previous custom approach, we achieved the following:

- Increased stability: Reducing the risk of disruptions
- Lower maintenance requirements: Freeing up engineering resources
- Smoother content delivery: Enhancing experiences for streaming and content-heavy applications

Query string forwarding settings allow seamless parameter transfer for media delivery

What this means for our customers

For customers on paid plans: You can expect a more stable, high-performance service without the disruptions caused by fluctuating free-tier activity. Enhanced reporting and streamlined content delivery also empower you to make better, data-driven decisions.

For free-tier customers: You will continue to have access to our services on a dedicated infrastructure that has been specifically optimized for your needs. This setup allows us to innovate and improve performance without compromising service quality.

Strengthening Gcore CDN for long-term growth

At Gcore, we continuously refine our CDN to enable top-tier performance, reliability, and scalability. The recent separation of free-tier traffic, improved reporting capabilities, and optimized content delivery are key to strengthening our infrastructure. These updates enhance service quality for all users, minimizing disruptions and improving traffic management.

We remain committed to pushing the boundaries of CDN efficiency, delivering faster load times, robust security, and seamless scalability. Stay tuned for more enhancements as we continue evolving our platform to meet the growing demands of businesses worldwide.

Explore Gcore CDN
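As context for the new graph, here is a minimal Swift sketch of the conventional 95th-percentile calculation widely used for bandwidth reporting: sort the samples, discard the top 5%, and report the highest remaining value. The five-minute sampling interval and the exact ranking convention are illustrative assumptions, not Gcore’s documented method.

```swift
// Compute the 95th percentile of bandwidth samples (e.g., one reading
// every five minutes, in Mbps). The top 5% of samples are discarded;
// the highest remaining value is the reported percentile.
func ninetyFifthPercentile(of samples: [Double]) -> Double? {
    guard !samples.isEmpty else { return nil }
    let sorted = samples.sorted()
    // Rank of the 95th-percentile sample (1-based), then convert to a 0-based index.
    let rank = Int((Double(sorted.count) * 0.95).rounded(.up))
    return sorted[max(0, rank - 1)]
}

// Example: with 100 samples, the 95th-ranked value is reported,
// so up to 5 short traffic spikes are excluded from the figure.
let samples: [Double] = (1...100).map(Double.init).shuffled()
print(ninetyFifthPercentile(of: samples) ?? 0) // 95.0
```

Because the top 5% of samples are excluded, brief spikes don’t inflate the reported figure, which is why this measure is the industry norm for analyzing bandwidth usage.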

Introducing low-latency live streams with LL-HLS and LL-DASH

We are thrilled to introduce low-latency live streams for Gcore Video Streaming using the LL-HLS and LL-DASH protocols. With a groundbreaking glass-to-glass delay of just 2.2–3.0 seconds, this improvement brings unparalleled speed to your viewers’ live-streaming experience.

Video illustrating the workflow of low-latency live streaming using the LL-HLS and LL-DASH protocols

This demonstration shows the minimal latency of our live streaming solution: just three seconds between the original broadcast (left) and what viewers see online (right).

Key use cases and benefits of low-latency streaming

Our low-latency streaming solutions address the specific needs of content providers, broadcasters, and developers, enabling seamless experiences for diverse use cases.

Ultra-fast live streaming

- Get real-time delivery with glass-to-glass latency of around 2.2 seconds for LL-DASH and around 3.0 seconds for LL-HLS.
- Deliver immediate viewer engagement, ideal for industries such as live sports, e-sports tournaments, and news broadcasting.
- Meet the expectations of audiences who demand instant access to live events without noticeable delays.

Enhanced viewer interaction

- Reduce the delay between live actions and audience reactions, fostering a more immersive viewing experience.
- Support real-time interaction for use cases like virtual conferences, live auctions, Q&A sessions, and live shopping platforms.

Flexible player support

- Seamlessly integrate with your existing player setups, including popular options like hls.js, dash.js, and native Safari support.
- Use our new HTML web player for effortless integration or maintain your current custom player workflows.

Global scalability and reliability

- Leverage our robust CDN network with 200+ Tbps capacity and 180+ PoPs to enable low-latency streams worldwide.
- Deliver a consistent, high-quality experience for global audiences, even during peak traffic events.

Cost-efficiency

- Minimize operational overhead with a streamlined solution that combines advanced encoding, efficient packaging, and reliable delivery.

How it works

Our real-time transcoder and JIT packager generate streaming manifests and chunks optimized for low latency:

- For LL-HLS: The HLS manifest (.m3u8) and chunks comply with the latest standards. Tags like #EXT-X-PART, #EXT-X-PRELOAD-HINT, and others are dynamically generated with best-in-class parameters. Chunks are loaded instantaneously as they appear at the origin.
- For LL-DASH: The DASH manifest (.mpd) leverages advanced MPEG-DASH features. Chunks are transmitted to viewers as soon as encoding begins, with caching finalized once the chunk is fully fetched.
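To make those tags concrete, here is a short illustrative LL-HLS playlist excerpt showing how partial segments and preload hints typically appear. The segment names, durations, and parameter values are hypothetical, not output captured from Gcore’s packager.

```
#EXTM3U
#EXT-X-VERSION:9
#EXT-X-TARGETDURATION:4
#EXT-X-PART-INF:PART-TARGET=1.0
#EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=3.0
#EXTINF:4.0,
segment100.m4s
#EXT-X-PART:DURATION=1.0,URI="segment101.part0.m4s"
#EXT-X-PART:DURATION=1.0,URI="segment101.part1.m4s"
#EXT-X-PRELOAD-HINT:TYPE=PART,URI="segment101.part2.m4s"
```

A player that supports LL-HLS requests each hinted part as soon as it is advertised rather than waiting for the full segment, which is what keeps end-to-end delay in the 2–3 second range.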
Combined with our fast and reliable CDN delivery, live streams are accessible globally with minimal delay. Our CDN network has an extensive capacity and 180+ PoPs to deliver exceptional performance, even for high-traffic events.

See a live demo in action!

Try WebRTC to HLS/DASH today

We’re also excited to remind you about our WebRTC to HLS/DASH delivery functionality. This innovative feature allows streams created in a standard browser via WebRTC to be:

- Transcoded on our servers.
- Delivered with low latency to viewers using HTTP-based LL-HLS and LL-DASH protocols through our CDN.

Try it now in the Gcore Customer Portal.

Shaping the future of streaming

By nearly halving the glass-to-glass delivery time compared to our previous solution, Gcore Video Streaming enables you to deliver a seamless experience for live events, real-time interactions, and other latency-sensitive applications. Whether you’re broadcasting to a global audience or engaging niche communities, our platform provides the tools you need to thrive in today’s dynamic streaming landscape.

Watch our demo to see the difference and explore how this solution fits into your workflows.

Visit our demo player
