
How to add a VOD uploading feature to your iOS app in 15 minutes

  • By Gcore
  • March 30, 2023
  • 13 min read

This is a step-by-step guide on Gcore’s solution for adding a new VOD feature to your iOS application in 15 minutes. The feature allows users to record videos from their phone, upload videos to storage, and play videos in the player inside the app.

This is part of a series of guides about adding new video features to an iOS application. In other articles, we show you how to create a mobile streaming app on iOS, and how to add video call and smooth scrolling VOD features to an existing app.

What functions you can add with the help of this guide

The solution includes the following:

  • Recording: Local video recording directly from the device’s camera; gaining access to the camera and saving raw video to internal storage.
  • Uploading to the server: Uploading the recorded video to cloud video hosting via TUSClient, with async uploading and getting a link to the processed video.
  • List of videos: A list of uploaded videos with screenshot covers and text descriptions.
  • Player: Playback of the selected video in AVPlayer, with the ability to cache, adaptive-bitrate HLS playback, rewind, and more. (A minimal playback sketch follows this list.)
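
Recording and uploading are covered step by step below, but playback isn't shown in code. As a minimal sketch (not the demo's actual implementation), playing a processed HLS rendition in AVPlayer could look like this; the function name is illustrative, and hlsURL is the HLS link from the VOD model introduced in Step 6:

import UIKit
import AVKit

// Minimal HLS playback sketch. AVPlayer plays HLS with adaptive bitrate and
// seeking natively; hlsURL corresponds to the vod.hls value returned by the API.
func playVideo(hlsURL: URL, from presenter: UIViewController) {
    let player = AVPlayer(url: hlsURL)
    let playerController = AVPlayerViewController()
    playerController.player = player
    presenter.present(playerController, animated: true) {
        player.play()
    }
}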

How to add the VOD feature

Step 1: Permissions

The project uses additional access rights that need to be specified in your Info.plist. These are:

  • NSMicrophoneUsageDescription (Privacy: Microphone Usage Description)
  • NSCameraUsageDescription (Privacy: Camera Usage Description).
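
For reference, the corresponding Info.plist entries look roughly like this when viewed as source code; the usage strings below are placeholders, so write explanations that fit your app:

<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone to record audio for your videos.</string>
<key>NSCameraUsageDescription</key>
<string>This app uses the camera to record videos for upload.</string>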

Step 2: Authorization

You’ll need a Gcore account, which can be created in just 1 minute at gcore.com. You won’t need to pay anything; you can test the solution with a free plan.

To use Gcore services, you’ll need an access token, which comes in the server’s response to the authentication request. Here’s how to get it:

1. Create a model for the tokens that the server returns.

struct Tokens: Decodable {
    let refresh: String
    let access: String
}

2. Create a common protocol for your requests.

protocol DataRequest {
    associatedtype Response

    var url: String { get }
    var method: HTTPMethod { get }
    var headers: [String: String] { get }
    var queryItems: [String: String] { get }
    var body: Data? { get }
    var contentType: String { get }

    func decode(_ data: Data) throws -> Response
}

extension DataRequest where Response: Decodable {
    func decode(_ data: Data) throws -> Response {
        let decoder = JSONDecoder()
        return try decoder.decode(Response.self, from: data)
    }
}

extension DataRequest {
    var contentType: String { "application/json" }
    var headers: [String: String] { [:] }
    var queryItems: [String: String] { [:] }
    var body: Data? { nil }
}

3. Create an authentication request.

struct AuthenticationRequest: DataRequest {
    typealias Response = Tokens

    let username: String
    let password: String

    var url: String { GcoreAPI.authorization.rawValue }
    var method: HTTPMethod { .post }

    var body: Data? {
        try? JSONEncoder().encode([
            "password": password,
            "username": username,
        ])
    }
}

4. Then you can use the request anywhere in the application, with whatever networking approach you prefer. For example:

func signOn(username: String, password: String) {
    let request = AuthenticationRequest(username: username, password: password)
    let communicator = HTTPCommunicator()

    communicator.request(request) { [weak self] result in
        switch result {
        case .success(let tokens):
            Settings.shared.refreshToken = tokens.refresh
            Settings.shared.accessToken = tokens.access
            Settings.shared.username = username
            Settings.shared.userPassword = password
            DispatchQueue.main.async {
                self?.view.window?.rootViewController = MainController()
            }
        case .failure(let error):
            self?.errorHandle(error)
        }
    }
}
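
The snippets above also rely on a few helpers from the demo project that this guide doesn't show: HTTPMethod, GcoreAPI, Settings, ErrorResponse, and HTTPCommunicator. Here is a minimal sketch of what they might look like. The endpoint URLs, the error cases, and the in-memory Settings store are assumptions inferred from how these names are used in this guide, not the demo's actual implementation:

import Foundation

enum HTTPMethod: String {
    case get = "GET"
    case post = "POST"
}

// Endpoint paths are assumptions inferred from how the requests are used here.
enum GcoreAPI: String {
    case authorization = "https://api.gcore.com/iam/auth/jwt/login"
    case videos = "https://api.gcore.com/streaming/videos"
}

// Error cases are assumptions; the demo project may define more.
enum ErrorResponse: Error, Equatable {
    case invalidToken
}

// Minimal in-memory credential store for illustration only; a real app should
// persist credentials securely (for example, in the Keychain).
final class Settings {
    static let shared = Settings()
    var refreshToken: String?
    var accessToken: String?
    var username: String?
    var userPassword: String?
}

// Thin URLSession wrapper built around the DataRequest protocol defined above.
final class HTTPCommunicator {
    func request<R: DataRequest>(_ request: R, completion: @escaping (Result<R.Response, Error>) -> Void) {
        guard var components = URLComponents(string: request.url) else { return }
        if !request.queryItems.isEmpty {
            components.queryItems = request.queryItems.map { URLQueryItem(name: $0.key, value: $0.value) }
        }
        guard let url = components.url else { return }

        var urlRequest = URLRequest(url: url)
        urlRequest.httpMethod = request.method.rawValue
        urlRequest.httpBody = request.body
        urlRequest.setValue(request.contentType, forHTTPHeaderField: "Content-Type")
        request.headers.forEach { urlRequest.setValue($0.value, forHTTPHeaderField: $0.key) }

        URLSession.shared.dataTask(with: urlRequest) { data, _, error in
            if let error = error {
                completion(.failure(error))
            } else if let data = data {
                do {
                    completion(.success(try request.decode(data)))
                } catch {
                    completion(.failure(error))
                }
            }
        }.resume()
    }
}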

Step 3: Setting up the camera session

On iOS, the AVFoundation framework is used to work with the camera. Let's create a class that handles the camera at a lower level.

1. Create a protocol for passing the path to each recorded fragment and its duration to the controller, along with an enumeration of the errors that may occur during initialization. The most common error is that the user didn't grant access to the camera.

import Foundation
import AVFoundation

enum CameraSetupError: Error {
    case accessDevices, initializeCameraInputs
}

protocol CameraDelegate: AnyObject {
    func addRecordedMovie(url: URL, time: CMTime)
    // Called once per second while recording; implemented by the controller in Step 7
    func updateCurrentRecordedTime(_ time: CMTime)
}

2. Create the camera class with properties and initializer.

final class Camera: NSObject {
    var movieOutput: AVCaptureMovieFileOutput!

    weak var delegate: CameraDelegate?

    private var videoDeviceInput: AVCaptureDeviceInput!
    private var rearCameraInput: AVCaptureDeviceInput!
    private var frontCameraInput: AVCaptureDeviceInput!
    private let captureSession: AVCaptureSession
    // Fires once per second while recording (see startRecording below)
    private var timer: Timer?

    // There may be errors during initialization; if so, the initializer throws an error to the controller
    init(captureSession: AVCaptureSession) throws {
        self.captureSession = captureSession

        // Check access to the capture devices and set them up
        guard let rearCamera = AVCaptureDevice.default(for: .video),
              let frontCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
              let audioDevice = AVCaptureDevice.default(for: .audio)
        else {
            throw CameraSetupError.accessDevices
        }

        do {
            let rearCameraInput = try AVCaptureDeviceInput(device: rearCamera)
            let frontCameraInput = try AVCaptureDeviceInput(device: frontCamera)
            let audioInput = try AVCaptureDeviceInput(device: audioDevice)
            let movieOutput = AVCaptureMovieFileOutput()

            if captureSession.canAddInput(rearCameraInput), captureSession.canAddInput(audioInput),
               captureSession.canAddInput(frontCameraInput), captureSession.canAddOutput(movieOutput) {

                captureSession.addInput(rearCameraInput)
                captureSession.addInput(audioInput)
                self.videoDeviceInput = rearCameraInput
                self.rearCameraInput = rearCameraInput
                self.frontCameraInput = frontCameraInput
                captureSession.addOutput(movieOutput)
                self.movieOutput = movieOutput
            }
        } catch {
            throw CameraSetupError.initializeCameraInputs
        }
    }

3. Create methods. Depending on user’s actions on the UI layer, the controller will call the corresponding method.

    func flipCamera() {
        guard let rearCameraIn = rearCameraInput, let frontCameraIn = frontCameraInput else { return }
        if captureSession.inputs.contains(rearCameraIn) {
            captureSession.removeInput(rearCameraIn)
            captureSession.addInput(frontCameraIn)
        } else {
            captureSession.removeInput(frontCameraIn)
            captureSession.addInput(rearCameraIn)
        }
    }

    func stopRecording() {
        if movieOutput.isRecording {
            movieOutput.stopRecording()
        }
        timer?.invalidate()
        timer = nil
    }

    func startRecording() {
        if movieOutput.isRecording == false {
            guard let outputURL = temporaryURL() else { return }
            movieOutput.startRecording(to: outputURL, recordingDelegate: self)
            DispatchQueue.main.asyncAfter(deadline: .now() + 0.1) { [weak self] in
                guard let self = self else { return }
                self.timer = Timer.scheduledTimer(timeInterval: 1, target: self, selector: #selector(self.updateTime), userInfo: nil, repeats: true)
                self.timer?.fire()
            }
        } else {
            stopRecording()
        }
    }

    // Reports the elapsed time of the clip in progress to the delegate once per second
    @objc private func updateTime() {
        delegate?.updateCurrentRecordedTime(movieOutput.recordedDuration)
    }

4. To save a video fragment, you will need a path for it in temporary storage. This method returns that path:

    // Creates temporary storage for the recorded video fragment
    private func temporaryURL() -> URL? {
        let directory = NSTemporaryDirectory() as NSString

        if directory != "" {
            let path = directory.appendingPathComponent(UUID().uuidString + ".mov")
            return URL(fileURLWithPath: path)
        }

        return nil
    }
}

5. Conform to AVCaptureFileOutputRecordingDelegate so the recorded file's path is passed to the controller.

// MARK: - AVCaptureFileOutputRecordingDelegate
// When recording of one clip ends, this sends a link to the file to the delegate
extension Camera: AVCaptureFileOutputRecordingDelegate {
    func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
        if let error = error {
            print("Error recording movie: \(error.localizedDescription)")
        } else {
            delegate?.addRecordedMovie(url: outputFileURL, time: output.recordedDuration)
        }
    }
}

Step 4: Layout for the camera

Create a class that controls the camera at the UI level. The user issues commands through this class, and it notifies its delegate, which forwards the appropriate commands to the Camera class from the previous step.

Note: You will need to add your own icons or use existing ones in iOS.

1. Create a protocol so that your view can inform the controller about user actions.

protocol CameraViewDelegate: AnyObject {
    func tappedRecord(isRecord: Bool)
    func tappedFlipCamera()
    func tappedUpload()
    func tappedDeleteClip()
    func shouldRecord() -> Bool
}

2. Create the camera view class and initialize the necessary properties.

final class CameraView: UIView {
    var isRecord = false {
        didSet {
            if isRecord {
                recordButton.setImage(UIImage(named: "pause.icon"), for: .normal)
            } else {
                recordButton.setImage(UIImage(named: "play.icon"), for: .normal)
            }
        }
    }

    var previewLayer: AVCaptureVideoPreviewLayer?
    weak var delegate: CameraViewDelegate?

    // The buttons are lazy so that `self` is available as the target when they are created
    lazy var recordButton: UIButton = {
        let button = UIButton()
        button.setImage(UIImage(named: "play.icon"), for: .normal)
        button.imageView?.contentMode = .scaleAspectFit
        button.addTarget(self, action: #selector(tapRecord), for: .touchUpInside)
        button.translatesAutoresizingMaskIntoConstraints = false

        return button
    }()

    lazy var flipCameraButton: UIButton = {
        let button = UIButton()
        button.setImage(UIImage(named: "flip.icon"), for: .normal)
        button.imageView?.contentMode = .scaleAspectFit
        button.addTarget(self, action: #selector(tapFlip), for: .touchUpInside)
        button.translatesAutoresizingMaskIntoConstraints = false

        return button
    }()

    lazy var uploadButton: UIButton = {
        let button = UIButton()
        button.setImage(UIImage(named: "upload.icon"), for: .normal)
        button.imageView?.contentMode = .scaleAspectFit
        button.addTarget(self, action: #selector(tapUpload), for: .touchUpInside)
        button.translatesAutoresizingMaskIntoConstraints = false

        return button
    }()

    let clipsLabel: UILabel = {
        let label = UILabel()
        label.textColor = .white
        label.font = .systemFont(ofSize: 14)
        label.textAlignment = .left
        label.text = "Clips: 0"

        return label
    }()

    // Button is a small UIButton subclass from the demo project
    lazy var deleteLastClipButton: Button = {
        let button = Button()
        button.setTitle("", for: .normal)
        button.setImage(UIImage(named: "delete.left.fill"), for: .normal)
        button.addTarget(self, action: #selector(tapDeleteClip), for: .touchUpInside)

        return button
    }()

    // maxRecordTime is the recording limit defined in Step 5
    let recordedTimeLabel: UILabel = {
        let label = UILabel()
        label.text = "0s / \(maxRecordTime)s"
        label.font = .systemFont(ofSize: 14)
        label.textColor = .white
        label.textAlignment = .left

        return label
    }()
}

3. Since the view will show the image from the device’s camera, you need to link it to the session and configure it.

    func setupLivePreview(session: AVCaptureSession) {
        let previewLayer = AVCaptureVideoPreviewLayer(session: session)
        self.previewLayer = previewLayer
        previewLayer.videoGravity = .resizeAspectFill
        previewLayer.connection?.videoOrientation = .portrait
        layer.addSublayer(previewLayer)
        session.startRunning()
        backgroundColor = .black
    }

    // When the size of the view is calculated, pass this size on to the camera preview
    override func layoutSubviews() {
        super.layoutSubviews()
        previewLayer?.frame = bounds
    }

4. Create a layout for UI elements.

    private func initLayout() {
        [clipsLabel, deleteLastClipButton, recordedTimeLabel].forEach {
            $0.translatesAutoresizingMaskIntoConstraints = false
            addSubview($0)
        }

        NSLayoutConstraint.activate([
            flipCameraButton.topAnchor.constraint(equalTo: topAnchor, constant: 10),
            flipCameraButton.rightAnchor.constraint(equalTo: rightAnchor, constant: -10),
            flipCameraButton.widthAnchor.constraint(equalToConstant: 30),
            flipCameraButton.heightAnchor.constraint(equalToConstant: 30),

            recordButton.centerXAnchor.constraint(equalTo: centerXAnchor),
            recordButton.bottomAnchor.constraint(equalTo: bottomAnchor, constant: -5),
            recordButton.widthAnchor.constraint(equalToConstant: 30),
            recordButton.heightAnchor.constraint(equalToConstant: 30),

            uploadButton.leftAnchor.constraint(equalTo: recordButton.rightAnchor, constant: 20),
            uploadButton.bottomAnchor.constraint(equalTo: bottomAnchor, constant: -5),
            uploadButton.widthAnchor.constraint(equalToConstant: 30),
            uploadButton.heightAnchor.constraint(equalToConstant: 30),

            clipsLabel.leftAnchor.constraint(equalTo: leftAnchor, constant: 5),
            clipsLabel.centerYAnchor.constraint(equalTo: uploadButton.centerYAnchor),

            deleteLastClipButton.centerYAnchor.constraint(equalTo: clipsLabel.centerYAnchor),
            deleteLastClipButton.rightAnchor.constraint(equalTo: recordButton.leftAnchor, constant: -15),
            deleteLastClipButton.widthAnchor.constraint(equalToConstant: 30),
            deleteLastClipButton.heightAnchor.constraint(equalToConstant: 30),

            recordedTimeLabel.topAnchor.constraint(equalTo: layoutMarginsGuide.topAnchor),
            recordedTimeLabel.leftAnchor.constraint(equalTo: leftAnchor, constant: 5)
        ])
    }

5. Add the initializer. The controller will transfer the session in order to access the image from the camera:

    convenience init(session: AVCaptureSession) {
        self.init(frame: .zero)
        setupLivePreview(session: session)
        addSubview(recordButton)
        addSubview(flipCameraButton)
        addSubview(uploadButton)
        initLayout()
    }

6. Create methods that will be called when the user taps the buttons.

    @objc func tapRecord() {
        guard delegate?.shouldRecord() == true else { return }
        isRecord = !isRecord
        delegate?.tappedRecord(isRecord: isRecord)
    }

    @objc func tapFlip() {
        delegate?.tappedFlipCamera()
    }

    @objc func tapUpload() {
        delegate?.tappedUpload()
    }

    @objc func tapDeleteClip() {
        delegate?.tappedDeleteClip()
    }
}

Step 5: Interaction with recorded fragments

On an iPhone, the camera records video in fragments. When the user decides to upload the video, you need to combine the fragments into one file and send it to the server. Create another class that handles this task.

Note: During merging, an additional file is created to collect all the fragments, but the fragments themselves remain on disk until the merge is complete. In the worst case, this can exhaust storage and crash the application. To avoid this, we recommend limiting the allowed recording time.
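
Later snippets (the CameraView timer label and the controller's shouldRecord check in Step 7) reference a maxRecordTime constant for exactly this purpose. It isn't defined anywhere in this guide, so here is an assumed definition; the value is illustrative:

// Assumed recording limit, in seconds; the demo project's actual value may differ.
let maxRecordTime: Double = 60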

import Foundation
import AVFoundation

final class VideoCompositionWriter: NSObject {
    private func merge(recordedVideos: [AVAsset]) -> AVMutableComposition {
        // Create an empty composition with empty video and audio tracks
        let mainComposition = AVMutableComposition()
        let compositionVideoTrack = mainComposition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
        let compositionAudioTrack = mainComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)

        // Correct the video orientation
        compositionVideoTrack?.preferredTransform = CGAffineTransform(rotationAngle: .pi / 2)

        // Add the video and audio tracks from each asset to the composition (via the composition tracks)
        var insertTime = CMTime.zero
        for i in recordedVideos.indices {
            let video = recordedVideos[i]
            let duration = video.duration
            let timeRangeVideo = CMTimeRangeMake(start: CMTime.zero, duration: duration)
            let trackVideo = video.tracks(withMediaType: .video)[0]
            let trackAudio = video.tracks(withMediaType: .audio)[0]

            try! compositionVideoTrack?.insertTimeRange(timeRangeVideo, of: trackVideo, at: insertTime)
            try! compositionAudioTrack?.insertTimeRange(timeRangeVideo, of: trackAudio, at: insertTime)

            insertTime = CMTimeAdd(insertTime, duration)
        }
        return mainComposition
    }

    /// Combines all recorded clips into one file
    func mergeVideo(_ documentDirectory: URL, filename: String, clips: [URL], completion: @escaping (Bool, URL?) -> Void) {
        var assets: [AVAsset] = []
        var totalDuration = CMTime.zero

        for clip in clips {
            let asset = AVAsset(url: clip)
            assets.append(asset)
            totalDuration = CMTimeAdd(totalDuration, asset.duration)
        }

        let mixComposition = merge(recordedVideos: assets)

        let url = documentDirectory.appendingPathComponent("link_\(filename)")
        guard let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) else { return }
        exporter.outputURL = url
        exporter.outputFileType = .mp4
        exporter.shouldOptimizeForNetworkUse = true

        exporter.exportAsynchronously {
            DispatchQueue.main.async {
                if exporter.status == .completed {
                    completion(true, exporter.outputURL)
                } else {
                    completion(false, nil)
                }
            }
        }
    }
}

Step 6: Metadata for the videos

There is a specific set of actions for video uploading:

  1. Recording a video
  2. Using your token and the name of the future video, creating a request to the server to create a container for the video file
  3. Getting the usual VOD data in the response
  4. Sending a request for metadata using the token and the VOD ID
  5. Getting metadata in the response
  6. Uploading the video via TUSKit using metadata

Create requests with models. You will use Apple's Decodable protocol with a CodingKeys enumeration for easier data parsing.

1. Create a model for VOD, which will contain the data that you need.

struct VOD: Decodable {
    let name: String
    let id: Int
    let screenshot: URL?
    let hls: URL?

    enum CodingKeys: String, CodingKey {
        case name, id, screenshot
        case hls = "hls_url"
    }
}

2. Create a CreateVideoRequest in order to create an empty container for the video on the server. The VOD model will come in response.

struct CreateVideoRequest: DataRequest {
    typealias Response = VOD

    let token: String
    let videoName: String

    var url: String { GcoreAPI.videos.rawValue }
    var method: HTTPMethod { .post }

    var headers: [String: String] {
        [ "Authorization": "Bearer \(token)" ]
    }

    var body: Data? {
        try? JSONEncoder().encode([
            "name": videoName
        ])
    }
}

3. Create a VideoMetadata model that will contain data for uploading videos from the device to the server and the corresponding request for it.

struct VideoMetadata: Decodable {
    struct Server: Decodable {
        let hostname: String
    }

    struct Video: Decodable {
        let name: String
        let id: Int
        let clientID: Int

        enum CodingKeys: String, CodingKey {
            case name, id
            case clientID = "client_id"
        }
    }

    let servers: [Server]
    let video: Video
    let token: String

    var uploadURLString: String {
        "https://" + (servers.first?.hostname ?? "") + "/upload"
    }
}

// MARK: Request
struct VideoMetadataRequest: DataRequest {
    typealias Response = VideoMetadata

    let token: String
    let videoId: Int

    var url: String { GcoreAPI.videos.rawValue + "/\(videoId)/" + "upload" }
    var method: HTTPMethod { .get }

    var headers: [String: String] {
        [ "Authorization": "Bearer \(token)" ]
    }
}

Step 7: Putting the pieces together

We’ve used the code from our demo application as an example. The controller class is described here together with a custom view. It links the camera and the UI, and it takes responsibility for creating the requests that obtain metadata and then upload the video to the server.

Create the view controller. It will display the camera view and a text field for the video title. The controller has three states (upload, error, common).

MainView

First, create the view.

1. Create a delegate protocol to handle changing the name of the video.

protocol UploadMainViewDelegate: AnyObject {
    func videoNameDidUpdate(_ name: String)
}

2. Create the view class with its state handling; the UI elements come in the next step. The camera view will be added by the controller.

final class UploadMainView: UIView {
    enum State {
        case upload, error, common
    }

    var cameraView: CameraView? {
        didSet { initLayoutForCameraView() }
    }

    var state: State = .common {
        didSet {
            switch state {
            case .upload: showUploadState()
            case .error: showErrorState()
            case .common: showCommonState()
            }
        }
    }

    weak var delegate: UploadMainViewDelegate?
}

3. Add the initialization of UI elements here, except for the camera view. It will be added by the controller.

    // TextField is a small UITextField subclass from the demo project
    let videoNameTextField = TextField(placeholder: "Enter the video name")

    let accessCaptureFailLabel: UILabel = {
        let label = UILabel()
        label.text = NSLocalizedString("Error!\nUnable to access capture devices.", comment: "")
        label.textColor = .black
        label.numberOfLines = 2
        label.isHidden = true
        label.textAlignment = .center
        return label
    }()

    let uploadIndicator: UIActivityIndicatorView = {
        let indicator = UIActivityIndicatorView(style: .gray)
        indicator.transform = CGAffineTransform(scaleX: 2, y: 2)
        return indicator
    }()

    let videoIsUploadingLabel: UILabel = {
        let label = UILabel()
        label.text = NSLocalizedString("video is uploading", comment: "")
        label.font = UIFont.systemFont(ofSize: 16)
        label.textColor = .gray
        label.isHidden = true
        return label
    }()

4. Create the layout for the elements. Since the camera view is added later, its layout lives in a separate method.

    private func initLayoutForCameraView() {
        guard let cameraView = cameraView else { return }
        cameraView.translatesAutoresizingMaskIntoConstraints = false
        insertSubview(cameraView, at: 0)

        NSLayoutConstraint.activate([
            cameraView.leftAnchor.constraint(equalTo: leftAnchor),
            cameraView.topAnchor.constraint(equalTo: topAnchor),
            cameraView.rightAnchor.constraint(equalTo: rightAnchor),
            cameraView.bottomAnchor.constraint(equalTo: videoNameTextField.topAnchor),
        ])
    }

    private func initLayout() {
        let views = [videoNameTextField, accessCaptureFailLabel, uploadIndicator, videoIsUploadingLabel]
        views.forEach {
            $0.translatesAutoresizingMaskIntoConstraints = false
            addSubview($0)
        }

        let keyboardBottomConstraint = videoNameTextField.bottomAnchor.constraint(equalTo: layoutMarginsGuide.bottomAnchor)
        self.keyboardBottomConstraint = keyboardBottomConstraint

        NSLayoutConstraint.activate([
            keyboardBottomConstraint,
            videoNameTextField.heightAnchor.constraint(equalToConstant: videoNameTextField.intrinsicContentSize.height + 20),
            videoNameTextField.leftAnchor.constraint(equalTo: leftAnchor),
            videoNameTextField.rightAnchor.constraint(equalTo: rightAnchor),

            accessCaptureFailLabel.centerYAnchor.constraint(equalTo: centerYAnchor),
            accessCaptureFailLabel.centerXAnchor.constraint(equalTo: centerXAnchor),

            uploadIndicator.centerYAnchor.constraint(equalTo: centerYAnchor),
            uploadIndicator.centerXAnchor.constraint(equalTo: centerXAnchor),

            videoIsUploadingLabel.centerXAnchor.constraint(equalTo: centerXAnchor),
            videoIsUploadingLabel.topAnchor.constraint(equalTo: uploadIndicator.bottomAnchor, constant: 20)
        ])
    }

5. Create the methods responsible for showing the different states.

    private func showUploadState() {
        videoNameTextField.isHidden = true
        uploadIndicator.startAnimating()
        videoIsUploadingLabel.isHidden = false
        accessCaptureFailLabel.isHidden = true
        cameraView?.recordButton.setImage(UIImage(named: "play.icon"), for: .normal)
        cameraView?.isHidden = true
    }

    private func showErrorState() {
        accessCaptureFailLabel.isHidden = false
        videoNameTextField.isHidden = true
        uploadIndicator.stopAnimating()
        videoIsUploadingLabel.isHidden = true
        cameraView?.isHidden = true
    }

    private func showCommonState() {
        videoNameTextField.isHidden = false
        uploadIndicator.stopAnimating()
        videoIsUploadingLabel.isHidden = true
        accessCaptureFailLabel.isHidden = true
        cameraView?.isHidden = false
    }

6. Add methods and a variable to handle the keyboard correctly; the video title input field must always remain visible.

    private var keyboardBottomConstraint: NSLayoutConstraint?

    private func addObserver() {
        [UIResponder.keyboardWillShowNotification, UIResponder.keyboardWillHideNotification].forEach {
            NotificationCenter.default.addObserver(
                self,
                selector: #selector(keyboardChange),
                name: $0,
                object: nil
            )
        }
    }

    @objc private func keyboardChange(notification: Notification) {
        guard let keyboardFrame = notification.userInfo?[UIResponder.keyboardFrameEndUserInfoKey] as? NSValue,
              let duration = notification.userInfo?[UIResponder.keyboardAnimationDurationUserInfoKey] as? Double
        else {
            return
        }

        let keyboardHeight = keyboardFrame.cgRectValue.height - safeAreaInsets.bottom

        if notification.name == UIResponder.keyboardWillShowNotification {
            keyboardBottomConstraint?.constant -= keyboardHeight
        } else {
            keyboardBottomConstraint?.constant += keyboardHeight
        }
        UIView.animate(withDuration: duration) {
            self.layoutIfNeeded()
        }
    }

7. Rewrite the initializer. In deinit, unsubscribe from notifications related to the keyboard.

    override init(frame: CGRect) {
        super.init(frame: frame)
        initLayout()
        backgroundColor = .white
        videoNameTextField.delegate = self
        addObserver()
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        initLayout()
        backgroundColor = .white
        videoNameTextField.delegate = self
        addObserver()
    }

    deinit {
        NotificationCenter.default.removeObserver(self)
    }

8. Conform the view to UITextFieldDelegate to intercept the necessary text field events.

extension UploadMainView: UITextFieldDelegate {
    func textFieldShouldReturn(_ textField: UITextField) -> Bool {
        delegate?.videoNameDidUpdate(textField.text ?? "")
        return textField.resignFirstResponder()
    }

    func textField(_ textField: UITextField, shouldChangeCharactersIn range: NSRange, replacementString string: String) -> Bool {
        // Limit the video name to 20 characters while still allowing deletions
        guard let text = textField.text, let textRange = Range(range, in: text) else { return false }
        let updated = text.replacingCharacters(in: textRange, with: string)
        return updated.count < 21
    }
}

Controller

Create ViewController.

1. Specify the necessary variables and configure the controller.

final class UploadController: BaseViewController {
    private let mainView = UploadMainView()

    private var camera: Camera?
    private var captureSession = AVCaptureSession()
    private var filename = ""
    private var writingVideoURL: URL!

    // Recorded-time bookkeeping used by the camera callbacks in this step;
    // these properties are implied by the demo code but weren't shown in the original guide
    private var lastRecordedTime: Double = 0
    private var currentRecordedTime: Double = 0
    private var totalRecordedTime: Double { lastRecordedTime + currentRecordedTime }

    private var clips: [(URL, CMTime)] = [] {
        didSet { mainView.cameraView?.clipsLabel.text = "Clips: \(clips.count)" }
    }

    private var isUploading = false {
        didSet { mainView.state = isUploading ? .upload : .common }
    }

    // Replace the default view with ours
    override func loadView() {
        mainView.delegate = self
        view = mainView
    }

    // Initialize the camera and the camera view
    override func viewDidLoad() {
        super.viewDidLoad()
        do {
            camera = try Camera(captureSession: captureSession)
            camera?.delegate = self
            mainView.cameraView = CameraView(session: captureSession)
            mainView.cameraView?.delegate = self
        } catch {
            debugPrint((error as NSError).description)
            mainView.state = .error
        }
    }
}
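
BaseViewController, createAlert, refreshToken, and errorHandle are helpers from the demo project that this guide doesn't show. A minimal sketch of what they might do, assuming refreshToken simply re-authenticates with the stored credentials, is:

import UIKit

// Sketch of the demo's base controller helpers; the names match how they are
// used below, but the demo's actual implementations may differ.
class BaseViewController: UIViewController {
    // Builds a simple alert; the default title is an assumption
    func createAlert(title: String = "Something went wrong") -> UIAlertController {
        let alert = UIAlertController(title: title, message: nil, preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "OK", style: .default))
        return alert
    }

    // Shows an error alert on the main thread
    func errorHandle(_ error: Error) {
        DispatchQueue.main.async {
            self.present(self.createAlert(title: error.localizedDescription), animated: true)
        }
    }

    // Obtains a fresh access token by re-authenticating with the stored credentials
    func refreshToken() {
        guard let username = Settings.shared.username,
              let password = Settings.shared.userPassword else { return }
        let request = AuthenticationRequest(username: username, password: password)
        HTTPCommunicator().request(request) { result in
            if case .success(let tokens) = result {
                Settings.shared.accessToken = tokens.access
                Settings.shared.refreshToken = tokens.refresh
            }
        }
    }
}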

2. Add the methods that respond to taps on the upload button. These build the full video from the small fragments, create an empty container on the server, get the metadata, and then upload the video.

    // Called when the user taps the upload button
    private func mergeSegmentsAndUpload() {
        guard !isUploading, let camera = camera else { return }
        isUploading = true
        camera.stopRecording()

        if let directoryURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first {
            let clips = clips.map { $0.0 }
            // Create the full video from the clips
            VideoCompositionWriter().mergeVideo(directoryURL, filename: "\(filename).mp4", clips: clips) { [weak self] success, outURL in
                guard let self = self else { return }

                if success, let outURL = outURL {
                    clips.forEach { try? FileManager.default.removeItem(at: $0) }
                    self.clips = []

                    let videoData = try! Data(contentsOf: outURL)
                    let writingURL = FileManager.default.temporaryDirectory.appendingPathComponent(outURL.lastPathComponent)
                    try! videoData.write(to: writingURL)
                    self.writingVideoURL = writingURL
                    self.createVideoPlaceholderOnServer()
                } else {
                    self.isUploading = false
                    self.mainView.state = .common
                    self.present(self.createAlert(), animated: true)
                }
            }
        }
    }

    // Sends the createVideo request
    private func createVideoPlaceholderOnServer() {
        guard let token = Settings.shared.accessToken else {
            refreshToken()
            return
        }

        let http = HTTPCommunicator()
        let request = CreateVideoRequest(token: token, videoName: filename)

        http.request(request) { [weak self] result in
            guard let self = self else { return }

            switch result {
            case .success(let vod):
                self.loadMetadataFor(vod: vod)
            case .failure(let error):
                if let error = error as? ErrorResponse, error == .invalidToken {
                    Settings.shared.accessToken = nil
                    self.refreshToken()
                } else {
                    self.errorHandle(error)
                }
            }
        }
    }

    // Requests the upload metadata from the server
    func loadMetadataFor(vod: VOD) {
        guard let token = Settings.shared.accessToken else {
            refreshToken()
            return
        }

        let http = HTTPCommunicator()
        let request = VideoMetadataRequest(token: token, videoId: vod.id)
        http.request(request) { [weak self] result in
            guard let self = self else { return }

            switch result {
            case .success(let metadata):
                self.uploadVideo(with: metadata)
            case .failure(let error):
                if let error = error as? ErrorResponse, error == .invalidToken {
                    Settings.shared.accessToken = nil
                    self.refreshToken()
                } else {
                    self.errorHandle(error)
                }
            }
        }
    }

    // Uploads the video to the server via TUSKit
    func uploadVideo(with metadata: VideoMetadata) {
        var config = TUSConfig(withUploadURLString: metadata.uploadURLString)
        config.logLevel = .All

        TUSClient.setup(with: config)
        TUSClient.shared.delegate = self

        let upload = TUSUpload(withId: filename,
                               andFilePathURL: writingVideoURL,
                               andFileType: ".mp4")
        upload.metadata = [
            "filename": filename,
            "client_id": String(metadata.video.clientID),
            "video_id": String(metadata.video.id),
            "token": metadata.token
        ]

        TUSClient.shared.createOrResume(forUpload: upload)
    }

3. Conform to the TUSDelegate protocol to track errors and successful uploads. It can also be used to display upload progress.

// MARK: - TUSDelegate
extension UploadController: TUSDelegate {

    func TUSProgress(bytesUploaded uploaded: Int, bytesRemaining remaining: Int) { }

    func TUSProgress(forUpload upload: TUSUpload, bytesUploaded uploaded: Int, bytesRemaining remaining: Int) { }

    func TUSFailure(forUpload upload: TUSUpload?, withResponse response: TUSResponse?, andError error: Error?) {
        if let error = error {
            print((error as NSError).description)
        }
        present(createAlert(), animated: true)
        mainView.state = .common
    }

    func TUSSuccess(forUpload upload: TUSUpload) {
        let alert = createAlert(title: "Upload success")
        present(alert, animated: true)
        mainView.state = .common
    }
}

4. Conform to the protocols of the main view, the camera, and the camera view to tie the module's work together.

// MARK: - CameraViewDelegate, CameraDelegate
extension UploadController: CameraViewDelegate, CameraDelegate {
    func updateCurrentRecordedTime(_ time: CMTime) {
        currentRecordedTime = time.seconds
    }

    func tappedDeleteClip() {
        guard let lastClip = clips.last else { return }
        lastRecordedTime -= lastClip.1.seconds
        clips.removeLast()
    }

    func addRecordedMovie(url: URL, time: CMTime) {
        lastRecordedTime += time.seconds
        clips += [(url, time)]
    }

    func shouldRecord() -> Bool {
        totalRecordedTime < maxRecordTime
    }

    func tappedRecord(isRecord: Bool) {
        if isRecord {
            camera?.startRecording()
        } else {
            camera?.stopRecording()
        }
    }

    func tappedUpload() {
        guard !clips.isEmpty && filename != "" else { return }
        mergeSegmentsAndUpload()
    }

    func tappedFlipCamera() {
        camera?.flipCamera()
    }
}

extension UploadController: UploadMainViewDelegate {
    // Called when the user changes the video name in the view
    func videoNameDidUpdate(_ name: String) {
        filename = name
    }
}

This was the last step; the job is done! The new feature has been added to your app and configured.

Result

Now you have a full-fledged module for recording and uploading videos.

Conclusion

Through this guide, you’ve learned how to add a VOD uploading feature to your iOS application. We hope this solution will satisfy your needs and delight your users with new options.

Also, we invite you to take a look at our demo application to see the result of setting up VOD viewing in an iOS project.


Related Articles

How to choose the right technology tools to combat digital piracy

One of the biggest challenges facing the media and entertainment industry is digital piracy, where stolen content is redistributed without authorization. This issue causes significant revenue and reputational losses for media companies. Consumers who use these unregulated services also face potential threats from malware and other security risks.Governments, regulatory bodies, and private organizations are increasingly taking the ramifications of digital piracy seriously. In the US, new legislation has been proposed that would significantly crack down on this type of activity, while in Europe, cloud providers are being held liable by the courts for enabling piracy. Interpol and authorities in South Korea have also teamed up to stop piracy in its tracks.In the meantime, you can use technology to help stop digital piracy and safeguard your company’s assets. This article explains anti-piracy technology tools that can help content providers, streaming services, and website owners safeguard their proprietary media: geo-blocking, digital rights management (DRM), secure tokens, and referrer validation.Geo-blockingGeo-blocking (or country access policy) restricts access to content based on a user’s geographic location, preventing unauthorized access and limiting content distribution to specific regions. It involves setting rules to allow or deny access based on the user’s IP address and location in order to comply with regional laws or licensing agreements.Pros:Controls access by region so that content is only available in authorized marketsHelps comply with licensing agreementsCons:Can be bypassed with VPNs or proxiesRequires additional security measures to be fully effectiveTypical use cases: Geo-blocking is used by streaming platforms to restrict access to content, such as sports events or film premieres, based on location and licensing agreements. It’s also helpful for blocking services in high-risk areas but should be used alongside other anti-piracy tools for better and more comprehensive protection.Referrer validationReferrer validation is a technique that checks where a content request is coming from and prevents unauthorized websites from directly linking to and using content. It works by checking the “referrer” header sent by the browser to determine the source of the request. If the referrer is from an unauthorized domain, the request is blocked or redirected. This allows only trusted sources to access your content.Pros:Protects bandwidth by preventing unauthorized access and misuse of resourcesGuarantees content is only accessed by trusted sources, preventing piracy or abuseCons:Can accidentally block legitimate requests if referrer headers are not correctly sentMay not work as intended if users access content via privacy-focused methods that strip referrer data, leading to false positivesTypical use cases: Content providers commonly use referrer validation to prevent unauthorized streaming or hotlinking, which involves linking to media from another website or server without the owner’s permission. It’s especially useful for streamers who want to make sure their content is only accessed through their official platforms. However, it should be combined with other security measures for more substantial protection.Secure tokensSecure tokens and protected temporary links provide enhanced security by granting temporary access to specific resources so only authorized users can access sensitive content. 
Secure tokens are unique identifiers that, when linked to a user’s account, allow them to access protected resources for a limited time. Protected temporary links further restrict access by setting expiration dates, meaning the link becomes invalid after a set time.Pros:Provides a high level of security by allowing only authorized users to access contentTokens are time-sensitive, which prevents unauthorized access after they expireHarder to circumvent compared to traditional password protection methodsCons:Risk of token theft if they’re not managed or stored securelyRequires ongoing management and rotation of tokens, adding complexityCan be challenging to implement properly, especially in high-traffic environmentsTypical use cases: Streaming platforms use secure tokens and protected temporary links so only authenticated users can access premium content, like movies or live streams. They are also useful for secure file downloads or limiting access to exclusive resources, making them effective for protecting digital content and preventing unauthorized sharing or piracy.Digital rights managementDigital rights management (DRM) refers to a set of technologies designed to protect digital content from unauthorized use so that only authorized users can access, copy, or share it, according to licensing agreements. DRM uses encryption, licensing, and authentication mechanisms to control access to digital resources so that only authorized users can view or interact with the content. While DRM offers strong protection against piracy, it comes with higher complexity and setup costs than other security methods.Pros:Robust protection against unauthorized copying, sharing, and piracyHelps safeguard intellectual property and revenue streamsEnforces compliance with licensing agreementsCons:Can be complex and expensive to implementMay cause inconvenience for users, such as limiting playback on unauthorized devices or restricting sharingPotential system vulnerabilities or compatibility issuesTypical use cases: DRM is commonly used by streaming services to protect movies, TV shows, and music from piracy. It can also be used for e-books, software, and video games, ensuring that content is only used by licensed users according to the terms of the agreement. DRM solutions can vary, from software-based solutions for media files to hardware-based or cloud-based DRM for more secure distribution.Protect your content from digital piracy with GcoreDigital piracy remains a significant challenge for the media and entertainment industry as it poses risks in terms of both revenue and security. To combat this, partnering with a cloud provider that can actively monitor and protect your digital assets through advanced multi-layer security measures is essential.At Gcore, our CDN and streaming solutions give rights holders peace of mind that their assets are protected, offering the features mentioned in this article and many more besides. We also offer advanced cybersecurity tools, including WAAP (web application and API protection) and DDoS protection, which further integrate with and enhance these security measures. We provide trial limitations for streamers to curb piracy attempts and respond swiftly to takedown requests from rights holders and authorities, so you can rest assured that your assets are in safe hands.Get in touch to learn more about combatting digital piracy

The latest updates for Gcore Video Streaming: lower latency, smarter AI, and seamless scaling

At Gcore, we’re committed to continuous innovation in video streaming. This month, we’re introducing significant advancements in low-latency streaming, AI-driven enhancements, and infrastructure upgrades, helping you deliver seamless, high-quality content at scale.Game-changing low-latency streamingOur latest low-latency live streaming solutions are now fully available in production, delivering real-time engagement with unmatched precision:WebRTC to HLS/DASH: Now in production, enabling real-time transcoding and delivery for WebRTC streams using HTTP-based LL-HLS and LL-DASH.LL-DASH with two-second latency: Optimized for ultra-fast content delivery via our global CDN, enabling minimal delay for seamless streaming.LL-HLS with three-second latency: Designed to deliver an uninterrupted and near-real-time live streaming experience.Gcore’s live streaming dashboard with OBS Studio integration, enabling real-time transcoding and delivery with low-latency HLS/DASHWhat this means for youWith glass-to-glass latency as low as 2–3 seconds, these advancements unlock new possibilities for real-time engagement. Whether you’re hosting live auctions, powering interactive gaming experiences, or enabling seamless live shopping, Gcore Video Streaming’s low-latency options keep your viewers connected without delay.Our solution integrates effortlessly with hls.js, dash.js, native Safari support, and our HTML web player, guaranteeing smooth playback across devices. Backed by our global CDN infrastructure, you can count on reliable, high-performance streaming at scale, no matter where your audience is.Exciting enhancements: AI and live streaming featuresWe’re making live streaming smarter with cutting-edge AI capabilities:Live stream recording with overlays: Record live streams while adding dynamic overlays such as webcam pop-ups, chat, alerts, advertisement banners, and time or weather widgets. This feature allows you to create professional, branded content without post-production delays. Whether you’re broadcasting events, tutorials, or live commerce streams, overlays help maintain a polished and engaging viewer experience.AI-powered VOD subtitles: Advanced AI automatically generates and translates subtitles into more than 100 languages, helping you expand your content’s reach to global audiences. This ensures accessibility while improving engagement across different regions.Deliver seamless live experiences with GcoreOur commitment to innovation continues, bringing advancements to enhance performance, efficiency, and streaming quality. Stay tuned for even lower latency and more AI-driven enhancements coming soon!Gcore Video Streaming empowers you to deliver seamless live experiences for auctions, gaming, live shopping, and other real-time applications. Get reliable, high-performance content delivery—whether you’re scaling to reach global audiences or delivering unique experiences to niche communities.Try Gcore Video Streaming today

How we optimized our CDN infrastructure for paid and free plans

At Gcore, we’re dedicated to delivering top-tier performance and reliability. To further enhance performance for all our customers, we recently made a significant change: we moved our CDN free-tier customers to a separate, physically isolated infrastructure. By isolating free-tier traffic, customers on paid plans receive uninterrupted, premium-grade service, while free users benefit from an environment tailored to their needs.Why we’ve separated free and paid plan infrastructureThis optimization has been driven by three key factors: performance, stability and scalability, and improved reporting.Providing optimal performanceFree-tier users are essential to our ecosystem, helping to stress-test our systems and extend our reach. However, their traffic can be unpredictable. By isolating free traffic, we provide premium customers with consistently high performance, minimizing disruption risks.Enhancing stability and scalabilityWith separate infrastructures, we can better manage traffic spikes and load balancing without impacting premium services. This improves overall platform stability and scalability, guaranteeing that both customer groups will enjoy a reliable experience.Improving reporting and performance insightsAlongside infrastructure enhancements, we’ve upgraded our reports page to offer clearer visibility into traffic and performance:New 95th percentile bandwidth graph: Helps users analyze traffic patterns more effectively.Improved aggregated bandwidth view: Makes it easier to assess usage trends at a glance.These tools empower you to make more informed decisions with accurate and accessible data.95th percentile bandwidth usage over the last three months, highlighting a significant increase in January 2025Strengthening content delivery with query string forwardingWe’ve also introduced a standardized query string forwarding feature to boost content delivery stability. By replacing our previous custom approach, we achieved the following:Increased stability: Reducing the risk of disruptionsLower maintenance requirements: Freeing up engineering resourcesSmoother content delivery: Enhancing experiences for streaming and content-heavy applicationsQuery string forwarding settings allow seamless parameter transfer for media deliveryWhat this means for our customersFor customers on paid plans: You can expect a more stable, high-performance service without the disruptions caused by fluctuating free-tier activity. Enhanced reporting and streamlined content delivery also empower you to make better, data-driven decisions.For free-tier customers: You will continue to have access to our services on a dedicated infrastructure that has been specifically optimized for your needs. This setup allows us to innovate and improve performance without compromising service quality.Strengthening Gcore CDN for long-term growthAt Gcore, we continuously refine our CDN to enable top-tier performance, reliability, and scalability. The recent separation of free-tier traffic, improved reporting capabilities, and optimized content delivery are key to strengthening our infrastructure. These updates enhance service quality for all users, minimizing disruptions and improving traffic management.We remain committed to pushing the boundaries of CDN efficiency, delivering faster load times, robust security, and seamless scalability. Stay tuned for more enhancements as we continue evolving our platform to meet the growing demands of businesses worldwide.Explore Gcore CDN

Introducing low-latency live streams with LL-HLS and LL-DASH

We are thrilled to introduce low-latency live streams for Gcore Video Streaming using LL-HLS and LL-DASH protocols. With a groundbreaking glass-to-glass delay of just 2.2–3.0 seconds, this improvement brings unparalleled speed to your viewers’ live-streaming experience.Alt: Video illustrating the workflow of low-latency live streaming using LL-HLS and LL-DASH protocolsThis demonstration shows the minimal latency of our live streaming solution—just three seconds between the original broadcast (left) and what viewers see online (right).Key use cases and benefits of low-latency streamingOur low-latency streaming solutions address the specific needs of content providers, broadcasters, and developers, enabling seamless experiences for diverse use cases.Ultra-fast live streamingGet real-time delivery with glass-to-glass latency of ±2.2 seconds for LL-DASH and ±3.0 seconds for LL-HLS.Deliver immediate viewer engagement, ideal for industries such as live sports, e-sports tournaments, and news broadcasting.Meet the expectations of audiences who demand instant access to live events without noticeable delays.Enhanced viewer interactionReduce the delay between live actions and audience reactions, fostering a more immersive viewing experience.Support real-time interaction for use cases like virtual conferences, live auctions, Q&A sessions, and live shopping platforms.Flexible player supportSeamlessly integrate with your existing player setups, including popular options like hls.js, dash.js, and native Safari support.Use our new HTML web player for effortless integration or maintain your current custom player workflows.Global scalability and reliabilityLeverage our robust CDN network with 200+ Tbps capacity and 180+ PoPs to enable low-latency streams worldwide.Deliver a consistent, high-quality experience for global audiences, even during peak traffic events.Cost-efficiencyMinimize operational overhead with a streamlined solution that combines advanced encoding, efficient packaging, and reliable delivery.How it worksOur real-time transcoder and JIT packager generate streaming manifests and chunks optimized for low latency:For LL-HLS: The HLS manifest (.m3u8) and chunks comply with the latest standards. Tags like #EXT-X-PART, #EXT-X-PRELOAD-HINT, and others are dynamically generated with best-in-class parameters. Chunks are loaded instantaneously as they appear at the origin.For LL-DASH: The DASH manifest (.mpd) leverages advanced MPEG-DASH features. Chunks are transmitted to viewers as soon as encoding begins, with caching finalized once the chunk is fully fetched.Combined with our fast and reliable CDN delivery, live streams are accessible globally with minimal delay. Our CDN network has an extensive capacity and 180+ PoPs to deliver exceptional performance, even for high-traffic events.See a live demo in action!Try WebRTC to HLS/DASH todayWe’re also excited to remind you about our WebRTC to HLS/DASH delivery functionality. This innovative feature allows streams created in a standard browser via WebRTC to be:Transcoded on our servers.Delivered with low latency to viewers using HTTP-based LL-HLS and LL-DASH protocols through our CDN.Try it now in the Gcore Customer Portal.Shaping the future of streamingBy nearly halving the glass-to-glass delivery time compared to our previous solution, Gcore Video Streaming enables you to deliver a seamless experience for live events, real-time interactions, and other latency-sensitive applications. 
Whether you’re broadcasting to a global audience or engaging niche communities, our platform provides the tools you need to thrive in today’s dynamic streaming landscape.Watch our demo to see the difference and explore how this solution fits into your workflows.Visit our demo player

Gcore 2024 round-up: 10 highlights from our 10th year

It’s been a busy and exciting year here at Gcore, not least because we celebrated our 10th anniversary back in February. Starting in 2014 with a focus on gaming, Gcore is now a global edge AI, cloud, network, and security solutions provider, supporting businesses from a wide range of industries worldwide. As we start to look forward to the new year, we took some time to reflect on ten of our highlights from 2024.

1. WAAP launch

In September, we launched our WAAP security solution (web application and API protection) following the acquisition of StackPath’s edge WAAP. Gcore WAAP is a genuinely innovative product that offers customers DDoS protection, bot management, and a web application firewall, helping protect businesses from the ever-increasing threat of cyber attacks. It brings next-gen AI features to customers while remaining intuitive to use, meaning businesses of all sizes can futureproof their web app and API protection against even the most sophisticated threats.

“My highlight of the year was the StackPath WAAP acquisition, which enabled us to successfully deliver an enterprise-grade web security solution at the edge to our customers in a very short time.” Itamar Eshet, Senior Product Manager, Security

2. Fundraising round: investing in the future

In July, we raised $60m in Series A funding, reflecting investors’ confidence in the continued growth and future of Gcore. Next year will be huge for us in terms of AI development, and this funding will accelerate our growth in this area and allow us to bring even more innovative solutions to our customers.

3. Innovations in AI

In 2024, we upped our AI offerings, including improved AI services for Gcore Video Streaming: AI ASR for transcription and translation, and AI content moderation. As AI is at the forefront of our products and services, we also provided insights into how regulations are changing worldwide and how AI will likely affect all aspects of digital experiences. We already have many new AI developments in the pipeline for 2025, so watch this space…

4. Global expansions

We had some exciting expansions in terms of new cloud capabilities. We expanded our Edge Cloud offerings in new locations, including Vietnam and South Korea, and in Finland, we boosted our Edge AI capabilities with a new AI cluster and two cutting-edge GPUs. Our AI expansion was further bolstered when we introduced the H200 and GB200 in Luxembourg. We also added new PoPs worldwide in locations such as Munich, Riyadh, and Casablanca, demonstrating our dedication to providing reliable and fast content delivery globally.

5. FastEdge launch

We kicked off the year with the launch of FastEdge. This lightweight edge computing solution runs on our global Edge Network and delivers exceptional performance for serverless apps and scripts, making handling dynamic content even faster and smoother. In an innovative experiment, we ran an AI image recognition model on FastEdge, with the Gcore team volunteering their pets to test its performance. Check out the white paper to discover our pets and our technological edge.

6. Partnerships

We formed some exciting global partnerships in 2024. In November, we launched a joint venture with Ezditek, an innovator in data center and digital infrastructure services in Saudi Arabia. The joint venture will build, train, and deploy generative AI solutions locally and globally. We also established some important strategic partnerships. Together with Sesterce, a leading European provider of AI infrastructure, we can help more businesses meet the rising challenges of scaling from AI pilot projects to full-scale implementation. We also partnered with LetzAI, a Luxembourg-based AI startup, to accelerate its mission of developing one of the world’s most comprehensive generative AI platforms.

7. Events

It wasn’t all online. We also ventured out into the real world, making new connections at global technology events, including the WAICF AI conference in Cannes; Viva Tech in Paris; Mobile World Congress in Barcelona; Gamescom in Cologne in August; IBC (the International Broadcasting Convention) in Amsterdam; and Connected World KSA in Saudi Arabia just last month. We look forward to meeting even more of you next year. Here are a few snapshots from 2024.

(Photos: Gamescom, IBC)

8. New container registry solution

September kicked off with the beta launch of Gcore Container Registry, one of the backbones of our cloud offering. It streamlines your image storage and management, keeping your applications running smoothly and consistently across various environments.

9. GigaOm recognition

Being recognized by outside influences is always a moment to remember. In August, we were thrilled to receive recognition from tech analyst GigaOm, which named Gcore an outperformer in its field. The prestigious accolade highlights Gcore as a leader in platform capability, innovation, and market impact, as assessed by GigaOm’s rigorous criteria.

10. New customer success stories

We were delighted to share some of the work we’ve done for our customers this year: we helped gaming company Fawkes Games mitigate DDoS attacks, and we provided the infrastructure behind the sports technology offering of Austrian sports broadcaster and streaming platform fan.at.

And as a bonus number 11, if you’re looking for something to read in the new year lull, download our informative long reads on topics including selecting a modern content delivery network, cyber attack trends, and using Kubernetes to enhance AI. Download the ebook of your choice below.

- The essential guide to selecting a modern CDN (eBook)
- Gcore Radar: DDoS attack trends in Q1-Q2 2024 (report)
- Accelerating AI with Kubernetes

Here’s to 2025!

And that’s it for our 2024 highlights. It’s been a truly remarkable year, and we thank you for being a part of it. We’ll leave you with some words from our CEO and see you in 2025.

“2024 has been a year of highs, from our tenth anniversary celebrations to the launch of various new products, and from expansion into new markets to connecting with customers (new and old) at events worldwide. Happy New Year to all our readers who are celebrating, and see you for an even bigger and better 2025!” Andre Reitenbach, CEO

Chat with us about your 2025 needs

Shaping the future of AI for video streaming in 2025

As we look towards 2025, we’re thrilled to announce major AI updates to enhance your Gcore Video Streaming experience. From transcription and translation to content moderation, here’s what’s new this month.

AI transcription and translation for all

Every Gcore Video Streaming customer can now access automated transcription and translation.

Free universal AI subtitle generation

Starting in December 2024, every video uploaded to Gcore Video Streaming will automatically have subtitles generated in the original audio language thanks to our advanced AI transcription capabilities. This feature supports over 99 languages and can handle:

- Speech from a single speaker
- Conversations with multiple speakers
- Videos featuring multiple languages

These subtitles are applied free and by default, making your content more accessible and engaging for global audiences.

AI subtitle translation

We’ve also introduced a translation feature that allows you to convert subtitles into other languages. The translated subtitles are automatically embedded into your videos and can be accessed directly in the player. This feature helps expand your reach to international viewers seamlessly.

How AI subtitles work

Using these features is simple:

1. Upload a video to our platform
2. Copy the player code
3. Embed the player on your website

For example, a minimal sketch of adding the video player to your page follows at the end of this post. By following these steps, you can effortlessly incorporate cutting-edge AI features into your video content.

Content moderation at your fingertips

Additionally, Gcore Video Streaming now offers AI content moderation. Detect sensitive content such as NSFW material, nudity, weapons, sports, and more, supporting compliance and brand safety. Learn more about how it works in our API documentation; a hedged request sketch also follows at the end of this post.

Enjoy these AI features today with Gcore Video Streaming

Ready to transform your audio content into valuable insights? Our AI Automatic Speech Recognition (AI ASR) delivers fast, accurate transcriptions tailored to your business needs. Explore how our ASR can enhance your workflows and start your journey today.

If you’re interested in how AI is shaping the future of video, take a look at our blog on key trends for AI in video for 2025.

Discover Gcore AI video streaming features
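As referenced above, here is a minimal sketch of embedding the Gcore-hosted player in a web page. The player URL format, container ID, and dimensions are illustrative assumptions, not a documented contract; copy the actual embed code for your video from the Gcore Customer Portal.

```typescript
// Minimal sketch: inject the Gcore video player into a page via an iframe.
// The URL below is a placeholder, not a documented format; use the embed
// link shown for your video in the Gcore Customer Portal.
const PLAYER_URL = "https://player.gcore.com/vods/<your-video-id>"; // hypothetical

function embedGcorePlayer(containerId: string): void {
  const container = document.getElementById(containerId);
  if (!container) {
    throw new Error(`No element with id "${containerId}" found on the page`);
  }

  const iframe = document.createElement("iframe");
  iframe.src = PLAYER_URL;
  iframe.width = "640";
  iframe.height = "360";
  iframe.allowFullscreen = true; // let viewers switch to full screen
  iframe.style.border = "0";

  container.appendChild(iframe);
}

// Usage: assumes your page contains <div id="video-container"></div>
embedGcorePlayer("video-container");
```

Because subtitles are generated and embedded automatically per the post above, the embed itself needs no subtitle-specific configuration; viewers pick a language from the player’s captions menu.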
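And here is a hedged sketch of what requesting AI content moderation through the Video Streaming API might look like. The endpoint path, request fields, and response shape are assumptions for illustration only; the API documentation linked above defines the real interface.

```typescript
// Hypothetical sketch: ask the Video Streaming API to scan a video for
// sensitive content. The endpoint path and body fields are assumed, not
// documented here; consult the official API docs for the real contract.
const API_TOKEN = "<your-access-token>"; // obtained via the authentication flow

async function requestModeration(videoId: string): Promise<unknown> {
  const response = await fetch(
    `https://api.gcore.com/streaming/videos/${videoId}/moderation`, // assumed path
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${API_TOKEN}`,
        "Content-Type": "application/json",
      },
      // Category names mirror those mentioned in the post (NSFW, weapons, sports)
      body: JSON.stringify({ categories: ["nsfw", "weapons", "sports"] }),
    },
  );
  if (!response.ok) {
    throw new Error(`Moderation request failed with status ${response.status}`);
  }
  return response.json();
}
```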

Subscribe to our newsletter

Get the latest industry trends, exclusive insights, and Gcore updates delivered straight to your inbox.