
How we launched one of the fastest DNS services in the world

  • By Gcore
  • 2 min read

We are launching an open beta of our own DNS service. It helps speed up internet connections and provides web security, regardless of the user’s location. To achieve this, we implemented GeoDNS and Anycast.

GeoDNS. This feature returns different IP addresses depending on the user’s location. For websites with a geographically diverse audience, you can segment traffic, direct visitors to specific servers, and block access for visitors from specific countries or regions. During peak loads, DNS hosting with geolocation-based balancing also helps distribute client traffic, sending it not only to the nearest location but to any available Gcore server, regardless of where it is.
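To see GeoDNS in action, you can compare the addresses that the same hostname resolves to from different locations. Below is a minimal Kotlin sketch (the hostname is a placeholder, not a real Gcore-managed record): run it from clients in different regions, or behind resolvers in different countries, and a geo-balanced name can return different address sets.

```kotlin
import java.net.InetAddress

// Resolve a hostname and print every address returned.
// With GeoDNS, running this from different regions (or via resolvers
// in different countries) can yield different answers for the same name.
// "cdn.example.com" is a placeholder, not a real geo-balanced record.
fun main() {
    val host = "cdn.example.com"
    val addresses = InetAddress.getAllByName(host)
    println("Answers for $host:")
    addresses.forEach { println("  ${it.hostAddress}") }
}
```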

Anycast. This is a routing scheme in which multiple servers respond on the same IP address. Anycast keeps a website reachable: if one of the servers fails or stops responding, the network equipment redirects the request to an available server. In addition, Anycast helps mitigate the impact of DDoS attacks by distributing requests across a group of servers.
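One way to observe Anycast is to ask a nameserver which physical node actually answered. The sketch below is an illustration only: it assumes the dnsjava library is on the classpath and queries the CHAOS-class TXT name hostname.bind (RFC 4892); the server name is a placeholder, some operators use id.server instead, and some disable both.

```kotlin
import org.xbill.DNS.DClass
import org.xbill.DNS.Lookup
import org.xbill.DNS.SimpleResolver
import org.xbill.DNS.Type

// Query the special CHAOS-class TXT record "hostname.bind" to learn which
// anycast node answered. Run from different regions: the server IP is the
// same, but the reported node name differs. The nameserver is a placeholder.
fun main() {
    val lookup = Lookup("hostname.bind.", Type.TXT, DClass.CH)
    lookup.setResolver(SimpleResolver("ns1.example-anycast.net"))
    val records = lookup.run()
    if (lookup.result == Lookup.SUCCESSFUL && records != null) {
        records.forEach { println("Answered by node: ${it.rdataToString()}") }
    } else {
        println("No identity record returned: ${lookup.errorString}")
    }
}
```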

The Gcore DNS is one of the fastest in the world

Gcore DNS servers are located in more than 60 cities around the world. According to DNS Performance, an independent testing service, Gcore is ahead of most DNS providers worldwide, with an average response time of about 21 ms.

Comparison of DNS performance in August 2020 (according to DNS Performance)
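For a rough feel of resolution latency from your own machine, the Kotlin sketch below times a few lookups. It is only an illustration: OS and JVM caching, the configured resolver, and network conditions all affect the numbers, so they are not comparable to DNS Performance’s measurements.

```kotlin
import java.net.InetAddress
import kotlin.system.measureTimeMillis

// Time a handful of DNS lookups from the local machine. Caching and the
// locally configured resolver heavily influence the results; this only
// gives a feel for resolution latency, not a benchmark.
fun main() {
    val hosts = listOf("gcore.com", "example.com", "example.org")  // sample names
    for (host in hosts) {
        val ms = measureTimeMillis { InetAddress.getByName(host) }
        println("$host resolved in $ms ms")
    }
}
```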

“We’ve been using our own DNS since the launch of the Gcore global content delivery network. Even back then, in order to deliver content to users quickly and correctly, we needed stable DNS servers. At the same time, balancing DNS by geolocation was intended for internal use only and was one of our competitive advantages. Today, the Gcore global infrastructure spans 5 continents and has more than 100 points of presence. Since then, the team has automated the most popular DNS features in the market and, thanks to good network connectivity, has achieved one of the highest performance indicators in the world.”

Dmitry Samoshkin, Vice President of Products at Gcore

How to connect to a high-speed DNS for free

Gcore DNS hosting is a good fit for any internet business with a critical need for web resource availability and low latency: online retail, media, video games, online cinemas, and SaaS services.

You can try a personal DNS service account for free and simply configure and manage your resource records.

To connect for free, please send a request to sales@gcore.com.

Connect to DNS hosting for free


Related articles

How to Speed Up Dynamic Content Delivery Using a CDN

In today’s websites and applications, there are many sections or even pages that are generated according to user properties and preferences. This means that part of the website content is assembled and delivered dynamically as a response to the user’s request.

Originally, CDN providers delivered only static web content by caching it on servers around the world, thereby reducing the delivery time to users. Traditional CDNs are not designed for dynamic content acceleration.

In this article, we explore what makes dynamic content special and how Gcore CDN can speed up its delivery.

What is dynamic content?

Generally speaking, dynamic content is the content on web pages that is generated when end users request it. Content generation technologies include ASP, JSP, PHP, Perl, CGI requests, and API calls (POST, PUT, and PATCH requests).

What the final page with dynamic content will look like depends on factors such as the behavior and preferences of the users on a site, their geolocation, and so on.

By using dynamic content, businesses are able to personalize pages. For example:

  • Online stores adapt their product feeds to their customers. Users with different order histories and profiles are served different recommendation feeds, which makes it possible to offer more relevant products and increase conversions.
  • News outlets offer different versions of their website for different readers. Subscribers who have paid for a subscription see full versions of the website, tailored to their interests. For those without a subscription, only the introductory part of the general news block is displayed, along with a pop-up offering to purchase a subscription.
  • Franchises localize their sites depending on geolocation. The site’s interface (language, addresses, hours of operation) automatically changes depending on the region in which the user requesting the page is located.

With the proliferation of dynamic content on the modern web, delivering it has become a challenge.

What is the challenge of dynamic content delivery?

If a business is focused on the global market, content needs to reach users quickly, no matter how remote they are from the origin server. To optimize the delivery of static content, there is a traditional CDN infrastructure consisting of caching servers located around the world.

Dynamic content, however, cannot be cached, because it is generated individually for each user. This makes it difficult to use traditional CDNs for sites that contain both types of content. Static site files will be delivered to users from the nearest caching Edge server, while dynamic content will be proxied from the origin, resulting in increased download time.

That said, it is still possible to optimize dynamic content delivery. To do so, choose CDNs that provide state-of-the-art delivery acceleration methods. Gcore’s next-gen Edge network architecture uses everything available to accelerate dynamic content delivery as much as possible, and we will look at each of these technologies in detail in this article.

How does Gcore’s next-gen CDN accelerate dynamic content delivery?

1. Optimized TCP connections

For the origin server to respond to a user request for dynamic content on a site via HTTP, a TCP connection must be established between them. The TCP protocol is characterized by reliability: when transmitting data, it requires the receiving side to acknowledge that the packets were received. If a failure occurs and the packets are not received, the desired data segment is resent. However, this reliability comes at the cost of the data rate, slowing it down.

Gcore CDN uses two approaches to optimize the speed of the TCP connection:

  • Increasing the congestion window during TCP slow start. TCP slow start is the default network setting that allows you to determine the maximum capacity of a connection safely. It incrementally increases the congestion window size (the number of packets that can be sent before confirmation is required) as long as the connection remains stable. When a TCP connection goes through an Edge network, we can increase the congestion window size because we are confident in the stability of the network. In this case, the number of packets will be higher even at the beginning of the connection, allowing dynamic content loading to happen faster.
  • Establishing persistent HTTP connections. By using the HTTP/2 protocol, our Edge network supports multiplexing, which allows multiple data streams to be transmitted over a single, established TCP connection. This means that we can reuse existing TCP connections for multiple HTTP requests, reducing the amount of time needed for traversal and speeding up delivery.

Figure 1. Optimized TCP connections within the Gcore Edge Network
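As a client-side illustration of why persistent connections help (this is not Gcore’s implementation; it assumes the OkHttp 4.x library, and the URL is a placeholder), the Kotlin sketch below reuses a single client: its connection pool lets consecutive HTTPS requests to the same host share one TCP/TLS connection instead of paying the handshake cost every time, which is the same idea the CDN applies between its Edge servers and the origin.

```kotlin
import okhttp3.OkHttpClient
import okhttp3.Request

// One OkHttpClient keeps a connection pool, so sequential requests to the
// same host reuse an established TCP/TLS connection (and HTTP/2 streams
// when the server supports it) instead of opening a new one each time.
fun main() {
    val client = OkHttpClient()
    val request = Request.Builder().url("https://example.com/api/data").build()
    repeat(3) { i ->
        client.newCall(request).execute().use { response ->
            println(
                "request ${i + 1}: HTTP ${response.code}, " +
                "connections in pool: ${client.connectionPool.connectionCount()}"
            )
        }
    }
}
```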
2. Optimized TLS handshakes

HTTPS connections use the TLS cryptographic protocol, which secures data transmission and protects it from unauthorized access. To establish a secure TLS connection, three handshakes must be performed between the client and the server, during which they exchange security certificate data and generate a session encryption key.

It takes a significant amount of time to establish a secure connection. If the RTT (round-trip time) between the origin server and the client is 150 milliseconds, the total connection time will be 450 ms (3 × 150 ms).

Figure 2. Three handshakes are required to establish a TLS connection

When the origin server is connected to the Gcore CDN, TLS handshakes are performed with the help of intermediaries: Edge servers located as close as possible to the user (client) and to the origin server. Edge servers belong to the same trusted network, so there is no need to establish a connection between them each time; once is sufficient.

Through this method, the connection will be established in 190 ms (more than twice as fast). This time includes three handshakes between the client and the nearest Edge server (3 × 10 ms), one handshake between servers within the Edge network (130 ms), and three handshakes between the nearest Edge server and the origin (3 × 10 ms).

Figure 3. Establishing a TLS connection with the Gcore Edge Network

3. WebSockets support

WebSocket is a bidirectional protocol for transferring data between a client and a server over a persistent connection. It allows for real-time message exchange without the need to break connections and send additional HTTP requests.

In the standard approach, the client needs to send regular requests to the server to determine whether any new information has arrived. This increases the load on the origin server, reducing the request processing speed. It also causes delays in content delivery: because the browser sends requests at regular intervals, a new message cannot reach the client immediately.

In comparison, WebSocket establishes and maintains a persistent connection without the extra load of re-establishing connections. When a new message appears, the server sends it to the client immediately.

Figure 4. The difference between content delivery without and with WebSocket

WebSocket support can be enabled in the Gcore interface in two clicks.
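For comparison with polling, here is a minimal WebSocket client sketch in Kotlin, again using OkHttp as an assumed dependency; the echo endpoint is a public test service used purely as a placeholder. The client opens one persistent connection, and the server pushes a message back as soon as it is available, with no repeated requests.

```kotlin
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.Response
import okhttp3.WebSocket
import okhttp3.WebSocketListener

fun main() {
    val client = OkHttpClient()
    val request = Request.Builder().url("wss://echo.websocket.org").build()

    client.newWebSocket(request, object : WebSocketListener() {
        override fun onOpen(webSocket: WebSocket, response: Response) {
            webSocket.send("hello")                // send once the connection is open
        }

        override fun onMessage(webSocket: WebSocket, text: String) {
            println("pushed by server: $text")     // delivered immediately, no polling
            webSocket.close(1000, "done")          // normal closure
        }
    })

    Thread.sleep(5_000)                            // give the round trip time to complete
    client.dispatcher.executorService.shutdown()   // release OkHttp's worker threads
}
```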
4. Intelligent routing

Dynamic content delivery can be accelerated by optimizing packet routing. In the Gcore CDN, a user’s request is routed to the closest Edge server, then passes within the network to the Edge server closest to the origin.

Network connectivity is critical to achieving high-speed delivery, and Gcore has over 11,000 peering partners to ensure this. Once inside the network, traffic can bypass the public internet and circulate through ISP networks.

We constantly measure network congestion, check connection quality, and perform RUM monitoring. This allows our system to intelligently calculate the best possible route for each request our Edge network receives and increases the overall delivery speed, regardless of whether you’re serving static or dynamic content.

5. Content prefetching

Prefetching is a technique that speeds up content delivery by proactively loading it onto Edge servers before end users even request it. It is traditionally associated with static content delivery, but it can also accelerate dynamic content delivery by preloading static objects used in dynamically generated responses.

In this case, when an end user requests something, the web server generates the content with its linked objects already on the Edge servers. This reduces the number of requests to the origin server and improves overall web application performance.

How to enable dynamic content delivery in Gcore’s CDN

To enable dynamic content acceleration, you need to integrate the whole website with our CDN by following these step-by-step instructions. In this case, you also need to use our DNS service (it has a free plan) to connect the domain of your website with our DNS points of presence for better balancing.

What’s next?

Modern applications will become more customized and tuned to user-specific parameters. Providing users with the most relevant content could become a significant competitive advantage for online businesses.

In parallel with the constant need for decreased latency, this tendency is pushing forward serverless computing, an emerging technology focused on running application code right on cloud Edges. In addition to simplifying the app deployment process overall, it will open a wide range of opportunities for content customization.

We are developing serverless computing products to provide users with the best possible performance and improve their overall web experience. We will keep you informed about the progress and significant updates.

Discover Gcore CDN possibilities that give your business access to a high-capacity network with hundreds of Edge servers worldwide. It can improve your web application performance and will allow you to personalize the user experience.

Learn more about Gcore CDN

How we solve issues of RTMP-to-HLS streaming on iOS and Android

Long launch times, video buffering, high delays, broadcast interruptions, and other lags are common issues when developing applications for streaming and live streaming. Anyone who has ever developed such services has come across at least one of them.

In previous articles, we talked about how to develop streaming apps for iOS and Android. Today, we will share the problems we encountered in the process and how we solved them.

Use of a modern streaming platform

All that is required from the mobile app is to capture video and audio from the camera, form a data stream, and send it to viewers. A streaming platform will be needed for mass content distribution to a wide audience.

Streaming via the Gcore platform

The only drawback of a streaming platform is latency. Broadcasting is a rather complex and sophisticated process, and a certain amount of latency occurs at each stage.

Our developers were able to assemble a stable, functional, and fast solution that requires 5 seconds to launch all processes, while the end-to-end latency when broadcasting in Low latency mode is about 4 seconds.

The table below shows several platforms that solve the latency reduction problem in their own way. We compared several solutions, studied each one, and found the best approach.

It takes 5 minutes to start streaming on the Gcore Streaming Platform:

  1. Create a free account. You will need to specify your email and password.
  2. Activate the service by selecting Free Live or any other suitable plan.
  3. Create a stream and start broadcasting.

All the processes involved in streaming are inextricably linked. Changes to one affect all subsequent ones. Therefore, it would be incorrect to divide them into separate blocks. We will consider what can be optimized and how.

Decreasing the GOP size to speed up stream delivery and reception

To start decoding and processing any video stream, the player needs a keyframe (I-frame). We ran tests and selected an optimal keyframe interval of 2 seconds for our apps. However, in some cases, it can be reduced to 1 second. By shortening the GOP, decoding, and thus the start of stream processing, begins sooner.

iOS: set maxKeyFrameIntervalDuration = 2.

Android: set iFrameIntervalInSeconds = 2.
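To show where the 2-second value ends up on Android, here is a sketch that configures a hardware H.264 encoder through the platform MediaCodec API with a 2-second keyframe interval. The resolution, bitrate, and frame rate are illustrative only; streaming libraries like the one used in our apps typically pass this setting through to the same MediaFormat key.

```kotlin
import android.media.MediaCodec
import android.media.MediaCodecInfo
import android.media.MediaFormat

// Configure an H.264 hardware encoder with a 2-second GOP (keyframe interval).
// Resolution, bitrate, and frame rate values are examples only.
fun createVideoEncoder(): MediaCodec {
    val format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1280, 720).apply {
        setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000)           // 2 Mbps
        setInteger(MediaFormat.KEY_FRAME_RATE, 30)                // 30 fps
        setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 2)           // keyframe every 2 seconds
        setInteger(
            MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface  // encode from a Surface
        )
    }
    return MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC).apply {
        configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
    }
}
```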
Background streaming to keep it uninterrupted

If you need short pauses during streaming, for example, to switch to another app, you can continue streaming in the background and keep the video intact. In doing so, we do not waste time on initializing all processes and keep end-to-end latency minimal when returning to the air.

iOS

Apple forbids recording video while the app is minimized. Our initial solution was to disable the camera at the appropriate moment and reconnect it when returning to the air. To do this, we subscribed to a system notification informing us of entry to and exit from the background state.

It didn’t work. The connection was not lost, but the library did not send the video of the RTMP stream. Therefore, we decided to make changes to the library itself.

Each time the system sends a buffer with audio to AVCaptureAudioDataOutputSampleBufferDelegate, it checks whether all devices are disconnected from the session. Only the microphone should remain connected. If everything is correct, timingInfo is created. It contains information about the duration, dts, and pts of a fragment.

After that, the pushPauseImageIntoVideoStream method of the AVMixer class is called, which checks for the presence of a picture to show during the pause. Next, a CVPixelBuffer with the image data is created via the pixelBufferFromCGImage method, and the CMSampleBuffer itself is created via the createBuffer method, which is sent to AVCaptureVideoDataOutputSampleBufferDelegate.

The extension for AVMixer includes:

  • hasOnlyMicrophone checks if all devices except the microphone are disconnected from the session.
  • func pushPauseImageIntoVideoStream takes data from the audio buffer, creates a video buffer, and sends it to AVCaptureVideoDataOutputSampleBufferDelegate.
  • private func pixelBufferFromCGImage(image: CGImage) creates and returns a CVPixelBuffer from the image.
  • createBuffer(pixelBuffer: CVImageBuffer, timingInfo: inout CMSampleTimingInfo) creates and returns a CMSampleBuffer from timingInfo and the CVPixelBuffer.

We also add the pauseImage property to the AVMixer class and extend the func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) method in AVAudioIOUnit accordingly.

Android

With Android, things turned out to be simpler. Looking deeper into the source code of the library that we used, it becomes clear that streaming actually runs in a separate thread.

Considering the life cycle of the component where our streaming is initialized, we decided to initialize it in the ViewModel: it remains alive throughout the life cycle of the component to which it is bound (Activity, Fragment).

ViewModel life cycle

Nothing changes in the life cycle of the ViewModel, even in case of configuration changes, orientation changes, background transitions, and so on.

But there is still a small problem. For streaming, we need to create an RtmpCamera2() object, which depends on an OpenGlView object. This is a UI element, which means it is destroyed when the app goes to the background, and the streaming process is interrupted.

The solution was found quickly. The library allows you to easily replace the View of the RtmpCamera2 object. We can replace it with a Context object from our app, which lives until the app is killed by the system or closed by the user.

We treat the destruction of the OpenGlView object as an indicator that the app has gone to the background, and the creation of this View as the signal of a return to the foreground. For this purpose, we need to implement the corresponding callback.

Next, as we mentioned before, we need to replace the OpenGlView object with Context when going to the background and back to the foreground. To do this, we define the required methods in the ViewModel. We also need to stop streaming when the ViewModel is cleared.

If we need to pause streaming without going to the background, we just have to turn off the camera and microphone. In this mode, the bitrate is reduced to 70–80 Kbps, which allows you to save traffic.

WebSocket and launching the player at the right time

Use WebSocket to get notified that the content is ready for playback and to start the player instantly.

Use of adaptive bitrate and resolution

When streaming from a mobile device, cellular networks are used for video transmission. This is the main problem in mobile streaming: the signal level and its quality depend on many factors. Therefore, it is necessary to adapt the bitrate and resolution to the available bandwidth. This helps maintain a stable streaming process regardless of the viewers’ internet connection quality.

How adaptive bitrate works

iOS

Two RTMPStreamDelegate methods are used to implement adaptive bitrate. The adaptive resolution is adjusted according to the bitrate. We used the following resolution/bitrate ratio as a basis:

  • 1920×1080: 6 Mbps
  • 1280×720: 2 Mbps
  • 854×480: 0.8 Mbps
  • 640×360: 0.4 Mbps

If the bandwidth drops by more than half of the difference between two adjacent resolutions, switch to a lower resolution. To increase the bitrate, switch to a higher resolution.

Android

To use adaptive bitrate on Android, change the implementation of the ConnectCheckerRtmp interface; a library-independent sketch of the switching logic is shown below.
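The exact callbacks of ConnectCheckerRtmp depend on the library version, so rather than reproduce the interface, here is a library-independent Kotlin sketch of the switching rule described above. The ladder matches the table; the step-up condition (measured bandwidth covering the next rung’s bitrate) is an assumption, since the article only pins down the step-down rule.

```kotlin
// The resolution/bitrate ladder from the table above (bitrates in bits per second).
data class Rung(val width: Int, val height: Int, val bitrate: Int)

val ladder = listOf(
    Rung(640, 360, 400_000),
    Rung(854, 480, 800_000),
    Rung(1280, 720, 2_000_000),
    Rung(1920, 1080, 6_000_000),
)

// Step down when bandwidth falls below the midpoint between the current rung and
// the one below it (a drop of more than half the difference between adjacent rungs).
// Step up when bandwidth covers the bitrate of the rung above (assumed threshold).
fun nextRung(current: Int, measuredBandwidth: Int): Int {
    if (current > 0) {
        val midpointBelow = (ladder[current].bitrate + ladder[current - 1].bitrate) / 2
        if (measuredBandwidth < midpointBelow) return current - 1
    }
    if (current < ladder.lastIndex && measuredBandwidth >= ladder[current + 1].bitrate) {
        return current + 1
    }
    return current
}

fun main() {
    var rung = ladder.lastIndex  // start at 1920x1080 / 6 Mbps
    for (bandwidth in listOf(6_500_000, 3_500_000, 900_000, 2_500_000)) {
        rung = nextRung(rung, bandwidth)
        val r = ladder[rung]
        println("bandwidth ${bandwidth / 1000} Kbps -> ${r.width}x${r.height} @ ${r.bitrate / 1000} Kbps")
    }
}
```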
Summary

Streaming from mobile devices is not a difficult task. Using open-source code and our Streaming Platform, this can be done quickly and at minimal cost.

Of course, you can always face problems during the development process. We hope that our solutions will help you simplify this process and complete your tasks faster.

Learn more about developing apps for streaming on iOS and Android in our articles:

  • “How to create a mobile streaming app on Android”
  • “How to create a mobile streaming app on iOS”

Repositories with the source code of the mobile streaming apps can be found on GitHub: iOS, Android.

Seamlessly stream on mobile devices using our Streaming Platform.

More about Streaming Platform

Streaming Platform year in review: Updates and results of 2022

Throughout 2022, we worked hard to make our services convenient and useful for you. Now we’re happy to share the results!

Minutes of broadcasting

2022 brought our Streaming Platform plenty of new clients. All our efforts added up to about 57 million transcoded minutes of different kinds of content, which equals almost 108 years!

Check out this short video to better appreciate the results. We calculated how many average football matches, Instagram live streams, YouTube videos, Netflix episodes, and TikTok videos would fit into 57 million minutes.

Streaming Platform updates

Here are the top updates our team brought to life to improve your experience on our platform.

  • New simplified control panel and improved UI. Creating streams has never been simpler: easy setup, consolidation of multiple streams in a single player, organized video hosting options, an improved interface, and restreaming options.
  • New cost-effective pricing: per-minute billing and free encoding. We introduced a new pricing plan with free adaptive bitrate encoding that counts only the length of the original video. No gigabytes, no extra payment for transcoded qualities, and no pre-paid commitments. You only pay for the minutes you use, which makes our prices lower than those of our competitors.
  • Improved video encoding. We now compress video better while maintaining the same level of quality.
  • Video Calls new features and redesign. You can now create a unique visual presence, improve your brand recognition, or simply cover up your personal area or workspace by adding virtual backgrounds to your video calls: blur, static images, or even animated images using AI/ML, right in the browser. You can also share videos, store files, and browse the entire chat history.
  • Low latency for live streams. You can now choose between normal delivery and low-latency delivery via HLS, with a delay of up to 4 seconds for live broadcasts.
  • Object recognition using AI/ML for UGC and VOD content.
  • New open-source apps on GitHub. We know our users love copying code, so we keep helping them by adding new demos on GitHub: iOS video scrolling like in TikTok and a React Native Video Call demo app.

We sincerely thank you for partnering with us this year. In 2023, we will continue to make the Streaming Platform even more convenient and functional to meet all your business needs and keep your viewers happy!
