Live television broadcast online must meet high standards for low latency. Online viewers should not learn the outcome of a program from social media users sitting in front of a conventional screen. Several approaches can minimize the delay of online TV relative to the regular broadcast signal.
Live events in sports and entertainment, such as the Super Bowl, the World Cup or the Eurovision Song Contest, draw much of their appeal from the tension of an unpredictable outcome, which makes them particularly lucrative for online media providers. However, an average delay of around 30 seconds in the online stream compared with the regular TV program can be devastating. It is extremely annoying, for example, when a football fan watching on a cell phone sees a player run up to take a penalty, only to learn from television viewers' social media comments that the ball is already in the goal. The same applies to spectators following a match via internet TV on a restaurant terrace who can read the result of an attack from the reactions of the crowd indoors. Such scenarios destroy the suspense that makes live broadcasts attractive.
Because of this slower signal transmission compared with classic television, online live TV enjoys less acceptance among viewers. According to the Limelight Networks State of Online Video 2019 report, every second viewer in Germany would watch more live sports online, provided the broadcast were no slower than the conventional TV program. Live events that depend even more on viewer interaction, such as online gaming events, auctions or quizzes, cannot tolerate slow delivery of the next video sequence; they also require the viewers' signals to travel back quickly.
It takes a certain amount of time before a live video is processed and played out over the network to the user's end device. This is mainly due to the HTTP/TCP/IP protocol stack, which was not originally designed for broadcasting video content over the internet. Encoding the camera feed and making it available in an online player or on an OTT device takes time. The difference between the capture of the signal and its final playback on the end device is known as live streaming latency. There are several ways to keep this latency as low as possible; the right method depends on the content being played out and the specific requirements.
Remedy 1: Shorten chunks
Adjusting the chunk size reduces how much video the player must buffer before playback. It works like this: before playback over HTTP/TCP/IP begins, a player typically stores three video segments, so-called chunks, created by the encoder in its buffer, and that takes time. With a chunk length of 10 seconds, the end device buffers 30 seconds of video material; reducing the length to six seconds leaves only 18 seconds to store in advance. Shorter segments therefore reduce the startup delay. If the chunks are shortened to as little as one second, an end-to-end latency of only about six seconds can be achieved. Common HTTP chunked protocols such as HLS and MPEG-DASH support this approach. It is often used when a transmission requires neither real-time playback nor interactivity. But it has its limits: chunks that are too short can force the player to rebuffer repeatedly if the data stream is interrupted or the buffer runs empty. Every workflow should therefore be tested accordingly.
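The buffering arithmetic above can be sketched in a few lines of Python. This is an illustrative model, not any real player's API; the three-chunk startup buffer is the figure used in the text, and actual players differ.

```python
# Illustrative sketch (not a real player API): estimates the startup
# buffer a chunked-HTTP player fills before playback begins.

def startup_buffer_seconds(chunk_length_s, chunks_buffered=3):
    """Seconds of video stored before playback starts.

    HLS/MPEG-DASH players commonly buffer about three chunks;
    three is the count assumed here, real players vary.
    """
    return chunk_length_s * chunks_buffered

for length in (10, 6, 1):
    buffered = startup_buffer_seconds(length)
    print(f"{length:>2}s chunks -> {buffered:g}s buffered before playback")
```

With 10-second chunks this yields the 30 seconds of pre-stored video mentioned above, and six-second chunks yield 18; the remaining gap to the quoted six-second end-to-end latency comes from encoding and network transfer, which this sketch deliberately leaves out.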
Remedy 2: CMAF chunked encoding
The Common Media Application Format (CMAF) also speeds up the display of the signal in the player. It provides a uniform container for storing video data for both HLS and MPEG-DASH, so content only has to be packaged and stored once. Separate copies of the same audio and video data are no longer required to display online videos on different end devices.
CMAF chunked encoding reduces latency further. Without CMAF chunks, a video segment is only output by the encoder and sent over the network once the complete segment has been created, and only then does the decoder start its work. Segments with CMAF chunks, in contrast, are divided into coded sub-units, so-called fragments. The encoding describes these sections correctly and signals the availability of the smaller pieces to the end device, so they can be transmitted before the entire segment has been encoded. As a result, the decoder starts playing before the whole segment has been received.
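A toy timing model makes the benefit concrete. The function and all numbers below are hypothetical illustrations, not measurements: the point is only that without CMAF chunks the decoder must wait for a whole segment, while with them it waits only for the first fragment.

```python
# Hypothetical timing model (all numbers illustrative, not measured):
# when can the decoder start, with whole segments vs. CMAF fragments?

def time_until_decode_starts(encode_unit_s, transfer_s=0.2):
    """Seconds until the decoder receives its first playable unit.

    encode_unit_s: length of the smallest unit the encoder emits --
        a whole segment without CMAF chunks, one fragment with them.
    transfer_s: assumed network transfer time for that unit.
    """
    return encode_unit_s + transfer_s

# A 6-second segment, optionally split into 0.5-second CMAF fragments:
whole_segment = time_until_decode_starts(6.0)   # must encode all 6 s first
cmaf_fragment = time_until_decode_starts(0.5)   # first fragment ships early
print(f"without CMAF chunks: {whole_segment:.1f}s, "
      f"with CMAF chunks: {cmaf_fragment:.1f}s")
```

Under these assumed numbers the decoder starts roughly twelve times sooner with fragments; real gains depend on segment and fragment lengths, encoder speed and network conditions.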
Remedy 3: WebRTC for real-time broadcasts
Google, Mozilla, Opera and other major market players support the open-source project Web Real-Time Communication (WebRTC) for communication via browsers and mobile applications. Initially it was used mainly for web conferencing, but the technology is now also being used for video programs with large audiences. It uses the UDP network protocol, which carries far less overhead than TCP/IP. Chunked segmentation of the data stream by the encoder and intermediate buffering are also eliminated.
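The transport property WebRTC builds on can be seen in a minimal stdlib sketch. Real WebRTC layers ICE, DTLS and SRTP on top of UDP; the snippet below shows only the bare UDP behavior, with no handshake or connection state before data flows, using Python's standard `socket` module on localhost.

```python
# Bare UDP delivery on localhost: no handshake, no connection state.
# (WebRTC adds ICE/DTLS/SRTP on top; this shows only the UDP layer.)
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))      # OS picks a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The datagram is sent immediately; no connection setup round trips.
sender.sendto(b"video-frame-0001", ("127.0.0.1", port))

data, addr = receiver.recvfrom(2048)
print(data.decode())

sender.close()
receiver.close()
```

Skipping TCP's connection setup and retransmission logic is exactly what makes UDP attractive for real-time media, at the cost of delivery guarantees that WebRTC compensates for at higher layers.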
WebRTC plays back content in standard web browsers without the installation of special plugins or video players. An adaptive bit rate ensures optimal picture quality even under variable network conditions, giving the viewer a flawless experience.
To keep viewers happy and loyal, media and broadcast companies need the services and technologies of content delivery network (CDN) providers such as Limelight Networks. These support the industry in delivering a high-quality viewing experience to every user and in remaining competitive in the lucrative live event business. To provide suitable streams, obstacles to web traffic must be removed and interference-free transmission guaranteed. The media delivery infrastructure must remain flexible, as users increasingly follow live events online and become ever more mobile. The challenge is to deliver a TV-quality experience in real time through these channels.
Image: © kentoh – stock.adobe.com