CMAF vs HLS vs DASH: The Impact of Multiple Streaming Formats on Observability


As anyone deep in the video streaming technology stack knows, format fragmentation is a serious problem. Part of the issue is that there's no consensus yet on which format – CMAF vs HLS vs DASH – is best. Apple is pushing HTTP Live Streaming (HLS), MPEG-DASH has gained significant traction, and CMAF is an attempt to resolve the incompatibilities between the other two. The result for the streaming engineer? Complexity. You can't just standardise on one format, because doing so would alienate the user devices that don't support it. So you are stuck supporting all three and, unfortunately, any new formats that make their way into the market, such as HESP. This matters because supporting multiple formats means a more complicated workflow and more complicated monitoring.

The root of the format fragmentation issue: CMAF vs HLS vs DASH

So why are there so many formats? Blame the devices. According to Leichtman Research Group, the number of Internet-connected TV devices used by consumers grew by over 300% between 2010 and 2020. Combine that with the growth of streaming-capable mobile devices, which numbered 6.4 billion as of 2021 (roughly 80% of the world's population), and the result is an enormous range of devices on which people can watch streaming video.

This in itself wouldn't be an issue if there were only one streaming format. But each device manufacturer can select HLS, DASH, or even Smooth Streaming or HTTP Dynamic Streaming (HDS), which means streaming operators must support that format if they want to reach that device. As you can imagine, that quickly adds up to a lot of encodes.

The table below compares CMAF vs HLS vs DASH. It shows that most content publishers delivering to both modern and older devices (pre-2016 iPhones and Macs, for example) need multiple formats, whether because of the segment container (fMP4 vs. MPEG-TS) or the encryption method. Although CMAF is a "holy grail" approach, because a single set of CMAF chunks can serve multiple player formats, a publisher can't cut over to it entirely if they intend to support legacy devices.

 

[Infographic: CMAF vs HLS vs DASH – an overview and comparison (Touchstream)]

The impact of device fragmentation on workflow complexity

When you do the maths to support device fragmentation, the ripple effect is considerable. If you have a library of 100 titles, for example, you will already be encoding 5 or 6 bitrates for each one, which means 500 to 600 encodes. Now say you want to support the three major formats: that's 1,500 to 1,800 encodes in total. This might still be manageable, but now you have to factor in the proliferation of devices.

Additionally, you can't just ignore legacy devices. Say the device pool you want to support numbers 50. Now you are managing thousands of encodes. This example might be workable for a static library of VOD assets, but what about live content or a FAST (Free Ad-supported Streaming TV) implementation? Then you are not encoding once to cover all those devices, bitrates, and formats; you are doing it continuously.
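To make the arithmetic above concrete, here is a minimal sketch using the illustrative numbers from this section (100 titles, a 6-rung bitrate ladder, three formats); real ladders and format matrices vary per publisher:

```python
titles = 100       # VOD library size
bitrates = 6       # ABR ladder rungs encoded per title
formats = 3        # HLS, DASH, and CMAF packaging

per_format = titles * bitrates    # encodes needed for a single format
total = per_format * formats      # encodes across all three formats
print(per_format, total)          # 600 per format, 1800 in total
```

For live or FAST channels the same multiplication applies, but continuously rather than once per asset, which is what pushes the operational load from "large" to "unmanageable without automation".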

The ripple effect is that you now need more storage, you need to scale your advertising and security architectures, and, most importantly, you need a way to monitor everything: every encode, the availability of every bitrate, and the performance of every stream to every device.

Monitoring the complexity

Given that there is no solution on the horizon to reduce format fragmentation, you need to be able to monitor streaming data from every point in the workflow: encoders, caches, and players. Most streaming operators rely on player-driven monitoring solutions to ensure a high quality of experience (QoE), but monitoring the player is a reactive approach: by the time the player reports a problem, the viewer has already experienced it. So while QoE is important and data should be gathered to reflect what the viewer is experiencing, it can't be the only vector in your monitoring telemetry.

Bitrate availability, for example, can be determined at the encoder. Monitoring encoder output can confirm that a specific bitrate has been produced correctly and that all of its segments are valid. The viewer then never discovers that one of the bitrate renditions is unavailable, because the problem is fixed before they encounter it. The challenge with that kind of proactive operation is building a monitoring solution that provides the needed insight: how do you gather information from components throughout the workflow so you stay ahead of the complexity of supporting multiple formats?
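As a sketch of what such an encoder-side check might look like, the snippet below parses an HLS master playlist and reports any expected ladder rungs with no matching variant. The expected bitrate list and the matching tolerance are assumptions for illustration; a real probe would also fetch and validate the media segments themselves:

```python
import re

def missing_variants(master_playlist, expected_kbps, tolerance_kbps=100):
    """Return the expected bitrates (kbps) with no variant in the playlist."""
    # BANDWIDTH in #EXT-X-STREAM-INF tags is advertised in bits per second
    found = [int(m.group(1)) // 1000
             for m in re.finditer(r"#EXT-X-STREAM-INF:[^\n]*BANDWIDTH=(\d+)",
                                  master_playlist)]
    return [kbps for kbps in expected_kbps
            if not any(abs(kbps - f) <= tolerance_kbps for f in found)]

playlist = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
high/index.m3u8
"""

# The 1400 kbps rung is missing from the master playlist above
print(missing_variants(playlist, [800, 1400, 2800]))  # [1400]
```

Run on a schedule against each encoder's output, a check like this surfaces a missing rendition minutes before any player-side metric would.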


How a monitoring harness enables observability across the workflow

Fortunately, or unfortunately, gathering data from the different components in the workflow, such as an encoder, has become easier. Many technology vendors now provide APIs from which to pull log data into visualisation tools. However, that is the unfortunate part: it's just lots and lots of data which, in and of itself, doesn't provide the insight you need to know when there's a problem. You have to go looking for it.

That's exactly where a monitoring harness comes into play: it provides a means to programmatically gather data from throughout the workflow into a single visualisation dashboard. The key, though, is to ensure the dashboard provides both observability and monitoring capabilities. That way, operations managers can visually identify when there is a problem in the workflow, such as a bitrate missing from the encoder output, and operations engineers can drill into the data to understand the exact problem (which bitrate, for instance) and then troubleshoot the responsible component (the encoder) before the viewer ever sees it. The result is much more effective and efficient root cause analysis. You lower your mean time to detect (MTTD) and mean time to repair (MTTR) while raising viewer satisfaction, because viewers never see the issues you have proactively solved.
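As an illustration of the idea, here is a minimal harness sketch. The component names and health rules are hypothetical; in practice each collector would call the vendor API for its workflow component (encoder, packager, CDN, player beacons) and the results would feed a visualisation dashboard rather than a list of strings:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Check:
    component: str                    # e.g. "encoder", "cdn-edge"
    collect: Callable[[], Dict]       # pulls metrics from the component's API
    healthy: Callable[[Dict], bool]   # rule deciding pass/fail for those metrics

def run_harness(checks: List[Check]) -> List[str]:
    """Evaluate every check and return one alert per unhealthy component."""
    alerts = []
    for check in checks:
        metrics = check.collect()
        if not check.healthy(metrics):
            alerts.append(f"{check.component}: unhealthy, metrics={metrics}")
    return alerts

# Hypothetical example: the encoder should be producing 6 bitrate renditions
checks = [
    Check("encoder",
          collect=lambda: {"renditions": 5},   # stand-in for a real API call
          healthy=lambda m: m["renditions"] == 6),
    Check("player-beacons",
          collect=lambda: {"rebuffer_ratio": 0.004},
          healthy=lambda m: m["rebuffer_ratio"] < 0.01),
]
print(run_harness(checks))  # flags the encoder: a rendition is missing
```

The design point is the separation between *collecting* (observability: all the raw data is there to drill into) and *health rules* (monitoring: a manager-visible pass/fail per component), which is exactly the dual capability described above.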

You can’t master format fragmentation, but you can master monitoring

There really isn't going to be a solution to the issue of fragmentation. The question of CMAF vs HLS vs DASH is moot, too, since you're going to have to support all three of them plus a myriad of devices. Moreover, you'll need to monitor everything to ensure that the complexity of fragmentation doesn't undermine your QoE. But you can't rely on traditional player-based monitoring solutions, which are reactive. You need a combination of player and workflow analytics from which to derive observability insights and dig deeper into the data when needed. The most effective way to do that is through a monitoring harness. If you haven't put one together yet, it's time to look at how to implement one so that you aren't washed away in the tsunami of format fragmentation.

Discover how employing Touchstream’s VirtualNOC as a monitoring harness helps you overcome format fragmentation as well as improve root cause analysis and resolution by downloading our Monitoring Harness White Paper.