There are dozens upon dozens of different technologies within streaming video stacks, and many more options within each technology category, all evolving and transforming at an astounding rate. It’s no surprise that data unification is a real problem: whether you monitor streaming data proactively or reactively, finding the source of a problem can be like searching for a needle in a haystack, with so many different beginning and end points as well as independent tools and technologies.
Why it’s key to monitor streaming data effectively
Quality of Experience (QoE) and Quality of Service (QoS) are key metrics in the competitive streaming industry. Low video quality, high latency, or worse, a complete breakdown of the streaming service, are all detrimental to the viewer experience. With plenty of alternative streaming providers to choose from, this typically translates into high customer churn, low customer loyalty, and soaring customer management costs.
How can this be avoided? By making sure you can monitor streaming data effectively. Tracking streaming data allows operators to detect possible issues in advance and receive notifications of sudden problems in real time. This enables them to react quickly to either prevent issues from affecting the viewer in the first place or, at the very least, resolve them as fast as possible.
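As a minimal sketch of what real-time detection can look like, the snippet below raises an alert when a rolling rebuffering ratio (a common QoE metric) crosses a threshold. The class name, window size, and 2% threshold are all illustrative assumptions, not taken from any specific monitoring product.

```python
# Hypothetical sketch: alert when the rolling rebuffering ratio crosses a
# threshold. Names and thresholds are illustrative, not from a real product.
from collections import deque

class RebufferAlert:
    def __init__(self, window=10, threshold=0.02):
        self.samples = deque(maxlen=window)  # recent rebuffering ratios
        self.threshold = threshold           # e.g. 2% of watch time spent buffering

    def record(self, rebuffer_seconds, watch_seconds):
        self.samples.append(rebuffer_seconds / watch_seconds)

    def should_alert(self):
        # Fire only once the window is full and the rolling average is too high.
        return (len(self.samples) == self.samples.maxlen
                and sum(self.samples) / len(self.samples) > self.threshold)

monitor = RebufferAlert(window=3, threshold=0.02)
for rebuf, watched in [(0.5, 60), (2.0, 60), (3.0, 60)]:
    monitor.record(rebuf, watched)
print(monitor.should_alert())  # → True: average ratio ≈ 3.1% > 2%
```

In practice the same pattern applies to any per-session metric (startup time, bitrate drops, error rates); the point is simply that continuous measurement plus a threshold turns raw data into an actionable notification.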
Unfortunately, monitoring data in a streaming environment, especially in live stream monitoring, comes with significant challenges.
What makes it so hard to monitor streaming data
Lack of standardisation increases the complexity of monitoring streaming data. Broadcast television is built on proven, established technologies that are based on government-backed standards. This leads to two main advantages: first, it ensures everything operates the same way. When consumers move from one city to another, or from one cable operator to the next, the experience is essentially identical (except for the guide). Second, it ensures broadcasters can swap out technologies from different vendors without having to rebuild everything. That means the service can continually improve.
“Broadcasting has been around for over 60 years, streaming for a fraction of that.”
Streaming, however, does not operate on the same playing field. One of the main reasons for this is that it’s relatively new. Broadcasting has been around for over 60 years, streaming for a fraction of that. The technologies we take for granted today, HTTP and chunked streaming, are relatively new, as Move Networks brought the concept to market with Microsoft during the 2008 Summer Olympics.
Furthermore, there is a large number of different technologies within the streaming video stack, each of which evolves continuously and rapidly. This makes standardisation, a process that often involves years of development and coordination, extremely difficult. A standard could take three years to be ratified, and by that time the technology likely would have evolved past it.
Data fragmentation makes it difficult to monitor streaming data efficiently
First of all, it’s important to note that fragmentation of the video streaming technology stack isn’t all bad.
“One of the biggest advantages streaming has over broadcasting is access to data. Every component within the workflow creates information which can be tapped into, visualised, and used to make business decisions.”
Sometimes this data is free and clear, available through an API. Other times, it’s contained in a black box and available only through the vendor’s visualisation tools. Regardless of how it can be accessed, it’s there, waiting to be drawn into business intelligence tools through which you make smarter decisions.
Furthermore, this lack of standardisation shows a lot of innovation: different technology approaches provide a lot of choice in how a challenge will ultimately be addressed. Also, although the lack of consistency within the technology stack may require providers to create their own middleware or ad-hoc solutions, they get it done. Streaming works.
“When it comes to providing a reliable, consistent service, fragmentation poses serious challenges to harnessing all the data it creates.”
However, when it comes to providing a reliable, consistent service, fragmentation poses serious challenges to harnessing all the data it creates. Within the broadcast workflow, operators have clear visibility into every component, down to the set-top box in the viewer’s home. This ensures a very high-quality service because the operator can troubleshoot any component within the delivery chain.
In contrast, with streaming, this kind of monitoring is very difficult to achieve. Technology vendors are not obligated to any type of data structure, normalisation, or even access. Some technology vendors require the streaming operator to use their custom dashboard, while others may keep their data under lock-and-key (i.e., a black box).
This means that streaming providers must figure out a way to bring together whatever data they can gather from the different components in their technology stack to get a complete view of network performance (especially when using third-party CDNs) and the end-user experience (QoE). There are some excellent visualisation tools out there, such as Splunk, Tableau, Looker, and Datadog, but even with these, getting a complete picture of the streaming workflow when wrestling with fragmented data can be very time-consuming.
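One common way to bring fragmented data together is to normalise each source into a shared schema before it reaches the visualisation layer. The sketch below assumes two invented record formats (a CDN log entry and a player analytics event); real CDN and player APIs each expose their own fields and units, which is precisely the problem.

```python
# Hypothetical sketch of merging fragmented monitoring data into one schema.
# The field names below are invented for illustration; real CDN and player
# APIs each use their own formats and units.

def normalise_cdn(record):
    # e.g. a CDN log entry: latency already reported in milliseconds
    return {"source": "cdn", "ts": record["timestamp"],
            "metric": "latency_ms", "value": record["edge_latency"]}

def normalise_player(record):
    # e.g. player analytics: startup time reported in seconds → convert
    return {"source": "player", "ts": record["event_time"],
            "metric": "latency_ms", "value": record["startup_s"] * 1000}

events = [normalise_cdn({"timestamp": 1700000000, "edge_latency": 84}),
          normalise_player({"event_time": 1700000001, "startup_s": 1.2})]
# Both sources now share one schema and can feed a single dashboard or
# BI tool instead of two vendor-specific views.
print(events[1]["value"])  # → 1200.0
```

The conversion logic is trivial here; the cost in real deployments is discovering, maintaining, and re-validating dozens of such mappings every time a vendor changes its output.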
Still, it’s critical: this holistic picture can help identify the root cause of outages and other service degradation.
How a monitoring harness helps to monitor streaming data efficiently
The issue with data fragmentation isn’t just the speed at which issues can be resolved. Yes, that’s critical, but the bigger problem is the ripple effect. As technologies in the workflow are upgraded or changed, they may require new visualisation tools. That means training: not only on how to use those visualisation tools, but also on how to enable observability and how to connect the data points together again.
“As technologies in the workflow need to be upgraded or changed, they might require new visualisation tools.”
A monitoring harness, however, is a much different approach. It’s the idea of creating a monitoring pipeline within an OTT workflow in which all of the components connect programmatically. So, when a component needs to be swapped out or upgraded, it is simply connected to the harness. At the end of the harness is the same visualisation tool, with no new training needed. Better still, the only adjustments that need to be made are within the logic of the harness: ensuring the data that comes from the connected component is normalised. The result is much more efficient streaming operations, as engineers gain the observability they need, at the speed they need it, without any concern about the stack underneath.
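The harness idea can be sketched as a simple adapter registry: each component plugs in via a small adapter that normalises its output, so swapping a vendor means replacing one adapter while the visualisation layer stays untouched. Everything below (class and field names, the encoder example) is an illustrative assumption, not a description of any particular product.

```python
# Hypothetical sketch of a monitoring harness: components connect via
# adapters that normalise their output, so the dashboard never changes
# when a component is swapped. All names here are illustrative.

class Harness:
    def __init__(self):
        self.adapters = {}

    def connect(self, name, adapter):
        # Swapping a vendor means registering a new adapter; nothing else moves.
        self.adapters[name] = adapter

    def collect(self, name, raw):
        # Always emits the same normalised record for the visualisation tool.
        return self.adapters[name](raw)

harness = Harness()
harness.connect("encoder", lambda raw: {"component": "encoder",
                                        "bitrate_kbps": raw["br"] // 1000})
# Upgrade to a new encoder vendor with a different output format:
# only the adapter is replaced, the rest of the pipeline is untouched.
harness.connect("encoder", lambda raw: {"component": "encoder",
                                        "bitrate_kbps": raw["bitrateBps"] // 1000})
print(harness.collect("encoder", {"bitrateBps": 4500000}))
# → {'component': 'encoder', 'bitrate_kbps': 4500}
```

The design choice is that normalisation lives in one place (the harness) rather than being re-implemented in every dashboard, which is what keeps operations engineers on the same familiar tools through stack changes.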
👉 You may like: Monitoring harness white paper
Accept the challenges & adjust how you monitor streaming data
The standardisation of data within the streaming industry is a long way off. As such, streaming operators will contend with data fragmentation for the foreseeable future. There will always be work to be done connecting data sources so that observability can happen. However, the main question is where that work to monitor streaming data is completed.
If it’s at the top of the monitoring stack, with customised visualisations and logic to normalise data, the impact of new technologies in the workflow will always slow down operations’ ability to ensure the highest-quality streaming experience. If it’s completed within a harness, software engineers can ensure datasets connect within the data lake at the time of API connection. As a result, operations engineers will always use the same tool, the same familiar dashboards, and the same comfortable processes, so that viewers always receive the same great service.
Want to know more about how to monitor streaming data efficiently with Touchstream? Contact us here.