The One Thing You Must Have To Ensure A Great Streaming Experience

Engaging content? A good recommendation engine? The ability to connect on-prem, cloud, and third-party data sources? AI to automate basic tasks based on data thresholds? Sure, all of those are good and even, in most cases, necessary to make quick business decisions when there’s an issue with the streaming video experience. But there’s something even more basic that’s needed: a fundamental capability which gives your operations team the ability to effect change across your workflow.

Observability.

Why Visibility Isn’t Enough

Before we explain observability, let’s start with visibility.

Imagine you are looking out the window of a cabin retreat. What might you see? One tree? Two trees? Three? A mountain. A rock. Maybe even a deer? Those things you see, though, are just visible elements within your field of vision. When you have visibility, that’s all you get: the individual pieces. For streaming video monitoring, that means you have dozens of screens on the walls of your NOC displaying different sources of data, such as CDN performance, geographic availability, and uptime. This data can come from logs, programmatic connections to components in your video streaming technology stack, or even third parties.

Monitoring, as we often describe it, with screens of data, is just visibility. And visibility isn’t enough in and of itself.

Having visibility into all the data is hugely important. Without it, your streaming video monitoring would truly be guesswork. But just being able to see doesn’t provide operations engineers with any insight. Even if different data sources are packaged into visualization interfaces which perform calculations and display an indicator when something is wrong, it’s not enough. That’s because those calculations are missing other relevant data sources: data from different components within the streaming technology stack that may have a bearing on the problem. It’s not that you lack visibility into those other data sources; it’s that your operations engineers have to figure out the relationships between them all manually.

The ultimate problem with having only visibility is just that: issues in your streaming workflow and viewer experience are visible, but disconnected.


What You Need is Observability

Now it’s time to define observability.

Let’s return to the previous example of looking out the window of your cabin retreat. Whereas visibility only shows you the elements, observability connects them. Looking at the trees, the mountain, and the deer, you observe the delicate balance of an ecosystem. You make the observation that all of those things work together to support each other, and that an imbalance in one, say too many deer, would cause problems for other parts of the ecosystem: the trees would be decimated and, as they died, the mountain would be left more open to erosion. Observability means drawing conclusions from seeing all of the data elements and the relationships each data set, like the trees, has to the others, such as the mountain.

Consider the following problem: a viewer experiences a degradation in video quality. The hunt is on to find the root cause. The data from the player shows how it transitioned to successively lower bitrates. That’s what you have in terms of player visibility. But what caused the transition? More player data, again visibility, shows there is bandwidth available. Yet you still can’t draw a conclusion from this. There’s nothing observable. So you look at the CDN logs, yet another source of data. In the logs, you can see all of the bitrates are in the cache and the cache is responding correctly to player requests. Each data source looks healthy in isolation, yet the viewer’s experience has clearly degraded.

With visibility, you can see each data source, but independently of the others. With observability, those elements become connected. In the example of degraded video quality, you wouldn’t even have to examine the CDN logs: the encoder would have already thrown an error, which your monitoring system would have picked up on, correlated with the errors in the player session data, and displayed as an indicator.
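To make that correlation concrete, here is a minimal sketch. The event shapes, field names, and 30-second window are illustrative assumptions, not any specific product’s API; the idea is simply to tie an encoder error to a player quality drop on the same stream within a short time window.

```python
from datetime import datetime, timedelta

# Illustrative event records; in practice these would be parsed from
# encoder logs and player session data (all field names are assumptions).
encoder_events = [
    {"ts": datetime(2024, 5, 1, 12, 0, 3), "stream": "ch7", "error": "frame_drop"},
]
player_events = [
    {"ts": datetime(2024, 5, 1, 12, 0, 9), "stream": "ch7", "event": "bitrate_downshift"},
]

WINDOW = timedelta(seconds=30)  # how close in time events must be to correlate

def correlate(encoder_events, player_events, window=WINDOW):
    """Pair each player quality event with encoder errors that occurred
    on the same stream shortly before it."""
    matches = []
    for p in player_events:
        for e in encoder_events:
            same_stream = e["stream"] == p["stream"]
            just_before = timedelta(0) <= p["ts"] - e["ts"] <= window
            if same_stream and just_before:
                matches.append((e, p))
    return matches

for enc, ply in correlate(encoder_events, player_events):
    print(f"{ply['event']} on '{ply['stream']}' correlates with encoder error '{enc['error']}'")
```

A real monitoring system would do this continuously across every data source, but the principle is the same: correlation turns two visible facts into one observable conclusion.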

Building for Observability

Observability isn’t something that just happens. Yes, conclusions can be drawn manually by clever engineers looking at siloed datasets through different visualization tools. But a good monitoring strategy and architecture, with a flexible harness for connecting data from any source within the workflow, makes observability happen. It doesn’t leave it to human intuition or interpretation. It takes all that data and connects it, correlates it, and visualizes it in a way that makes observability clear. Imagine being able to see your streaming workflow from end to end, with clear visual indicators that identify when something is degrading or failing. Think of it as green, yellow, and red lights. The key, though, is that all of the data is related, meaning operations engineers can easily observe problems and dig into details as needed.
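As a sketch of the traffic-light idea, consider the snippet below. The components, error-rate thresholds, and roll-up rule are all illustrative assumptions; the point is that an end-to-end view reduces each component, and the workflow as a whole, to a simple green, yellow, or red state.

```python
from enum import Enum

class Status(Enum):
    GREEN = 0   # healthy
    YELLOW = 1  # degraded
    RED = 2     # failing

def component_status(error_rate: float) -> Status:
    """Map a component's error rate to a light; thresholds are illustrative."""
    if error_rate < 0.01:
        return Status.GREEN
    if error_rate < 0.05:
        return Status.YELLOW
    return Status.RED

# Hypothetical error rates per workflow component.
workflow = {"encoder": 0.002, "origin": 0.000, "cdn": 0.070, "player": 0.030}
statuses = {name: component_status(rate) for name, rate in workflow.items()}

# The overall light is the worst component light, so a failing CDN
# turns the whole end-to-end view red.
overall = max(statuses.values(), key=lambda s: s.value)

for name, status in statuses.items():
    print(f"{name:8s} {status.name}")
print(f"overall  {overall.name}")
```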

For observability to be available to anyone in the organization, for conclusions to be presented as part of the act of monitoring, the monitoring system itself must do some of the heavy lifting. Observability should be embraced from the outset, during the development of the monitoring strategy. With that in hand, decisions about building the monitoring harness will be rooted in enabling observation, not just visibility. These decisions include selecting tools and partners that provide programmatic access to data. They include choosing a data platform, typically a log analytics tool, which can help normalize incoming data and enable correlations. And they include developing interfaces which visualize those observations so that engineers, operations, product managers, and anyone in the organization can act quickly to ensure a consistent and reliable experience for viewers.
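To illustrate that normalization step, the sketch below maps records from two hypothetical sources into one common shape so downstream correlation and visualization can treat them uniformly. Every field name here is invented for illustration; real sources would be mapped according to whatever schema your log analytics platform expects.

```python
# Field names on both sides are invented for illustration; real sources
# would be mapped into your own common schema.
def normalize_cdn(rec):
    return {"source": "cdn", "ts": rec["timestamp"], "stream": rec["asset_id"],
            "metric": "cache_status", "value": rec["status"]}

def normalize_player(rec):
    return {"source": "player", "ts": rec["time"], "stream": rec["content_id"],
            "metric": "bitrate_kbps", "value": rec["kbps"]}

raw_records = [
    ("cdn", {"timestamp": 1700000000, "asset_id": "ch7", "status": "HIT"}),
    ("player", {"time": 1700000005, "content_id": "ch7", "kbps": 1800}),
]

normalizers = {"cdn": normalize_cdn, "player": normalize_player}
events = [normalizers[source](rec) for source, rec in raw_records]

# With everything in one shape, correlation is just grouping by stream.
for event in events:
    print(event)
```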

To find out more, download our monitoring harness white paper.

Get Whitepaper