The Day Your Streaming Video Workflow Meets the Cloud

Written by Brenton Ough

May 31, 2021

Most telecommunications companies and network operators that embraced streaming did so from a broadcast perspective: they kept all of the technology and equipment behind their firewall and under their own roof. It wasn't that cloud alternatives didn't exist; it was that on-premises operation was what they were used to. Internal processes were built around monitoring network hardware, not SaaS solutions.

But to operate a scalable streaming service, with potential viewers in geographic regions well outside the normal radius, equipment can't be confined to localized data centers. That model doesn't provide the elasticity needed to support sudden audience growth, or the low latency demanded by use cases like live sports and interactive experiences. Bringing every request back to a centralized location goes against the very distributed nature of streaming. What's more, as streaming continues to evolve and operators transition to newer technologies, many components in the streaming stack are becoming virtualized or even serverless. Consider encoding: boxes once confined to racks in the data center have found their way into the distributed fabric of global computing.

So how do you know when to virtualize a workflow component? How do you know when to replace one cloud-based component with a different distributed version? We’ll take a look at that, and more, in this article.

When the Cloud Makes Sense

The migration of streaming video components to the cloud, and from one cloud technology (such as virtualization) to another (like serverless functions), is a natural evolution of streaming architectures, driven by the need for scalable and resilient services for a global user base. But migrating technologies shouldn't be taken lightly. Yes, the architecture needs to grow efficiently and effectively with audience demand, but it may not make sense for every component to be virtualized, turned into a microservice, or made into a serverless edge function. It's not a one-size-fits-all approach.

The first step is to determine the operational benefit of migrating the component. Will it have a meaningful impact on key metrics such as video startup times, rebuffer ratio, and bitrate changes? Also, will transitioning the component make it easier to support? If the answer is “yes” to both questions, then it probably makes sense to plan a path for migration.
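
To make that first step concrete, here is a minimal sketch, in Python, of what the "meaningful impact" check might look like. The metric names, the 5% improvement threshold, and the QoeSnapshot structure are illustrative assumptions, not part of any particular monitoring product:

```python
# A minimal sketch of the "operational benefit" check, assuming you can pull
# aggregate QoE metrics for both the current and candidate component. All
# names and thresholds here are illustrative.
from dataclasses import dataclass

@dataclass
class QoeSnapshot:
    startup_time_ms: float   # median video startup time
    rebuffer_ratio: float    # rebuffering time / total watch time
    bitrate_switches: float  # average bitrate changes per session

def migration_worthwhile(current: QoeSnapshot, candidate: QoeSnapshot,
                         min_improvement: float = 0.05) -> bool:
    """True if the candidate improves every key metric by at least
    `min_improvement` (5% by default). Lower is better for all three."""
    pairs = [
        (current.startup_time_ms, candidate.startup_time_ms),
        (current.rebuffer_ratio, candidate.rebuffer_ratio),
        (current.bitrate_switches, candidate.bitrate_switches),
    ]
    return all(new <= old * (1.0 - min_improvement) for old, new in pairs)

# Example: compare the on-prem path against a cloud trial (made-up numbers).
on_prem = QoeSnapshot(startup_time_ms=1800, rebuffer_ratio=0.012, bitrate_switches=3.1)
cloud = QoeSnapshot(startup_time_ms=1500, rebuffer_ratio=0.009, bitrate_switches=2.6)
print(migration_worthwhile(on_prem, cloud))  # True in this illustrative case
```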

The second step, though, can complicate that path: determining how to monitor the new version. When the migration is from hardware to software, or from software to cloud, this can be a significant challenge, as the transition could involve an entirely new approach (such as replacing hardware probes with software versions, a transition in and of itself). Of course, having a monitoring harness in place makes things much easier: the new version can be programmatically connected to the harness, enabling operations to continue using existing dashboards and visualizations. If no harness has been established, though, understanding the monitoring implications of the technology transition is critical before continuing down the migration path. Without a way to integrate the new version into existing monitoring systems, achieving observability becomes far more difficult.
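
As a rough illustration of the harness idea, the sketch below shows an adapter that maps a cloud component's native telemetry onto the metric names the existing dashboards already understand. The push_metric callback, the stat fields, and the metric names are all hypothetical:

```python
# A hedged sketch of wiring a new, cloud-based component into an existing
# monitoring harness. `push_metric` and all field names are hypothetical;
# the point is the adapter pattern: translate the new component's telemetry
# into the schema your dashboards already expect.
from typing import Callable

def make_cloud_adapter(push_metric: Callable[[str, float, dict], None]):
    """Return a callback that maps the cloud encoder's native stats onto
    the metric names the existing dashboards were built around."""
    def on_cloud_stats(stats: dict) -> None:
        labels = {"component": "encoder", "deployment": "cloud"}
        # Reuse the same metric names the hardware probes emitted, so
        # existing dashboards and alerts keep working without modification.
        push_metric("encoder.input_fps", stats["ingest_frames_per_sec"], labels)
        push_metric("encoder.output_bitrate_kbps", stats["egress_kbps"], labels)
        push_metric("encoder.dropped_frames", stats["frames_dropped"], labels)
    return on_cloud_stats
```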

The Reality of Parallel Workflows

There are many considerations when transitioning to a new version of a workflow technology. Operationalization is obviously critical: can it be monitored, managed with existing CI/CD pipelines, and so on? But there's also the business to consider. The streaming platform, whether just out of the gate or well established, has paying subscribers: viewers with expectations of consistency, quality of experience, and reliability. This means an operator can't simply cut over from on-premises equipment or software to a cloud-based version, or replace one cloud-based version with another. The two versions must operate side by side so the monitoring infrastructure, built to handle the previous version, can be adapted and modified, as the sketch below illustrates. There shouldn't be a complete cut-over until it's been established that the new, cloud-based version can support existing viewership.

[Infographic: overview of two different streaming video workflow strategies, cut-over vs. parallel]
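
One common way to realize the parallel strategy is a weighted traffic split that ramps the cloud share gradually. The sketch below assumes hypothetical origin URLs and a per-session routing hook; in practice this logic would live in a load balancer, CDN configuration, or DNS policy:

```python
# An illustrative sketch of the parallel-workflow idea: route a small,
# adjustable share of sessions to the cloud version while the rest stay on
# the proven on-prem path. Endpoints and the routing hook are placeholders.
import random

LEGACY_ORIGIN = "https://origin.onprem.example.com"  # placeholder URL
CLOUD_ORIGIN = "https://origin.cloud.example.com"    # placeholder URL

def pick_origin(cloud_share: float) -> str:
    """Send `cloud_share` (0.0-1.0) of new sessions to the cloud workflow."""
    return CLOUD_ORIGIN if random.random() < cloud_share else LEGACY_ORIGIN

# Ramp plan: start small, raise the share only after QoE metrics hold steady.
for share in (0.05, 0.25, 0.50, 1.00):
    sample = [pick_origin(share) for _ in range(10_000)]
    print(share, sample.count(CLOUD_ORIGIN) / len(sample))
```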

The Old Is New Again

As hardware becomes virtualized and software moves from servers to cloud-based instances, operators might be inclined to decommission the old equipment or the previous version of a component. But there might actually be benefits to keeping it around.

The cloud, although highly distributed, isn’t disaster-proof. Even though your video technology stack components are now scalable and elastic, the cloud can still go down. There have been documented instances of even large cloud providers, like Amazon, going offline for a couple of hours. 

Remember those encoding boxes, stacked to the ceiling? Rather than dismantling them, they can serve as a critical part of a redundancy strategy. If there is a cloud failure, requests can fall back to the older hardware-based components in your data centers. This could include purpose-built boxes, like encoders, or more commodity boxes, such as caches. Yes, viewers may experience a bit more lag (although employing commercial CDNs as part of delivery can help mitigate that) and it will cost you some extra coin to run both systems simultaneously, but at least the content will continue to flow until cloud resources can be brought back online. You can even deliver a small percentage of production traffic out of the redundant system (keeping it "warm") to regional subscribers so there are no latency issues with request response times.
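
Here is a minimal sketch of that warm-standby routing logic, assuming a placeholder health probe and hypothetical endpoints; the warm-traffic percentage and the actual health checks would come from your own operational requirements:

```python
# A minimal sketch of the warm-standby idea: keep a trickle of production
# traffic on the old hardware so it stays exercised, and fail over to it
# entirely if the cloud path stops responding. `is_healthy` and both
# endpoints are placeholders.
import random

CLOUD_ENDPOINT = "https://encode.cloud.example.com"  # placeholder
ON_PREM_ENDPOINT = "https://encode.dc1.example.com"  # placeholder
WARM_TRAFFIC_PCT = 0.02  # ~2% of requests keep the standby "warm"

def is_healthy(endpoint: str) -> bool:
    """Stand-in for a real health probe (HTTP check, synthetic stream, etc.)."""
    return True  # assume healthy in this sketch

def route_request() -> str:
    if not is_healthy(CLOUD_ENDPOINT):
        return ON_PREM_ENDPOINT  # full failback to the data center
    if random.random() < WARM_TRAFFIC_PCT:
        return ON_PREM_ENDPOINT  # small warm share, e.g. regional subscribers
    return CLOUD_ENDPOINT
```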

Embracing Transition

Being in a constant state of transition, moving from one version of streaming technology to another, is just part of the reality now. But it's more than just migrating protocols or codecs. It's also about virtualization and serverless functionality. It's about embracing the cloud, the edge, the mist, and the fog. Transition within streaming workflows is about the very nature of the technology. Building and providing a future-proof service means accepting that video technology stack components are going to change not only in functionality but in their very nature. And when you decide to migrate from one instantiation, like hardware, to another, like the cloud, you can't just cut and run. The existing technologies must remain running until the newer version has been proven and can be integrated into existing monitoring frameworks (such as a monitoring harness). When that's done, the old versions can become part of a redundancy strategy so that viewers are never left without the content they pay for.
