Why Streaming Microservices Are the Future of OTT


There is no arguing that the cloud and streaming microservices have radically changed the way operators engineer their streaming services. Traditional broadcasting involves lots of hardware racked and stacked in data centres because those companies serve viewers only within their geographic reach. Streaming, though, is a loosely federated collection of different technologies which can be installed and operated from anywhere in the world. This makes streaming technology ideal for serving viewers wherever they wish to watch content.

Although this approach to delivering content represents the future of how people watch video, it also introduces a host of new challenges, chief among them scale. Unlike in traditional broadcast, massive numbers of simultaneous users can overwhelm resources if there isn't enough capacity. That's what makes the cloud so important to streaming.

Streaming product developers and operations engineers understand the need for the cloud, which is why most of the stack is already there: encoders, transcoders, DRM servers, caches, monitoring probes, etc. Everything that can be virtualised has been, so that delivery capacity is dynamic. The stack can scale up and down depending on how many users are requesting content and at what bitrate, something that is impossible to achieve with hardware on the same timeline and with the same elasticity.

The cloud is the only feasible way streaming operators can meet regional and global demand for content without spending unpredictably high amounts of money on physical infrastructure. In that sense, the next step towards true scalability and redundancy is streaming microservices.

The cloud has evolved

Although the streaming video tech stack is an evolution of the broadcast tech stack, it too is evolving because of how the cloud is changing. When streaming operators first adopted the cloud as their primary infrastructure, it was all about virtualised instances. What they realised was that it was much easier (and cheaper) to manage, maintain, and monitor virtualised infrastructure. For example, the number of server instances could be increased programmatically in relation to demand. That's a stark contrast to using physical servers, which need hands to rack and stack.
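As a rough illustration of what "programmatically in relation to demand" can mean, here is a minimal sketch using boto3 against an AWS Auto Scaling group. The group name "transcode-workers" and the sizing rule are hypothetical:

```python
# A minimal sketch of demand-driven instance scaling, assuming boto3 and
# an AWS Auto Scaling group named "transcode-workers" (hypothetical).
import boto3

autoscaling = boto3.client("autoscaling")

def scale_for_demand(concurrent_viewers: int) -> None:
    # Hypothetical sizing rule: one instance per 5,000 concurrent viewers,
    # with a floor of two so the service is never cold.
    desired = max(2, concurrent_viewers // 5000)
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="transcode-workers",
        DesiredCapacity=desired,
        HonorCooldown=True,
    )

scale_for_demand(concurrent_viewers=42_000)
```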

The problem is, virtualisation doesn't provide the kind of scale that streaming really needs. Spinning up a new server instance still takes quite a bit of time and, in some cases, a reservation with the cloud provider (there are only so many instances available for specific configurations).

Today, the cloud is less about virtualised servers and more about containers.

Containers package an application with just the slice of the operating system it needs, sharing the host's kernel rather than running a full OS. Using Docker and Kubernetes, streaming engineers can quickly and easily scale technology components within the workflow. With DevOps and CI/CD pipelines, streaming technology teams have far more control over the elasticity of their infrastructure and the deployment of the technologies in the stack. Still, scale is a challenge. Yes, containers are better at scale than virtualised instances, but there are resource constraints.

Think about it like this: a bare metal server used by a cloud service provider may be able to host 100 virtual machines, but the same machine can host 1,000 containers.
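And scaling a containerised component is a one-line API call. Here is a minimal sketch using the official kubernetes Python client; the Deployment name "origin-cache" and the "streaming" namespace are hypothetical:

```python
# A rough sketch of scaling a containerised workflow component with the
# official kubernetes Python client. Deployment and namespace names are
# hypothetical.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster
apps = client.AppsV1Api()

def scale_component(name: str, namespace: str, replicas: int) -> None:
    # Patch only the replica count; Kubernetes schedules the containers
    # wherever the cluster has capacity.
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

scale_component("origin-cache", "streaming", replicas=12)
```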

Why you should use containers and streaming microservices

Many streaming operators, if not most, have embraced DevOps and containers to develop and deploy their technology stack. However, this has only worked up to a point, because the tech stack keeps growing in complexity. Device proliferation, coupled with non-standard protocols and codecs, means a fragmented workflow in which software is continually expanded to handle the complexity. The result is bigger, fatter containers, which is exactly the opposite of the value containers are supposed to provide: lighter, slimmer, and more focused deployment.

Although virtualising streaming workflow components into containers is definitely more scalable than using instances, scaling even more effectively is going to require a different use of them. Think about it this way: say you are using NGINX as your cache, and you know it ships with a lot of features that simply aren't needed to support that functionality. What if you could build the open source version of NGINX with only the features you need, reducing its footprint and keeping the container light?

Reducing workflow components into functions within containers would result in a new kind of development methodology. Instead of complete applications within the containers, you would have just the required features of the application.

You would have microservices.
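To make that concrete, here is an illustrative sketch of a microservice-sized feature: a single-purpose function that drops variant streams above a bandwidth ceiling from an HLS master manifest, rather than a full proxy or cache application. The function itself is hypothetical; the #EXT-X-STREAM-INF tag format is standard HLS:

```python
# An illustrative, microservice-sized feature: cap the bitrate of an HLS
# master manifest by removing variants above max_bandwidth.
def cap_manifest_bitrate(manifest: str, max_bandwidth: int) -> str:
    out, skip_next = [], False
    for line in manifest.splitlines():
        if skip_next:                      # skip the URI line of a dropped variant
            skip_next = False
            continue
        if line.startswith("#EXT-X-STREAM-INF"):
            attrs = line.split(":", 1)[1]
            bandwidth = 0
            for attr in attrs.split(","):
                if attr.startswith("BANDWIDTH="):
                    bandwidth = int(attr.split("=", 1)[1])
            if bandwidth > max_bandwidth:
                skip_next = True           # drop this variant and its URI
                continue
        out.append(line)
    return "\n".join(out)
```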

The benefits of using streaming microservices 

Although the transition from physical hardware to cloud infrastructure improves scalability, streaming operators still face challenges in meeting their core objective of delivering a high-quality video experience at scale. Here are just a few of them:

Shifting demand. Viewers watch content at different times. In broadcast, the infrastructure never changes because a single transmission serves any number of viewers. In streaming, though, the infrastructure must grow at certain times of day and shrink during others to reduce cost. Failing to spin up enough infrastructure results in unhappy users (read: refunds, cancellations, churn, loss of revenue). Spinning up too much means paying for capacity that sits idle.

Changing requirements. New protocols, formats, and devices require changes to the technology stack to ensure video content can be delivered to users. In streaming, these changes happen extremely rapidly. Because there’s no way to predict where users will be when they watch content, it’s impossible to predict where to put all of the workflow components. Put them too far away from requesting users, and latency becomes an issue. Deploy them everywhere, and cost needlessly increases.

Global expansion. Unless specifically prevented by licensing agreements, streaming content can be delivered to anyone, anywhere in the world, and that's exactly what is happening. Not only are streaming operators expanding into new geographies (which is made much easier without physical infrastructure), their viewers are travelling more as well. With too much fixed infrastructure, viewers may have a much worse experience outside their normal geography.

Those challenges are what make containers ideal for the streaming tech stack. They can be deployed easily across any cloud provider in any geography, and the technologies within the containers can be iterated, reimaged, and redeployed quickly.

But if the software within the container is too bloated, it can create unforeseen problems. First, the containers are larger. They take up more memory, storage, and compute resources (so you can deploy fewer of them within the same footprint). Second, they become harder to manage. It's far more difficult to troubleshoot a large program, like NGINX, than a smaller, single-purpose application. Finally, performance can suffer. Just as in a virtualised instance, unnecessary features may be taking up cycles within the container.

Shifting away from seeing containers as just an easier way to manage the deployment of virtualised services, and towards seeing them as a way to deploy just a sliver of functionality, also mirrors how the cloud is changing. Not only is density increasing, but so is scope. Providers are not just adding more capacity; they are expanding outward to include everything from cell towers, to home gateways, to the devices themselves. The cloud is a thickening, massive resource of computing power, making it harder and harder to see the bare metal underneath. Think of it as a fabric, a stretchy one.

Scalability requires being able to quickly expand capacity and availability. Because microservices are extremely lightweight, they can be spread throughout that fabric, ensuring that functionality is available where it's needed rather than contained in specific geographic spaces. This makes possible several critical aspects needed for tomorrow's streaming services.

Cost-effective elasticity

Part of the challenge with instances or containers is planning. In many cases, operators need to provision their cloud instances weeks in advance. Although the cloud appears to have infinite resources, it doesn't. Capacity is bounded by the number of physical servers, each of which can support only so many virtual machines or containerised services. As such, instances and capacity must be requested in advance, especially when the services to be deployed require very specific configurations that may be in short supply.

With a streaming microservice architecture leveraging serverless functions, the idea of reserving instances is gone.

Elasticity can happen almost as organically as requests arrive. This is because the cloud architecture model in this future shares resources across all of the physical infrastructure. As a result, scaling up may consume just a fraction of the shared resources rather than a dedicated server.
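As a hedged sketch of what this looks like in code: a serverless function that the platform scales per request, with no instances reserved in advance. The handler signature follows AWS Lambda's Python convention; the playback-token logic is purely illustrative:

```python
# A sketch of the serverless model: each invocation gets its own slice of
# shared capacity, with nothing for the operator to provision or tear down.
import json

def handler(event, context):
    viewer_id = event.get("viewerId", "anonymous")
    token = f"session-{viewer_id}"  # placeholder for real token issuance
    return {
        "statusCode": 200,
        "body": json.dumps({"playbackToken": token}),
    }
```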

Improved operationalisation

Microservices are, by nature, loosely coupled, and the architecture is self-healing. This means that if a component in the workflow goes down, redundant functions can pick up the load (they can even be deployed in alternate clouds) and continue to scale as normal. There is nothing to "tip over," which puts less stress on operations. Without having to worry about capacity issues tied to interfaces on specific services (unlike when a workflow component is deployed on a server whose NIC has a traffic ceiling), operations can focus fully on service optimisation to improve QoE.
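A simplified sketch of that loose coupling: if the primary packager endpoint fails, a redundant deployment, here in an alternate cloud, picks up the request. The endpoints and asset parameter are hypothetical:

```python
# A minimal failover sketch: try each redundant deployment in turn.
import requests

ENDPOINTS = [
    "https://packager.eu-west.example-cloud-a.com/manifest",
    "https://packager.eu-west.example-cloud-b.com/manifest",  # alternate cloud
]

def fetch_manifest(asset_id: str) -> str:
    last_error = None
    for endpoint in ENDPOINTS:
        try:
            resp = requests.get(endpoint, params={"asset": asset_id}, timeout=2)
            resp.raise_for_status()
            return resp.text
        except requests.RequestException as err:
            last_error = err  # fall through to the next redundant deployment
    raise RuntimeError(f"all packager endpoints failed: {last_error}")
```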

More agile development

Unlike with monolithic software, such as applications that have simply been virtualised, the development of microservices can happen much faster.

Set within the DevOps CI/CD pipeline, these services can be iterated quickly, deployed into sandbox environments for testing/QA, and then promoted to production within weeks rather than months. What's more, as component functionality continues to break down into constituent microservices, development time can continue to drop. This means that small, but sometimes very impactful, iterative changes to component operation can be developed, tested, and deployed in just days.

Streaming microservices are more than just a development methodology

Microservices are the future of all highly scalable and elastic systems.

This is because they represent a fundamental shift in the nature of programming: the evolution from static to fluid. Where virtualised instances and even containers are relegated to memory and compute spaces on individual machines, microservices encapsulated in serverless and cloud functions can exist anywhere. They can even be spread across multiple physical machines as the cloud becomes more of a fabric that can itself expand fluidly (especially as cloud providers re-architect their offerings using infrastructure-as-code and NFV, which themselves follow microservice models).

Embracing microservices as the future of streaming architectures means breaking free from the static infrastructure paradigm and adopting the idea of a single fabric of distributed resources. Workflow components (what we often call "features" in traditional software speak) are broken down into smaller pieces of code and deployed across compute, storage, and memory. In essence, this delivers the same larger functionality, but in a much more liquid state.

Depending on where you are in this evolution of cloud and streaming video technology deployment, you have some decisions in front of you. You can choose to embrace the cloud and containers, or not. If you already have an established DevOps and CI/CD pipeline, you can choose to keep expanding the features of software within containers, or reduce them to bare-minimum, purpose-built functions. Either way, you are at an inflection point that may have profound consequences on your ability to provide a high-quality video experience at scale.

Whatever you decide to do, it’s clear that microservices are a stepping stone towards even deeper integration with the compute fabric. Breaking apart workflow components into a smattering of self-contained services (which can be containerised) is part one. Converting those containerised services into serverless functions is the next step. 

Microservices monitoring best practices in the OTT industry

Of course, shifting to a microservice-based architecture isn't something that should, or will, happen overnight. But it's already happening, and if you haven't started, you're behind. The decision to embrace it as the future of your tech stack deployment needs to be aligned with other operational needs. First and foremost is monitoring.

This requires understanding the challenges of monitoring microservices: how to effectively monitor a workflow component that has been broken into functions, and how to connect those functions together into a meaningful picture of the health and performance of the component as a whole, one that can be used to drive operational improvement (or troubleshooting).

In that sense, deploying a monitoring harness helps, but even that is a new way of looking at monitoring that must be carefully evaluated before deployment.
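To show the shape of the problem, here is a minimal sketch of stitching per-function telemetry back into a component-level view. It assumes each function emits events tagged with a shared request_id; all field names are illustrative:

```python
# Sum per-function latencies by request to approximate the end-to-end
# latency of the workflow component those functions used to form.
from collections import defaultdict

def component_latency(events: list[dict]) -> dict[str, float]:
    totals: dict[str, float] = defaultdict(float)
    for event in events:
        totals[event["request_id"]] += event["latency_ms"]
    return dict(totals)

events = [
    {"request_id": "r1", "function": "auth", "latency_ms": 12.0},
    {"request_id": "r1", "function": "manifest", "latency_ms": 38.0},
    {"request_id": "r1", "function": "drm", "latency_ms": 20.0},
]
print(component_latency(events))  # {'r1': 70.0}
```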

Whether you jump into streaming microservices now or next year, you should start testing. In fact, the best way to truly understand the impact of such an evolutionary change to your tech stack is to run a microservices-based architecture in parallel with the current one. That way, you can do side-by-side comparisons on key metrics to understand just what the performance, scalability, and cost benefits are. 
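A simple sketch of what that side-by-side comparison might look like, with the same key metrics sampled from the current stack and the parallel microservices stack. The metric names and values below are made up purely to show the shape of the exercise:

```python
# Illustrative parallel-run comparison; all numbers are placeholders.
baseline = {"startup_time_s": 3.1, "rebuffer_ratio": 0.012, "cost_per_hour": 41.0}
candidate = {"startup_time_s": 2.4, "rebuffer_ratio": 0.009, "cost_per_hour": 28.5}

for metric in baseline:
    delta = candidate[metric] - baseline[metric]
    print(f"{metric:>16}: {baseline[metric]:>7} -> {candidate[metric]:>7} ({delta:+.3f})")
```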

Just don't put off the decision and let the evolution of the cloud, and of your architecture, pass you by. Doing so may mean that your customers pass you by too, for a service that gives them a more consistent experience, no matter where they are.
