
How Locally Embedded Caches Fix the Capacity Problem for ISPs (and Why Content Providers Should Care)

Introduction

Whether you are a content provider or an Internet Service Provider (ISP), live streaming events likely cause some heartburn. Why? Bandwidth and quality. As bandwidth becomes constrained, viewing quality often degrades for subscribers. Unfortunately, bandwidth constraints are both an inherent property of the network and a natural result of streaming itself.

Two Challenges for ISPs

Two challenges loom for ISPs. First, ISPs can experience bandwidth capacity issues, especially if they’re further downstream. Typically, smaller ISPs buy bandwidth from larger ISPs. Second, most content today is delivered using unicast, a one-to-one streaming method, which is a much bigger obstacle to overcome. With unicast streaming, every single viewer gets their own content stream, even if they are watching the same content as their neighbor.

The result? Constrained bandwidth and a negative impact on Quality of Experience (QoE) metrics (the sketch after this list shows how these are typically measured):

  • Time-to-First-Byte increases, meaning slower start-up times.
  • Rebuffering rate increases, so the spinning ball shows up more often and stays longer.
  • Average bitrate goes down, meaning lower picture resolution for the viewer (e.g. getting a standard-definition version when you expect 1080p) and, in the worst case, dropped packets (e.g. the stream cuts off and you have to press play again).
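
To make these metrics concrete, here is a small Python sketch that computes them for a single viewing session. The field names and the numbers are illustrative assumptions, not any particular player's telemetry format.

```python
from dataclasses import dataclass

@dataclass
class Session:
    """Hypothetical per-viewer playback telemetry (illustrative fields only)."""
    startup_seconds: float   # time from pressing play to the first frame
    stall_seconds: float     # total time spent rebuffering ("spinning ball")
    watch_seconds: float     # total time actually watching video
    bitrates_kbps: list      # bitrate of each segment the player downloaded

def qoe_summary(s: Session) -> dict:
    return {
        "startup_time_s": s.startup_seconds,
        # Fraction of the session lost to rebuffering.
        "rebuffer_ratio": s.stall_seconds / (s.stall_seconds + s.watch_seconds),
        # Average delivered bitrate across downloaded segments.
        "avg_bitrate_kbps": sum(s.bitrates_kbps) / len(s.bitrates_kbps),
    }

# A congested session: slow start, frequent stalls, mostly SD-quality segments.
print(qoe_summary(Session(6.2, 45.0, 1800.0, [1200, 1200, 2500, 1200, 1200])))
```

In this made-up congested session, the viewer waits over six seconds to start, loses about 2.4% of their session to rebuffering, and averages roughly 1.5Mbps, well below the 4.5Mbps top rendition they asked for.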

One common solution for ISPs is to purchase more upstream capacity on an as-needed basis. This approach, however, is not an ideal solution, as it can be expensive and complex.

What About Content Providers?

Content Providers, for their part, have no direct control over last-mile bandwidth or the viewing quality it produces. Effectively solving the problem requires a better approach than simply buying more bandwidth. Locally embedded caching is one newer approach that can benefit both ISPs and Content Providers, and this article explains how.

How Upstream Capacity Impacts QoE

Let’s do some quick back-of-the-napkin math to describe what happens in an ISP network.
  1. An upstream network provides an ISP with 10Gbps capacity.
  2. The ISP has 10,000 subscribers.
  3. Twenty-five percent of those subscribers decide to stream a popular sporting event, like an NFL football game, at the start of the game.
  4. Because the ISP offers symmetrical gig fiber, and players always try to get the highest possible bitrate, these 2,500 subscribers are all requesting the same 4.5Mbps stream simultaneously.
2,500 × 4.5Mbps = 11,250Mbps, or just over 11Gbps

But it doesn’t stop there:

  1. Another 10% of subscribers join about 30 minutes into the game.
  2. There isn't much room to work with: the roughly 11Gbps of demand already exceeded the original 10Gbps of capacity, especially since other subscribers were requesting other internet content that also had to be backhauled from the upstream provider. Thankfully, the players responded by requesting lower bitrates (and buffering). Let's say they all dropped down to 2.5Mbps; between the streams and the other traffic, that leaves about 1Gbps of capacity.
  3. But these 1,000 new subscribers hit the highest bitrate first, requesting an additional 4.5Gbps of content.
  4. This means slower load times for the new viewers and forces players that were already streaming to downshift again, so everyone drops another bitrate step or two.
  5. Of course, this whole situation continues to fluctuate, not only as people drop their stream or request the live feed, but also as other subscribers engage with other internet content. It's a mess. (The short sketch below walks through the same arithmetic.)
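
For readers who want to check the numbers, here is the same scenario as a short Python sketch. The capacity, viewer counts, and bitrates are the figures from the example above; the 2.75Gbps of "other traffic" is an assumed figure, chosen so the remaining headroom works out to the roughly 1Gbps described in step 2.

```python
# Back-of-the-napkin capacity sketch. All figures are the illustrative
# assumptions from the scenario above, not measurements.

UPSTREAM_CAPACITY_GBPS = 10.0   # capacity bought from the upstream network
OTHER_TRAFFIC_GBPS = 2.75       # assumed non-streaming backhaul traffic

def streaming_demand_gbps(viewers: int, bitrate_mbps: float) -> float:
    """Aggregate demand if every viewer pulls their own unicast stream."""
    return viewers * bitrate_mbps / 1000.0

# Kickoff: 2,500 viewers all request the 4.5Mbps rendition.
kickoff = streaming_demand_gbps(2500, 4.5)          # 11.25 Gbps > 10 Gbps

# Players downshift to 2.5Mbps once the link saturates.
after_downshift = streaming_demand_gbps(2500, 2.5)  # 6.25 Gbps
headroom = UPSTREAM_CAPACITY_GBPS - after_downshift - OTHER_TRAFFIC_GBPS

# 30 minutes in: 1,000 more viewers join at the highest bitrate.
late_joiners = streaming_demand_gbps(1000, 4.5)     # 4.5 Gbps

print(f"Kickoff demand:       {kickoff:.2f} Gbps")
print(f"After downshift:      {after_downshift:.2f} Gbps")
print(f"Remaining headroom:   {headroom:.2f} Gbps")
print(f"Late joiners request: {late_joiners:.2f} Gbps more")
```

Running it prints an initial demand of 11.25Gbps against a 10Gbps link, about 1Gbps of headroom after the downshift, and 4.5Gbps of new demand from the late joiners: exactly the squeeze described above.
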
Despite not operating in the network, Content Providers are also affected by capacity issues. In most cases, Content Providers watch QoE metrics in real time during live streaming events like sports. The data they use to monitor QoE, however, usually comes directly from the player, which doesn't provide visibility into upstream capacity issues. When a subscriber is connected to a commercial CDN that is peered with a large ISP, the content provider's delivery team can easily triage an issue. But when a subscriber is two or three network hops away and the only data is from the player itself, tracking down where a problem occurred within the delivery pipeline becomes difficult.

This common scenario illustrates how upstream bandwidth issues negatively impact subscriber QoE: lower bitrates, more buffering, slower startup times, and even dropped packets. Ultimately, when QoE drops, subscribers get upset, which both streaming operators and ISPs know all too well.

How Caching Fixes The Capacity Problem (Without More Capacity)

Although ISPs can always buy more upstream capacity on an as-needed basis (usually at a premium), it's not a sustainable long-term solution. One big game every six months may be manageable, but live streaming video, particularly live sports streaming, is on the rise. According to Statista, over 159 million people watched live-streamed sports at least once per month in 2023, and industry experts predict this will increase exponentially. Buying capacity only when it's needed poses two problems: first, the capacity has to be available; second, it has to be affordable. Of course, ISPs can permanently increase their capacity, but that may not make good financial sense if current capacity is sufficient for average daily demand. As for Content Providers, there is one true solution: partner with downstream ISPs that deploy delivery caches in their last-mile networks.

The Benefits of Embedded Caches

By embedding caches in their network, rural ISPs can improve QoE KPIs without adding a single bit of capacity:

Request collapsing:

Because caches are HTTP servers, they can collapse many requests for the same content into a single upstream request for that content. So if 1,287 of those 2,500 subscribers request the same live sports stream at the same time, the cache only needs to pull each stream segment from upstream once, as the sketch below illustrates. The locally embedded cache delivers high-quality streams to individual subscribers, while upstream capacity remains open for other needs.
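
Here is a minimal Python sketch of the request-collapsing idea, assuming a toy in-memory cache. It is not how Varnish, NGINX, or Netskrt actually implement it; it only illustrates the core mechanism: the first request for a given URL fetches from upstream, and every concurrent request for the same URL waits on that one fetch instead of generating its own.

```python
import threading
import urllib.request

class CollapsingCache:
    """Toy in-memory cache that collapses concurrent requests for the same URL
    into a single upstream fetch. Real caches do this per video segment, with
    expiry and error handling; the principle is the same."""

    def __init__(self):
        self._lock = threading.Lock()
        self._cache = {}      # url -> response body already fetched
        self._in_flight = {}  # url -> Event that signals the fetch is done

    def get(self, url: str) -> bytes:
        with self._lock:
            if url in self._cache:
                return self._cache[url]        # local hit: zero upstream traffic
            event = self._in_flight.get(url)
            if event is None:                  # first request becomes the "leader"
                event = threading.Event()
                self._in_flight[url] = event
                leader = True
            else:                              # another request is already fetching
                leader = False

        if leader:
            body = urllib.request.urlopen(url).read()  # the one upstream request
            with self._lock:
                self._cache[url] = body
                del self._in_flight[url]
            event.set()                        # wake up everyone who was waiting
        else:
            event.wait()                       # wait for the leader's copy
        return self._cache[url]
```

With collapsing in place, those 1,287 simultaneous viewers translate into one upstream fetch per stream segment; the other 1,286 copies are served from the local cache over the ISP's own access network.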

Higher average bitrate:

Because streams are delivered over the symmetrical gig service the ISP already provides to subscribers (rather than backhauled through constrained upstream bandwidth), a locally embedded cache lets bitrates stay much higher.

Less buffering time:

Without the bitrate shifts that come from upstream demand exceeding capacity, subscribers get a much more consistent streaming experience. Rebuffering also largely disappears, since the content session is handled locally rather than travelling through the network, out to the internet, and back again.

Faster startup time:

When upstream capacity is maxed out, new requests for streamed content compete with one another, and some viewers see slower start times because their requests sit in a queue waiting for a response. With a locally embedded cache, this doesn't happen: the cache, rather than an upstream CDN, responds to requests for content.

Small Investment, Big Impact

Here’s the good news: investing in locally embedded caches won’t break the bank for rural ISPs, and Content Providers can partner with ISPs already utilizing embedded caching today.

For ISPs looking to implement a locally embedded cache, there are options. ISPs that want to be in the business of managing a CDN can build and operate their own caches using open source software like Varnish or NGINX. Now, ISPs and Content Providers have another option: partnering with a technology provider like Netskrt. With Netskrt, ISPs can implement a last-mile CDN to deliver the high-quality streaming video that both subscribers and Content Providers expect, regardless of traffic spikes or multi-hop network locations. More than just a cache, Netskrt appliances are fully serviced and intelligently managed, enabling content pre-positioning. In other words, Netskrt technology allows Content Providers to intentionally cache popular content so that upstream backhaul happens only once: when the content is loaded into the embedded cache.

ISPs that implement embedded caches reap many benefits, and Content Providers partnering with those ISPs can reap even more. With a last-mile CDN like Netskrt, ISPs can better serve their subscribers and their content provider partners, thanks to an easy, fully-managed service that’s constantly monitored for optimal performance.

By implementing locally embedded caches in hard-to-reach subscriber networks, ISPs and Content Providers can meaningfully improve overall viewer satisfaction, from watching the big game to accessing the big gaming download.

Check out the Netskrt whitepaper and find out how a regional ISP in New York State deployed Netskrt's last-mile CDN to improve viewer QoE during live sports streaming events.
