
How to Succeed as a Modern Enterprise: Breaking Down Content Distribution

By Doug Kennedy


Digital transformation is rapidly accelerating, which is enormously exciting as industries evolve for the future. But change does not come without some bumps in the road. One area that enterprises have struggled to figure out is efficient content distribution.

Content distribution isn’t something that people talk much about, or at least they didn’t prior to the pandemic. Problems like security, automation, and IT staffing issues garnered more attention in recent years. After all, mechanisms for content distribution have been around for a long time; all the kinks are worked out, right?

Not exactly. The desire to move to the cloud for greater flexibility and cost savings has made content distribution significantly more complicated. Problems get bigger and much more expensive – and issues like security, automation, and staffing become inextricably intertwined as enterprises grapple with network health.

So, how does content distribution work, and why is it essential to create successful modern environments?

The Importance of Efficient Content Distribution

Briefly, content distribution involves getting all the patches, applications, and other updates – Office 365 updates, third-party applications, internally developed applications, and even operating systems – to every endpoint that needs them. All of this content is essential to maintaining network security and system performance and, in the case of updates, to keeping systems bug-free and able to connect and interoperate properly with customer and partner systems.

Content delivery networks (CDNs) are used to deliver this vital content faster and reduce latency. A traditional HTTP-based CDN is made up of many servers, geographically distributed so that they sit between the origin server that holds the content and the users who request it. When a user requests content, the data is split into smaller chunks and served from the nearest of these servers, which caches it for the next user who requests the same content – and so on and so forth.
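
To make the mechanics concrete, here is a minimal sketch in Python of the chunked-fetch idea, assuming a set of hypothetical edge hosts and a server that honors HTTP Range requests (the host names and function are illustrative, not any CDN's actual API):

    import urllib.request

    EDGE_SERVERS = [
        "https://edge-us.example-cdn.com",   # hypothetical, geographically distributed edges
        "https://edge-eu.example-cdn.com",
    ]

    def fetch_in_chunks(edge, path, size, chunk=1 << 20):
        # Pull `size` bytes of `path` from one edge server in 1 MB pieces
        # using HTTP Range requests (assumes the server supports Range).
        parts = []
        for start in range(0, size, chunk):
            end = min(start + chunk, size) - 1
            req = urllib.request.Request(edge + path,
                                         headers={"Range": f"bytes={start}-{end}"})
            with urllib.request.urlopen(req) as resp:
                parts.append(resp.read())
        return b"".join(parts)

    # In a real CDN the client is routed to the nearest edge automatically;
    # here we would simply pick the first reachable host in EDGE_SERVERS.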

This sounds simple enough, and maybe it used to be. But the world changed when COVID-19 struck. With entire workforces leaving the corporate nest at the beginning of the pandemic, scheduled maintenance and regular content distribution no longer happened at the rate they should have, because updates had to travel over VPNs instead of traditional corporate networks.

Content distribution via VPN takes much longer and dramatically reduces network performance, if updates go through at all, which is sometimes difficult to ascertain. As a result, content distribution has often been deprioritized so that day-to-day work could continue.

This has put organizations in a precarious position. Bad actors know that system hygiene has been lax, and they’ve been more than willing to take advantage of the tiniest unpatched hole, conducting cyberattacks that can wreak havoc on an organization. Because of this, many enterprises have been looking to move to cloud-based environments, which has opened a whole new set of issues.

Content Distribution at Scale

Imagine 30,000 employees all trying to download a piece of content from the cloud at once. The time required and the disruption to processes would be staggering, not to mention that it would be difficult to ascertain whether every endpoint completed its download. This scenario is completely prohibitive, yet until relatively recently it was the only option. There had been no way for companies to reliably scale content distribution in the cloud, which is why so many organizations still rely on conventional servers or hybrid environments.

To overcome these obstacles and ensure that systems function as they should, many organizations now leverage peer-to-peer computing models, which can perform specific tasks remarkably quickly and at enormous scale. When leveraged for content distribution, peer-to-peer technology takes the unused compute and storage capacity that already exists on endpoint devices and uses it to store and share content without the need for local server infrastructure. It also minimizes traffic over the wide area network (WAN) and rapidly distributes content to the endpoints.
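
A rough sketch of that lookup order – subnet peers first, the WAN only on a cache miss – might look like the following; the Peer class and cdn_download callback are illustrative assumptions, not any vendor's API:

    class Peer:
        def __init__(self):
            self.cache = {}   # content held in spare storage on the endpoint

    def get_content(me, subnet_peers, content_id, cdn_download):
        for peer in subnet_peers:              # check the local subnet first
            if content_id in peer.cache:
                return peer.cache[content_id]  # served peer-to-peer, no WAN traffic
        data = cdn_download(content_id)        # cache miss: one download over the WAN
        me.cache[content_id] = data            # this peer can now serve everyone else
        return data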

Modern peer-to-peer content distribution systems are very smart. They identify the best source from which content can be retrieved via a single download, which then serves all the devices in the same subnetwork. Because of the intelligence built into the most advanced systems, they can recognize traffic running across the network and change course in advance to avoid congestion and conflicts with other enterprise traffic. They can also maintain a connection under even the most adverse conditions, completing the download regardless of network quality or how long it takes. These factors make content distribution via peer-to-peer far more reliable than other methods.
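
The "always completes" behavior boils down to resuming from the last byte received rather than restarting. A minimal sketch, assuming a hypothetical fetch_range(start, end) transport call that returns the bytes in [start, end) and raises ConnectionError when the link drops:

    import time

    def resilient_download(fetch_range, total_size, chunk=1 << 20):
        received = bytearray()
        while len(received) < total_size:
            try:
                end = min(len(received) + chunk, total_size)
                received += fetch_range(len(received), end)
            except ConnectionError:
                time.sleep(1)   # back off, then resume at the same offset
        return bytes(received)  # nothing already received is ever re-sent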

These systems can also manage the storage of content locally and oversee the content cache in a manner that doesn’t impact the end-user experience. All of this goes on seamlessly in the background in real time across hundreds or thousands of locations, but it doesn’t happen haphazardly. Administrators have complete visibility and control, viewing content as it is being distributed by location or content type. If an administrator notices a problem or the system indicates an issue, the admin can pause, resume, or reprioritize the distribution flow.
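Those administrative controls can be pictured as operations on a distribution job. The sketch below is a toy model; the job fields, states, and example locations are assumptions, not any real product's schema:

    from dataclasses import dataclass

    @dataclass
    class DistributionJob:
        content_id: str
        location: str
        priority: int = 5          # lower number = more urgent (assumption)
        state: str = "running"     # "running" or "paused"

        def pause(self):
            self.state = "paused"

        def resume(self):
            self.state = "running"

        def reprioritize(self, priority):
            self.priority = priority

    # An admin console might list jobs by location and pause a problem site:
    jobs = [DistributionJob("win11-patch", "berlin"),
            DistributionJob("win11-patch", "osaka")]
    for job in jobs:
        if job.location == "berlin":
            job.pause()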

Notably, peer-to-peer models don't require any HTTP servers. Instead, they work by connecting users via WebRTC, with no plugin installed on the user's device. Content is downloaded once from a CDN (which remains the mechanism for content delivery) to a peer, and because the CDN provider has cached that content globally, the peer-to-peer network can get the content from the closest location, reducing latency and speeding the download. Content is grabbed in chunks, so if a CDN server were to become unavailable, the content request would simply go to the next location over HTTP or HTTPS – in other words, delivery won't stop if one roadblock gets in the way.
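
Chunk-level failover is straightforward to sketch: each chunk is tried against the closest location first, and an unreachable location just means moving on to the next one. The fetch_chunk callback and location list here are hypothetical:

    def download_with_failover(chunk_ids, locations, fetch_chunk):
        blob = []
        for cid in chunk_ids:
            for loc in locations:              # closest location first
                try:
                    blob.append(fetch_chunk(loc, cid))
                    break                      # chunk retrieved, move on
                except ConnectionError:
                    continue                   # roadblock: try the next location
            else:
                raise RuntimeError(f"chunk {cid} unavailable everywhere")
        return b"".join(blob)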

There are also different ways peer-to-peer systems can be deployed. BitTorrent, for example, sources content from many peers at the same time. Other vendors create a daisy chain of peers so that no single peer is overwhelmed with requests. In this model, a request on the local network is for a specific piece of content. If no peer has it, the system selects a peer to start the download from the parent office or the CDN location, depending on the client's location. Each peer grabs a chunk, passes it along to the next peer, and then saves it to disk; the transfers themselves happen in memory and over the network, which is faster than putting disk I/O in the critical path. This repeats until the content is fully downloaded. If a peer drops off the network, the next one picks up exactly where it left off – there is no content retransmission.
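
A toy version of that daisy chain, preserving the relay-in-memory-then-persist ordering described above (the peer names and wiring are illustrative assumptions):

    class ChainPeer:
        def __init__(self, name, next_peer=None):
            self.name, self.next_peer, self.chunks = name, next_peer, []

        def receive(self, chunk):
            if self.next_peer:            # relay the chunk in memory first ...
                self.next_peer.receive(chunk)
            self.chunks.append(chunk)     # ... then persist the local copy

    # The head peer pulls from the parent office or CDN; each peer feeds
    # the next, so no single peer is swamped with requests.
    tail = ChainPeer("peer-3")
    mid = ChainPeer("peer-2", tail)
    head = ChainPeer("peer-1", mid)
    for chunk in (b"part1", b"part2"):
        head.receive(chunk)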

Peer-to-Peer Content Distribution in Action

So, what does this look like in practice? Suppose you're an international retailer. Traditionally, you would have had to deploy and maintain thousands of servers all over the world for the sole purpose of staging content for local distribution and providing a preboot execution environment (PXE) for OS deployment. A server would be needed in every store and every distribution center. This is a massive expense and waste of resources. Couple that with the fact that network connectivity at some locations may be poor, making it difficult to maintain a connection long enough to download content reliably, and this very common setup seems outdated at best.

Now, consider what this scenario looks like with an intelligent peer-to-peer solution at work. All those servers would be gone, and your software would be distributed and machines imaged across the globe in record time, regardless of what kind of environment you had at home base (on-premises, hybrid, or cloud) – without reliability concerns, without performance issues. The difference between approaches is staggering. Using peer-to-peer, your organization would save millions of dollars each year by eliminating hardware and network costs. You'd also gain back all the time employees devote to managing servers, which could be redirected to other priorities – all while ensuring that endpoints stay properly configured and up to date for a more secure, highly optimized network.

Additionally, with the right peer-to-peer content distribution solution, bandwidth is no longer an issue: once content is downloaded, it remains available at each location to serve any device that requires it for new installs or updates. This can mean hours, days, or sometimes even weeks saved in getting a new piece of software delivered and installed for the user.

A properly architected, sufficiently intelligent peer-to-peer solution also eliminates the single point of failure by making content available from multiple sources in the local subnet, whereas the loss of a single distribution server in a traditional client-server model can bring all software distribution to a halt. Such a solution will always make sure that content storage is sufficiently distributed locally, so that the loss or removal of one endpoint, or several, will not prevent content from being available to other users.
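
One way to picture that local redundancy policy is as a background task that tops up the number of cached copies per subnet. The min_copies threshold and the replication trigger are assumptions about how such a system might behave, reusing the hypothetical Peer cache from earlier:

    def ensure_redundancy(subnet_peers, content_id, min_copies=3):
        holders = [p for p in subnet_peers if content_id in p.cache]
        if not holders:
            return                        # nothing to replicate yet
        for peer in subnet_peers:
            if len(holders) >= min_copies:
                break                     # enough copies survive any single loss
            if peer not in holders:
                peer.cache[content_id] = holders[0].cache[content_id]
                holders.append(peer)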

Taking It to the Cloud

Innovation in the space has been in overdrive since the pandemic started, and a single-download solution is now possible in a cloud environment, eliminating bandwidth issues altogether. These next-generation content distribution systems are extremely flexible and can handle hybrid or full cloud environments, which gives enterprises the green light to move forward with their digital transformation as they see fit.

Shifting to a peer-to-peer content distribution solution is a smart move in any infrastructure environment, and it makes digital transformation practical by eliminating network strain and delivering content securely at massive speed and scale. It also ensures that content is actually delivered, so that enterprises can operate safely, reliably, and efficiently. Peer-to-peer should be considered the optimal model for both the present and the future of content distribution.
