
Scalable Media Transmission

As illustrated in technical detail later in this chapter, the Internet is primarily a one-to-one medium. The only supported connections on the Internet are between two computers; there is no concept of "broadcasting" on the Internet as a whole. In fact, the term unicast has been coined to describe the Internet function of sending media to just one user. Any webcast is simply many unicasts, one to each individual viewer. Each of these unicasts consumes additional bandwidth at the broadcast source, passes through every bottleneck on the path from that source, and uses additional processor power on the media server for that broadcast.
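The linear cost of unicast can be made concrete with a small sketch. The function and figures below are illustrative, not drawn from any particular webcast:

```python
# Illustration: under unicast, the origin server sends one full copy of
# the stream per viewer, so required source bandwidth grows linearly
# with the audience.

def unicast_source_bandwidth_kbps(stream_kbps: float, viewers: int) -> float:
    """Bandwidth the origin must supply: one full stream per viewer."""
    return stream_kbps * viewers

# A 128 Kbps radio stream with 10,000 listeners:
print(unicast_source_bandwidth_kbps(128, 10_000))  # 1280000 (about 1.28 Gbps)
```

Doubling the audience doubles the bill; there is no economy of scale at the source, which is exactly the problem multicast and the other technologies in this section try to solve.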

Since the Internet went mainstream in the mid-1990s, several major technologies have been created to address the problem of scalable media transmission: reaching large audiences in the thousands or millions.


In the mid-1990s, multicast and the Mbone (for Multicast Backbone) were all the rage. Multicast allows every machine on the same network (using the same router) to share and receive only one copy of a live media broadcast, as shown in Figure 5-7. Basically, it could make the Internet benefit from some of the efficiencies enjoyed by traditional radio or television. And it was a standard Internet feature built into all the routers. However, the feature was optional; by default, most routers had multicasting turned off. No worries: the Mbone consisted of a technique for people to connect to the "multicast backbone" created by this network of multicast-enabled routers. Essentially, a company that wanted to be on the Mbone, but whose ISP was not, could "tunnel" through its ISP (much like dialing into an office over a virtual private network) to the Mbone.

Figure 5-7 Multicast allows multiple machines to share and receive only one copy of a live broadcast.
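On a multicast-enabled network, a receiver joins a group rather than opening a connection to the source; the router then delivers the single shared copy of the stream. A minimal sketch using the standard socket API follows, with the group address and port invented for illustration:

```python
import socket
import struct

def join_multicast_group(group: str, port: int) -> socket.socket:
    """Ask the local router (via IGMP) to deliver the group's traffic here.

    Only one copy of the stream crosses each network link, no matter how
    many hosts on that link join the group.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # IP_ADD_MEMBERSHIP takes the group address plus the local interface
    # (0.0.0.0 lets the OS choose).
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

# A receiver for a hypothetical live stream:
#   sock = join_multicast_group("239.1.1.1", 5004)
#   data, sender = sock.recvfrom(2048)
```

The catch, as the text notes, is that every router between the source and this receiver must have multicast enabled for the join to be useful.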

As any Google search on Mbone shows, the bulk of the excitement about the Mbone starts and ends in 1996. Part of the problem was the fact that at that time, a T-1 was quite expensive, broadband was hardly deployed, and multicast was a way to quickly soak up bandwidth. There was no financial incentive on the part of ISPs to enable a feature that promoted high-bandwidth applications. Though entire books were written about how Mbone could (and might have) revolutionized media delivery on the Internet, most of this did not come to fruition.

A subtle irony exists in that multimedia webcasts, such as Internet radio, are today plagued by the curse of popularity: Bandwidth cost rises as a function of audience, instead of being a large fixed cost like offline radio broadcast. A properly multimedia-enabled Internet with multicast routing can solve this. Yet, multicast as it is designed still does not address the financial accounting needs (such as usage tracking and controls) that would give ISPs the incentive to enable it. In addition, it is to a large degree an all-or-nothing proposition; a few multicast-enabled routers don't help much—it takes a majority (almost all) to make a difference.

Work on multicast protocols continues today, however, and they have found their niche inside corporate networks. Multicast can be used to effectively reduce the amount of bandwidth used within the corporation by live webcasts. Inside the enterprise, the relatively high bandwidth (100 to 1,000 megabits per second, or 100,000 to 1,000,000 Kbps) combined with the capability to control the end-to-end networking make multicast a practical choice.


Multicast is complicated to set up and debug and is not supported by most ISPs; quite literally, multicast has never quite been ready for prime time. However, Chapter 6, "Enterprise Multicast," describes how multicast can actually be successful within private business networks.

Content Delivery Networks

By 1997, the Internet had grown to mainstream prominence. Several major, Internet-wide brownouts had people theorizing that the Internet might suddenly just stop working due to traffic growth. The scalability of the Internet for websites alone was in question, and many believed that the growth of streaming media applications could be the final blow to a functional Internet.

A large part of the problem was due to the inefficiencies in long-haul data transmission. As data traveled between major ISPs at major exchange points, bottlenecks and traffic problems prevented the data from getting through, even though there was plenty of bandwidth at the destination and source. Figure 5-8 shows how data moves from source to destination through major exchange points.

Figure 5-8 Data takes many hops along an indirect path to get from the server to the requesting computer.

The source of the content had a lot of bandwidth. The consumers had sufficient bandwidth to receive the content. The problem was getting the data to the "edge" of the network where the consumers were, at the dialup or broadband ISPs. One solution already in use was to host content at several different locations and direct users to the most local server. Content delivery networks (CDNs) designed a way to automate the process, and automatically distribute the content to these servers at the edge of the network. (See Chapter 4, "Internet Video Transport," for more on CDNs.)

Figure 5-9 "Edge server" scenario: content is cached at servers close to consumers.

This solution worked fantastically for web pages and so-called static content, such as graphics and large media files. Anything that could be served from a web server benefited from this approach.

If a web server is located in New York but has viewers in London, a CDN copies the static files for that site (quite possibly beforehand) over to a local server in London. Thus, the delay in retrieving these files is low. The main HTML page might still be served from New York, but all the larger files—graphics, multimedia files, and so on—are served from a London facility from machines operated by that CDN. The source in this example would be New York, and the edge in this case would be London, as shown in Figure 5-10.

Figure 5-10 HTML served from New York; graphics and media files served from a local server in London.

A CDN operates many servers in different places around the country or world, and thus can increase scalability as well as reduce delay. A few web servers that only have to serve HTML pages, but can offload the graphics and multimedia serving to hundreds of servers around the world, can scale to millions of users where it might have been limited to tens of thousands before. For static media, CDNs are a proven concept. For applications that permit pre-caching of content (sending the files out to edge servers before they are requested) before demand hits, CDNs are a good solution.
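The offloading idea can be sketched in a few lines: the origin keeps serving HTML, while URLs for large static assets are rewritten to point at an edge server near the viewer. All hostnames and the region mapping below are invented for illustration and do not reflect any particular CDN's scheme:

```python
# Hypothetical mapping from viewer region to a nearby edge server.
EDGE_HOSTS = {
    "london": "edge-lon.example-cdn.net",
    "newyork": "edge-nyc.example-cdn.net",
}

ORIGIN_PREFIX = "http://origin.example.com/"

def rewrite_asset_url(origin_url: str, viewer_region: str) -> str:
    """Serve HTML from the origin, but large files from the nearest edge."""
    edge = EDGE_HOSTS.get(viewer_region)
    if edge is None:
        return origin_url  # no nearby edge server: fall back to the origin
    if origin_url.startswith(ORIGIN_PREFIX):
        # Swap the origin hostname for the regional edge server's.
        return "http://%s/%s" % (edge, origin_url[len(ORIGIN_PREFIX):])
    return origin_url

print(rewrite_asset_url("http://origin.example.com/video/concert.rm", "london"))
# http://edge-lon.example-cdn.net/video/concert.rm
```

The heavy lifting in a real CDN is everything this sketch hides: keeping the edge copies fresh, and deciding which edge is actually "nearest" for a given client.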

CDNs have failed, however, to adequately address the needs of real-time media. Radio stations, live video webcasts, and similar applications all have similar scaling issues, but they do not succumb to the same CDN approach.

To distribute real-time audio or video to thousands or millions of consumers via edge servers, it is necessary to get the media file to those edge servers in real-time. As mentioned earlier, packet loss and delays inhibit this. If the stream is being generated now (as with a live concert) and packets are lost on the way to an edge server, everyone connected to that edge server experiences that packet loss.

Some CDNs try to mitigate this by sending the stream to the edge servers multiple times over different paths (in the hopes that one of the streams arrives intact). Other CDNs have explored going around the Internet and using satellites to beam the show to each CDN edge server—a good idea in theory, but quite expensive in reality. The high-profile live webcasts of concerts and events to mainstream audiences using CDN technologies have ranged from spectacular failures to qualified successes. And even the most prominent CDNs have had to repeatedly reconfigure their live 24/7 streaming audio deployments to make them stable and functional.

It would seem that CDNs are challenged only by live media streams and can deliver on-demand and downloaded media just fine. There is more to the problem than just getting the content from the server to the edge, however; even with edge networking, there are network barriers between the edge server and the client.

Web pages served through a CDN seem fast because they are small and because it doesn't matter to the user whether a static page downloads in 1 second or 2. Audio and video are not so forgiving. Even for non-live streams, packet loss between the edge server and the client can still get in the way of media delivery.

The term last mile has been coined to describe the part of the network that connects the end user with the Internet. As shown in Figure 5-11, the last mile comprises the dial-up modem, cable modem, DSL, or wireless link from the end user to the ISP's central office and on to the source of the ISP's Internet connectivity.

Figure 5-11 The "last mile."

You can see that there are several points of failure between the edge server and the consumer, whether DSL or cable modem or dialup. In many cases, a shared "cloud" exists where frames (packets) are sent from the local building where wires run to the source of the ISP's bandwidth. These clouds are often shared between several competing ISPs and can actually bottleneck the traffic flowing from the consumer to the Internet. For instance, a DSL modem connection might be capable of 1.5 megabits per second (1,500 Kbps) of data transfer, but during a "stormy" peak period the cloud can carry only 200 Kbps of traffic down to the user reliably. And as a connection is only as fast as its slowest intermediate link, traffic between any two points (say, the edge node and the backbone ISP) can similarly affect delivery of real-time media.
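The "slowest link" observation reduces to a one-line rule: end-to-end throughput is bounded by the narrowest hop on the path. The link capacities below are illustrative, not measurements:

```python
# End-to-end throughput between two points is limited by the minimum
# capacity along the path, regardless of how fast the other hops are.

def effective_throughput_kbps(link_capacities_kbps: list) -> float:
    """The path is only as fast as its slowest intermediate link."""
    return min(link_capacities_kbps)

# A DSL modem rated at 1,500 Kbps, behind a congested shared "cloud"
# carrying only 200 Kbps reliably at peak, then fast ISP and backbone links:
path = [1500, 200, 45000, 100000]
print(effective_throughput_kbps(path))  # 200
```

This is why a fast edge server and a fast consumer connection still do not guarantee a clean stream: one congested shared segment in between sets the ceiling.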

CDNs definitely have a place in the content delivery puzzle, but moving real-time media over a non-real-time Internet continues to challenge Internet infrastructure builders.

Distributed or Peer-to-Peer (P2P) Networking

When consumers first started using the Internet, a marked distinction existed between servers and clients. Servers were Unix-based workstations; clients were slow PCs connected via modem. Today, the world is radically different. Servers are off-the-shelf PCs running a variety of operating systems including Windows, Linux, and Mac OS X. Users have broadband cable and DSL connections on fast computers that they leave running all the time. Peer-to-Peer (P2P) is a networking paradigm that exploits the new reality that users are no longer second-class citizens. The term peer (not to be confused with the network peering described earlier) refers to a machine on a network capable of serving as well as consuming content.

P2P networking can be used for a variety of tasks, obviously including music sharing, but we are interested in media delivery. P2P uses the consumers of content as servers, and does it in an automatic way: Peers just start finding other peers that have the appropriate content, instead of having to go to the source media server.

The complicated part of using peer networking is that it adds unreliability and randomness to an already unreliable and error-prone problem—real-time media delivery. P2P has excelled when it has transmitted media files because everyone who downloads a file instantly becomes another source, and (assuming users leave their machines running) new seekers of a given file can get it from previous users of the file, as shown in Figure 5-12.

Figure 5-12 P2P media delivery creates a pyramid effect, whereby new users obtain content from other users rather than a single server.
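The pyramid effect can be sketched as a tiny registry: every peer that completes a download is recorded as a new source, so the pool of servers grows with the audience. The class and names below are invented for illustration and do not correspond to any real P2P protocol:

```python
import random

class SwarmRegistry:
    """Toy directory of which peers hold which files."""

    def __init__(self, origin: str):
        self.origin = origin
        self.sources = {}  # filename -> set of peers holding a full copy

    def find_source(self, filename: str) -> str:
        """Prefer a peer that already has the file; fall back to the origin."""
        peers = self.sources.get(filename)
        if peers:
            return random.choice(sorted(peers))
        return self.origin

    def completed(self, filename: str, peer: str) -> None:
        """Once a peer finishes downloading, it can serve later requesters."""
        self.sources.setdefault(filename, set()).add(peer)

registry = SwarmRegistry(origin="origin.example.com")
print(registry.find_source("show.mp3"))   # origin.example.com (no peers yet)
registry.completed("show.mp3", "peer-a")
registry.completed("show.mp3", "peer-b")
print(registry.find_source("show.mp3"))   # now one of peer-a / peer-b
```

Note the assumption buried in "completed": the peer must stay online and willing to serve, which is exactly the unreliability the surrounding text warns about.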

P2P generally provides a cost savings because it offloads bandwidth demands to users. More interesting for our purposes is the fact that P2P networking can also provide increased scalability like a CDN because the peers are essentially many small edge nodes.

Different P2P approaches to content delivery have been used to solve a variety of different content delivery problems. The most famous use of P2P involves reducing bandwidth costs for on-demand audio or video downloads (Napster). Another popular use of P2P techniques has been efficiently delivering live broadcasts on the public Internet or within a corporate intranet in a manner similar to multicasting but implemented in software.

Sometimes P2P networks are just considered an extension of CDN technology with many more, lower-bandwidth nodes. Other times, P2P networks are considered a more traditional, Internet-like way to balance the use of resources (bandwidth, connectivity, and CPU time) on the Internet.

The Stigma of P2P Media Distribution

Whereas CDNs have existed since the late 1990s and are an established and respected way to deliver content reliably, P2P technologies carry a sort of stigma because of their extensive use in software and music piracy applications. However, it is an undeniable fact that P2P networks represent a substantial portion of Internet traffic, including audio and video delivery. Thus, while mainstream media publishers and vendors may be reluctant to consider P2P technology, adult content distributors, Internet advertisers, and video game publishers are already experimenting with and using P2P media distribution.

Many well-funded P2P technology companies avoid using the term P2P altogether in their pursuit of the media distribution market. They use terms such as outer edge networking, grid, mesh, and distributed downloads to re-brand their techniques and avoid controversial connotations of P2P.

As with CDNs, P2P networks are not designed to interoperate. Just as each CDN does things a bit differently and creates its own proprietary delivery network, P2P vendors create their own secure private P2P networks for media delivery.

One of the major problems (not a transport problem) of P2P is that putting transient or permanent copies of media all over the Internet is often not the desired effect, especially when the media is expensive to create, as with music and video. Aside from the obvious legal problems created by applications that employ P2P to share files freely and with anyone, the various closed and secure P2P applications still create ephemeral partial copies of media all over the Internet. Content providers would love to have the best of both worlds: the tremendous cost savings of P2P delivery along with the tremendous centralized control available with traditional client-server and CDN approaches. P2P solutions targeting large content providers have done their best to provide encryption, file fragmentation, security, and control, and to generally make P2P solutions look exactly like their CDN counterparts, simply at a tremendously lower price point and a potentially deeper level of network efficiency.


If your content is popular and in high demand, you can actually count on the end users to spend their own money to distribute it for you, if you are unconcerned about controlling or tracking who gets it. It has been remarked that it costs money to distribute popular media in the offline world, but it costs money to prevent popular media from being distributed online.
