
1.10 Multicast Performance in Routers

When deploying multicast, it is important to consider whether the routers in a network are well suited to support multicast. Just as some cars provide speed at a cost of safety, some routers provide unicast performance at a cost of multicast. As high-end routers are built to scale to terabits and beyond, router designers sometimes compromise multicast performance to optimize unicast forwarding. The two most important considerations when evaluating a router for multicast are state and forwarding performance.

A router must keep forwarding state for every multicast group that flows through it. Pragmatically, this means (S,G) and (*,G) state for PIM-SM. It is important to know how many state entries a router can support without running out of memory. MSDP-speaking routers typically keep a cache of Source-Active messages. Likewise, knowing the maximum number of Source-Active entries a router can hold in memory is crucial.
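The bookkeeping described above can be pictured as a table keyed by (S,G) and (*,G) entries, monitored against a memory budget. The following Python sketch models that monitoring; the per-entry cost and memory budget are illustrative assumptions, not figures from any real platform:

```python
# Sketch of tracking PIM-SM multicast state against a router's capacity.
# ENTRY_BYTES and STATE_MEMORY_BYTES are illustrative assumptions only.

ENTRY_BYTES = 400               # assumed memory cost of one (S,G) or (*,G) entry
STATE_MEMORY_BYTES = 4_000_000  # assumed memory budget for multicast state

class MulticastStateTable:
    def __init__(self):
        self.entries = set()    # holds ("*", G) and (S, G) tuples

    def add_star_g(self, group):
        self.entries.add(("*", group))

    def add_s_g(self, source, group):
        self.entries.add((source, group))

    def memory_used(self):
        return len(self.entries) * ENTRY_BYTES

    def utilization(self):
        # fraction of the state memory budget consumed
        return self.memory_used() / STATE_MEMORY_BYTES

table = MulticastStateTable()
table.add_star_g("224.1.1.1")            # shared-tree state
table.add_s_g("10.0.0.5", "224.1.1.1")   # source-tree state
table.add_s_g("10.0.0.6", "224.1.1.1")
print(len(table.entries), f"{table.utilization():.2%}")
```

An operator following the policy above would alarm when `utilization()` approaches 1.0 and take action well before the table is full.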

The obvious next question is "how many entries should a router support?" Like many questions in life, there is no good answer. Past traffic trends for multicast are not necessarily a reliable forecast for the future. Traffic trends for the Internet in general are rarely linear. Growth graphs of Internet traffic frequently resemble step functions, where stable, flat lines suddenly yield to drastic upward surges that level off and repeat the cycle.

The best policy is to select a router that can hold far more state than even the most optimistic projections require and monitor memory consumption. When state in a router begins to approach maximum supportable levels, take appropriate action (upgrade software or hardware, redesign, apply rate limits or filters, update your resume, and so on). With the exception of the Ramen worm attacks (see Chapter 5), state has not been much of a problem yet. Of course, as with mutual funds, past performance does not ensure future success.

Forwarding performance is characterized by throughput and fanout. Throughput describes the maximum amount of multicast traffic a router can forward (in packets per second or bits per second). Fanout describes the maximum number of outgoing interfaces for which a router can replicate traffic for a single group. As port densities in routers increase, maximum supported fanout becomes a critical factor. It should also be understood how increasing fanout affects throughput. As is the case with state, it is important to be aware of the performance limits, even if the exact amount of multicast traffic on the network is not known.

Forwarding performance is primarily a function of hardware. The switching architecture a router uses to forward packets is usually the most important factor in determining the forwarding performance of a hardware platform. Shared memory switching architectures typically provide the best forwarding performance for multicast. A shared memory router stores all packets in a single shared bank of memory.

Juniper Networks' M-series routers employ a shared memory architecture that is very efficient for multicast. In this implementation, multicast packets are written into memory once and read out of the same memory location for each outgoing interface. Because multicast packets are not written across multiple memory locations, high throughput levels can be realized regardless of fanout.

Some routers are based on a crossbar switching architecture. The "crossbar" is a grid connecting all ports on the router. Each port shows up on both the X and Y axes of the grid, where the X axis is the inbound port and the Y axis is the outbound port. With the crossbar architecture, packets wait at the inbound port until a clear path is on the crossbar grid to the outbound port. Inbound traffic that is destined for multiple egress ports must be replicated multiple times and placed in multiple memory locations. Because of this, routers with crossbar architectures usually exhibit multicast forwarding limitations.
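The contrast between the two architectures can be reduced to a memory-operation count: a shared-memory design writes a packet once and reads it once per outgoing interface, while a crossbar design must write and read a separate copy per outgoing interface. A back-of-the-envelope sketch (a simplification that ignores queueing and arbitration):

```python
# Memory operations needed to replicate one packet to `fanout` interfaces.
# Simplified model: ignores queueing, arbitration, and per-platform detail.

def shared_memory_ops(fanout):
    # one write into shared memory, then one read per outgoing interface
    return 1 + fanout

def crossbar_ops(fanout):
    # one write and one read for each replicated copy
    return 2 * fanout

for fanout in (1, 8, 64):
    print(fanout, shared_memory_ops(fanout), crossbar_ops(fanout))
```

At a fanout of 64 the crossbar model needs roughly twice the memory operations of the shared-memory model, which is one way to see why crossbar platforms tend to lose multicast throughput as fanout grows.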

Router designers sometimes work around this inherent challenge by creating a separate virtual output queue dedicated to multicast and giving it higher priority than the unicast queues. Unfortunately, this technique can cause multicast traffic to suffer head-of-line blocking, which occurs when a packet at the head of the queue cannot be serviced, preventing the rest of the packets in the queue from being serviced as well. Such a design assumes multicast is a small percentage of total traffic, because a router incorporating it would be inefficient under a high multicast load.

1.10.1 RP Load

A cursory look at PIM-SM suggests that RPs should experience high load because they provide the root of all the shared trees in their domain. However, last-hop routers usually switch to the SPT immediately (SPT switchover is described in Chapters 2 and 3), so the shared tree is typically short-lived. One mechanism that can cause RPs to experience high load, though, is the PIM-SM register process.

As we will see in forthcoming discussions of PIM-SM (see Chapter 4), routers that learn of a new source inform the RP in their domain by encapsulating the multicast packets into unicast packets and sending them to the RP. The RP must decapsulate and process these packets. If a router sends these encapsulated packets at a very high rate, the RP can be overrun while trying to process them. To prevent this from occurring, Juniper Networks routers configured as RPs require a special interface that is used to decapsulate these packets in hardware.

1.11 DISCLAIMERS AND FINE PRINT

Throughout this book, reference is made to RFCs (Requests for Comments) and Internet Engineering Task Force (IETF) Internet-Drafts. Internet-Drafts are submitted to the IETF as working documents for its working groups. If a working group decides to advance an Internet-Draft for standardization, it is submitted to the Internet Engineering Steering Group (IESG) to become an RFC. RFCs are the closest things to the official laws of the Internet. For a good description of Internet-Drafts and the various types of RFCs, visit http://www.ietf.org/ID.html.

It is not uncommon for protocol-defining Internet-Drafts never to reach RFC status. Likewise, vendors do not always implement protocols exactly as they are defined in the specification. Internet-Drafts that are not modified after six months are considered expired and are deleted from the IETF Web site. All RFCs and current Internet-Drafts can be found at the IETF's Web site. A good way to find an expired Internet-Draft is by searching for it by name at http://www.google.com. A search there will usually find it on a Web site that mirrors the IETF Internet-Drafts directory without deleting old drafts. Unless otherwise stated, all Internet-Drafts and RFCs mentioned in this book are current at the time of writing. These documents are constantly revised and tend to become obsolete very quickly.

Similarly, the implementations of Juniper Networks and Cisco System routers, the routers most commonly found in ISP networks, are described throughout this book. The descriptions and configurations are meant to assist engineers in understanding the predominant implementations found in production networks and provide a starting point for configuration. They are not the official recommendations of these vendors. It is also important to note that these vendors are constantly updating and supplementing their implementations. For officially supported configurations, it is best to contact these vendors directly.

1.12 WHY MULTICAST?

In less than a decade, the Internet has gone from a little known research tool to a dominant influence in the lives of people around the globe. It has created an age in which information can be disseminated freely and equally to everyone. The Internet has changed the way people communicate, interact, work, shop, and even think. It has forced us to reconsider many of our ideas and laws that had been taken for granted for decades.

Any person on earth with a thought to share can do so with a simple Web page, viewable by anyone with a connection to the network. When considering the revolutionary impact their achievements have had on the way people interact, it is not ludicrous to mention names like Cerf, Berners-Lee, and Andreessen in the same breath as Gutenberg and Bell.

Nearly every aspect of communication in our lives is tied in one way or another to the Internet. Noticeably absent, however, from the content delivered prominently across the Internet is video. Video is an ideal fit for the Internet. While text and pictures do well to convey ideas, video provides the most natural, comfortable, and convenient method of human communication.

Even the least dynamic examples of video reveal infinitely more than the audio-only versions. For example, accounts of the 1960 Nixon-Kennedy debates varied widely between those who had watched on TV and those who had listened on the radio. So why then is video restricted primarily to the occasional brief clip accessible on the corner of a Web page and not a dominant provider of content for the Internet?

The answer is simple: The unicast delivery paradigm predominant in today's Internet does not scale to support the widespread use of video. Earlier attempts, such as the webcasts of the Starr Hearings and the Victoria's Secret fashion show, have failed to demonstrate otherwise.

The easiest target for video's lack of pervasiveness on the Internet has always been the limited bandwidth of the "last mile." It has often been argued that potential viewers simply do not have pipes large enough to view the content. However, with the proliferation of technologies like digital subscriber line (DSL) and cable modems, widespread residential access to video of reasonably adequate quality exists. Furthermore, for years, the number of people employed in offices with broadband Internet connectivity has been substantial. Finally, with nearly every college dorm room in the United States (and increasingly throughout the world) equipped with an Ethernet connection, client-side capacity is quickly becoming a nonissue.

The server side, on the other hand, has principally relied on unicast to deliver this content. The cost required to build an infrastructure of servers and networks capable of reaching millions of viewers is simply too great, if even possible. Compare that to the cost of delivery with multicast, where a content provider with only a server powerful enough and bandwidth sufficient to support a single stream is potentially able to reach every single user on the Internet.
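The economics described above are easy to quantify: with unicast, the sender's bandwidth grows linearly with audience size, while with multicast it stays constant at a single stream regardless of audience. A sketch of that comparison (the stream rate and audience size are arbitrary illustrative values):

```python
# Server-side bandwidth for one stream of content, unicast vs. multicast.
# STREAM_KBPS and the audience size are arbitrary illustrative values.

STREAM_KBPS = 300  # assumed per-viewer stream rate

def unicast_server_kbps(viewers):
    # the server must send one copy of the stream per viewer
    return STREAM_KBPS * viewers

def multicast_server_kbps(viewers):
    # the server sends a single stream; the network replicates it
    return STREAM_KBPS if viewers > 0 else 0

print(unicast_server_kbps(1_000_000))   # 300 Gbps of aggregate unicast sending
print(multicast_server_kbps(1_000_000)) # one 300Kbps stream, regardless of audience
```

The linear term is the cost that makes Internet-scale unicast video delivery prohibitive; multicast removes it entirely from the sender.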

Interestingly, while it has always been viewed as a bandwidth saver, the previously mentioned efficiency underscores multicast's capability as a bandwidth multiplier. With a multicast-enabled Internet, every home can be its own radio or television station with the ability to reach an arbitrarily large audience. If Napster created interesting debates on copyright laws, imagine the day when everyone on earth will be able to watch a cable television channel multicast from your very own PC.

It is worth noting that multicast need not be used solely for video. Multicast provides efficient delivery for any content that uses one-to-many or many-to-many transmission. File transfer, network management, online gaming, and stock tickers are some examples of applications ideally suited to multicast. However, multimedia, and more specifically video, is widely agreed to be the most interesting and compelling application for this delivery mechanism.

The brief history of the Internet suggests the inevitability that it someday will be a prevalent vehicle for television and radio, as all data networks converge onto a single common IP infrastructure. Accepting this, multicast provides the only scalable way to realize this vision. With such great potential for providing new services, it is logical to wonder why multicast has not been deployed ubiquitously across the Internet. In fact, to this point, the deployment has actually been somewhat slow.

The current number of multicast-enabled Internet subnets is minuscule compared to the overall Net. There is no single, simple answer why this is the case. The reasons include a collection of realities, concerns, and myths. Any discussion of multicast's benefits should also address these issues. In most cases, recent developments have allayed these concerns.

1.12.1 Multicast Lacks the "Killer App"

It took Mosaic, the first modern browser, to truly harness the power of the World Wide Web, resulting in unparalleled permeation. Many have argued that multicast needs the same "killer app" to fuel an explosion of growth. However, a closer look reveals that many of today's multicast applications are more than sufficient; they just happen to work without multicast.

A common technique used by some of the most popular multimedia applications is to attempt to access the content first via multicast, then fail over to unicast if unsuccessful. To the end user, the result is the same. The selected show looks the same, and the favorite song sounds the same, whether delivered through unicast or multicast. The true difference exists in the amount of content available. Because of unicast's inability to scale, there are fewer shows to view and fewer songs to hear.

But the applications are plenty "killer."

1.12.2 The Content versus Audience Chicken-and-Egg Scenario

An intriguing phenomenon has emerged that has been a significant hindrance to deployment. Many multimedia content providers have been slow to provide multicast content because of the limited number of capable viewers. Conversely, because of this limited amount of enticing content, there has been a perceived lack of demand from end users for multicast availability, thus resulting in a small audience.

This deadlock can be broken by multicast-enabled ISPs partnering with content providers to market this content to end users. This type of content provides a differentiator for these ISPs to attract more customers. To compete for these customers, more ISPs deploy multicast. Soon, multicast becomes a standard part of Internet service, expected by all end users. Eventually, ISPs that are not multicast-enabled are at a distinct competitive disadvantage. In the meantime, content continues to increase, fueling the demand cycle.

Content providers can use the example of HDTV as inspiration. Soon after the introduction of HDTV, some TV stations began to broadcast their programming in the new format, even though very few people had the hardware that could take advantage of this technology. Despite having a minuscule audience to enjoy HDTV, these pioneering broadcasters made content available, which began to give consumers the incentive to purchase the new TV sets. Likewise, by providing an abundance of multicast content on the Internet, content providers give end users the incentive to demand access to this content from their ISPs.

1.12.3 The "How Do We Charge for It?" Syndrome

The first question most ISP product managers ask when considering deployment of multicast is nearly always, "How do we charge for it?" The question that should be asked, however, is "How do we make money from it?" For years ISPs have struggled with the business case for multicast. The early model was somehow to charge the users of the service. ISPs adopting this model have generally met with disappointing results. While they may have found a market of enterprise and virtual private network (VPN) customers willing to pay for the service, Internet users found this model to be less than enticing.

This lack of success is predictable because it neglects to consider one of the paramount philosophies making the Internet so popular: Delivering a raw IP connection to end users, through which many services can be derived, will be far more profitable than trying to charge users for each of the services they consume.

Imagine if, in the first few years after the Web was invented, ISPs had decided to charge their customers extra fees for the HTTP packets that traversed their connection. It might have changed the way people used the Web. Users may not have surfed so freely from site to site. Instead, ISPs quickly discovered that if they provided a simple connection, with no stifling rules or extra charges, people used the network more. In sacrificing revenue from "toll-taking," they enjoyed explosive growth as more customers used the network for more services. Unfortunately, many ISPs view multicast along this toll-taking model.

By deploying multicast, ISPs are enabling new services to be provided. It brings traffic onto the network that wasn't previously deliverable. ISPs that have provided multicast as a free part of their basic IP service have realized little revenue directly from multicast. But they have gained customers they would not have otherwise attracted. Moreover, providing multicast has lured the most valuable of customers—content providers. ISPs have long known that content begets customers. Internet users recognize the value and performance benefits of being able to access sites directly connected to their ISP's network.

ISPs that have offered multicast as just another basic, value-added service, like DNS, have been viewed by many as leaders, but that does not mean direct revenues from multicast cannot be realized. As in the case of unicast, the higher layers should provide advanced billable multicast services, while the network layer should be responsible for simply routing packets. Following the example of the Web, providers of higher-layer services, such as content hosting and application service providers (ASPs), will likely find a significant market for multicast content hosting.

1.12.4 Multicast Protocols Are Complex and May Break the Unicast Network

The protocols used to deploy multicast in a scalable way on the Internet today can certainly be considered nontrivial (enough to warrant the necessity for this book!). RPF, a central concept in multicast, represents a significant change of paradigm from the traditional destination-based unicast routing.
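RPF (reverse path forwarding) inverts the usual lookup: rather than asking where the destination lies, the router asks whether a multicast packet arrived on the interface it would itself use to reach the packet's source, and discards the packet otherwise. A minimal sketch of that check, using a hypothetical unicast routing table (the prefixes and interface names are invented for illustration):

```python
# Minimal RPF check sketch. The routing table below is hypothetical:
# it maps a source prefix to the interface used to reach that prefix.
import ipaddress

unicast_routes = {
    ipaddress.ip_network("10.0.0.0/8"): "ge-0/0/0",
    ipaddress.ip_network("192.168.0.0/16"): "ge-0/0/1",
}

def rpf_interface(source_ip):
    """Return the interface toward source_ip (longest matching prefix wins)."""
    addr = ipaddress.ip_address(source_ip)
    matches = [(net, ifc) for net, ifc in unicast_routes.items() if addr in net]
    if not matches:
        return None
    return max(matches, key=lambda m: m[0].prefixlen)[1]

def rpf_check(source_ip, inbound_interface):
    # accept only if the packet arrived on the interface that points
    # back toward the source, per the unicast routing table
    return rpf_interface(source_ip) == inbound_interface

print(rpf_check("10.1.2.3", "ge-0/0/0"))  # arrived on the RPF interface: accept
print(rpf_check("10.1.2.3", "ge-0/0/1"))  # wrong interface: fails RPF, dropped
```

The point of the sketch is the inversion itself: the forwarding decision is keyed on the *source* address and the arrival interface, which is the paradigm shift the text describes.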

Network designers and operators agree that deploying and maintaining multicast routing protocols carries a cost that cannot be ignored, even when doing so involves no new hardware and amounts simply to "turning on" features already available in software. They also agree that adding any new protocol to a network can introduce new bugs that affect its stability. This dilemma is faced when introducing any new technology into a network. Ultimately, the benefits provided by the new features must be weighed against the risk and cost of deployment.

Much of the complexity of multicast routing protocols has stemmed from the traditional view that multicast should provide many-to-many delivery in addition to one-to-many. To support this ASM model, the network must provide the control plane of source discovery. Recently, it has been widely agreed that the most "interesting" and commercially viable applications for multicast require only one-to-many delivery. By sacrificing functionality that may be considered somewhat less important on the Internet, much of the complexity of these protocols can be eliminated.

SSM is a service model that guarantees one-to-many delivery and can be realized with a subset of functionality from today's multicast protocols. By moving the control plane of source discovery to higher-layer protocols (like a click in a browser), the required multicast routing protocols become radically simpler. This enables a reduction of operating and maintenance costs that cannot be overstated.

1.12.5 Cannibalization of Unicast Bandwidth Revenues

Throughout history, new technologies have forced businesses to consider cannibalizing profitable incumbent products. Generally, those who fail to embrace change are surpassed by those who do. When the automobile was first invented, imagine the dilemma faced by horse-drawn carriage makers as they pondered whether to start building cars. Because multicast provides such efficient use of resources, some ISPs have been concerned that they will lose revenue as their customers consume less bandwidth. This view is no less shortsighted than that held by our unwise carriage-building friends.

While multicast reduces the resources required for a single session of content, it brings new content on the network. It brings more customers who will eventually demand more bandwidth for higher-quality streams. And, as mentioned earlier, multicast can be used as a traffic multiplier, consuming more bandwidth through the network as more receivers join. The lessons learned on the Internet are no different than those of previous revolutionary technical breakthroughs. History does not look favorably upon the unwillingness to sacrifice limited short-term revenues in favor of products with limitless growth potential.

1.12.6 End-to-End Connectivity Required

For multicast to work properly, every layer 3 device on the path from source to receiver must be configured to be multicast-enabled. Pragmatically, this means every link on the Internet must be configured for PIM-SM, the de facto standard multicast routing protocol. If even one link in this path is not configured properly, multicast traffic cannot be received. This barrier can be a significant one as this path may transit many networks, each run by a different entity.

Because of this restriction, many consider multicast to be relegated to a hobbyist toy until the entire Internet is enabled. However, end-to-end multicast connectivity may not always be a requirement for applications to enjoy the benefits of multicast.

A hybrid unicast-multicast content delivery infrastructure can be built that provides the best of both worlds. A deployment of unicast-multicast "gateways" can be used to support the ubiquity of unicast with the scalability of multicast. Content can be multicast across an enabled core network to devices that can relay it to unicast-only hosts. This distributes the load that unicast must handle, relying on multicast to simply provide a back-end feeder network for the content gateways.

1.12.7 Lack of Successful Models

Some multicast critics have suggested that no profitable services have ever been based on multicast. This observation overlooks two communications media that have enjoyed commercial success for decades. Radio and broadcast television are based on a delivery mechanism that can be considered a special case of multicast. Radio and television stations transmit data (their audio and/or video signal) across a one-hop, multiaccess network (the sky). Receivers join the group by tuning their radio or TV to the group address (channel) of the station.

While radio and broadcast television do not use a packetized IP infrastructure (yet), the delivery mechanism used to provide content to receivers is decidedly multicast.

1.12.8 Not Ready for Prime-Time Television

After watching a 300Kbps Internet video stream on a 6-square-inch section of a PC monitor, one's first inclination is definitely not to get rid of the family's 25-inch TV. While this can be considered reasonably good quality to expect on the Internet, it doesn't begin to compare to the quality and dependability that are expected from broadcast television. The bandwidth needed to approach this level of quality is orders of magnitude greater than that commonly found in most homes.

The quality and reliability of voice on the century-old public switched telephone network (PSTN) far exceed those of mobile phones. However, the functionality and limitless potential for features have enabled people to tolerate lower voice quality in return for greater flexibility.

Likewise, the Internet has many inherent benefits that are difficult to match with broadcast communications. Despite having limited reach and no way to charge or exactly measure its audience, radio has been a viable business for the better part of a century. The Internet, with its bidirectional communication, provides the capability to log the exact behavior of every single viewer. After gazing upon an enticing advertisement, the viewer can instantly order the promoted product with the click of a mouse.

Additionally, the content that is available on television and radio is provided only by those with expensive studios and stations. On the Internet, anyone with a server and a connection can provide content accessible across the globe. Finally, multicast video on demand, generally believed to be impossible, is becoming a reality thanks to clever techniques that are being pioneered by innovative content delivery companies.

Initially, it is likely multicast video will be primarily niche content not commonly found on television, such as foreign TV channels or high school sporting events. As new technologies evolve, such as set-top boxes and hand-held devices, and as bandwidth to the home increases, the Internet will become an extremely attractive vehicle for television and radio. Multicast provides the scalability to make this a reality.

1.12.9 Susceptibility to DoS

In the ASM service model, receivers join all sources of a group. While this functionality is ideal for applications such as online gaming, it leaves receivers open to denial-of-service (DoS) attacks. Any malicious user can send traffic to a multicast group, flooding all the receivers of that group, which greatly concerns content providers.

It is first worth noting that all IP traffic is susceptible to DoS, a reality in a network providing any-to-any connectivity. In fact, DoS is not even unique to the digital world. Throwing a brick through a storefront window, putting eggs in a mailbox, or parking a car in the middle of the street are only a few of an infinite number of analogs in the brick-and-mortar world. It just so happens that ASM DoS attacks are a bit easier to execute and have the potential to affect more users than their unicast counterparts. SSM, however, guarantees that the receivers will join only a single source. While DoS is not impossible with SSM, it is far more difficult to attack SSM receivers.
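The DoS resistance of SSM follows directly from its join semantics: an ASM receiver joins (*, G) and receives traffic from any source sending to the group, while an SSM receiver joins a specific (S, G) channel and the network forwards only that source's traffic. A sketch of the difference in acceptance logic (the addresses are arbitrary examples, with the attacker's traffic filtered upstream in the SSM case):

```python
# Acceptance logic for ASM (*, G) joins versus SSM (S, G) joins.
# Addresses are arbitrary examples; 203.0.113.9 plays a malicious sender.

def asm_accepts(joined_group, packet_source, packet_group):
    # ASM: a (*, G) join delivers traffic from ANY source sending to G
    return packet_group == joined_group

def ssm_accepts(joined_source, joined_group, packet_source, packet_group):
    # SSM: an (S, G) join delivers traffic only from the requested source
    return packet_group == joined_group and packet_source == joined_source

# A malicious host floods the group the receiver has joined:
print(asm_accepts("224.1.1.1", "203.0.113.9", "224.1.1.1"))
# True: the flood reaches every ASM receiver of the group

print(ssm_accepts("198.51.100.1", "232.1.1.1", "203.0.113.9", "232.1.1.1"))
# False: the attacker's source does not match, so the traffic is
# never forwarded toward the SSM receiver in the first place
```

Note that with SSM the filtering happens in the network, not at the receiver's edge, so the attacker's traffic never consumes the victim's bandwidth.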

1.12.10 Unfriendly Last Mile Technologies, Less Friendly Firewalls

Multicast provides its benefits at the network layer. It is generally transparent to layer 2 technologies such as Frame Relay, ATM, and Ethernet, which often treat it as broadcast and flood it out all ports. Many of the high-speed last mile deployments of DSL and cable modems rely primarily on layer 2 infrastructures, and many of these architectures are unable to realize the efficiencies supplied by multicast. Fortunately, in the world of data communications, the only constant is change. Service providers realize they must be agile enough to modify their offerings when needed to contend in this fiercely competitive landscape. As multicast becomes a standard part of the Internet, these providers will be motivated to make the necessary software or hardware upgrades to support it.

Multicast is predominantly delivered via User Datagram Protocol (UDP). Those concerned with security find UDP traffic inherently scarier than its connection-oriented counterpart, Transmission Control Protocol (TCP). Many firewalls and other security devices do not even support multicast. Once again, as multicast becomes ubiquitous across the Internet, makers of these devices will add support for the services their customers demand. Similarly, common practices will be developed to allay the security vulnerabilities that exist today with multicast traffic.

1.12.11 The Need for Multicast

In global emergency situations, multicast can play a crucial role in delivering vital communication to millions of Internet users, providing extra capacity at a time when conventional methods are strained to the breaking point. Nowhere has this been more clearly demonstrated than in the tragic events of September 11, 2001. In the early hours following the terrorist attacks in New York and Washington, most news Web sites were inaccessible as extraordinarily large numbers of users attempted to access them simultaneously.

At Northwestern University, CNN was rebroadcast as a multicast feed on the Internet and quickly gathered an audience of over 2,000 viewers. At the time, this multicast audience was believed to be the largest for a single feed in history. However, the size of this audience was infinitesimal compared to the number of users that wanted desperately to view this coverage and learn what was happening. As millions tried in vain to view pictures, video, text, anything that could have described the horrific events unfolding that day, users on multicast-enabled networks were able to watch real-time video accounts throughout the entire day.

Users on networks not enabled for multicast were forced to scramble to find radios and televisions. On September 11, 2001, multicast enabled Internet users to stay informed; in the future, multicast can be used to deliver critical information regarding public safety and security.

1.12.12 Final Outlook

The free and open dissemination of information enabled by the Internet is among humankind's most powerful achievements. While the Internet has enjoyed unparalleled growth and has saturated nearly every element of our culture, it is poorly equipped to support multidestination traffic without multicast.

On enterprise and financial networks, multicast has enjoyed modest success for years; on the Internet, it has the capability to support content with the potential to be no less revolutionary than the World Wide Web. The reasons for its slow deployment across the Internet range from valid concerns to misunderstandings. In all cases, these obstacles are surmountable, especially given recent enhancements such as SSM. Finally, history suggests the eventual convergence of all data networks onto a single IP infrastructure; multicast makes that forecast attainable.
