
2.2 Predictable Per-hop Behavior

The goal in a QoS-enabled environment is to provide predictable service delivery to certain classes or types of traffic regardless of what other traffic is flowing through the network at any given time. Put another way, the goal is to create a multiservice IP network in which traditional bursty traffic shares the same infrastructure (routers, switches, and links) as traffic with more rigorous latency, jitter, bandwidth, and/or packet loss requirements. Regardless of whether you focus on enterprise, access, or backbone networks (or some combination of them all), the end-to-end path followed by a single user's packets is merely a sequence of links and routers. So, your attention must initially be drawn to the dynamics of a router's forwarding behavior. Although a traditional router chiefly focuses on where to send packets (making forwarding decisions based on the destination address in each packet and locally held forwarding table information), routers for QoS-enabled IP networks must also enable control of when to send packets. You need to look more closely at those elements of a router that affect when packets are actually forwarded.

2.2.1 Transient Congestion, Latency, Jitter, and Loss

Each router is the smallest controllable convergence and divergence point for tens, hundreds, or thousands of unrelated flows of packets. In most data networks, traffic arrives in fluctuating bursts. On regular occasions, the simultaneous arrival of packet bursts from multiple links, all destined for the same output link (itself having only finite capacity), leaves a router with more packets than it can immediately deliver. For example, traffic converging from multiple 100Mbit per second Ethernet links might easily exceed the capacity of a 155Mbit per second OC-3/STM-1 wide area circuit, or traffic from a number of T3/E3 links may simultaneously require forwarding out along a much smaller T1/E1 link. To cope with such occasions, all routers incorporate internal buffers (queues) within which they store excess packets until they can be sent onwards. Under these circumstances packets attempting to pass through the router experience additional delays. Such a router is said to be suffering from "transient congestion."

The end-to-end latency experienced by a packet is a combination of the transmission delays across each link and the processing delays experienced within each router. The delay contributed by link technologies such as SONET or SDH circuits, "leased line" circuits, or Constant Bit Rate (CBR) ATM virtual circuits is fairly predictable by design. However, the delay contributed by each router's congestion-induced buffering is not so predictable. It fluctuates with the changing congestion patterns, often varying from one moment to the next even for packets heading to the same destination. As you recall from Chapter 1, "The Internet Today," this randomly fluctuating component of the end-to-end latency is commonly referred to as "jitter."

Another issue is packet loss. Because routers have only finite buffering (queuing) capacity, a sustained period of congestion may fill the buffer(s) completely. Packets that arrive to find buffer space exhausted must be discarded until space becomes available.

Clearly, you have a problem. The traditional router has, effectively, only a single queue for each internal congestion point (for example, in Figure 2.3 an output interface is draining the queue as fast as the interface speed allows) and no mechanism to isolate different classes or types of traffic from the effects of other traffic passing through it. The vagaries of the unrelated traffic passing through the shared queue at each internal congestion point are likely to have a heavy influence on each traffic stream's latency, jitter, and packet loss. Some types of traffic (for example, TCP connections carrying email) tolerate latency better than they tolerate packet loss, suggesting that long queues are ideal. However, other types of traffic (for example, User Datagram Protocol (UDP) streams carrying video or audio) prefer that packets be discarded if held too long by the network, suggesting that shorter queues are better.

Figure 2.3 First-in, first-out queuing on a Best Effort router.

Consider the scenario in Figure 2.3. Packets arrive from each input port at a maximum rate of Y1 through Yn packets per second (pps). The outbound link extracts packets from the queue at X pps. Take the total input rate as Y, the sum (Y1 + Y2 + ... + Yn). When Y is less than X, packets do not need to wait in the queue. However, it is more than likely that Y can burst well above X, in which case the queue sees a net growth in size. While such a burst lasts, the number of packets (P) in the queue after some interval (T) is P = T x (Y – X). A packet arriving at time T and finding P packets already queued experiences additional latency of P / X seconds (because those packets must drain ahead of it at X pps). If a packet arrives when the queue is full (P = L, the available queue space), the packet has nowhere to go and is dropped. Jitter comes from the fact that the components of Y are bursty and not correlated.
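
The dynamics just described can be replayed in a few lines of code. The following Python fragment is a minimal sketch (not from the book) that tracks the backlog P, the wait P / X, and the drop count over one-second ticks; the arrival pattern, drain rate X, and queue limit L are illustrative values only.

```python
def simulate_fifo(arrivals_per_tick, X, L):
    """Replay Figure 2.3: backlog P, per-packet wait P/X, and drops."""
    P = 0                             # packets currently queued
    dropped = 0
    for Y in arrivals_per_tick:       # Y = packets arriving this 1-second tick
        P += Y                        # the burst joins the queue...
        overflow = max(0, P - L)      # ...but only L packets fit
        dropped += overflow
        P -= overflow
        wait = P / X                  # wait seen by the last packet to get in
        print(f"arrived={Y:3d}  backlog={P:3d}  wait={wait:5.2f}s  dropped={dropped}")
        P = max(0, P - X)             # the link drains X packets per tick

# Uncorrelated bursts: the total rate Y exceeds X (here 10 pps) only some of the time.
simulate_fifo([4, 25, 3, 30, 2, 1], X=10, L=20)
```

Running this shows exactly the behavior described: quiet ticks leave the queue nearly empty, bursts above X build a backlog (and, thus, latency), and bursts beyond L cause drops.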

The preceding description also holds if you express the input and output rates in bits per second and the available queue space in bits (or bytes). If packets had a fixed length, a simple relationship would exist between the two forms of expression. However, in a typical IP environment packets are not of fixed length, adding further variability to the relationship between output link rate, the number of backlogged packets, and the latency experienced by backlogged packets.


Latency can also be a function of the subnet technology—for example, the backoff scheme of Ethernet. However, backoff on Ethernet simply reveals itself as temporal unpredictability of the "link."

2.2.2 Classification, Queuing, and Scheduling

So what do you need to improve? The latency, jitter, and packet loss characteristics of any given IP network ultimately boil down to the QoS characteristics of its links and the dynamics of queue utilization and queue management within each router.

If network load exceeds service rate, a single queue at each internal congestion point is no longer sufficient. Instead, you need a queue for each identifiable class of traffic for which independent latency, jitter, and packet loss characteristics are required.

Each of these queues should have its own packet discard policies (for example, different thresholds beyond which packets are randomly or definitely discarded). Of course, the multiple queues per output interface are useless without a mechanism for assigning packets to the correct queues. A classification method is required over and above the router's traditional next-hop forwarding lookup. Finally, the queues must all share the finite capacity of the output link they feed into. This requirement implies the addition of a scheduling mechanism to interleave packets from each queue and, thus, mediate link access in a controllable and predictable manner.

For the purposes of this book, the preceding requirements can be captured as a statement that QoS-enabled networks require routers that can differentially Classify, Queue, and Schedule (CQS) all types of traffic as needed (see Figure 2.4). Such routers will be said to have a CQS architecture.

Figure 2.4 Per-hop Classify, Queue, and Schedule enables independent queuing and scheduling.
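
To make the three stages concrete, here is a minimal Python sketch (illustrative only, not from the book) of a CQS output interface: packets are classified onto per-class queues, each queue has its own space, and a weighted round-robin scheduler mediates access to the link. The class names, weights, limits, and the "tos"-based classification rule are all assumptions for the example.

```python
from collections import deque

# Hypothetical classes and policy; a real router keys off header fields
# (addresses, ports, precedence bits) and operator configuration.
QUEUES  = {"voice": deque(), "video": deque(), "best-effort": deque()}
WEIGHTS = {"voice": 3, "video": 2, "best-effort": 1}
LIMITS  = {"voice": 50, "video": 100, "best-effort": 200}   # per-queue space

def classify(packet):
    """Classify stage: map a packet to a class (toy rule on a 'tos' field)."""
    return {7: "voice", 5: "video"}.get(packet.get("tos"), "best-effort")

def enqueue(packet):
    """Queue stage: each class has its own space and its own drop decision."""
    cls = classify(packet)
    if len(QUEUES[cls]) >= LIMITS[cls]:
        return False                  # drop is charged only to this class
    QUEUES[cls].append(packet)
    return True

def schedule():
    """Schedule stage: weighted round-robin interleaves the class queues."""
    while any(QUEUES.values()):
        for cls, weight in WEIGHTS.items():
            for _ in range(weight):
                if QUEUES[cls]:
                    yield QUEUES[cls].popleft()
```

Because each class now has its own queue and drop decision, a burst of best-effort traffic can no longer starve the voice queue; it can consume only its own buffer space and its own share of the scheduler's attention.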

Later in this book, you look at various methods available for classifying traffic, comparing their relative complexities and the inherent granularity with which each scheme isolates different classes of traffic within an aggregate stream of packets. You also evaluate queuing schemes—the most important part of which is the queue's packet-dropping policy. These policies can range from simply dropping the most recently arrived packet when a queue reaches a hard limit (for example, it runs out of space) to making preemptive randomized drop decisions on the most recently arrived packet (based on how close the queue is to filling up and/or certain attributes carried within the packet itself). Finally, you consider the temporal effects of different scheduling algorithms on a network's capability to isolate different traffic classes from each other.
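
The "preemptive randomized drop" policy mentioned above is essentially what Random Early Detection (RED) does. The fragment below is a minimal sketch of such a policy with assumed thresholds; production RED implementations drive the decision from a smoothed average queue depth rather than the instantaneous depth used here.

```python
import random

# Assumed thresholds (illustrative values): below MIN_TH always accept,
# at or above MAX_TH always drop, ramp the probability up to MAX_P between.
MIN_TH, MAX_TH, MAX_P = 20, 80, 0.1

def should_drop(queue_depth):
    """Randomized early drop: probability rises as the queue fills."""
    if queue_depth < MIN_TH:
        return False                  # plenty of room: accept
    if queue_depth >= MAX_TH:
        return True                   # hard limit reached: tail drop
    p = MAX_P * (queue_depth - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p        # preemptive, probabilistic drop
```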

2.2.3 Link-Level QoS

Sometimes a router's scheduler must do more than simply interleave traffic at the IP packet level. The scheduler's capability to smoothly interleave traffic belonging to different queues depends on how quickly the outbound link can transmit each packet. For high-speed links (such as 155Mbit per second SONET or SDH circuits) a 1,500-byte IP packet takes less than 80 microseconds to transmit. This allows the scheduler to divide the link's bandwidth into slots up to 80 microseconds long—a very reasonable number, which drops to 20 microseconds on 622Mbit per second (OC-12 or STM-4) circuits. However, at the edges of the Internet many links are operating at 1Mbit per second or slower—in the 56 to 128Kbit per second range for Integrated Services Digital Network (ISDN) in North America and Europe and down to 28.8Kbit per second in the case of many dial-up modem connections.

A 1,500-byte IP packet takes around 94 milliseconds to transmit over a 128Kbit per second link, blocking the link completely during this time. Regardless of whether jitter-sensitive traffic has been classified into a different queue, those packets experience up to 94 milliseconds of jitter whenever the scheduler pulls a 1,500-byte packet from another queue. Clearly, this poses some problems if QoS-sensitive applications are to be supported on the far side of typical low-speed access links.
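
These serialization numbers follow directly from packet size and link rate; the following lines (illustrative only) reproduce the figures quoted above.

```python
def serialization_delay(packet_bytes, link_bps):
    """Time to clock one packet onto the wire, in seconds."""
    return packet_bytes * 8 / link_bps

for rate in (155e6, 622e6, 128e3):
    ms = serialization_delay(1500, rate) * 1e3
    print(f"{rate / 1e6:8.3f} Mbit/s: {ms:7.3f} ms per 1,500-byte packet")
# 155 Mbit/s ->  ~0.077 ms (just under 80 microseconds)
# 622 Mbit/s ->  ~0.019 ms (just under 20 microseconds)
# 128 Kbit/s -> ~93.750 ms (the roughly 94 ms blocking time above)
```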

The basic solution is to perform additional segmentation of the IP packets at the link level in a manner transparent to the IP layer itself. The CQS architecture is then applied at the link level by queuing segments rather than whole packets, thus allowing the scheduler to interleave on segment boundaries (see Figure 2.5). By choosing the smaller segment size appropriately, such an approach enables jitter-sensitive IP traffic to avoid being backlogged behind long IP packets. (However, nothing is gained for free—segmentation decreases overall transmission efficiency because each segment carries its own header to allow later re-combination of segments.)

Figure 2.5 Segmentation before scheduling improves interleaving on low speed links.
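
A quick calculation shows why segment-level interleaving helps. With whole-packet scheduling, a jitter-sensitive packet can be blocked for one full packet's serialization time; with link-level segmentation it waits at most one segment. The 48-byte segment size below is an assumption for the example (it echoes the ATM cell payload discussed next).

```python
def worst_case_blocking_ms(unit_bytes, link_bps):
    """Longest time the scheduler must wait before it can interleave."""
    return unit_bytes * 8 / link_bps * 1e3

LINK = 128e3                                   # 128Kbit per second access link
print(worst_case_blocking_ms(1500, LINK))      # ~93.8 ms behind a whole packet
print(worst_case_blocking_ms(48, LINK))        # ~3.0 ms behind one segment
```

The price, as noted above, is the per-segment header overhead needed to reassemble the original packets.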

Although ATM was originally designed for high-speed links, its design reflects a similar concern with minimizing the interval over which traffic of a given class could hold the link. The ATM cell is short by design, and each ATM switch is an example of a CQS architecture. Arriving cells are queued for transmission according to the contents of their virtual path identifier (VPI) and virtual channel identifier (VCI) header fields. Taken together, the VPI/VCI identifies the virtual circuit (VC) to which the cell belongs, encoding both path information (where the cell should go next) and service-class information. Good ATM switches have queues for each traffic class on a per-port basis and have schedulers feeding cells out each port in accordance with the bandwidth guarantees given to each class.

2.2.4 Analogies

A real-world example of the CQS process is available from the airline industry. Airport check-in areas typically utilize a form of CQS architecture to provide different levels of service to different classes of passengers. The congestion point is represented by a set of check-in agents who are processing passengers as quickly as possible and at a moderately consistent rate (ignoring for a moment the variability of processing time caused by difficult passengers!). The link speed of this congestion point is represented by the aggregate passenger processing rate of the check-in agents. (The airline can add and remove agents to vary this speed.)

The arrival of passengers for check-in is a bursty process, typically peaking during the hour before a flight's scheduled departure time. Most of us are very familiar with the queues that build up during the sudden arrival of a group of passengers. If you've arrived along with many other passengers, your wait for check-in can be quite long. If you've arrived during a lull in activity, you may be checked-in quite soon after entering the check-in area.

The airlines typically like to provide expedited check-in service to their premium customers (for example, first-class passengers, or those in the higher frequent flyer club status levels). To do so, separate lines (queues) are established prior to the check-in agents. The classification of passengers into the appropriate queue can take a number of forms. Sometimes the airlines leave it to the passengers themselves to pick the appropriate queue; at other times an airline representative performs a perfunctory ticket check and directs people to the queue appropriate to their ticket or frequent-flyer class. It is worth noting here that classification doesn't need to take into account every piece of available information, only the relevant information. For example, although the passenger's identity (name) is important information during the check-in process, someone's name is largely irrelevant to the queue assignment at check-in.

The act of pulling a passenger from one of the queues represents a scheduling decision. Typically, check-in agents are dedicated to each queue (class of passenger), providing a minimum rate of service to that queue regardless of any blockages affecting other queues. To achieve efficient usage of agents, when a high-priority queue empties, the associated agents usually begin (temporarily) processing passengers from the lower-priority queues. By appropriate distribution of check-in agents, the premium class of customers experience faster (shorter lines and, thus, lower latency) and more predictable (lower jitter) check-in service than those in lower classes.

A related real-world example of queuing and scheduling can be seen in the design of major highways and freeways. Consider exit ramps, which are a form of output buffering for cars. Exit ramps feeding onto smaller roads typically terminate at a controlled intersection. The lights controlling the flow of traffic from the off-ramp onto the local road act as a coarse scheduler. When cars begin exiting the highway faster than they are being fed onto the local road, the exit ramp itself begins to fill up. During the morning and evening peak traffic hours, so many cars may arrive that the exit ramp overflows, causing traffic chaos on the main highway itself. Fortunately for drivers, cars are not "dropped" when the exit ramp overflows (although drivers may choose to continue on and search for another exit).

Finally, an example of classification and scheduling can be found at the toll booths that are sometimes placed across major highways. Typically a multilane highway fans out to many more toll lanes, and a self-classification process ensues as cars approach the tollbooths and pick their preferred lane. Particular lanes may be set aside for trucks, or priority lanes restricted to cars holding special electronic passes—motorists are advised of the appropriate self-classification rules prior to arriving at the tollbooths. (This example does not have an equivalent to controlled scheduling because each lane processes cars independently of other lanes.)

As you will see in the following chapters, CQS router architectures may be implemented in a number of permutations, each with its own specific consequences for the QoS characteristics of the IP network as a whole. The fundamental task of each router hop now becomes

  • To know where to send the packet (conventional forwarding)

  • To know when to send it (the additional QoS requirement)

  • To complete the preceding tasks independently of other traffic sharing the router
