
2.4 Illustrative Examples

This section presents a number of examples to illustrate the previously described concepts.

2.4.1 Push Provisioning and FIFO Traffic Handling

Most existing networks employ little (if any) QoS mechanism and provide a relatively low QE product. Consider a typical enterprise network (consisting of both LAN and WAN links) in which employees access internal Web sites. Users may be able to surf the Web fairly painlessly (assuming that the targeted Web servers are not a bottleneck). The extent of QoS mechanism present in these networks is that the network manager monitors the network usage level and from time to time (as the number of users on the network grows) adds capacity to (reprovisions) the network. It may take 1 second for a typical Web query to complete, or it may take 5 seconds, depending on the time of day and the activity level of other users on the network. The QoS is relatively low but is nonetheless satisfactory for the application.

This mode of operation corresponds to the top-left cell in the table in Figure 2.3. Traffic is handled using FIFO queuing, and the network occasionally is reprovisioned in a push manner. Instead of employing sophisticated QoS mechanisms to improve the QE product, the network manager increases quality as necessary by adding capacity (compromising efficiency). Because the service quality required by Web surfing is relatively low, relatively minor increases in capacity may be sufficient to meet the needs of the users. To the extent that service must be improved in the LAN (versus the WAN), efficiency may not be a concern at all.


Network capacity may be increased by physically reprovisioning or by logically reprovisioning. An example of physical reprovisioning is replacing a 10Mbps interface card with a 100Mbps interface card. An example of logical reprovisioning is reconfiguring a 128Kbps ATM VC to a 256Kbps ATM VC. For the purpose of this discussion, both are considered to be forms of push provisioning.

2.4.2 Using Aggregate Traffic Handling to Raise the QE Product

While reprovisioning in the LAN may be reasonable, adding capacity to WAN links is typically quite expensive. If the network manager finds that he or she is continually adding capacity to WAN links to maintain the required quality of Web-surfing service, it may be appropriate to explore alternatives. If Web surfing is deemed mission-critical in the enterprise network, QoS mechanisms may be employed to improve the QE product of the WAN network with respect to some important subset of Web-surfing traffic. This will make it possible to offer Web surfers improved service quality while stemming the rate at which capacity must be added.

Consider Figure 2.5.

Figure 2.5 Improving the QE Product by Combining Push and Aggregate Traffic Handling

To improve the quality of certain Web-surfing traffic without adding further capacity, the network manager might configure R1 to recognize important traffic originating from the Web servers and mark it with an appropriate DSCP. Routers transmitting onto WAN links would be configured to grant the marked traffic relative priority.
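The classify-and-mark step just described can be sketched as follows. This is a minimal illustration, not a router configuration: the Web-server subnet, the port match, and the chosen DSCP value are all assumptions made for the example.

```python
# Sketch of the DSCP classify-and-mark step performed at R1.
# The subnet, port match, and DSCP value below are illustrative
# assumptions, not taken from any particular deployment.

DSCP_AF21 = 18                  # an "assured forwarding" code point, chosen arbitrarily
WEB_SERVER_PREFIX = "10.1.2."   # hypothetical subnet hosting the important Web servers


def classify_and_mark(packet):
    """Mark packets sourced from the Web-server subnet with a priority DSCP."""
    if packet["src"].startswith(WEB_SERVER_PREFIX) and packet["sport"] == 80:
        packet["dscp"] = DSCP_AF21   # important Web traffic
    else:
        packet["dscp"] = 0           # default (best-effort) code point
    return packet


pkt = classify_and_mark({"src": "10.1.2.7", "sport": 80, "dscp": 0})
print(pkt["dscp"])  # marked with the priority code point
```

Routers transmitting onto the WAN links then need only match on the DSCP field, not re-run the full classification.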

This is quite an efficient approach because no resources are added to the network. However, while it provides a QoS that is better-than-best-effort (BBE), it still represents a relatively low QoS. It promises no quantifiable latency bounds. Latency might degrade significantly if an unusually high number of users decided to Web-surf simultaneously (thereby overwhelming the higher-priority queues in the routers). This condition would be especially severe if all simultaneous users were collocated. In this case, unusually high demands would be placed on a single WAN link. Thus, the QoS would depend on the number of simultaneous Web-surfing users and their location in the network topology. However, because Web-surfing does not demand particularly high service quality, this approach may be appropriate. The next example discusses the provisioning of higher-quality services.

2.4.3 Supporting Higher-Quality Services in the LAN

Consider an IP telephony application. Users of this application each require a guarantee from the network to carry 64Kbps, with an end-to-end latency no higher than 100 milliseconds. A higher latency renders the service useless. Furthermore, users expect that an IP telephony session will not degrade in quality as the call progresses. Clearly, the IP telephony application requires a higher-quality service than the Web-surfing application. In a LAN environment, the higher quality may be offered effectively by using a combination of aggregate traffic handling and overprovisioning. This is illustrated in Figure 2.6.


Figure 2.6 Providing Telephony-Quality Service Using Push Provisioning and 802 User Priority

In this example, each switch in the LAN is configured with a high-priority queue and a standard queue. Switches or hosts at the periphery of the LAN are configured to recognize IP telephony traffic and to mark it with the appropriate 802 user priority so that it is directed to the high-priority queues. Because bandwidth is relatively plentiful in the LAN, and because the bandwidth consumed by IP telephony sessions is relatively low, the high-priority queues will remain relatively underutilized and will offer the low-latency, high-quality service required. The simple combination of aggregate traffic handling and push provisioning raises the QE product enough to provide high-quality telephony service with only moderate overprovisioning.
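The two-queue arrangement described above can be sketched as a strict-priority scheduler: the 802 user priority selects the queue, and the high-priority queue is always drained first. The priority threshold and frame representation are assumptions for illustration.

```python
from collections import deque

# Minimal sketch of the two-queue switch port described above. The
# 802.1p threshold of 5 (conventionally associated with voice) and the
# dict-based frame representation are illustrative assumptions.

HIGH_PRIORITY = 5


class TwoQueueSwitchPort:
    def __init__(self):
        self.high = deque()
        self.standard = deque()

    def enqueue(self, frame):
        if frame["user_priority"] >= HIGH_PRIORITY:
            self.high.append(frame)
        else:
            self.standard.append(frame)

    def dequeue(self):
        # Strict priority: telephony frames always transmit first.
        if self.high:
            return self.high.popleft()
        if self.standard:
            return self.standard.popleft()
        return None


port = TwoQueueSwitchPort()
port.enqueue({"id": "web", "user_priority": 0})
port.enqueue({"id": "voice", "user_priority": 5})
print(port.dequeue()["id"])  # the voice frame is transmitted first
```

Because telephony traffic is sparse relative to LAN capacity, the high-priority queue stays short and its frames see minimal queuing delay.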

2.4.4 Supporting Higher-Quality Services in the WAN

If it is necessary to support the same high-quality service across the WAN, the combination of push provisioning and aggregate traffic handling may not suffice. In Figure 2.7, two LANs are interconnected by a 1.5Mbps WAN link. Assume that push provisioning is used to configure the routers driving the WAN link (R1 and R2) to recognize telephony traffic and to direct it to a high-priority queue.

Figure 2.7 Supporting Telephony Service Across a WAN Link

As long as telephony calls remain local to one of the LANs illustrated, capacity may be sufficient to provide high-quality service to all simultaneous telephony sessions. However, the WAN link is capable of supporting only a small number of simultaneous telephony sessions. Beyond some threshold, one additional telephony session will increase the utilization of the high-priority queue in R1 or R2 and will compromise the latency bounds provided by the queue. The marginal telephony session not only will experience compromised service itself, but also will compromise service to those sessions already in progress, as illustrated in Figure 2.8.
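The threshold at which the marginal session appears can be bounded with simple arithmetic. The figures below ignore link-layer and IP/RTP overhead, so the real session count would be lower; this is an upper bound, not a design rule.

```python
# Back-of-the-envelope bound on simultaneous telephony sessions over
# the 1.5Mbps WAN link, ignoring protocol overhead.

wan_capacity_bps = 1_500_000   # the 1.5Mbps WAN link
session_rate_bps = 64_000      # one telephony session

max_sessions = wan_capacity_bps // session_rate_bps
print(max_sessions)  # at most 23 sessions before the link itself saturates
```

In practice the threshold is lower still, since maintaining a latency bound requires keeping the high-priority queue well below full utilization.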

Figure 2.8 The Marginal Session Congests the WAN Link, Compromising Service to All

The service provided to telephony traffic in this example is of low integrity and low quality. This occurs because the simple push provisioning mechanism aggregates all telephony traffic into the high-priority queue indiscriminately.

To maintain a high QoS, the network manager may overprovision the WAN link to accommodate the worst-case number of simultaneously occurring telephony sessions. However, this is likely to be prohibitively expensive. Instead, the network manager can raise the QE product by employing a mechanism to restrict use of the high-priority queue to a limited number of telephony sessions. This can be achieved using QoS signaling for explicit admission control, as described in the following section.


In theory, it is possible to achieve similar effects without QoS signaling, using only push provisioning and an implicit form of admission control. If R1 and R2 were made sufficiently intelligent, they could be designed to identify traffic associated with individual telephony sessions and could be configured to direct traffic only from the first N sessions to the high-priority queue (where N is the number of sessions that can be simultaneously accommodated without compromising service quality). To be generally effective, this requires cumbersome functionality in routers. Furthermore, there may be multiple such routers in the path. It is necessary to coordinate these routers so that they direct traffic from the same N sessions to the high-priority queue. It is quite complex to achieve such coordination using push mechanisms only.
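The implicit, first-N form of admission control described above can be sketched as follows. Flow identification is reduced here to a simple key; a real router would need far more machinery to recognize session boundaries, and (as noted) multiple routers would have to agree on the same N flows.

```python
# Sketch of implicit admission control: direct only the first N
# identified telephony flows to the high-priority queue. The flow-key
# representation and queue names are illustrative assumptions.

class ImplicitAdmission:
    def __init__(self, max_sessions):
        self.max_sessions = max_sessions
        self.admitted = set()

    def queue_for(self, flow_key):
        if flow_key in self.admitted:
            return "high-priority"
        if len(self.admitted) < self.max_sessions:
            self.admitted.add(flow_key)   # one of the first N sessions
            return "high-priority"
        return "best-effort"              # marginal sessions are demoted


router = ImplicitAdmission(max_sessions=2)
print(router.queue_for(("h1", "h9")))  # high-priority
print(router.queue_for(("h2", "h8")))  # high-priority
print(router.queue_for(("h3", "h7")))  # best-effort
```

The coordination problem is visible even in this sketch: a second router running the same logic independently might admit a different set of N flows.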

2.4.5 Raising the QE Product of the WAN Link by Using Signaling

In Figure 2.9, R1 and R2 are capable of RSVP signaling.

Figure 2.9 Using Explicit Admission Control to Raise the QE Product of the WAN Link

Hosts initiating telephony sessions generate signaling messages describing the session. R1 and R2 participate in RSVP signaling, explicitly admitting (H1) or rejecting (H2) each session based on the resources available. (Devices that participate in signaling for the purpose of admission control are known as admission control agents.) In this manner, routers can reject sessions that would result in excess utilization of their high-priority queue, thereby protecting the integrity of pre-existing sessions, as illustrated in Figure 2.10.

Figure 2.10 The Marginal Session Experiences Low-Quality Service but Does Not Compromise the Quality of Service Available to Pre-existing Sessions

In a general topology, it may be necessary to coordinate admission control among multiple admission control agents along a traffic path. To this end, admission or rejection messages propagate along the traffic path so that all admission control agents are capable of coordinating the set of sessions admitted to their high-priority queues. Traffic from rejected sessions can then be redirected to the best-effort queue in each agent.
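The admit-or-reject decision made by such an agent can be sketched as simple bandwidth bookkeeping against the capacity provisioned for the high-priority queue. The capacity figure and the method names are assumptions for illustration; real RSVP admission control involves considerably more state (flow specs, soft-state refresh, and so on).

```python
# Sketch of an admission control agent: admit a signaled session only
# if the high-priority queue's provisioned capacity is not exceeded.

class AdmissionControlAgent:
    def __init__(self, capacity_bps):
        self.capacity_bps = capacity_bps
        self.reserved_bps = 0

    def request(self, rate_bps):
        """Admit or reject a signaled reservation request."""
        if self.reserved_bps + rate_bps <= self.capacity_bps:
            self.reserved_bps += rate_bps
            return "ADMIT"
        return "REJECT"   # traffic falls back to the best-effort queue

    def teardown(self, rate_bps):
        self.reserved_bps -= rate_bps


agent = AdmissionControlAgent(capacity_bps=256_000)  # room for four 64Kbps calls
results = [agent.request(64_000) for _ in range(5)]
print(results)  # the fifth request is rejected
```

Note that rejection protects the sessions already admitted: the agent's reserved total never exceeds the provisioned capacity.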

This approach combines per-conversation signaling and aggregate traffic handling to raise the QE product of the network. In the simple example illustrated, this approach is applied to the bandwidth-constrained WAN link. As a result, it is possible to provide high-quality telephony service (albeit to some limited number of simultaneous sessions) without over-provisioning the network.

Call Blocking

The approach described in the previous example raises the QE product by using signaling to block calls that would result in overutilization of high-priority resources. By doing so, it makes it possible to provide high-quality service to some limited number of calls. The utility of such an approach depends largely on the statistical distribution of telephony sessions over time. For example, assume that 1,000 potential IP telephony users are evenly distributed across the enterprise network. In the worst case, it will be necessary to support 500 telephony sessions across the WAN link at any point in time (two users per session). However, in most cases, the actual number of simultaneous sessions will be quite small. For example, if the number of simultaneous sessions is typically four, with occasional spikes to five and beyond, then the approach described works quite well. Admission control in the routers can be limited to admit capacity for four sessions. Occasionally, requests for a fifth or sixth session will be rejected, resulting in a blocked call or a busy signal.

To provide the same service quality without admission control, the network manager would have no choice but to increase the capacity of the WAN link. In fact, to guarantee the equivalent service quality without using admission control would require provisioning for 500 simultaneous sessions! This clearly would result in inefficient use of network resources. A middle ground could be struck, overprovisioning to a lesser degree. However, partial overprovisioning without admission control does not guarantee service integrity and quality. It assures these only to the extent that the provisioned threshold is not exceeded. If the statistics of call distribution over time are such that the provisioned threshold is exceeded, service will be compromised to all sessions at that time.
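The provisioning arithmetic behind this comparison is straightforward. The session counts come from the example above; the bandwidth figures ignore protocol overhead.

```python
# Worst-case provisioning versus admission-controlled provisioning,
# using the figures from the call-blocking example above.

session_rate_bps = 64_000

worst_case_bps = 500 * session_rate_bps   # guarantee quality without admission control
admitted_bps = 4 * session_rate_bps       # capacity reserved under admission control

print(worst_case_bps)  # 32,000,000 bps: a 32Mbps link to cover the worst case
print(admitted_bps)    # 256,000 bps: enough for the typical four sessions
```

The factor of 125 between the two figures is the efficiency gained by accepting an occasional blocked call.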

Note that the definition of high QoS does not preclude blocked calls. Rather, it stipulates that admitted calls should be provided good service with high integrity. If it is necessary never to block calls, there is no choice but to overprovision accordingly.

Issues Regarding the Use of Signaling as a Mechanism for Raising the QE Product of a Network

Signaling in the context of RSVP will be discussed in depth in Chapter 5. Because it plays an important role in supporting a high QE product, however, certain related issues are discussed briefly in this section.

Signaling Costs

Signaling can improve the QE product of a network. However, this comes at a cost. Signaling itself requires network resources. Any form of signaling generates additional network traffic. Because of its soft state, RSVP signaling does so continually (albeit at low volumes). In addition, for the signaling to be useful, it is necessary for network devices to intercept signaling messages and to process them. This consumes memory and processing resources in the network devices. In addition to the impact of signaling on device resources, the processing of signaling messages in each device introduces latency. Hosts experience this latency as a delay in obtaining the requested QoS.

Signaling Density

In the example illustrated previously, only routers attached to the WAN link (R1 and R2) participate in signaling. Routers and switches within each of the LANs do not. Within the LANs, it is more cost-effective to provide the required service quality by overprovisioning than by requiring each device to participate in signaling.

In general, certain devices (including switches and routers) are obvious candidates to be configured as admission control agents. Typically, these are devices that are responsible for relatively bandwidth-constrained segments or subnetworks. Where resources are plentiful, it is rarely necessary to appoint admission control agents. Thus, the density of distribution of admission control agents can be reduced where compromises in efficiency can be tolerated. This reduces overhead at the cost of a reduction in QE product. This effect is illustrated in Figure 2.4.

Dense distribution of admission control agents improves the QE product of a network by improving the topology awareness of the admission control process. This effect is explained briefly in the related sidebar in this section. Signaling and topology awareness are discussed in detail in Chapter 5.

Signaling and Topology Awareness

Consider the simple network illustrated in Figure 2.11.

Figure 2.11 Sample Network

Assume that all routers illustrated participate in RSVP signaling. Now assume that a QoS session requiring 64Kbps is initiated between H1 and H2, and that another session requiring 64Kbps is initiated between H1 and H4. One RSVP request for 64Kbps would traverse R1, R2, and R3. Another RSVP request for 64Kbps would traverse R1, R2, and R4. The routers would admit these resource requests because they would not result in overcommitment of resources on any of the routers’ interfaces. If instead H2 and H3 each attempted to simultaneously initiate a 64Kbps QoS session to H1, then R2 would prevent one of these sessions from being established in order to avoid over-committing resources on segment b. More generally, R2 could admit two simultaneous requests for 64Kbps if one were for resources on segment b and the other for resources on segment c. However, if both were for resources on the same segment at the same time, one of the requests would not be admitted.

Thus, RSVP signaling makes it possible to admit or reject resource requests based on the current availability of resources in the specific devices whose resources would be required. This results from two facts. First, end systems generate RSVP signaling in real time as the need for resources arises. Second, the end systems address RSVP messages to the same address to which data traffic is sent. As a result, the messages follow the data path and are available to each network device along the path. Throughout the rest of this book, this characteristic of RSVP signaling will be referred to as topology-aware admission control.
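Topology-aware admission control on the network of Figure 2.11 can be sketched as per-segment bandwidth bookkeeping: because signaling follows the data path, a request is checked against every segment it would traverse. Segment names and capacities follow the text (segment a at 128Kbps; segments b and c at 64Kbps); the path representation is an assumption for illustration.

```python
# Sketch of topology-aware admission control for Figure 2.11. Each
# reservation is admitted only if no segment along its data path
# would be overcommitted.

segment_capacity_bps = {"a": 128_000, "b": 64_000, "c": 64_000}
segment_reserved_bps = {"a": 0, "b": 0, "c": 0}


def admit(path, rate_bps):
    """Admit a reservation only if every segment on the path has capacity."""
    if any(segment_reserved_bps[s] + rate_bps > segment_capacity_bps[s]
           for s in path):
        return "REJECT"
    for s in path:
        segment_reserved_bps[s] += rate_bps
    return "ADMIT"


print(admit(["a", "b"], 64_000))  # H1-H2 session: admitted
print(admit(["a", "c"], 64_000))  # H1-H4 session: admitted (different segment)
print(admit(["a", "b"], 64_000))  # a third 64Kbps session would overcommit: rejected
```

The two admitted sessions succeed precisely because their traffic is distributed across segments b and c; a second session on the same segment is refused, matching the behavior described for R2.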

Push provisioning, by contrast, provides neither the dynamic nature nor the topology awareness of RSVP signaling. In push provisioning, resources are effectively preassigned to specific sets of traffic at the time classifiers are configured in network devices. Some volume of traffic will appear at each device and will match the installed classifiers, thereby claiming against allocated resources. The network manager has only limited knowledge regarding the volumes of traffic that will appear at each device. As a result, it is difficult to provide high-quality guarantees with push provisioning.

As mentioned before, the topology awareness supported by RSVP signaling is maximized when each device in the network acts as an admission control agent. Because this may be costly in terms of overhead, the network manager likely will limit the density of signaling-aware devices. The following example illustrates the effects this has on the QE product offered by the network illustrated in Figure 2.11.

Assume that the network manager reduces the density of signaling-enabled network devices by disabling the processing of QoS signaling messages in R2, R3, and R4. Only R1 now participates in signaling. In effect, it becomes the admission control agent for itself as well as the remaining routers in the network. In this case, the router’s downstream interface has a capacity of 128Kbps (on segment a). If R1 were configured to apply admission control based on this capacity, it might admit requests of up to 64Kbps from both H2 and H3 simultaneously (or from both H4 and H5 simultaneously). This would overcommit the resources on segment b (or c), thereby compromising the service quality offered.

The service quality could be maintained if R1 were configured to limit admission of resource requests to 64Kbps. However, this would result in inefficient use of network resources because only one conversation could be supported at a time, when in fact two could be supported if their traffic were distributed appropriately. Alternatively, all 64Kbps links in the network could be increased to 128Kbps links to avoid overcommitment of resource requests, but the increased capacity would be used only if hosts H2 and H3 (or H4 and H5) required resources simultaneously. If this were only rarely the case, such overprovisioning would also be inefficient.

In general, a reduction in the density of admission control agents reduces the QE product that can be offered by a network. This is because the network manager has imperfect knowledge of network traffic patterns. In the previous example, if the network manager knew with certainty that hosts H2 and H3 (or hosts H4 and H5) never required low latency resources simultaneously, they could be offered high-quality guarantees without signaling and without incurring the inefficiencies of overprovisioning. In smaller networks, it is very difficult for the network manager to predict traffic patterns. In larger networks, it tends to be easier to do so because of the lower variance in traffic patterns. Thus, reductions in the density of signaling-aware devices tend to compromise the QE product less in large networks than in small networks.

Aggregation of Signaling Messages

In the case of standard RSVP signaling, messages are generated for each conversation in progress. In parts of the network through which a large number of conversations frequently occur, it is possible to aggregate per-conversation signaling messages into a smaller number of messages regarding aggregate resources. Aggregate signaling reduces demands on admission control agents and reduces overhead (as compared with per-conversation signaling). Of course, it also reduces the QE product.
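One way to picture this trade is an aggregating node that maintains a single reservation sized in coarse increments, re-signaling upstream only when the aggregate must grow. The step size and the class shape below are illustrative assumptions, not a description of any standardized aggregation scheme.

```python
# Sketch of aggregating per-conversation reservations: grow one
# aggregate reservation in coarse steps rather than signaling once
# per session. The 256Kbps step size is an arbitrary illustration.

AGGREGATE_STEP_BPS = 256_000


class Aggregator:
    def __init__(self):
        self.per_session_bps = 0   # true demand from individual sessions
        self.aggregate_bps = 0     # reservation actually signaled upstream
        self.messages_sent = 0

    def add_session(self, rate_bps):
        self.per_session_bps += rate_bps
        # Signal only when demand exceeds the current aggregate.
        while self.aggregate_bps < self.per_session_bps:
            self.aggregate_bps += AGGREGATE_STEP_BPS
            self.messages_sent += 1


agg = Aggregator()
for _ in range(8):            # eight 64Kbps sessions arrive
    agg.add_session(64_000)
print(agg.messages_sent)      # 2 aggregate messages instead of 8 per-session ones
```

The saving in signaling overhead is bought with coarser admission decisions: the aggregate may reserve capacity that no session is currently using, which is the reduction in QE product noted above.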
