
2.3 The Value of Different QoS Mechanisms in Raising the QE Product of a Network

The QoS mechanisms introduced in the previous chapter can be used in isolation or in various combinations to improve the QE product of a network to varying degrees. The following section ranks the various QoS mechanisms in terms of their impact on the QE product and discusses various combinations of these mechanisms.

2.3.1 Overhead

When considering the impact that a particular QoS mechanism has on the QE product of a network, it is important to also consider the cost of the mechanism in terms of deployment costs and management burden. The term efficiency was defined to specifically exclude these costs. Throughout the rest of this book, the term overhead will be used to discuss the various costs associated with specific QoS mechanisms. Overhead includes the following components, among others:

  • Marginal hardware cost (processors, memory, and so on)

  • Marginal software cost

  • Management burden

  • Increased likelihood of failure

As mentioned previously in this chapter, the value of a particular QoS mechanism should be based on weighing the improvement in the QE product against the increased overhead.
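
This weighing can be made concrete with a toy model. The mechanism names, scores, and weights below are illustrative assumptions, not values from the text:

```python
# Toy model (illustrative only): score a QoS mechanism by weighing its
# improvement in QE product against its added overhead.
from dataclasses import dataclass

@dataclass
class Mechanism:
    name: str
    qe_improvement: float  # hypothetical gain in QE product (0..1)
    overhead: float        # hypothetical deployment/management cost (0..1)

def net_value(m: Mechanism, overhead_weight: float = 1.0) -> float:
    """Positive when the QE gain outweighs the weighted overhead."""
    return m.qe_improvement - overhead_weight * m.overhead

fifo = Mechanism("FIFO + push provisioning", 0.0, 0.0)
intserv = Mechanism("Per-conversation RSVP/IntServ", 0.9, 0.8)

print(net_value(fifo))     # the status quo: no gain, but no cost
print(net_value(intserv))  # a large gain, heavily discounted by overhead
```

A network manager's `overhead_weight` expresses how expensive deployment and management burden are relative to QE gains; a high weight pushes the "sweet spot" toward lower-overhead mechanisms.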

2.3.2 Tabulating the Impact of QoS Mechanisms on QE Product and Overhead

Figure 2.3 ranks the general QoS mechanisms described in the previous chapter and illustrates various combinations of them. Methods for combining the QoS mechanisms will be discussed throughout the book. In general, mechanisms and combinations of mechanisms in the lower-right corner of the table will offer a greater impact on QE product but also will incur increased overhead. Mechanisms and combinations of mechanisms in the upper-left corner will offer less impact on QE product but will incur less overhead.

Figure 2.3 Combinations of QoS Mechanisms and Their Impact on QE Product and Overhead

Traffic-Handling Mechanisms

The rows of Figure 2.3 correspond to various traffic-handling mechanisms. The topmost row corresponds to traditional FIFO queuing. The middle row corresponds to aggregate traffic-handling mechanisms such as DiffServ, 802 user-priority, and the use of ATM VCs to carry multiple conversations requiring similar QoS. The bottom row corresponds to per-conversation traffic handling, such as that implied by the original vision of per-conversation RSVP/IntServ or the use of per-conversation ATM VCs. Moving from top to bottom, these mechanisms offer greater impact on the QE product of a network, but they also incur the costs of increased overhead.

Provisioning and Configuration Mechanisms

As noted in the previous chapter, traffic-handling mechanisms must be configured and provisioned in a consistent manner. Thus, each traffic-handling mechanism can be combined with various provisioning and configuration mechanisms.
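
The difference in granularity between the traffic-handling rows of Figure 2.3 can be sketched as queue-selection logic in a forwarding device. The field names and the DSCP-to-class mapping below are hypothetical, chosen only to illustrate the three granularities:

```python
# Sketch: queue selection at three traffic-handling granularities.

def fifo_queue(packet: dict) -> str:
    # Topmost row: every packet shares a single first-in, first-out queue.
    return "fifo"

def aggregate_queue(packet: dict) -> str:
    # Middle row (DiffServ-style): packets are lumped into a small set of
    # behavior classes; this DSCP-to-class mapping is purely illustrative.
    dscp_to_class = {46: "ef", 26: "af3", 0: "best-effort"}
    return dscp_to_class.get(packet.get("dscp", 0), "best-effort")

def per_conversation_queue(packet: dict) -> str:
    # Bottom row (IntServ-style): each conversation, identified here by its
    # 5-tuple, gets its own queue -- finest granularity, greatest overhead.
    key = (packet["src"], packet["dst"], packet["sport"],
           packet["dport"], packet["proto"])
    return f"flow-{hash(key) & 0xffff:04x}"

pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 5004,
       "dport": 5004, "proto": "udp", "dscp": 46}
print(fifo_queue(pkt))              # all traffic shares one queue
print(aggregate_queue(pkt))         # one queue per aggregate class
print(per_conversation_queue(pkt))  # one queue per conversation
```

The per-conversation classifier must maintain state proportional to the number of active conversations, which is one concrete form of the overhead the text describes.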

The columns in Figure 2.3 correspond to various provisioning and configuration mechanisms. The leftmost column corresponds to the lowest-overhead approach to provisioning and configuration: simple push provisioning. The middle column corresponds to aggregate signaling, and the rightmost column corresponds to per-conversation signaling. Moving from left to right, these mechanisms offer greater impact on the QE product of a network, but they also incur the costs of increased overhead.

Combinations of Traffic Handling and Provisioning and Configuration Mechanisms

Various cells in Figure 2.3 represent combinations of the corresponding traffic-handling mechanism with the corresponding provisioning and configuration mechanism. For example, the top-left cell represents the status quo in which push provisioning is used with FIFO queuing. This approach provides no improvement in QE product, but it also incurs no overhead. The lower-right cell represents the other extreme: per-conversation signaling combined with per-conversation traffic handling. This is the original RSVP/IntServ model. It may offer significant improvement in QE product, but at a significant increase in overhead. Other cells represent various compromises between the two extremes. For example, the middle cell in the rightmost column represents the use of per-conversation signaling to gain admission to aggregate traffic-handling classes. The center cell represents the use of aggregate RSVP to establish DiffServ “trunks” that provide a certain service level between edges of a DiffServ network to an aggregation of conversations (aggregate RSVP is discussed in detail in Chapter 5).
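
The grid of Figure 2.3 can be represented as a small lookup table. The numeric ranks below are not from the text; they merely encode the qualitative trend that impact on QE product and overhead both grow toward the lower right:

```python
# The rows and columns of Figure 2.3 as a lookup table (ranks are
# illustrative assumptions that only encode the trend in the text).
rows = ["FIFO", "aggregate traffic handling",
        "per-conversation traffic handling"]
cols = ["push provisioning", "aggregate signaling",
        "per-conversation signaling"]

def cell(row: int, col: int) -> dict:
    return {
        "traffic_handling": rows[row],
        "provisioning": cols[col],
        # crude rank: both dimensions contribute to QE impact and overhead
        "qe_impact_rank": row + col,  # 0 (status quo) .. 4 (RSVP/IntServ)
        "overhead_rank": row + col,
    }

status_quo = cell(0, 0)  # top-left: FIFO with push provisioning
intserv = cell(2, 2)     # lower-right: the original RSVP/IntServ model
sweet_spot = cell(1, 2)  # aggregate handling + per-conversation signaling
print(status_quo["qe_impact_rank"],
      sweet_spot["qe_impact_rank"],
      intserv["qe_impact_rank"])
```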

The various cells represent only examples, and these examples are not exhaustive. Certain examples will be discussed in further depth throughout the book. Different combinations may be appropriate for different types of subnetworks. For example, in a routed network handling traffic of many different conversations, the combination of aggregate traffic handling with per-conversation signaling likely will offer a significantly better QE product than the status quo at a moderate increase in overhead. Beyond this, the marginal improvement in QE product offered by combining per-conversation traffic handling with per-conversation signaling is likely to be quite small relative to the marginal increase in overhead. Thus, the manager of a large routed network likely would find a “sweet spot” in combining aggregate traffic handling with per-conversation signaling. Managers of other types of networks might prefer other combinations.

Continuous Nature of QoS Mechanisms and Density of Distribution

The table shown in Figure 2.3 illustrates how variations in traffic handling and signaling mechanisms can impact QE product and overhead. This table represents differing levels of traffic handling or provisioning and configuration mechanisms in discrete steps. In reality, however, these are not constrained to discrete steps. FIFO queuing represents no traffic handling, while per-conversation traffic handling represents a very fine granularity of traffic handling. Between these two extremes is a continuum of possibilities, representing different degrees of aggregation. The same is true for provisioning and configuration. At one extreme there is push provisioning only. At another extreme, there is per-conversation signaling. In between, a continuum of aggregate signaling possibilities exists.

Increasingly finer granularities of signaling and of traffic handling can offer an ever-increasing QE product. A third factor to consider is the density of distribution of the mechanism. When a mechanism is densely distributed, each device in the network topology applies the mechanism. When it is sparsely distributed, only certain key devices apply the mechanism. More dense distributions result in a higher QE product, but they also increase overhead. More sparse distributions result in a lower QE product but lower overhead.
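
The three continuous factors can be combined in a toy formula. The multiplicative form and the specific coefficients below are assumptions for illustration only, chosen to reproduce the qualitative trend described above:

```python
# Toy model: QE product and overhead as continuous functions of
# traffic-handling granularity, signaling granularity, and density of
# distribution (each normalized to 0..1). The formulas are illustrative
# assumptions: finer granularity and denser distribution raise both
# the QE product and the overhead.

def qe_product(granularity: float, signaling: float, density: float) -> float:
    return granularity * signaling * density

def overhead(granularity: float, signaling: float, density: float) -> float:
    return 0.5 * (granularity + signaling) * density

sparse = (0.5, 0.5, 0.2)  # aggregate mechanisms at a few key devices
dense = (1.0, 1.0, 1.0)   # per-conversation mechanisms at every device

print(qe_product(*sparse), overhead(*sparse))
print(qe_product(*dense), overhead(*dense))
```

Under this model, moving along any of the three axes trades QE product against overhead, which is the continuum Figure 2.4 depicts.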

Figure 2.4 illustrates the continuous nature of the three factors: traffic handling, provisioning and configuration, and density of distribution.

Figure 2.4 Traffic Handling, Provisioning and Configuration, and Density of Distribution Can Be Applied on a Continuum
