Assigning the Correct QoS System

Where the entire end-to-end delivery system is under single control, you can use the simple approach to assigning QoS protocols and configurations discussed in the "Allocating Network Resources" section of Chapter 5. How you implement the systems in this chapter depends on how much control you have. For example, a priority set by your system at the edge could be reclassified in the core, rendering your original configuration redundant. An element of cooperation and agreement is therefore required, and it depends on your organization's particular arrangement with the third-party system.

QoS solutions are achieved through the use of traffic conditioners in the end-to-end network. The traffic-conditioning model defined in RFC 2475, "An Architecture for Differentiated Services," includes classifier, meter, marker, shaper, and dropper mechanisms. Cisco routers offer many traffic-conditioning implementations to provide these functions. The methods used depend on the objectives of the QoS policies and on the traffic present in the network.
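
In Cisco IOS, these building blocks map onto the Modular QoS CLI (MQC): a class map acts as the classifier, a policy map applies the meter, marker, shaper, and dropper actions to each class, and the policy takes effect once attached to an interface. The skeleton below is a minimal sketch; the class name, ACL number, DSCP value, and interface are hypothetical.

! Classifier: assign packets to a class (here, via a hypothetical ACL)
class-map match-all BUSINESS-APP
 match access-group 101
!
! Traffic-conditioning actions are applied per class in a policy map
! (here, a marker that sets the DSCP)
policy-map WAN-EDGE
 class BUSINESS-APP
  set dscp af31
!
! The policy takes effect only once attached to an interface
interface Serial0/0/0
 service-policy output WAN-EDGE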

The methodologies discussed so far in this book remain central to the overall system management and ultimate APM infrastructure. Central to these methodologies is the profile of the application. You must understand the application characteristics and dependencies, and then assign QoS based on a combination of those characteristics and the business delivery requirements.

The traffic-conditioning mechanisms used are as follows1:

  • Classifier—Determines which class packets belong to by inspecting various fields within the packet header, including, but not limited to, the source address, destination address, protocol identifier, source port, and/or destination port. Cisco Network-Based Application Recognition (NBAR; see Chapter 6) can be used to classify traffic further based on Layer 4 through Layer 7 information within the protocol data unit. Classification is most often done using access control lists, but NBAR can be leveraged to identify file-sharing applications and worms, which typically do not use well-known port numbers, so that they can then be policed. (A configuration sketch combining classification, marking, and policing follows this list.)

  • Meter—Measures the rate of a traffic stream. The metering process can be used to affect the operation of the marker and shaper/dropper. Traffic that conforms to the committed information rate (CIR) is treated differently than traffic that exceeds the CIR. A dual token-bucket algorithm is used with policing mechanisms that meter traffic. This enables the administrator to configure committed burst (Bc) and excess burst (Be) sizes that can be controlled with different exceed and violate actions. Policing can be used to re-mark and/or drop traffic.

  • Marker—Responsible for setting prioritization bits in the Layer 2 and/or Layer 3 headers to signify the importance of the traffic, as follows:

    • IP traffic employs the IP Precedence or DSCP fields.

    • Layer 2 marking depends on the technology used:

      • Ethernet has three 802.1p priority bits, carried in the 802.1Q tag, that can be marked.

      • Frame Relay includes a Discard Eligible (DE) bit, which can be set to signify that the frame is eligible for discard when the service provider experiences congestion.

      • ATM has a Cell Loss Priority (CLP) bit that is similar in use to the Frame Relay DE bit.

  • Shaper—Limits the bandwidth used on a link by queuing traffic that exceeds the set rate. Shaping is implemented only on egress links and proves especially useful in Frame Relay hub-and-spoke architectures. A Frame Relay hub may be running at DS-3 speed (44.736 Mbps), whereas a small remote site may have only a fractional DS-1 access pipe of 128 kbps. Without shaping, the central site (hub) can flood the remote-site link with excess traffic and waste resources. Shaping limits the traffic stream from the central site to a maximum rate of 128 kbps toward that remote site. (A shaping configuration sketch appears later in this section.)

  • Dropper—Discards out-of-profile packets. This mechanism proves very useful in metropolitan Ethernet environments, where a service provider gives the customer a Gigabit Ethernet handoff but, based on the customer contract, allows only 1 Mbps of bidirectional traffic. The benefit to the service provider is instant provisioning: the contracted rate can be changed without a new physical handoff. Dropping mechanisms can be selective, as in the case of weighted random early detection (WRED), or indiscriminate, as in the case of tail drop.
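
The classifier, marker, and meter/policer can be combined in a single MQC policy, as in the following sketch. All names and numbers are illustrative assumptions: the ACL matches a hypothetical application port, the NBAR protocol keyword depends on the IOS release and the loaded Packet Description Language Modules (PDLMs), and the policer rate and burst sizes are example values only.

! Classifier: an ACL-based class and an NBAR-based class
access-list 101 permit tcp any any eq 1433
class-map match-all BUSINESS-APP
 match access-group 101
class-map match-any FILE-SHARING
 match protocol gnutella
!
policy-map METRO-EDGE
 ! Marker: set the DSCP on business traffic
 class BUSINESS-APP
  set dscp af31
 ! Meter/policer: CIR with Bc and Be in bytes; exceeding traffic is
 ! re-marked and violating traffic is dropped
 class FILE-SHARING
  police 1000000 31250 62500 conform-action transmit exceed-action set-dscp-transmit 8 violate-action drop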

All nodes in an internetwork perform traffic forwarding, queuing, and congestion-avoidance (WRED) procedures. Other QoS mechanisms are used, depending on the node's physical location within the network. As a general rule, edge nodes (ingress and egress) perform classification, marking, policing, shaping (only on egress), and dropping. A core router's main function in the network is to forward packets at high speeds. The CPU- and memory-intensive tasks associated with edge device QoS functionality would burden core routers.
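
Shaping and dropping on such an egress edge link can also be sketched in MQC terms. The example below assumes the Frame Relay hub-and-spoke case from the shaper bullet (a DS-3 hub feeding a 128-kbps spoke) and a class-based WRED dropper on a WAN egress class; the class names, DSCP match, bandwidth value, and subinterface are hypothetical, and Frame Relay traffic shaping (FRTS) configured with map-class commands would be an alternative to the MQC shaper shown here.

! Shaper: hold the hub-to-spoke stream to the spoke's 128-kbps access rate
policy-map SPOKE-128K
 class class-default
  shape average 128000
!
! Dropper: class-based WRED inside a CBWFQ class (selective drops before the
! queue fills); tail drop remains the default for unconfigured queues
class-map match-all BULK-DATA
 match ip dscp af11
policy-map WAN-EDGE-OUT
 class BULK-DATA
  bandwidth 256
  random-detect
!
interface Serial0/0/0.201 point-to-point
 service-policy output SPOKE-128K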

You need to consider how the demarcation lines within your environment theoretically reshape your delivery topology, remembering that QoS is deployed primarily at the enterprise LAN-WAN boundary. Based on where those demarcation points fall, there may be a virtual shift in the delivery model.
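
At that LAN-WAN boundary, the hypothetical policies sketched earlier would typically be attached only at the edge: classification, marking, and policing inbound on the LAN-facing interface, and queuing, shaping, and dropping outbound on the WAN-facing interface, with core interfaces left to forward on the markings already applied. The interface names below are assumptions, and the policy names reuse those from the earlier sketches.

interface GigabitEthernet0/1
 description LAN-facing edge interface (classify, mark, and police on ingress)
 service-policy input METRO-EDGE
!
interface Serial0/1/0
 description WAN-facing edge interface (queue, shape, and drop on egress)
 service-policy output WAN-EDGE-OUT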

Table 7-1 identifies various Cisco QoS mechanisms and their application to the QoS building blocks.

Table 7-1 Cisco IOS Traffic-Conditioning Mechanisms

Traffic-Conditioning Mechanism: Examples

Classification: Modular QoS CLI (MQC), IP to ATM class of service (CoS), NBAR, QoS Policy Propagation over BGP (QPPB), route maps, access control lists

Marking: Committed access rate (CAR), class-based marking, QPPB, route maps

Metering: Weighted fair queuing (WFQ), class-based WFQ (CBWFQ), priority queuing (PQ), custom queuing (CQ), weighted round-robin (WRR), modified deficit round-robin (MDRR), CAR, MQC policing, class-based low-latency queuing (LLQ)

Shaping: Generic traffic shaping (GTS), Frame Relay traffic shaping (FRTS), virtual circuit (VC) shaping

Dropping: WRED, flow-based weighted random early detection (FRED), CAR
