Cisco IP Telephony Flash Cards: Weighted Random Early Detection (WRED)
Why You Need Quality of Service (QoS)
The networks of yesteryear physically separated voice, video, and data traffic. Literally, these traffic types flowed over separate media (for example, leased lines or fiber-optic cable plants). Today, however, network designers are leveraging the power of the data network to transmit voice and video, thus achieving significant cost savings by reducing equipment, maintenance, and even staffing costs.
The challenge with today’s converged networks, however, is that multiple applications contend for bandwidth, and some applications, such as voice, are far less tolerant of delay (that is, latency) than others, such as an FTP file transfer. A lack of bandwidth is the overshadowing issue for most quality problems.
When a lack of bandwidth exists, packets can suffer from one or more of the following symptoms:
Delay—Delay is the time required for a packet to travel from its source to its destination. You might witness delay on the evening news, when the news anchor talks via satellite to a foreign news correspondent. Because of the satellite delay, the conversation begins to feel unnatural.
Jitter—Jitter is the uneven arrival of packets. For example, consider that in a Voice over IP (VoIP) conversation, packet 1 arrives. Then, 20 ms later, packet 2 arrives. After another 70 ms, packet 3 arrives, and then packet 4 arrives 20 ms behind packet 3. This variation in arrival times (that is, variable delay) does not drop packets, but the jitter can be interpreted by the listener as dropped packets. (A short sketch following this list puts numbers on both delay and jitter.)
Drops—Packet drops occur when a link is congested and a buffer overflows. Some types of traffic, such as User Datagram Protocol (UDP) traffic (for example, voice), are not retransmitted if packets are dropped.
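To put rough numbers on the first two symptoms, the following Python sketch estimates the one-way propagation delay of a geostationary satellite hop and then computes the packet-to-packet gaps from the jitter example above. The physical constants and the commonly cited 150-ms comfort threshold are illustrative planning figures, not values taken from this text.

# Illustrative sketch: putting numbers on delay and jitter.

SPEED_OF_LIGHT_KM_PER_S = 300_000   # approximate propagation speed
GEO_ALTITUDE_KM = 35_786            # geostationary orbit altitude

# One-way voice path: up to the satellite and back down to Earth.
one_way_delay_ms = (2 * GEO_ALTITUDE_KM) / SPEED_OF_LIGHT_KM_PER_S * 1000
print(f"Satellite one-way propagation delay: {one_way_delay_ms:.0f} ms")
# Roughly 240 ms, well beyond the ~150 ms one-way delay that is commonly
# cited as the point where conversation starts to feel unnatural.

# Jitter example from the text: packets arrive at 0, 20, 90, and 110 ms.
arrival_ms = [0, 20, 90, 110]
gaps_ms = [later - earlier for earlier, later in zip(arrival_ms, arrival_ms[1:])]
print(f"Inter-arrival gaps: {gaps_ms} ms")   # [20, 70, 20]
# The packets all arrive, but the uneven 20/70/20 spacing is the jitter
# that a listener can perceive as dropped speech.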
Fortunately, quality of service (QoS) features that are available on Cisco routers and switches can recognize your “important” traffic and then treat that traffic in a special way. For example, you might want to allocate 128 kbps of bandwidth for your VoIP traffic and also give that traffic priority treatment.
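As a rough illustration of that 128-kbps figure, the following sketch estimates how many G.729 calls such a reservation could carry. The codec rate, packetization interval, and header sizes are standard Layer 3 planning assumptions (Layer 2 overhead is ignored); they are not taken from this text.

# Rough Layer 3 bandwidth budget for a 128-kbps voice reservation,
# using typical G.729 planning numbers (assumed, not measured).

CODEC_RATE_BPS = 8_000          # G.729 payload rate
PACKETS_PER_SECOND = 50         # 20 ms of speech per packet
IP_UDP_RTP_HEADER_BYTES = 40    # 20 (IP) + 8 (UDP) + 12 (RTP)

header_bps = IP_UDP_RTP_HEADER_BYTES * 8 * PACKETS_PER_SECOND   # 16,000 bps
per_call_bps = CODEC_RATE_BPS + header_bps                      # 24,000 bps

reservation_bps = 128_000
print(f"Per-call bandwidth: {per_call_bps // 1000} kbps")
print(f"Calls that fit in the reservation: {reservation_bps // per_call_bps}")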
Consider water that is flowing through a series of pipes with varying diameters. The water’s flow rate through those pipes is limited to the water’s flow rate through the pipe with the smallest diameter. Similarly, as a packet travels from its source to its destination, its effective bandwidth is the bandwidth of the slowest link along that path.
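In code form, the water-pipe analogy reduces to taking the minimum link speed along the path; the link speeds below are made up purely for illustration.

# Effective end-to-end bandwidth equals the slowest link on the path.
# The link speeds below are hypothetical.
path_links_kbps = {"Branch LAN": 100_000, "WAN circuit": 512, "HQ LAN": 1_000_000}

bottleneck = min(path_links_kbps, key=path_links_kbps.get)
print(f"Effective bandwidth: {path_links_kbps[bottleneck]} kbps "
      f"(limited by the {bottleneck})")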
Because your primary challenge is a lack of bandwidth, the logical question is, “How do you increase available bandwidth?” A knee-jerk response to that question is often, “Add more bandwidth.” Although adding more bandwidth is the best solution, it comes at a relatively high cost.
Compare your network to a highway system in a large city. During rush hour, the lanes of the highway are congested, but the lanes can be underutilized during other periods of the day. Instead of just building more lanes to accommodate peak traffic rates, the highway engineers add carpool lanes. Cars with two or more riders can use the reserved carpool lane. These cars have a higher priority on the highway. Similarly, you can use QoS features to give your mission-critical applications higher-priority treatment in times of network congestion.
Some of the QoS features that can address issues of delay, jitter, and packet loss include the following:
Queuing—Queuing can send higher-priority traffic ahead of lower-priority traffic and make specific amounts of bandwidth available for those traffic types (a simple scheduling sketch appears at the end of this section). Examples of queuing strategies that you will consider later in these Quick Reference Sheets include the following:
Priority Queuing (PQ)
Custom Queuing (CQ)
Modified Deficit Round Robin (MDRR) queuing
Weighted Fair Queuing (WFQ)
Class-Based WFQ (CB-WFQ)
Low Latency Queuing (LLQ)
Compression—Compressing a packet’s header or payload means that fewer bits are sent across the link, which effectively gives you more bandwidth.
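The compression point can be demonstrated with Python’s standard zlib module. The payload below is a deliberately repetitive, made-up block of text, chosen only to show that a well-compressing payload sends fewer bits across the link; real traffic (and real features such as RTP header compression) behaves less predictably.

import zlib

# Hypothetical, highly repetitive payload; real traffic compresses less neatly.
payload = b"region=WEST, units=100, status=OK\n" * 50

compressed = zlib.compress(payload)

print(f"Original:   {len(payload) * 8} bits")
print(f"Compressed: {len(compressed) * 8} bits")
print(f"Reduction:  {100 * (1 - len(compressed) / len(payload)):.0f}%")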
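To make the queuing idea concrete, here is the scheduling sketch promised earlier: a deliberately simplified, LLQ-style scheduler in which a strict-priority voice queue is always serviced first but is policed to its 128-kbps allowance so that it cannot starve the data queue. This is a teaching model under assumed packet sizes and a 100-ms service interval, not a description of how any particular Cisco queuing feature is implemented.

from collections import deque

# Simplified LLQ-style scheduler (illustrative assumptions throughout):
# the voice queue gets strict priority but is policed to a cap so it
# cannot starve the data queue during congestion.

LINK_BITS_PER_INTERVAL = 51_200       # 512-kbps link over a 100-ms interval
VOICE_CAP_BITS_PER_INTERVAL = 12_800  # 128-kbps priority allowance

voice_q = deque([1_600] * 12)   # queued voice packets (sizes in bits)
data_q = deque([12_000] * 6)    # queued data packets (sizes in bits)

link_budget = LINK_BITS_PER_INTERVAL
voice_budget = VOICE_CAP_BITS_PER_INTERVAL
voice_sent = data_sent = 0

# 1. Service the priority queue first, up to its policed cap.
while voice_q and voice_q[0] <= min(voice_budget, link_budget):
    pkt = voice_q.popleft()
    voice_budget -= pkt
    link_budget -= pkt
    voice_sent += 1

# 2. Whatever link capacity remains goes to the data queue.
while data_q and data_q[0] <= link_budget:
    link_budget -= data_q.popleft()
    data_sent += 1

print(f"Sent {voice_sent} voice packets first, then {data_sent} data packets; "
      f"{len(voice_q)} voice and {len(data_q)} data packets remain queued.")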