
The Need for a QOS-based Internet

The provisioning of adequate resources for an application (such as bandwidth for fast relay through the network) is not a simple process. Because of its complexity, internets in the past treated all applications' traffic alike and delivered the traffic on a best-effort basis, something like the postal service does for regular mail. That is, the traffic was delivered if the network had the resources to support the delivery. However, if the network became congested, the traffic was simply discarded. Some networks have attempted to establish some method of feedback (congestion control) to the user in order to request that the user reduce the infusion of traffic into the network. But as often as not, this technique is ineffective because many traffic flows in data networks are very short, maybe just a few packets in a user-to-user session. So, by the time the user application receives the feedback, it has finished sending traffic. The feedback packets are worthless and have done nothing but create yet more traffic.

The best-effort concept means traffic is discarded randomly; no attempt is made at any kind of intelligent traffic removal. As a result, applications that require high bandwidth and place many packets into the network have more packets discarded than applications with lesser requirements that send fewer packets. So the biggest "customers," those needing the most bandwidth, are the very ones who are penalized the most! Assuming the customer who is supposedly given a bigger "pipe" to the network is paying more for that pipe, it is reasonable to assume that this customer should get a fair return on his or her investment.

It is charitable to say that the best-effort approach is not a very good model. What is needed is a way to manage the QOS in accordance with the customer's requirements and investment.

Label Switching and QOS

In the past few years, it has become increasingly evident that internets need to differentiate between types of traffic and to treat each type differently. We will have more to say shortly about this need, but for this discussion, we need first to define quality of service. The term was first used in the Open Systems Interconnection (OSI) reference model to refer to the ability of a service provider to support a user's application requirements with regard to bandwidth, latency (delay), jitter, and traffic loss. You may notice that these categories are quite similar to the list of reasons for the use of label switching, discussed earlier.

The provision of bandwidth for an application means the network has sufficient capacity to support the application's throughput requirements, measured, say, in packets per second.
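As a rough illustration, the throughput a link can sustain in packets per second follows directly from its bit rate and the packet size. The figures below are assumptions chosen purely for the example:

```python
# Illustrative figures (assumed): estimate a link's capacity in packets/second.
link_bps = 10_000_000          # a 10 Mbit/s link
packet_bits = 1500 * 8         # 1500-byte packets
pps = link_bps / packet_bits   # packets per second the link can carry
print(round(pps))
```

An application whose offered load exceeds this figure cannot be supported at full quality, no matter how the packets are forwarded.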

The second service category is latency, which describes the time it takes to relay a packet from a sending node to a receiving node. A closely related measure is round-trip time (RTT), the time it takes to send a packet to a destination node and receive a reply from that node. RTT includes the transmission time in both directions and the processing time at the destination node. Applications such as voice and video have strict latency requirements. If a packet arrives too late, it is not useful and is ignored, resulting in wasted bandwidth and a reduction in the quality of the service to the application.
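As a back-of-the-envelope sketch of the RTT decomposition just described, the round trip is simply the two transmission times plus the processing time at the destination. All of the timing values here are assumed for illustration:

```python
# Hypothetical timings for one request/reply exchange (all values assumed).
forward_ms = 40      # transmission time, sender to destination
processing_ms = 5    # time the destination spends producing the reply
return_ms = 40       # transmission time, destination back to sender
rtt_ms = forward_ms + processing_ms + return_ms
print(rtt_ms)        # total round-trip time in milliseconds
```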

The third service category, jitter, was discussed earlier. It is the variation in delay between successive packets and usually arises at an output link, where packets compete for the bandwidth of the router's outgoing links. Variable delay is particularly damaging to speech, because it complicates the receiver's job of playing out the speech signal to the listener.
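One widely used way to quantify this delay variation (not specific to label switching) is the interarrival-jitter estimator defined for RTP in RFC 3550: for each pair of consecutive packets, take the change in transit time and smooth its magnitude with a gain of 1/16. A minimal sketch, with illustrative timestamps:

```python
def interarrival_jitter(send_times, arrival_times):
    """Running jitter estimate in the style of RFC 3550.

    For each pair of consecutive packets, d is the change in transit
    time (arrival minus send); the estimate moves toward |d| with
    gain 1/16. Units match the input timestamps.
    """
    j = 0.0
    prev_transit = None
    for snd, arr in zip(send_times, arrival_times):
        transit = arr - snd
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            j += (d - j) / 16.0
        prev_transit = transit
    return j

# Packets sent every 20 ms; arrivals wobble around the ideal spacing.
sent = [0, 20, 40, 60]
arrived = [50, 72, 89, 111]
print(interarrival_jitter(sent, arrived))
```

A receiver playing out voice uses an estimate like this to size its playout buffer: the larger the jitter, the more buffering (and hence added delay) it must accept.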

The last service category is packet loss. Packet loss is quite important in voice and video applications, since the loss may affect the outcome of the decoding process at the receiver and may also be detected by the end user.
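Packet loss is usually reported as a simple ratio of packets lost to packets sent over a measurement interval, as in this sketch with made-up counts:

```python
# Illustrative counts (assumed) for one measurement interval.
sent_packets = 1000
received_packets = 988
loss_ratio = (sent_packets - received_packets) / sent_packets
print(f"{loss_ratio:.1%}")   # fraction of packets lost, as a percentage
```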

The Contribution of Label Switching

You might ask what label switching has to do with QOS. It has no bearing on some aspects of the QOS categories, such as raw bandwidth. However, I stated earlier that label switching can be a valuable tool to combat latency and jitter, two important QOS concerns for delay-sensitive traffic, such as voice and video, and for fast Web responses. Since label switching speeds up the relaying of traffic in an internet, it follows that the technology will reduce latency and jitter. Indeed, an internet that does not use label switching runs the risk of unacceptable QOS performance for delay-sensitive traffic.

Of course, label switching by itself will not solve the delay and variable-delay problems that are systemic to data networks. If we are connected to a low-bandwidth network, label switching is not going to give us more bandwidth. But it will ameliorate delay and jitter problems significantly.
