
MPLS and Quality of Service

For QoS, the integrated services model (IntServ) specifies two classes of service—controlled load (CL) and guaranteed service (GS)—and uses a signaling protocol known as the Resource Reservation Protocol (RSVP). Briefly, the quality of CL end-to-end connections (IETF RFC 2211) is intended to be equivalent to that provided by traditional best-effort service in a lightly loaded network: a large percentage of packets is successfully delivered to the recipient, and latency is no greater than the minimum delay experienced by packets in a lightly loaded network. To ensure compliance with these conditions, applications requesting CL (via RSVP) supply the network with an estimate of the traffic they are likely to generate, expressed as the parameters of a "leaky bucket." This so-called traffic specification (Tspec) is used by each network node on the flow path to carry out admission control. The following are possible mechanisms for implementing CL:

  • Priority queuing—It uses two queues: a high-priority queue subject to CL traffic admission control and a best-effort queue.
  • Weighted fair queuing (WFQ)—It enables you to regulate the way link capacity is shared between various traffic flows. All flows have access to the full connection bandwidth, but when several flows have packets in the queue, the service rate of each flow is proportional to its assigned "weight." By selecting the appropriate weights, you can therefore reserve capacity for CL more efficiently.

  • Class-based queuing (CBQ)—This is an alternative algorithm that also permits rate control for various classes of traffic.
  • Random early detection (RED)—This protects CL traffic to some extent from unresponsive best-effort flows. RED is an active queue management mechanism that tends to ensure a fairer distribution of bandwidth between contending flows.
  • Low latency queuing (LLQ)—This is in fact class-based weighted fair queuing with a priority queue (known as PQCBWFQ); it is a critical mechanism that supports both data classes of service and VoIP.

  • Weighted random early detection (WRED)—This combines the capabilities of the RED algorithm with IP precedence. This combination provides for preferential traffic handling for higher-priority packets. It can selectively discard lower-priority traffic when the interface starts to get congested and can provide differentiated performance characteristics for different classes of service. WRED is also RSVP aware and can provide an integrated services controlled-load QoS.
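The WRED behavior described above can be sketched as a drop decision that depends on both the average queue depth and the packet's IP precedence. The threshold values below are purely illustrative, not vendor defaults; the point is that higher-precedence traffic gets a higher minimum threshold and is therefore discarded later than lower-priority traffic as congestion builds.

```python
# Sketch of a WRED-style drop decision (illustrative thresholds).
import random

THRESHOLDS = {  # precedence: (min_threshold, max_threshold, max_drop_prob)
    0: (20, 40, 0.10),   # best effort: dropped earliest
    5: (35, 40, 0.10),   # e.g. voice: protected longest
}

def wred_drop(avg_queue, precedence):
    """Return True if the arriving packet should be dropped."""
    min_th, max_th, max_p = THRESHOLDS[precedence]
    if avg_queue < min_th:
        return False              # below the minimum threshold: never drop
    if avg_queue >= max_th:
        return True               # above the maximum threshold: drop everything
    # Between thresholds: drop probability rises linearly toward max_p.
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p

print(wred_drop(10, 0))   # False: queue below both minimums
print(wred_drop(30, 5))   # False: precedence-5 minimum not yet reached
```

At an average depth of 30, precedence-0 packets already face a probabilistic drop while precedence-5 packets are still fully protected, which is the preferential handling the text describes.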

The guaranteed service (IETF RFC 2212) supports applications with strict requirements for both assigned bandwidth and packet delay. It ensures that all packets are delivered within a given time and are not lost as a result of queue overflow. The service is first invoked by the sender, who specifies the Tspec and QoS requirements. Resource reservation is then performed in the reverse direction, with the receiver specifying the desired level of service (Rspec). As for CL, the Tspec corresponds to the parameters of the leaky bucket.
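The leaky-bucket Tspec used by both CL and GS admission control can be sketched as a token bucket with a fill rate and a depth; a packet conforms if enough tokens have accumulated. The class name and parameter values below are hypothetical, chosen only to illustrate the conformance check.

```python
# Sketch: checking packet conformance against a leaky-bucket (token-bucket)
# Tspec with rate r (bytes/sec) and bucket depth b (bytes). Illustrative only.
class TokenBucket:
    def __init__(self, rate, depth):
        self.rate = rate      # token fill rate, bytes per second
        self.depth = depth    # maximum burst size, bytes
        self.tokens = depth   # bucket starts full
        self.last = 0.0       # timestamp of the last update

    def conforms(self, size, now):
        """Return True if a packet of `size` bytes arriving at `now` fits
        within the Tspec; consume the tokens if it does."""
        # Refill tokens for the elapsed interval, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

bucket = TokenBucket(rate=125_000, depth=1_500)   # ~1 Mbps, one MTU of burst
print(bucket.conforms(1_500, 0.0))    # True: bucket starts full
print(bucket.conforms(1_500, 0.001))  # False: only ~125 bytes refilled
```

A node performing admission control would apply exactly this kind of check against the (rate, depth) pair signaled in the Tspec before accepting the flow.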

The IntServ model did not achieve the success anticipated because its implementation is much more complex than the best-effort model. The fact that all routers must be RSVP capable and able to store the details of every reserved CL and GS flow, although feasible on small networks, makes the model unwieldy when applied to large backbones. Additionally, the guarantees defined in the two service classes tend to be either too strict (GS) or too vague (CL) for most applications.

The differentiated services model (DiffServ) relies on a broad differentiation between a small number of service classes. DiffServ support over MPLS is documented in IETF RFC 3270. Packets are identified as belonging to one class or another via the content of the differentiated services (DS) field in the IP header. Packets are generally classified and marked at the network edge depending on the type of service contract or service level agreement (SLA) between the customer and the service provider. The different classes of packet then receive different per-hop behaviors (PHBs) in the network core nodes. Service differentiation, therefore, implies differential tariffs depending on the QoS offered to flows and packets belonging to different classes. The DiffServ architecture consists of a set of functional elements embodied in the network nodes, as follows:

  • The allocation of buffering and bandwidth to packet aggregates corresponding to each PHB
  • Packet classification (FEC)
  • Traffic conditioning, metering, and shaping

The DiffServ architecture avoids the requirement to maintain per-flow or per-user state within the network core, as is the case with IntServ. The DS field (IETF RFC 2474) replaces the existing definitions of the type of service (TOS) byte in IPv4 and the traffic class octet in IPv6. Six bits of the DS field are used as the DS code point (DSCP) to identify the PHB a packet is to receive at each node.
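The DSCP layout can be shown with two small bit-manipulation helpers: the DSCP occupies the top six bits of the DS byte, and (per RFC 3168) the remaining two bits carry the ECN field. The function names are illustrative.

```python
# Sketch: extracting the 6-bit DSCP from the 8-bit DS field (the old IPv4 TOS
# byte / IPv6 traffic class octet). The low 2 bits are the ECN field.
def dscp(ds_byte):
    return (ds_byte & 0xFF) >> 2        # top six bits identify the PHB

def ds_byte_for(dscp_value, ecn=0):
    return (dscp_value << 2) | (ecn & 0x3)

# The EF PHB uses DSCP 46 (binary 101110), so the DS byte is 0xB8.
print(dscp(0xB8))            # 46
print(hex(ds_byte_for(46)))  # 0xb8
```

Edge routers mark this byte when classifying packets against the SLA; core routers need only read it back to select the PHB.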

Packets must first be classified according to the content of certain header fields that determine the aggregates defined in the user's SLA. Each aggregate is checked for conformity against the SLA traffic parameters, and the DS field is marked accordingly to indicate the appropriate level of priority and PHB. The flow produced by certain aggregates can be reshaped to make it conform to the SLA.

In addition to best effort, considered to be the default PHB, two other PHBs have been defined by the IETF: expedited forwarding (EF) (IETF RFC 2598) and assured forwarding (AF) (IETF RFC 2597). These attributes are further discussed in Chapter 9, "Quality of Service." Service implementations using DiffServ include a virtual leased line for VoIP via the EF PHB and a so-called Olympic service using the AF PHB group, where the four AF classes are used to create four service qualities referred to as platinum, gold, silver, and bronze.
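The AF PHB group mentioned above has a regular DSCP structure: each AF class x carries three drop precedences y, encoded as AFxy with DSCP = 8x + 2y (so AF11 is DSCP 10 and AF43 is DSCP 38). The mapping of Olympic tier names to AF classes below follows the text's four-tier scheme but is illustrative; actual assignments vary by provider.

```python
# Sketch: AF PHB group (RFC 2597) DSCP encoding, AFxy -> DSCP = 8x + 2y.
def af_dscp(af_class, drop_prec):
    assert 1 <= af_class <= 4 and 1 <= drop_prec <= 3
    return 8 * af_class + 2 * drop_prec

# Hypothetical Olympic mapping: one AF class per service tier.
OLYMPIC = {"platinum": 4, "gold": 3, "silver": 2, "bronze": 1}

print(af_dscp(1, 1))  # AF11 -> DSCP 10
print(af_dscp(4, 3))  # AF43 -> DSCP 38
```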

Differentiating Service with Traffic Engineering

DiffServ traffic engineering (DS-TE) makes it possible to deploy different tunnels satisfying a variety of engineering constraints. Figure 3-4 depicts the implementation of DiffServ traffic engineering.

Figure 3-4 Different Tunnels Satisfying Different Engineering Constraints

For example, with DS-TE in Figure 3-4:

  • R1 can build a voice tunnel and a data tunnel to every POP.
  • If R1 sends a data packet in a data tunnel (with EXP = Data), it gets the correct QoS for data.
  • If R1 sends a voice packet in a voice tunnel (with EXP = Voice), it gets the correct QoS for voice.

Class of service–based traffic engineering tunnel selection (CBTS) provides a mechanism for dynamically using different tunnels—that is, for steering packets to the designated DS-TE tunnel depending on the destination or class of service (CoS). CBTS therefore involves minimal configuration, with automatic routing and rerouting when required. It complements DS-TE to achieve dynamic QoS-based routing over an MPLS core, where each CoS is transported over a tunnel engineered for its specific requirements. CBTS thus achieves strict QoS with "right-provisioning," using the mechanisms available with this technology, instead of wasteful "over-provisioning."
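The CBTS steering decision can be sketched as a lookup keyed on destination and CoS: the EXP bits select the class, and the class selects the tunnel engineered for it. The tunnel names, POP label, and EXP-to-class mapping below are hypothetical.

```python
# Sketch: CBTS-style tunnel selection per class of service (names illustrative).
TUNNELS = {
    # (destination POP, class): tunnel engineered for that class
    ("POP-A", "voice"): "Tunnel1",   # low-latency path
    ("POP-A", "data"):  "Tunnel2",   # high-bandwidth path
}
EXP_TO_CLASS = {5: "voice", 0: "data"}   # MPLS EXP bits -> CoS

def select_tunnel(dest_pop, exp):
    cos = EXP_TO_CLASS.get(exp, "data")  # unknown EXP falls back to data
    return TUNNELS.get((dest_pop, cos))

print(select_tunnel("POP-A", 5))  # Tunnel1 (voice tunnel)
print(select_tunnel("POP-A", 0))  # Tunnel2 (data tunnel)
```

This mirrors the Figure 3-4 scenario: a voice packet marked with the voice EXP value is steered onto the voice tunnel toward the POP, and a data packet onto the data tunnel, with no per-flow configuration.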


For multicast VPN (MVPN) implementation, the VPN multicast flow is encapsulated inside an IP multicast GRE packet at the provider edge (PE) and replicated inside the MPLS cloud. This encapsulation and replication are performed via regular IP multicast methods toward the far PE, which unwraps the GRE packet to obtain the customer multicast packet. The multicast destination of the GRE packet is unique per multicast domain (that is, per MPLS VPN). Two kinds of multicast trees can be created in the core: the default-mdt and the data-mdt. The default-mdt is the basic vehicle that allows the VPN routing and forwarding instances (VRFs) in the PEs to establish PIM neighbor relationships and pass multicast data between the PEs. All the multicast-enabled PEs of a VRF are members of the default-mdt. This "all" requirement means that PEs not interested in a particular (S,G) flow still receive it. The data-mdt is a traffic-triggered multicast tree, created separately from the default-mdt, that consists only of the PEs that want to receive a particular customer (S,G). Figure 3-5 summarizes the multicast VPN implementation.

Figure 3-5 Multicast PIM Instances and Adjacencies
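The default-mdt/data-mdt distinction can be modeled simply as two sets of PEs: every multicast-enabled PE of the VRF joins the default-mdt, so a flow sent there also reaches PEs with no receivers, whereas a data-mdt carries the flow only to interested PEs. The PE names below are hypothetical.

```python
# Sketch: default-mdt vs data-mdt membership for one (S,G), modeled as PE sets.
all_pes = {"PE1", "PE2", "PE3", "PE4"}   # multicast-enabled PEs in the VRF
interested = {"PE2", "PE4"}              # PEs with receivers for this (S,G)

default_mdt = all_pes                    # every PE gets the flow
data_mdt = interested                    # traffic-triggered, receivers only

# PEs that receive unwanted traffic when the flow stays on the default-mdt:
wasted = default_mdt - data_mdt
print(sorted(wasted))
```

Moving a high-rate (S,G) from the default-mdt to a data-mdt eliminates exactly this wasted delivery, which is why the data-mdt is created on a traffic trigger.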

We have provided an overview of the MPLS operation with traffic engineering, quality of service, and multicast descriptions for use in an MPLS-based network. The next section discusses the benefits of MPLS as a technology foundation for service development and deployment.
