Traffic Engineering

Traffic engineering is the process of routing data traffic to balance the traffic load on the various links, routers, and switches in the network and is most applicable in networks where multiple parallel or alternate paths are available. Fundamentally, traffic engineering involves provisioning the network to ensure sufficient capacity exists to handle the forecast demand from the different service classes while meeting their respective QoS objectives. Current routing on IP networks is based on computing the shortest path where the "length" of a link is determined by an administratively assigned weight. Reasons to deploy traffic engineering include the following:

  • Congestion in the network due to changing traffic patterns (election news, online trading, or major sports events)
  • Better utilization of available bandwidth by routing on paths other than the shortest
  • Routing around failed links/nodes; fast rerouting around failures, transparently to users, as with SONET Automatic Protection Switching (APS)
  • Building of new services, such as virtual leased-line services, VoIP toll-bypass applications, and point-to-point bandwidth guarantees
  • Capacity planning; traffic engineering improves the aggregate availability of the network

Additional reasons to consider traffic engineering are that IP networks route based only on destination, whereas ATM/FR networks switch based on both source and destination (PVCs and so on). Some large IP networks were built on ATM or FR to take advantage of source and destination routing, but overlay networks inherently hinder scaling (see "The Fish Problem" in Figure 3-17). MPLS-TE allows you to do source and destination routing while removing the major scaling limitation of overlay networks. Finally, MPLS-TE has since evolved to do things other than bandwidth optimization, which is discussed in detail in Chapter 8, "Traffic Engineering."

The challenge with destination-based least-cost routing is that alternate links are often underutilized, as shown in Figure 3-17.

Figure 3-17

Figure 3-17 IP Routing and the Fish

To demonstrate how traffic engineering addresses the problem of underutilized links, consider the example in Figure 3-18, starting with the traffic engineering terminology:

  • Head-End—A router on which a TE tunnel is configured (R1)
  • Tail-End—The router on which the TE tunnel terminates (R3)
  • Mid-point—A router through which the TE tunnel passes (R2)
  • LSP—The label-switched path taken by the TE tunnel; here it's R1-R2-R3
  • Downstream router—A router closer to the tunnel tail
  • Upstream router—A router farther from the tunnel tail (R2 is upstream of R3, and R1 is upstream of R2)
Figure 3-18

Figure 3-18 Traffic Engineering Terminology

Continuing with the traffic engineering building blocks, information distribution is done via a link-state protocol, such as IS-IS or OSPF. The link-state protocol is required only for traffic engineering, not for the implementation of Layer 3 VPNs. A link-state protocol ensures that information is flooded throughout the network so that each router can build a topology of the entire network.

Information that is flooded includes each link, its available bandwidth, and its attributes. After available bandwidth information is flooded, a router can calculate a path from head to tail. The TE head-end performs a constrained SPF (CSPF) calculation to find the best path. CSPF is just like regular IGP SPF, except that it takes the required bandwidth into account and looks for the best path from a head to a single tail, not to all devices.
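
The CSPF idea can be sketched as ordinary Dijkstra with one extra pruning step. The topology, router names, and bandwidth figures below are assumptions made up for the example, not from any real network:

```python
import heapq

# Hypothetical topology: each entry is (link_cost, available_bandwidth_mbps).
topology = {
    "R1": {"R2": (10, 100), "R4": (10, 40)},
    "R2": {"R3": (10, 100)},
    "R4": {"R5": (10, 40)},
    "R5": {"R3": (10, 40)},
    "R3": {},
}

def cspf(graph, head, tail, required_bw):
    """Constrained SPF: regular Dijkstra, but links with less than
    required_bw of available bandwidth are pruned, and the search
    stops as soon as the single tail is reached."""
    queue = [(0, head, [head])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == tail:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, (link_cost, avail_bw) in graph[node].items():
            if avail_bw >= required_bw:   # the bandwidth constraint
                heapq.heappush(queue, (cost + link_cost, nbr, path + [nbr]))
    return None   # no feasible path for this reservation

# A 60-Mbps tunnel cannot use the R1-R4-R5-R3 branch (only 40 Mbps free),
# so CSPF selects R1-R2-R3.
print(cspf(topology, "R1", "R3", 60))   # (20, ['R1', 'R2', 'R3'])
```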

Note that the control capabilities offered by existing Interior Gateway Protocols (IGPs) are not adequate for traffic engineering, which makes actualizing effective policies to address network performance problems difficult. IGPs that are based on shortest-path algorithms contribute to congestion problems in autonomous systems within the Internet. SPF algorithms generally optimize based on a simple additive metric. These protocols are topology driven, so bandwidth availability and traffic characteristics are not factors in routing decisions. (Refer to IETF RFC 2702, "Requirements for Traffic Engineering over MPLS.")

In practice, CSPF CPU utilization has had no measurable impact even on the largest networks. After the path is calculated, it must be signaled across the network.

Path setup reserves bandwidth so that other LSPs cannot overload the path, and it establishes an LSP for loop-free forwarding along an arbitrary path. Setup is done via PATH messages sent from head to tail and is similar to a "call setup"; a PATH message carries a LABEL_REQUEST object. RESV messages are sent from tail to head and are analogous to a "call ACK"; RESV messages transport the LABEL object.

Other RSVP message types exist for LSP teardown and error signaling. The principles behind path setup are that you can use MPLS-TE to forward traffic down a path other than that determined by your IGP cost and that you can determine these arbitrary paths per tunnel head-end.
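
The tail-to-head label distribution of the RESV pass can be sketched as follows. This is a toy model, not wire-accurate RSVP-TE; the router names and the label counter are assumptions for the example:

```python
def signal_lsp(explicit_route):
    """Toy RESV pass: walk the explicit route from tail to head, and at
    each hop allocate the label the upstream neighbor will use,
    analogous to a call ACK carrying the LABEL object."""
    next_label = 16   # labels 0-15 are reserved values in MPLS
    labels = {}
    # The tail itself allocates no outgoing label; every other hop does.
    for upstream in reversed(explicit_route[:-1]):
        labels[upstream] = next_label   # label this router pushes/swaps to
        next_label += 1
    return labels

# For the LSP R1-R2-R3: R3 hands label 16 to R2, and R2 hands 17 to R1.
print(signal_lsp(["R1", "R2", "R3"]))   # {'R2': 16, 'R1': 17}
```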

Figure 3-19 describes the path setup operation.

Figure 3-19

Figure 3-19 Path Setup

After having established the TE tunnel, the next step in deploying MPLS-TE is to direct traffic down the TE tunnel. Directing traffic down a TE tunnel can be done by one of the following four methods:

  • Autoroute—The TE tunnel is treated as a directly connected link to the tail; an IGP adjacency is not run over the tunnel. Unlike an ATM/FR VC, autoroute is limited to a single area/level only.
  • Forwarding adjacency—With autoroute, the LSP is not advertised into the IGP, and this is the correct behavior if you are adding TE to an IP network. However, it might not be appropriate if you are migrating from ATM/FR to TE. Sometimes advertising the LSP into the IGP as a link is necessary to preserve the routing outside the ATM/FR cloud.
  • Static routes
  • Policy routing

With autoroute and static routes, MPLS-TE provides unequal-cost load balancing; static routes inherit unequal-cost load sharing when recursing through a tunnel. IP routing supports equal-cost load balancing but not unequal cost, because unequal-cost load balancing is difficult to implement while guaranteeing a loop-free topology. MPLS-TE avoids this problem: because MPLS does not forward based on the IP header, permanent routing loops do not occur. Further, 16 hash buckets are available for the next hop, and these are shared in rough proportion to the configured tunnel bandwidth or load-share value. Autoroute, forwarding adjacency, and static and policy routing are further explained in Chapter 8. To summarize, MPLS-TE operational components include the following:

  • Resource/policy information distribution
  • Constraint-based path computation
  • RSVP for tunnel signaling
  • Link admission control
  • LSP establishment
  • TE tunnel control and maintenance
  • Assignment of traffic to tunnels
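
The proportional sharing of the 16 hash buckets mentioned above can be sketched as follows; the tunnel names, bandwidth figures, and the largest-remainder apportioning are assumptions for the example, not the platform's actual algorithm:

```python
def allocate_buckets(tunnels, total_buckets=16):
    """tunnels: {name: bandwidth or load-share value};
    returns {name: bucket_count}, summing to total_buckets."""
    total_bw = sum(tunnels.values())
    # Ideal (fractional) share of buckets per tunnel.
    shares = {t: bw * total_buckets / total_bw for t, bw in tunnels.items()}
    # Largest-remainder method: floor each share, then hand the leftover
    # buckets to the tunnels with the biggest fractional remainders.
    buckets = {t: int(s) for t, s in shares.items()}
    leftover = total_buckets - sum(buckets.values())
    for t in sorted(shares, key=lambda t: shares[t] - buckets[t],
                    reverse=True)[:leftover]:
        buckets[t] += 1
    return buckets

# A 90-Mbps tunnel and a 30-Mbps tunnel split the 16 buckets 3:1.
print(allocate_buckets({"tunnel1": 90, "tunnel2": 30}))
# {'tunnel1': 12, 'tunnel2': 4}
```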

MPLS-TE can be used to direct traffic down a path other than that determined by your IGP cost. Fast Reroute (FRR) builds a path to be used in case of a failure in the network and minimizes packet loss by avoiding transient routing loops. To deploy FRR, you must pre-establish backup paths so that when a failure occurs, the protected traffic is switched onto the backup paths by local repair, and the tunnel head-ends are signaled to recover. Several FRR modes exist, such as link, node, and path protection. In link protection, the backup tunnel tail-end is one hop away from the point of local repair (PLR). In node protection, the backup tunnel tail-end is two hops away from the PLR. Figures 3-20 and 3-21 depict link, node, and path protection mechanisms.
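
Computing those pre-established backup paths amounts to a shortest-path search that avoids the protected element. The topology and router names below are assumptions for illustration:

```python
from collections import deque

# Hypothetical topology with unit-cost links.
links = {
    "R1": ["R2", "R4"],
    "R2": ["R1", "R3", "R5"],
    "R3": ["R2", "R5"],
    "R4": ["R1", "R5"],
    "R5": ["R2", "R3", "R4"],
}

def backup_path(src, dst, avoid_link=None, avoid_node=None):
    """Shortest path (by hop count) from src to dst that skips the
    protected link (link protection) or node (node protection)."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nbr in links[node]:
            if nbr == avoid_node or {node, nbr} == set(avoid_link or ()):
                continue   # skip the protected element
            if nbr not in seen:
                seen.add(nbr)
                queue.append(path + [nbr])
    return None

# Link protection at PLR R1 for link R1-R2: backup ends one hop away, at R2.
print(backup_path("R1", "R2", avoid_link=("R1", "R2")))  # ['R1', 'R4', 'R5', 'R2']
# Node protection at PLR R1 for node R2 (toward R3): backup ends at R3.
print(backup_path("R1", "R3", avoid_node="R2"))          # ['R1', 'R4', 'R5', 'R3']
```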

Figure 3-20

Figure 3-20 FRR Link and Node Protection

Figure 3-21

Figure 3-21 Path Protection

One application of MPLS-TE is to implement a virtual leased line (VLL) with bandwidth guarantees. This can be done via MPLS-TE alone or via DiffServ-aware traffic engineering (DiffServ-TE) with QoS. DiffServ is covered in the next section of this chapter. Figure 3-22 shows an example of VLL deployment via MPLS-TE.

Figure 3-22

Figure 3-22 Virtual Leased Line Deployment

The next section discusses class of service implementation based on the differentiated services (DiffServ) architecture, highlighting the architecture and providing linkage to service development; details of DiffServ are described in Chapter 9.
