
Troubleshooting Any Transport over MPLS Based VPNs


Multiprotocol Label Switching (MPLS) Layer 3 VPNs are described in Internet Draft draft-ietf-l3vpn-rfc2547bis (RFC2547bis). MPLS Layer 3 VPNs allow a service provider to provision IP connectivity for multiple customers over a shared IP backbone, while maintaining complete logical separation of customer traffic and routing information. Each customer VPN consists of several geographically dispersed sites. IP connectivity between sites is provisioned over the provider backbone.

There are two basic VPN models:

  • The overlay model, in which there is no exchange of routing information between the customer and the service provider
  • The peer model, in which routing information is exchanged between customer and service provider

MPLS Layer 3 VPNs conform to the peer model, but unlike other peer VPN architectures, each customer's routing information is maintained in separate routing and forwarding tables.

Figure 6-1 illustrates a service provider backbone with two MPLS VPNs provisioned.

In Figure 6-1 there are two VPNs, mjlnet_VPN and cisco_VPN. Each VPN has three sites, with site 1 in each VPN connected to Chengdu_PE, site 2 connected to HongKong_PE, and site 3 connected to Shanghai_PE.

The MPLS VPN topology is very flexible. The service provider can configure intranet and extranet topologies, such as hub-and-spoke and full-mesh, simply by controlling the distribution of customer routes between service provider (edge) routers.

The service provider can also act as a backbone to carry traffic between different sites of another service provider. This is known as the carrier's carrier topology.

Finally, service providers can combine to offer VPN connectivity to a customer, with some customer sites connected to one provider and other customer sites connected to other providers. This is called an interprovider VPN.


Figure 6-1. MPLS VPNs

Technical Overview

There are two main components in an MPLS VPN backbone: the customer routing and forwarding tables maintained on the provider (edge) routers, and the underlying mechanism used to transport customer traffic. When a customer data packet arrives at the ingress service provider edge router, it is encapsulated with an MPLS (VPN) label that corresponds to the best route in the appropriate customer routing and forwarding table. It is then forwarded over an MPLS label switched path (LSP) to the egress service provider edge router. Alternatively, MPLS VPN traffic may be tunneled over a non-MPLS network using IP or GRE, L2TPv3, or IP/IPSec.

An understanding of both components is essential for fast and effective troubleshooting of MPLS VPNs. A brief review of MPLS and MPLS VPN operation is included here, beginning with a description of the MPLS architecture.

MPLS Architecture

MPLS is an IETF standard that builds upon early work done by companies such as Cisco, Ipsilon, Toshiba, and IBM. MPLS allows routers to switch packets based on labels rather than performing Layer 3 lookups. Routers that switch packets based upon labels are known as Label Switch Routers (LSRs).

MPLS offers a number of benefits, including closer integration between IP and ATM, the capability to remove BGP configuration from core routers, and applications such as VPNs and traffic engineering (MPLS/TE).

MPLS Forwarding

When an IP packet arrives at the edge of the MPLS network, the ingress LSR classifies the packet into a Forwarding Equivalence Class (FEC). The FEC is a classification that describes how packets are forwarded over an MPLS network. This can be based upon network prefix (route), quality of service, and so on. In this chapter, it is assumed that classification into an FEC is based on a network prefix (an entry in the routing/forwarding table of the ingress LSR).

Once classification has taken place, the ingress LSR imposes a label on the packet. This label corresponds to the FEC and functions as an identifier that allows LSRs to forward the packet without having to do a Layer 3 lookup.

At each hop through the MPLS backbone, the label is swapped, until the packet reaches the penultimate LSR in the path through the MPLS network. Note that, although the label is swapped, it still corresponds to the same FEC.

The penultimate hop LSR may remove or pop the label before forwarding the packet to the egress LSR. This is called penultimate hop popping, and it saves the egress LSR from having to do a label lookup, remove the label, do a Layer 3 lookup, and finally forward the packet. Instead, because the label is removed at the penultimate hop, the egress LSR can simply do a Layer 3 lookup and forward the packet accordingly.

Note that penultimate hop popping is performed only for labels corresponding to directly connected networks or aggregate routes on the egress LSR.
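As a concrete illustration of label swapping and penultimate hop popping, here is a minimal Python sketch. The LSR names and label values are hypothetical, and each LFIB is reduced to a plain dictionary mapping an incoming label to an action.

```python
# Each core LSR maps an incoming label to ("swap", new_label) or ("pop", None).
# Labels and router names are hypothetical, for illustration only.
LFIB = {
    "P1":        {29: ("swap", 25)},
    "P2_penult": {25: ("pop", None)},  # penultimate hop pops the outer label
}

def forward(label, path):
    """Walk a labeled packet through the core; return the label seen at each hop."""
    trace = [label]
    for lsr in path:
        action, new_label = LFIB[lsr][label]
        if action == "swap":
            label = new_label
        else:  # "pop": the egress LSR receives an unlabeled packet, does an IP lookup
            label = None
        trace.append(label)
    return trace

print(forward(29, ["P1", "P2_penult"]))  # [29, 25, None]
```

Note how the label value changes at each hop while still identifying the same FEC, and how the egress LSR never sees a label at all.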

Figure 6-2 illustrates the forwarding of an IP packet across the MPLS backbone.

The path that a packet takes across the MPLS network is known as a Label Switched Path (LSP).


Figure 6-2. Label Switched Path

MPLS Modes

MPLS can operate in two modes:

  • Frame-mode is used over Ethernet, Frame Relay, PPP (including POS), HDLC, and ATM PVCs.
  • Cell-mode is used between label switching controlled ATM (LC-ATM) interfaces. ATM cells sent and received on LC-ATM interfaces carry labels in the VCI or VPI and VCI fields of the ATM cell headers. A device that switches ATM cells between LC-ATM interfaces using label values contained in the VPI/VCI fields in the cell headers is known as an ATM-LSR.


The precise form of the MPLS label differs depending on whether frame-mode or cell-mode MPLS is used, as detailed in the sections that follow.


In frame-mode, the label is carried as a "shim" header between the Layer 2 and Layer 3 headers. MPLS labels are 4 octets long and consist of a 20-bit label, a 3-bit Experimental (EXP) field, a bottom of label stack (S) bit, and an 8-bit Time-to-Live (TTL) field. This is illustrated in Figure 6-3.


Figure 6-3. MPLS Label

The Label field carries the label value itself, which corresponds to an FEC. The EXP field, in spite of its name, usually carries quality of service information. The bottom of stack (S) bit indicates whether the label is the last (bottom) label in the stack. The Time-to-Live (TTL) field serves exactly the same function as the TTL in the IP packet header: it is decremented by 1 at every hop, and if it reaches 0, the labeled packet is discarded. This mechanism provides protection against forwarding loops in the MPLS network, as well as limiting the forwarding scope of the packet.
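The shim header layout can be packed and unpacked with a few bit operations. The following Python sketch assumes only the field widths given in the text (20-bit label, 3-bit EXP, 1-bit S, 8-bit TTL in a 4-octet header); the sample values are arbitrary.

```python
import struct

def pack_mpls(label, exp, s, ttl):
    """Pack one MPLS shim header: 20-bit label, 3-bit EXP, 1 S bit, 8-bit TTL."""
    assert 0 <= label < 2**20 and 0 <= exp < 8 and s in (0, 1) and 0 <= ttl < 256
    word = (label << 12) | (exp << 9) | (s << 8) | ttl
    return struct.pack("!I", word)  # network byte order, 4 octets

def unpack_mpls(data):
    """Return (label, exp, s, ttl) from a 4-octet shim header."""
    (word,) = struct.unpack("!I", data)
    return word >> 12, (word >> 9) & 0x7, (word >> 8) & 0x1, word & 0xFF

hdr = pack_mpls(label=25, exp=0, s=1, ttl=64)
print(len(hdr), unpack_mpls(hdr))  # 4 (25, 0, 1, 64)
```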


In cell-mode, the label is carried in the VPI/VCI fields of the ATM cell header, as shown in Figure 6-4.


Figure 6-4. MPLS Label Carried in the VPI/VCI Fields of the ATM Cell Header

Note that when the original packet is segmented into cells on the ingress ATM-LSR, the first of those cells also carries the label or labels in the form shown in Figure 6-3. This is to preserve any other information, such as quality of service, carried in the EXP bits.

Label Stack

A labeled packet is said to contain a label stack. The label stack consists of one or more labels. In a simple MPLS VPN environment, the label stack consists of two labels. If MPLS VPN traffic is being carried over an MPLS traffic-engineering (TE) tunnel, the label stack may consist of two, three, or four labels, depending on how TE is configured.

The outermost (top) label in a stack is used to carry the packet over the MPLS backbone between ingress and egress LSRs. This outer label is the IGP label.

Because the outermost label has only local significance, LSRs must use a signaling protocol to exchange label to prefix bindings. The signaling protocol can be either Cisco's proprietary Tag Distribution Protocol (TDP) or the Label Distribution Protocol (LDP).

If traffic is being carried over a traffic engineering (TE) tunnel, the outermost label corresponds to the TE tunnel. In this case, the label signaling protocol can be either the Resource Reservation Protocol (RSVP), or Constraint-based Routed Label Distribution Protocol (CR-LDP). Cisco routers use RSVP to signal traffic engineering tunnels.

Note that although the outermost (IGP) label may be either TDP/LDP or RSVP signaled, in this book the term "TE label" is used where appropriate to distinguish RSVP signaled labels.

When MPLS VPN traffic is being transported, the innermost (bottom) label corresponds to either:

  • The VPN Routing and Forwarding instance (VRF, which is discussed later in this chapter)
  • The outgoing interface on the egress PE router

This is called the VPN label. Figure 6-5 illustrates the format of the labeled packet as it is transmitted.


Figure 6-5. Labeled Packet

Label Information Base, Label Forwarding Information Base, and Cisco Express Forwarding

Labels are stored in three separate types of tables on Cisco routers:

  • The Label Information Base (LIB)—The LIB contains all label bindings received from peer LSRs, or a subset of label bindings that correspond to the best routes for network prefixes. Whether the LSR retains all labels or just a subset depends on the mode of label retention that it is using.
  • The Label Forwarding Information Base (LFIB)— The LFIB contains only those labels that correspond to the next-hop of the best route for each network prefix. The LFIB also contains outgoing interface information.

The LFIB is used for label swapping within the MPLS backbone.

  • The Cisco Express Forwarding (CEF) tables—The CEF tables contain information from the routing table, including prefixes, next-hops, and outgoing interfaces. The CEF tables also interface to the LIB and contain labels associated with prefixes.

CEF is used for label imposition at the edge of the MPLS network on the ingress LSR.
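A rough way to picture the relationship between the LIB and the LFIB: the LIB holds every binding received from peers (under liberal retention), while the LFIB keeps only the binding learned from the best route's next-hop, plus the outgoing interface. The prefixes, peer names, and interface names below are hypothetical; this is a conceptual sketch, not Cisco's internal data structures.

```python
# LIB: all label bindings received from peer LSRs (liberal retention).
lib = {"10.0.0.9/32": {"peerA": 25, "peerB": 31}}

# Routing table: best path per prefix (hypothetical contents).
rib = {"10.0.0.9/32": {"next_hop_peer": "peerA", "out_if": "Gi0/1"}}

# LFIB: only the label from the best route's next-hop, with outgoing interface.
lfib = {
    prefix: {
        "out_label": lib[prefix][rib[prefix]["next_hop_peer"]],
        "out_if": rib[prefix]["out_if"],
    }
    for prefix in rib
}

print(lfib["10.0.0.9/32"])  # {'out_label': 25, 'out_if': 'Gi0/1'}
```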

Control and Data Planes

There are two channels or planes of communication between LSRs in an MPLS network:

  • The control plane—Used to exchange routing information and label bindings
  • The data (or forwarding) plane—Used for the transmission of labeled or unlabeled packets

LSP Control, Label Assignment, and BGP Routes

The way that LSPs are established within the MPLS network depends on whether the LSRs are using independent or ordered LSP control, as described in the following list:

  • Independent LSP control—When independent control is used, LSRs assign labels to prefixes (FECs) independently. This means that labels are assigned irrespective of whether other LSRs have assigned labels.

Once labels have been assigned (or bound) to prefixes, these bindings are advertised to peer LSRs.

  • Ordered LSP control—When ordered control is used, an LSR assigns labels to prefixes (FECs) for which it is the egress LSR. If an LSR is not the egress for a prefix, it does not assign a label until the next-hop LSR has sent a label binding for the prefix in question.

It is possible for both independent and ordered control to coexist within a network.

In both independent and ordered control mode, labels are assigned to all prefixes in the routing table, with the exception of BGP routes (in regular MPLS operation). Instead, BGP routes are assigned the label that corresponds to their next-hop. This means, for example, that if a BGP route's next-hop prefix is assigned label 25, then the BGP route is also assigned label 25. This seemingly insignificant fact has significant consequences.
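The rule that a BGP route inherits the IGP label of its next-hop can be sketched as a simple lookup; the prefixes and label values here are hypothetical.

```python
# IGP label bindings for backbone prefixes (e.g. PE loopbacks). Hypothetical values.
igp_labels = {"10.0.0.9/32": 25, "10.0.0.5/32": 17}

# BGP routes map to their next-hop prefix rather than receiving their own label.
bgp_routes = {"172.16.0.0/16": "10.0.0.9/32"}

def label_for(prefix):
    """Return the label imposed for a prefix: a BGP route uses its next-hop's label."""
    if prefix in bgp_routes:
        return igp_labels[bgp_routes[prefix]]
    return igp_labels.get(prefix)

print(label_for("172.16.0.0/16"))  # 25 — the label bound to the BGP next-hop
print(label_for("10.0.0.5/32"))    # 17 — an IGP prefix uses its own binding
```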

When a packet enters an MPLS network, the ingress LSR does a route lookup, and if the longest match is a BGP route, the (IGP) label corresponding to the route's next-hop will be imposed on the packet. The packet is then forwarded across the MPLS backbone, and as long as the LSRs in the path have a label corresponding to the BGP next-hop, they are able to forward the packet.

When the packet arrives at the egress LSR, the packet is forwarded out of the correct customer interface. The upshot is that only edge LSRs need to run BGP. Core LSRs can simply run the IGP used to advertise BGP next-hop information.

It is useful to remember this fact when troubleshooting MPLS VPNs because customer routes are advertised across the MPLS VPN backbone using MP-BGP. VPN packets, therefore, use a label corresponding to the MP-BGP next-hop to cross the backbone. The MP-BGP next-hop for VPN routes is the advertising PE router's BGP update source.

Downstream Label Distribution

Label bindings are distributed from downstream to upstream LSRs. Downstream LSRs are closer to the destination network than upstream LSRs. Label distribution, therefore, takes place in the opposite direction to traffic flow.

Figure 6-6 illustrates downstream label distribution.


Figure 6-6. Downstream Label Distribution

Downstream label distribution can be either one of the following:

  • Unsolicited downstream label distribution—LSRs that use unsolicited label distribution do not wait for label bindings to be requested before advertising them to their upstream neighbors.
  • Downstream-on-demand label distribution—If LSRs use downstream-on-demand label distribution, an LSR can request a label for a prefix from its downstream peer.

Figure 6-7 illustrates downstream-on-demand label distribution.


Figure 6-7. Downstream-on-Demand Label Distribution

Label Retention

After receiving label bindings from its peers, an LSR must decide whether to retain all these bindings or only those that correspond to the best routes in the network. The two modes of label retention are as follows:

  • Liberal label retention—If an LSR operates in liberal label retention mode, all label bindings sent to it from other LSRs are retained.

The advantage of this mode of operation is that the LSR can failover to an alternate LSP if the original LSP fails. The disadvantage is that more memory is required to store the labels.

  • Conservative label retention—An LSR operating in conservative label retention mode retains only those label bindings that correspond to the best route for each network prefix. Any other label bindings are simply discarded.

The advantage of this mode is that less memory is required to store label bindings. The disadvantage is that it takes longer to failover to an alternate path if the original LSP fails.
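The two retention modes amount to a filter over received bindings. This is a conceptual sketch (peer names and labels are hypothetical), not a description of Cisco's implementation.

```python
def retain(bindings, best_peer, mode):
    """Filter received label bindings for one prefix per retention mode.

    bindings:  {peer: label} as received from downstream peers
    best_peer: the peer that advertises the best route for the prefix
    """
    if mode == "liberal":
        return dict(bindings)  # keep everything: faster failover, more memory
    if mode == "conservative":
        return {best_peer: bindings[best_peer]}  # keep only the best route's binding
    raise ValueError(f"unknown retention mode: {mode}")

received = {"peerA": 25, "peerB": 31}
print(retain(received, "peerA", "liberal"))       # both bindings kept
print(retain(received, "peerA", "conservative"))  # only peerA's binding kept
```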

Label Distribution Protocols

A number of label distribution protocols can be used within an MPLS network, depending upon the particular applications being used. Label distribution protocols include the Tag Distribution Protocol (TDP), the Label Distribution Protocol (LDP), RSVP, Multiprotocol Extensions for BGP-4 (MP-BGP), and Protocol Independent Multicast (PIM).


LDP and Cisco's proprietary TDP can both be used to advertise label bindings for IGP prefixes. Although TDP and LDP are similar, there are a number of differences. Table 6-1 outlines some of the primary differences between LDP and TDP.

Table 6-1 LDP versus TDP



LDP | TDP
IETF standard protocol | Cisco proprietary protocol
Uses an all-routers multicast address for directly connected neighbor discovery | Uses local broadcasts
Uses UDP and TCP port 646 for neighbor discovery and session establishment | Uses UDP and TCP port 711
Provides optional MD5 authentication | No optional MD5 authentication provided

Extensions to RSVP

Extended RSVP is used in MPLS networks to signal TE tunnels. TE LSP tunnels can be used to make better use of bandwidth by taking advantage of underutilized paths through the network.

TE LSPs can be reserved based upon bandwidth requirements and administrative policies. TE LSPs can follow an explicit or dynamic path. Irrespective of whether they are explicit or dynamic, however, paths must conform to any bandwidth and administrative requirements.

Extensions to OSPF and IS-IS facilitate the flooding of link bandwidth and policy information throughout the MPLS network. This allows the TE tunnel initiating (head-end) LSR to calculate the path using a constrained shortest path (CSPF) algorithm.

Once the path has been calculated, the tunnel is signaled using RSVP Path and Resv messages. Path messages contain a LABEL_REQUEST object (among others) and travel hop-by-hop along the path described to the tunnel tail-end. Resv messages contain a LABEL object and travel back along the path from the tail-end to the head-end LSR. The purpose of the LABEL_REQUEST object is, as the name suggests, to request a label binding for the LSP. The purpose of the LABEL object is to distribute label bindings for the LSP. TE tunnels use downstream-on-demand label distribution.

Figure 6-8 illustrates TE LSP tunnel signaling using extensions to RSVP.


Figure 6-8. TE LSP Tunnel Signaling Using Extensions to RSVP

Extensions to RSVP for traffic engineering are discussed in RFC 3209. Other useful documents include RFC 2702, which describes requirements for traffic engineering over MPLS, draft-ietf-isis-traffic, which describes IS-IS extensions for traffic engineering, and RFC 3630, which describes TE extensions for OSPF.


MP-BGP is used in an MPLS VPN environment to advertise customer routes, associated labels, and other attributes. MP-BGP is discussed further in the next section, "MPLS Layer-3 VPNs." Multiprotocol extensions for BGP are discussed in RFC 2858.

MPLS Layer-3 VPNs

MPLS VPNs can be provisioned over a shared provider backbone. They allow IP connectivity between customer sites in a VPN.

RFC2547bis uses a number of terms to describe devices at customer sites and within the service provider's backbone:

  • Customer edge (CE) routers—Routers at the customer site that are directly connected to the service provider network.
  • Customer routers—Other routers within the customer site that are not directly connected to the service provider network.
  • Provider edge (PE) routers—Routers within the service provider backbone that connect to customer sites. Note that in an MPLS network, PE routers also function as edge LSRs.
  • Provider (P) routers—Routers within the service provider backbone that do not connect directly to customer sites. Note that in an MPLS network, P routers also function as LSRs.

Figure 6-9 illustrates CE, PE, and P routers.


Figure 6-9. CE, PE, and P Routers

Overlapping Address Space

Different customers' VPNs might use overlapping IP address space. To allow overlapping IP address space to be distinguished within the MPLS VPN backbone, PE routers translate customer routes into VPN-IPv4 prefixes.

A VPN-IPv4 prefix consists of an 8-byte Route Distinguisher (RD) and the original 4-byte IPv4 prefix. RDs are different for customer VPNs, which ensures that VPN-IPv4 prefixes are unique.

Figure 6-10 illustrates the format of a VPN-IPv4 prefix.


Figure 6-10. VPN-IPv4 Prefix Format
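The 12-byte VPN-IPv4 address is a simple concatenation of the 8-byte RD and the 4-byte IPv4 address. The sketch below uses an all-zero RD placeholder and a hypothetical prefix purely to show the byte layout.

```python
import socket

def vpn_ipv4(rd, ipv4):
    """Concatenate an 8-byte RD with a 4-byte IPv4 address -> 12-byte VPN-IPv4 address."""
    assert len(rd) == 8, "an RD is always 8 bytes"
    return rd + socket.inet_aton(ipv4)

# All-zero RD and a hypothetical customer prefix, for illustration only.
addr = vpn_ipv4(bytes(8), "10.1.1.0")
print(len(addr))  # 12
```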

The RD is encoded using the format shown in Figure 6-11.


Figure 6-11. RD Format

The Type field is 2 bytes, and the Value field is 6 bytes. The three currently defined RD types are 0, 1, and 2. The Value field is broken into the Administrator and Assigned Number subfields, as shown in Figure 6-12.


Figure 6-12. Administrator and Assigned Number Subfields

If a Type 0 RD is specified, then the Administrator subfield and Assigned Number subfields are 2 bytes and 4 bytes, respectively, and are encoded as shown in Figure 6-13.


Figure 6-13. RD Type 0 Encoding

As shown in Figure 6-13, when using a type 0 RD, the Administrator subfield contains an autonomous system (AS) number.
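The type 0 encoding (2-byte Type of 0, 2-byte AS number, 4-byte assigned number) can be sketched in a few lines, using the AS number 64512 and assigned number 100 from the example that follows.

```python
import struct

def rd_type0(asn, assigned):
    """Encode a type 0 RD: 2-byte type (0), 2-byte AS number, 4-byte assigned number."""
    return struct.pack("!HHI", 0, asn, assigned)

rd = rd_type0(64512, 100)           # conventionally written 64512:100
print(len(rd), rd.hex())            # 8 0000fc0000000064
```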

If the service provider is using autonomous system number 64512, and the assigned number is 100, then an IPv4 prefix would translate into a VPN-IPv4 prefix carrying the RD 64512:100.
If a Type 1 RD is specified, the Administrator subfield and Assigned Number subfields are 4 bytes and 2 bytes, respectively, and they are encoded as shown in Figure 6-14.


Figure 6-14. RD Type 1 Encoding

As you can see, when using a type 1 RD, the Administrator subfield contains an IP address.

If the service provider is using a given IP address, and the assigned number is 100, then an IPv4 prefix would translate into a VPN-IPv4 prefix whose RD consists of that IP address followed by the assigned number 100.
If a Type 2 RD is specified, the Administrator subfield and Assigned Number subfields are 4 and 2 bytes, respectively, and they are encoded as shown in Figure 6-15.


Figure 6-15. RD Type 2 Encoding

If the service provider is using autonomous system number 64512, and the assigned number is 100, then an IPv4 prefix would again translate into a VPN-IPv4 prefix carrying the RD 64512:100.
If you are the observant type, you might have noticed a striking similarity between the format of type 0 and type 2 RDs. They do, in fact, have a similar format.

Type 0 and 1 RDs are used when translating IPv4 prefixes into VPN-IPv4 prefixes. Type 2 RDs can be used to signal Multicast VPNs (MVPNs).

VPN Routing and Forwarding Instances

To allow complete logical separation of routes belonging to different customers, separate routing tables and forwarding tables are used on PE routers. These routing and forwarding tables collectively make up what is known as a VPN Routing and Forwarding (VRF) instance. PE router interfaces connected to different customers are then associated with these VRFs.

Customer routes received on an interface are stored in the associated VRF routing table. Similarly, customer traffic received on an interface is routed according to the associated VRF.

Figure 6-16 illustrates VRF tables on the PE routers.


Figure 6-16. VRF Tables on PE Routers

The global routing table is still maintained on the PE router and contains backbone IGP routes, as well as any Internet routes.

Note that on Cisco routers it is now possible to associate incoming traffic with a VRF based on its source IP address rather than incoming interface. This feature is known as VRF Selection.

Route Target Attribute

Although RDs facilitate the disambiguation of overlapping IP address space, they are not flexible enough to allow the provisioning of complex network topologies over an MPLS VPN backbone. To allow this provisioning, a BGP extended community attribute called a Route Target (RT) is used.

The BGP extended community has two fields, the Type field and the Value field. The Type field used with Route Targets is 2 octets (Extended Type), and the Value field is 6 octets. Figure 6-17 illustrates the BGP extended community attribute.


Figure 6-17. BGP Extended Community Attribute (Extended Type)

The (Extended) Type field is subdivided into high and low order octets. The low order octet has a value of 0x02 when the extended community is a Route Target.

The Value field is subdivided into Global and Local Administrator fields. The length of these fields depends on the value of the Type high order octet. If the Type high order octet has a value of 0x00, the Global Administrator is 2 octets and the Local Administrator is 4 octets; if it has a value of 0x02, the Global Administrator is 4 octets and the Local Administrator is 2 octets. In either case, an autonomous system number (either 2 or 4 octets) is carried in the Global Administrator field. The Local Administrator field is, as the name suggests, a value assigned by the local administrator.

Figure 6-18 shows the Route Target attribute when the Type high order octet is 0x00 or 0x02.


Figure 6-18. Route Target Attribute (High Order Octet Is 0x00 or 0x02)
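The 2-byte-AS form of the Route Target (Type high order octet 0x00, low order octet 0x02 for Route Target) can be sketched as follows; the AS and Local Administrator values match the example in this section.

```python
import struct

def rt_as2(asn, local):
    """Encode a Route Target extended community: type high 0x00 (2-byte AS),
    type low 0x02 (Route Target), 2-byte Global Admin, 4-byte Local Admin."""
    return struct.pack("!BBHI", 0x00, 0x02, asn, local)

rt = rt_as2(64512, 100)             # conventionally written 64512:100
print(len(rt), rt.hex())            # 8 0002fc0000000064
```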

If the service provider uses autonomous system number 64512, and the Local Administrator number is 100, the Route Target attribute would be 64512:100.

If the Type high order octet has a value of 0x01, then the Global Administrator is 4 octets, and the Local Administrator is 2 octets. In this case, an IP address is carried in the Global Administrator field.

Figure 6-19 shows the Route Target attribute when the Type high order octet is 0x01.


Figure 6-19. Route Target Attribute (High Order Octet Is 0x01)

If the service provider uses a given IP address, and the Local Administrator number is 100, the Route Target attribute would be that IP address followed by the Local Administrator number 100.

Each VRF is configured with a set of import and export RTs. When VPN-IPv4 routes are inserted into the MP-BGP table, one or more route targets are attached. These are known as export route targets.

When a PE router receives a VPN-IPv4 route, it compares the attached RTs with the import RTs for each of its VRFs. If there is at least one match, the route is installed into the VRF.


Figure 6-20. VPN-IPv4 Route Export and Import

Figure 6-20 illustrates VPN-IPv4 route export and import based on RTs.
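The import decision amounts to a set intersection between a route's attached RTs and each VRF's import RTs. A minimal sketch, with hypothetical VRF names and RT values:

```python
# Hypothetical VRFs with their configured import RTs.
vrfs = {
    "mjlnet_VPN": {"import": {"64512:100"}},
    "cisco_VPN":  {"import": {"64512:200"}},
}

def import_route(route_rts):
    """Return the VRFs into which a VPN-IPv4 route carrying route_rts is installed:
    those whose import RT set shares at least one RT with the route."""
    return sorted(name for name, vrf in vrfs.items() if vrf["import"] & route_rts)

print(import_route({"64512:100"}))               # only mjlnet_VPN matches
print(import_route({"64512:100", "64512:200"}))  # both VRFs import the route
```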

VPN Route Distribution

In a MPLS Layer 3 VPN, customer edge (CE) routers advertise routes to provider edge (PE) routers using Routing Information Protocol (RIP) version 2, Enhanced Interior Gateway Routing Protocol (EIGRP), Open Shortest Path First (OSPF), or Exterior Border Gateway Protocol (EBGP).

After receiving customer routes from the CE, the PE converts them into VPN-IPv4 routes. One or more (export) RTs are then attached, and they are advertised in Multiprotocol BGP (MP-BGP) to other PE routers. The next-hop of these routes is the BGP update source of the advertising PE router.

VPN labels are also advertised, along with VPN-IPv4 prefixes. These labels identify the VRF or outgoing interface on the advertising PE router and are used for packet forwarding.

Other standard and extended BGP communities, such as Site of Origin (used for loop prevention in multihomed sites), may also be attached to the VPN-IPv4 routes.

MP-BGP routes received by PE routers are installed into VRFs depending on the attached RTs. Routes installed into a VRF are then advertised to the attached customer sites in the VPN using RIP, EIGRP, OSPF, or EBGP.

Figure 6-21 shows the advertisement of customer routes across a service provider MPLS VPN backbone.


Figure 6-21. CE, PE, and P Routers

Although Figure 6-21 illustrates route advertisement only from CE2 to CE1, route advertisement from CE1 to CE2 works in exactly the same way, in the opposite direction.

The example in this section describes the use of PE-CE routing protocols, but static routes may also be configured for PE to CE connectivity.

Forwarding VPN Traffic Across the Backbone

When VPN traffic is forwarded across the MPLS VPN backbone, a two-label stack is used. The outer label is known as the IGP label and is used to forward the traffic from the ingress PE to the egress PE. The inner label is known as the VPN label and is used to identify the VRF or outgoing interface on the egress PE.

Figure 6-22 illustrates the forwarding of a packet across an MPLS VPN backbone from one host to another.


Figure 6-22. Packet Forwarding Across an MPLS VPN Backbone

In Figure 6-22, an IP packet is sent from the host on the left to the host on the right.

The IP packet is forwarded by CE1 to Chengdu_PE. Chengdu_PE does a Layer 3 lookup in the VRF mjlnet_VPN routing table and finds a route to the destination network. The next-hop of this route is the BGP update source on HongKong_PE.

Chengdu_PE then imposes a two-label stack, with the inner (VPN) label (36) corresponding to the destination prefix in VRF mjlnet_VPN on HongKong_PE, and the outer (IGP) label (29) corresponding to the route's next-hop.

Chengdu_PE forwards the packet to Chengdu_P. Chengdu_P consults its LFIB and swaps outer label 29 for label 25. The VPN label is unmodified.

Chengdu_P forwards the packet to HongKong_P. HongKong_P consults its LFIB, and pops (removes) outer label 25 (HongKong_P is the penultimate hop). Again, the VPN label is unmodified.

HongKong_P then forwards the packet to HongKong_PE. HongKong_PE examines the VPN label and, having found that it corresponds to VRF mjlnet_VPN, removes the label and forwards the unlabeled IP packet to CE2. Finally, CE2 forwards the packet onward to the destination host.
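The walk-through above can be condensed into a small simulation. Router behavior is heavily simplified; the label values (29, 25, 36) follow the text, and the packet is represented as a plain string.

```python
# Sketch of Figure 6-22: ingress PE imposes [IGP, VPN] labels, P routers handle
# only the outer label, and the egress PE uses the VPN label to pick the VRF.

def chengdu_pe(packet):
    return {"labels": [29, 36], "ip": packet}   # impose IGP label 29 + VPN label 36

def chengdu_p(frame):
    frame["labels"][0] = 25                     # LFIB lookup: swap outer 29 -> 25
    return frame

def hongkong_p(frame):
    frame["labels"].pop(0)                      # penultimate hop pops the outer label
    return frame

def hongkong_pe(frame):
    vpn_label = frame["labels"].pop(0)          # VPN label selects VRF/outgoing interface
    assert vpn_label == 36                      # corresponds to VRF mjlnet_VPN
    return frame["ip"]                          # unlabeled IP packet is sent to CE2

out = hongkong_pe(hongkong_p(chengdu_p(chengdu_pe("ip-packet"))))
print(out)  # ip-packet
```

Note that the VPN label is untouched from ingress PE to egress PE; only the outer (IGP) label changes hop by hop.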

As previously noted, IP or GRE tunnels can be used to transport VPN traffic over a non-MPLS network between PE routers. In this case, the outermost MPLS label can be replaced by GRE or IP encapsulation.

MPLS VPN traffic can also be transported over a non-MPLS network using an L2TPv3 or IPSec tunnel. When L2TPv3 is used to transport VPN traffic over a non-MPLS network, the outermost MPLS label is replaced by L2TPv3 encapsulation. When MPLS VPN traffic is transported over an IPSec tunnel between PE routers, the outermost MPLS label is replaced by IP/IPSec encapsulation.

When comparing the three methods of encapsulation for transport of MPLS VPN traffic over a non-MPLS network, L2TPv3 allows a compromise between the strong security but high overhead of IP/IPSec and the very limited security of IP/GRE. The L2TPv3 cookie makes blind spoofing attacks more difficult to achieve when compared with IP/GRE because an attacker has to guess the cookie values in use (as well as the MPLS label value).

See draft-ietf-mpls-in-ip-or-gre, draft-townsley-l2tpv3-mpls, and draft-ietf-l3vpn-ipsec-2547 for more information on MPLS VPN transport over IP or GRE, L2TPv3, and IP/IPSec, respectively. See also Chapter 5, "Troubleshooting L2TP v3 Based VPNs," for more details on L2TPv3.

Unless otherwise specified, this chapter assumes transport of MPLS VPN traffic over an MPLS backbone.

Internet Access

VPN customers often require Internet access. The MPLS VPN provider can configure this in several ways. Two of the most popular ways of configuring Internet access are packet leaking between the VRF and global routing tables via static routes, and using separate interfaces for VPN and Internet access. Other methods of providing Internet access include via a shared service VPN or a separate ISP.

Providing VPN Customers Internet Access with Packet Leaking via Static Routes

Normally, VRF and global routing tables are completely separated. The VRF routing tables contain VPN routes, and the global routing table contains backbone IGP routes and either Internet routes or a route to an Internet gateway.

If packet leaking via static routes is configured, traffic outbound from the customer VPN to the Internet is allowed to "leak" from the VRF routing tables to the global routing table. Similarly, traffic inbound from the Internet is selectively allowed to leak into the customer VPN via the VRF interface.

Leaking from the VRF to the global routing table (for traffic outbound from the customer VPN) is accomplished by configuring a static VRF route with a next-hop in the global routing table. This static VRF route is usually a default route. Similarly, a global static route pointing to the customer networks (for traffic inbound from the Internet) is configured with an outgoing VRF interface.
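The two static routes can be modeled as entries in two lookup tables: a VRF default route whose next-hop resolves in the global table, and a global route whose outgoing interface is the VRF interface. The prefixes below are hypothetical documentation addresses, and real routers perform longest-match lookups rather than the exact-match shortcut used here.

```python
# VRF table: static default route pointing into the global routing table.
vrf_mjlnet = {"0.0.0.0/0": ("global", None)}

# Global table: static route for the customer network, outgoing VRF interface.
global_rib = {"198.51.100.0/24": ("vrf", "mjlnet_VPN")}

def route_outbound(dst):
    """Traffic from the VPN toward the Internet: the VRF lookup falls through
    to the static default, which hands the packet to the global table."""
    table, _ = vrf_mjlnet.get(dst, vrf_mjlnet["0.0.0.0/0"])
    return table

def route_inbound(dst_prefix):
    """Traffic from the Internet: the global lookup hits the static route whose
    outgoing interface is the customer's VRF interface."""
    return global_rib[dst_prefix][1]

print(route_outbound("203.0.113.1"))     # global
print(route_inbound("198.51.100.0/24"))  # mjlnet_VPN
```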

Figure 6-23 illustrates Internet via route leaking.

In Figure 6-23, traffic outbound to the Internet from mjlnet_VPN site 1 arrives on the VRF interface of the PE router. The PE router routes the traffic using the default route in the VRF routing table. The next-hop is in the global routing table.

When traffic inbound from the Internet to mjlnet_VPN site 1 arrives on the PE router, the PE router forwards the traffic using the route in the global routing table. The outgoing interface of the route is the mjlnet_VPN VRF interface.

When configuring packet leaking, the global static route that points to the customer network should be redistributed into global BGP. This ensures that hosts on the Internet have a route back to the PE router. Also, the VRF static default route should be redistributed into the PE-CE routing protocol, if one is being used.


Figure 6-23. Internet Access via Route Leaking

Providing VPN Customers Internet Access with a Separate Interface

Another way of configuring Internet access for a customer site is to configure one interface for VPN connectivity on the CE router and another separate interface for Internet connectivity. The interface for Internet connectivity is associated with the global routing table on the PE router.

Figure 6-24 illustrates Internet access via a separate interface.

Figure 6-24. Internet Access via a Separate Interface

In Figure 6-24, traffic both outbound and inbound from the Internet is routed via the Internet (global) interface on the PE router. Routing can be configured between the CE and PE routers over the Internet interface in the standard way using BGP or static default routes.
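On the PE router, this might look as follows (interface numbering and addresses are illustrative). Only the first subinterface is bound to the VRF; the second remains in the global routing table:

```
interface Serial1/0.1 point-to-point
 description mjlnet_VPN site 1 - VPN connectivity
 ip vrf forwarding mjlnet_VPN
 ip address 10.20.10.1 255.255.255.252
!
interface Serial1/0.2 point-to-point
 description mjlnet_VPN site 1 - Internet connectivity (global)
 ip address 192.0.2.1 255.255.255.252
```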

Multicast VPNs (MVPNs)

Previously, if a customer required multicast connectivity between sites in an MPLS VPN, a mesh of point-to-point GRE tunnels between the CE routers was required. With the advent of Multicast VPNs (MVPNs), this is no longer necessary. The MVPN feature is based on Multicast Domains (MD), which are described in Internet Draft draft-rosen-vpn-mcast.

MVPNs allow a service provider to tunnel customer multicast traffic between sites over a core multicast tree (in other words, multicast over multicast tunneling).

MVPN presupposes support for PIM within the customer network, as well as the provider backbone. Supported modes include PIM Sparse-Mode (PIM-SM), PIM Bi-directional (PIM-BIDIR), and PIM Source Specific Multicast (PIM-SSM). PIM Dense Mode (PIM-DM) is also supported within the customer network.

When configuring and troubleshooting MVPN, you should have a good understanding of the following elements:

  • MVPN support on PE routers
  • Formation of PIM adjacencies in an MVPN environment
  • Default multicast forwarding in the backbone
  • Optimizing multicast forwarding in the backbone

The sections that follow discuss these elements in greater detail.

MVPN Support on the PE Router: The Multicast VRF, Multicast Tunnel, and Multicast Tunnel Interface

When MVPN is configured for a VRF on a PE router, a Multicast VRF (MVRF) and a Multicast Tunnel Interface (MTI) are created. The MVRF is the multicast routing table for the VRF. The MTI is an endpoint of the Multicast Tunnel (MT) and is used to forward customer multicast traffic between sites in an MVPN.

The MT source address is the MP-BGP update source on the PE router, and the destination address is the Multicast Distribution Tree (MDT) address.
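A minimal MVRF configuration sketch on a PE router (the default MDT group address and RD/route-target values are illustrative). The `mdt default` command creates the MTI, which is sourced from the MP-BGP update source, typically a loopback:

```
ip multicast-routing
ip multicast-routing vrf mjlnet_VPN
!
ip vrf mjlnet_VPN
 rd 64512:100
 route-target both 64512:100
 mdt default 239.192.10.1
!
interface Loopback0
 ip pim sparse-mode
```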

Formation of PIM Adjacencies in an MVPN Environment

Each PE router maintains one instance of PIM for the backbone network, as well as one instance per MVRF. Provider backbone PIM adjacencies are maintained between the core-facing interfaces of PE routers and the P routers. MVPN PIM adjacencies are maintained between PE and CE routers, as well as between PE routers over the MT.

Figure 6-25 illustrates PIM adjacencies in an MVPN environment.

Note that each PE router in Figure 6-25 has only one (multipoint) MTI. PE routers create a single MTI per MVRF. This means that each PE router in Figure 6-25 maintains two PIM adjacencies on its MTI.

Figure 6-25. PIM Adjacencies in a MVPN Environment
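The backbone and MVRF adjacencies can be checked separately; two illustrative commands (the VRF name is taken from the chapter's example):

```
! Backbone PIM adjacencies (toward P routers)
show ip pim neighbor
!
! MVPN PIM adjacencies (CE routers plus remote PEs reached via the MTI)
show ip pim vrf mjlnet_VPN neighbor
```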

Default Multicast Forwarding in the Backbone: The Default MDT

A default Multicast Distribution Tree (default MDT) is maintained in the provider backbone to forward customer multicast traffic and PIM control traffic between the MTIs on the PE routers. All PE routers participating in the MVPN join the default MDT.

Figure 6-26 illustrates the default MDT.

In Figure 6-26, the default MDT has been established over the provider backbone between Chengdu_PE, HongKong_PE, and Shanghai_PE. The backbone network in this example is configured for PIM Sparse Mode (PIM-SM), with Chengdu_P as the Rendezvous Point (RP). Customer multicast traffic is forwarded over the default MDT.

Figure 6-26. Default MDT

Note that in Figure 6-26, HongKong_PE drops the customer multicast traffic because there are no receivers at MVPN_mjlnet site 2. This is the disadvantage of forwarding customer multicast traffic over the default MDT: traffic is forwarded to all PE routers participating in the MVPN, regardless of whether there are any receivers at the sites to which they are connected. The solution to this issue is the data MDT.

Optimizing Multicast Forwarding in the Backbone: The Data MDT

A data MDT (see Figure 6-27) is constructed across the provider backbone when traffic for a particular customer multicast group crosses a configured bandwidth threshold. Crucially, only PE routers connected to sites with receivers for this group join the data MDT.

In Figure 6-27, the bandwidth threshold for the customer multicast group has been exceeded, and a data MDT has been established.

Figure 6-27. Data MDT

There is a receiver for the group at site 3, and so Shanghai_PE joins the data MDT. There are no receivers for this group at site 2, however, and so HongKong_PE does not join the data MDT.

Note that data MDTs are not established for PIM dense mode groups.
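Data MDTs are enabled per MVRF; a configuration sketch (the group range, wildcard, and threshold in kbps are illustrative):

```
ip vrf mjlnet_VPN
 mdt default 239.192.10.1
 ! Switch a customer (S,G) onto a group from this range once its
 ! traffic rate exceeds 10 kbps
 mdt data 239.192.20.0 0.0.0.255 threshold 10
```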
