In the normal course of events, network device technology (IP/MPLS, layer 2/3 VPN, VPLS, and so on) is developed first, and only later does management information base (MIB) design commence. The major theme of this article is that network management operational workflows should play a greater role in driving MIB design. A simple example is the creation of an IP/MPLS RFC2547 VPN, where defining end-to-end quality of service across many VPNs is currently difficult. A recent addition to the MIB has improved this situation: if a VPN site has real-time traffic to send to another site, it is now much easier to guarantee availability of the required resources. This change shows that designing MIBs with workflows in mind produces a better match between network management and the services deployed in the network.
Everyone's talking about IP services nowadays, and with good reason: they're being deployed by service providers across the globe. I want to look at two major issues in this article:
Workflows required to build a layer 3 (IP) service
Mapping these workflows into the network management MIBs
Let's take an example of a popular IP/MPLS service: a VPN based on RFC2547bis, as illustrated in Figure 1.
Figure 1 Two IP VPNs, A and B, linked across a multiprotocol label switching (MPLS) core.
One of many reasons for preferring a layer 3 VPN solution over the traditional layer 2 variety is that it solves the N-squared (N²) problem. This problem, seen in layer 2 networks, is caused by the need for a virtual circuit between every pair of VPN sites. So, if you have N sites, you need on the order of N² virtual circuits (to be precise, N * (N - 1)/2 circuits are needed for a full mesh). This requirement becomes unmanageable as the number of sites grows. A layer 3 VPN doesn't suffer from this scalability problem. However, the layer 3 service doesn't feature virtual circuits stretching from site to site, so end-to-end QoS is more difficult to achieve in layer 3 VPNs.
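To make the N² scaling concrete, here's a minimal Python sketch (illustrative only, not part of the original article) that computes the full-mesh circuit count:

```python
def full_mesh_circuits(n: int) -> int:
    """Virtual circuits needed to fully mesh n layer 2 VPN sites:
    one circuit per pair of sites, i.e. n * (n - 1) / 2."""
    return n * (n - 1) // 2

# The circuit count grows quadratically with the number of sites,
# which is what makes a large layer 2 full mesh unmanageable.
for sites in (5, 20, 100):
    print(f"{sites} sites -> {full_mesh_circuits(sites)} circuits")
```

At 5 sites you need only 10 circuits, but at 100 sites the count reaches 4,950; a layer 3 VPN avoids provisioning any of them.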
The nodes (PE and P devices) in the center of Figure 1 are all contained in the core of a service provider network (Sprint in the U.S., BT in the UK, and so on). The provider edge nodes (such as PE1) are connected to customer edge (CE) nodes using a link technology such as POS, Ethernet, ATM, or FR. Typically, the PE and CE nodes are owned and managed by the service provider. The internal nodes (P, for provider nodes) are not visible from the customer edge. P nodes illustrate one of the strengths of MPLS: migration from legacy technologies such as ATM or FR. The service provider can upgrade some or all of the legacy devices to run a selection of the IP/MPLS protocols listed in the center of Figure 1. This upgrade feature helps to reduce the cost of adopting MPLS and its applications.
We note in passing that the PE devices in Figure 1 are edge nodes as opposed to service aggregation nodes; that is, they serve to connect the incoming traffic directly into the core. Service aggregation nodes, on the other hand, do some form of concatenation; for example, joining many DSL connections for transmission into the core.
Part of the magic of RFC2547 is that traffic exchanged between sites 1, 2, and 3 in VPN A cannot be seen by anyone in sites 1 and 2 in VPN B. This is a crucial requirement for an IP VPN. Another, less obvious requirement is that the same IP address ranges can be used in both VPNs, which is very useful for large networks.
Without delving too much into the details of RFC2547, a VPN routing and forwarding (VRF) table is assigned to the incoming PE interface (220.127.116.11 in Figure 1). The route distinguisher is a special number used to differentiate between the IP addresses in each VPN, and the route target is used to ensure that traffic from one VPN doesn't get forwarded into another. (For a detailed and very clear example of VPN configuration, see the References section at the end of this article.)
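As a rough illustration of how these pieces fit together, the following Python sketch models a VRF with a route distinguisher and route targets. All names and numeric values here are hypothetical; the real configuration lives on the PE routers, not in code like this:

```python
from dataclasses import dataclass, field

@dataclass
class Vrf:
    """Hypothetical model of a VPN routing and forwarding (VRF) table."""
    name: str
    route_distinguisher: str                    # e.g. "65000:1" (ASN:number)
    import_targets: set = field(default_factory=set)
    export_targets: set = field(default_factory=set)

# Two VPNs with distinct route distinguishers and route targets.
vpn_a = Vrf("VPN_A", "65000:1", {"65000:100"}, {"65000:100"})
vpn_b = Vrf("VPN_B", "65000:2", {"65000:200"}, {"65000:200"})

def vpn_ipv4(vrf: Vrf, prefix: str) -> str:
    """Prepend the route distinguisher so that the same IPv4 prefix
    advertised from two VPNs yields two distinct VPN-IPv4 routes."""
    return f"{vrf.route_distinguisher}:{prefix}"

def accepts(importer: Vrf, exporter: Vrf) -> bool:
    """A route is imported only when the exporter's route targets
    intersect the importer's import targets; this is what keeps
    traffic from leaking between VPNs."""
    return bool(exporter.export_targets & importer.import_targets)

# Both VPNs can use the same address range...
print(vpn_ipv4(vpn_a, "10.1.0.0/16"))  # 65000:1:10.1.0.0/16
print(vpn_ipv4(vpn_b, "10.1.0.0/16"))  # 65000:2:10.1.0.0/16
# ...and routes never cross between them:
print(accepts(vpn_a, vpn_b))           # False
```

The route distinguisher makes overlapping customer addresses globally unique, while the route target intersection test decides which routes each VRF imports; together they provide the isolation described above.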
Let's take a quick look at how this is all set up by the operator using the network management system (NMS) at the bottom of Figure 1.