
MIB Note: Scalability

A very useful object type for large table management (as described above) is a counter conceptually called nextObjectIndex. This object provides the index number of the next available slot in a table. The agent maintains the value of this object so that when the manager has to add a new row to the MPLS tunnel table, it need only retrieve the value of the associated nextObjectIndex. This avoids the overhead of MIB walks to count the entries and work out the next free value. Once a new entry is added to the table, the agent increments the value of its nextObjectIndex. It is encouraging to see this type of MIB object used in the IETF draft-standard MPLS MIBs (e.g., mplsTunnelIndexNext [IETF-TE-MPLS]). Addressing scalability in the standard MIB document avoids the need for proprietary solutions and provides a good example for implementers. Scalability issues like this can be difficult (or impossible) to resolve without the support of special MIB objects, and they will become more pressing as networks, and the complexity of their constituent managed objects, continue to grow.
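The idea can be sketched in a few lines. The following is a minimal illustration (not a real SNMP agent, and the class and method names are assumptions, not drawn from any MIB): the agent maintains the table and its IndexNext value, and the manager simply reads that value instead of walking the table.

```python
# Sketch of the nextObjectIndex pattern: the agent tracks the next free
# row index so a manager never has to walk the table to find one.
class TunnelTable:
    def __init__(self):
        self.rows = {}          # index -> row data
        self.index_next = 1     # value served for an IndexNext-style object

    def get_index_next(self):
        """What a manager GETs instead of counting table entries."""
        return self.index_next

    def create_row(self, index, row):
        if index in self.rows:
            raise ValueError("index already in use")
        self.rows[index] = row
        # Agent's responsibility: advance to the next free slot.
        while self.index_next in self.rows:
            self.index_next += 1

table = TunnelTable()
idx = table.get_index_next()                 # manager learns the free index
table.create_row(idx, {"name": "tunnel-1"})  # manager creates the row
print(table.get_index_next())                # 2
```

The key point is that finding a free index costs one GET rather than a walk proportional to the table size.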

Network operators and their users increasingly demand more bandwidth, faster networks, and bigger devices. Scalability concerns are growing because routers and switches are routinely expected to support the creation of millions of virtual circuits [ATM&IP2001]. Not only can devices support this number of objects, but they can also create these circuits at an increasingly fast pace: tens of thousands per second. To illustrate the scale of this, let's assume in Figure 3-3 that there are hundreds of thousands of nodes (we show just a few).

Figure 3-3. Creating LSPs in an MPLS network.

Client 2 now executes a bulk provisioning operation. This results in the NMS server requesting that MPLS router LER A create two blocks of 10,000 signaled LSPs originating at A. The first 10,000 LSPs follow the path LER A-LSR A-LSR B-LSR C-LER B, while the second set follows the path LER A-LSR A-Cloud-LSR C-LER B. (The cloud in the latter case could be another network.) Further, let's assume that LER A can create LSPs at a rate of 10,000 per second. This means that once the intermediate node MIBs have been populated and the LSPs become operational, the network will emit a tunnel-up trap for every LSP. So, the management system has to be able to handle 20,000 traps arriving in quick succession from the network. There could be scope here for aggregating traps in compressed form, as mentioned earlier.
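One way such aggregation might work can be sketched as follows. This is an illustrative assumption, not a described implementation: incoming tunnel-up traps are coalesced into batches, so the management system performs one bulk database write and one client notification per batch rather than 20,000 individual updates.

```python
# Sketch: coalescing a burst of traps into batched updates.
# Class name and threshold are illustrative assumptions.
from collections import defaultdict

class TrapAggregator:
    def __init__(self, flush_threshold=5000):
        self.pending = defaultdict(list)   # trap type -> list of LSP ids
        self.flush_threshold = flush_threshold

    def on_trap(self, trap_type, lsp_id):
        """Buffer a trap; return a batch when the threshold is reached."""
        self.pending[trap_type].append(lsp_id)
        if sum(len(v) for v in self.pending.values()) >= self.flush_threshold:
            return self.flush()
        return None

    def flush(self):
        batch = dict(self.pending)
        self.pending.clear()
        return batch   # one bulk DB write / client update per batch

agg = TrapAggregator(flush_threshold=3)
agg.on_trap("tunnel-up", 1)
agg.on_trap("tunnel-up", 2)
batch = agg.on_trap("tunnel-up", 3)
print(batch)   # {'tunnel-up': [1, 2, 3]}
```

A real system would also flush on a timer so that a trickle of traps below the threshold is not delayed indefinitely.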

Since the LSPs are now operational, this must be reflected in the management system database and the active client/user interfaces (Clients 1 to n in Figure 3-3). The clients could be viewing (or provisioning, like Client 2) LSPs in the network, and any required changes to their views should be made as quickly as possible.

The problems don't stop there, because the LSPs must then be managed for further changes, such as:

  • Status (e.g., becoming congested or going out of service)

  • Faults such as an intermediate node/link failure or receipt of an invalid MPLS label

  • Deletion by a user via a CLI (i.e., outside the management system)

  • Modification by a user (changing the administrative status from up to down)

The result of any or all of these is some change in the LSP managed object attributes. The NMS picture of the network state is then at variance with the actual picture. All such changes must be reflected in the NMS as quickly as possible. The detailed functions of a typical NMS are discussed in Chapter 5, “A Real NMS.”
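Bringing the NMS picture back in line with the network amounts to computing the difference between the two views and applying it. The following sketch illustrates the idea under stated assumptions (the function and field names are hypothetical, and real reconciliation would run against a database rather than in-memory dictionaries):

```python
# Sketch: diffing the NMS's cached view of LSPs against the state
# reported by the network (e.g., after an out-of-band CLI deletion).
def reconcile(nms_view, network_view):
    """Return (added, removed, changed) needed to update nms_view."""
    added   = {k: v for k, v in network_view.items() if k not in nms_view}
    removed = [k for k in nms_view if k not in network_view]
    changed = {k: v for k, v in network_view.items()
               if k in nms_view and nms_view[k] != v}
    return added, removed, changed

nms     = {"lsp1": "up",   "lsp2": "up"}
network = {"lsp1": "down", "lsp3": "up"}   # lsp2 deleted via CLI, lsp3 new
print(reconcile(nms, network))
# ({'lsp3': 'up'}, ['lsp2'], {'lsp1': 'down'})
```

At the scale discussed above, the cost of producing the network view (polling versus trap-driven updates) dominates this computation, which is why prompt, reliable event notification matters so much.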

The above discussion is a little simplistic: in practice, many of the above LSPs might be aggregated into one LSP with more reserved bandwidth. However, we illustrate the case merely to point out that if the emerging NEs are capable of generating large numbers of virtual circuits quickly, then the NMS must be able to support that in all of the affected FCAPS areas.

A noteworthy point in Figure 3-3 is the direction of the IP service, indicated as being from left to right. This reflects the fact that MPLS is a forwarding technology. To move IP traffic from LER B towards LER A, LSPs have to be created specifically for this purpose, originating at LER B and terminating at LER A.

Other Enterprise Network Scalability Issues

The discussion in the previous section applies mostly to SP networks. Scalability concerns are also profoundly affecting enterprise networks in the following areas:

  • Storage solutions, such as adding, deleting, modifying, and monitoring SANs

  • Administration of firewalls, such as rules for permitting or blocking packet transit

  • Routers, such as access control lists and static routes

  • Security management, such as encryption keys, biometrics facilities, and password control

  • Application management

SANs are becoming a vital storage service. Storage needs are steadily increasing as the number and complexity of applications in use grows. The administration burden associated with firewalls, routers, security, and applications deployment is growing all the time as user populations expand and work practices become more automated.
