SLA Support in the ISP Environment

Supporting SLAs in an ISP environment can be done with several technologies. One option is to operate separate physical networks, one per service level. Another approach is to use a technology such as DiffServ.

SLA Support by Operating Multiple Networks

When different service levels are implemented as different networks, the structure of the operator's network looks like Figure 4.3. Packets belonging to different service levels are directed to different networks. This function is also known as alternate routing. With this approach, the core network is effectively split into two core subnetworks.

Figure 4.3
Parallel Physical Subnetworks for SLA Support

The access router is responsible for identifying the service level to which a packet belongs and routing it to the corresponding core subnetwork. The access router is typically the place where many customers connect to the ISP network. It needs to classify packets into the different service levels on the basis of any combination of fields in the IP and TCP/UDP headers. Fields commonly used for this purpose are the source and destination IP addresses and the protocol field in the IP header, and the source and destination port numbers in the transport header.

The choice of the exact fields that are needed to support SLAs depends on the granularity at which SLAs are defined. Consider the case in which different performance requirements are offered at the granularity of a customer organization (in other words, all traffic from a single customer organization is offered the same performance bounds). In this case, the classification can be done simply on the basis of the source IP addresses of the packets. You can alternatively do the classification on the basis of the adapter on which an incoming packet is received. When each customer organization is mapped to a different service level, it is as if each has a separate intranet of its own. They simply happen to share the same access router.
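To make the customer-granularity case concrete, the following sketch classifies packets using only the source IP address. The prefixes and service-level names here are illustrative assumptions, not values from the text:

```python
import ipaddress

# Hypothetical mapping of customer source prefixes to service levels;
# the prefixes and level names are illustrative only.
CUSTOMER_PREFIXES = {
    ipaddress.ip_network("10.1.0.0/16"): "gold",
    ipaddress.ip_network("10.2.0.0/16"): "best-effort",
}

def classify_by_source(src_ip: str) -> str:
    """Map a packet's source address to a service level (customer granularity)."""
    addr = ipaddress.ip_address(src_ip)
    for prefix, level in CUSTOMER_PREFIXES.items():
        if addr in prefix:
            return level
    return "best-effort"  # default class for sources not matching any customer
```

Because the lookup keys off the source prefix alone, every host in a customer organization receives the same treatment, which matches the per-organization SLA granularity described above.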

An alternative approach is to assign different service levels to different applications. Each core subnetwork is used to carry a different set of applications. In this case, the classification of packets would be done using a combination of the IP source and destination addresses and the TCP or UDP port numbers used by the applications.

As an example, consider an enterprise that used to operate a Systems Network Architecture (SNA) network and has decided to migrate to an IP-based network. The legacy SNA applications are encapsulated in IP using the data-link switching protocol (DLSw). The SLAs for SNA applications typically require much higher availability than those for other IP traffic. To prevent interference between DLSw traffic and other IP traffic, one of the core subnetworks is used to carry DLSw traffic, and the second is used to carry all other IP traffic. DLSw uses TCP as its transport protocol and typically uses port numbers 2065 and 2067. The access routers identify packets that use TCP (a protocol field value of 6 in the IP header) with 2065 or 2067 as the source or destination port number and direct them to the core subnetwork reserved for DLSw.
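This DLSw rule amounts to a simple match on the protocol field and the port numbers. A minimal sketch (the subnetwork names are illustrative assumptions):

```python
TCP = 6                    # protocol field value for TCP in the IP header
DLSW_PORTS = {2065, 2067}  # TCP ports typically used by DLSw

def select_subnetwork(proto: int, src_port: int, dst_port: int) -> str:
    """Direct DLSw traffic to the SNA core subnetwork; all other IP
    traffic goes to the general-purpose core subnetwork."""
    if proto == TCP and (src_port in DLSW_PORTS or dst_port in DLSW_PORTS):
        return "sna-core"
    return "ip-core"
```

Note that the protocol check matters: UDP traffic that happens to use ports 2065 or 2067 is not DLSw and stays on the general IP core.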

Identifying applications by port numbers alone works best when applications run on standard well-known ports or on ports used universally within the network. If an application runs on a server at a port other than the standard one, you must introduce additional rules specifying the server and the nonstandard port. If port numbers are not managed properly, the number of rules required to identify an application can become large.

Running different networks for different service levels has several advantages. Traffic in each service level is insulated from the other traffic, so the idiosyncrasies of one type of traffic do not interfere with the other types. As an example, voice packets are typically small (less than 100 bytes) and require low jitter in the network. Data packets can be large (a size of 4096 bytes is typical for file transfer), and these large packets can cause a significant amount of jitter at congested nodes. Imagine a voice packet stuck behind a few bulk-transfer packets at a queue in the network. Separating the two types of packets into different networks insulates them from each other. This assumes that the separation point (the access router) is itself not congested and that the interference between the two types of packets at the access router is negligible.

The same arguments hold true for networks used to support different customer organizations. If a customer requires low delays, its packets can be routed on a subnetwork built from faster links. If a customer requires higher availability, you can design a subnetwork that is a denser mesh of lower-speed links. Other customers can be routed on networks with different characteristics.

The low-level policy issues when running multiple subnetworks relate mostly to the access routers that connect the customers' routers to the different core subnetworks. They have to be configured with the routing rules that direct customers' packets to the right subnetwork. The routers that make up the subnetworks have no significant policy issues to consider.

SLA Support with DiffServ in the ISP Environment

Although running parallel networks simplifies the task of providing different SLAs, it is a relatively expensive solution. It would be much cheaper to combine the different networks into a single network. If the distinction between the multiple physical networks were mostly related to performance, you could use DiffServ to partition the overall network into two logical ones. Each logical network would be specified by a specific Per-Hop Behavior (PHB) as defined by the DiffServ specification.

The model of supporting different service levels using DiffServ is shown in Figure 4.4. The figure shows two PHBs supported at each link in the network. Assume that one of these PHBs is a higher-priority forwarding class (with a DS field of 110000) and the other one is the default forwarding class (with a DS field of 000000). The physical network is essentially divided into two logical networks sharing the same physical links. Figure 4.4 shows each link as consisting of two virtual sublinks (one of which is shaded and the other is not). Each of the virtual sublinks corresponds to one of the PHBs used within the network. The bandwidth on each link is allocated among the different virtual PHB-based sublinks in a manner determined by the network operator.

Figure 4.4
SLA Support Using DiffServ

Assume that the bandwidth on each link is allocated between the two priority classes so that 40 percent of the link's capacity is reserved for the higher-priority class and the other 60 percent is used for default forwarding. To ensure that the DiffServ network meets the desired SLAs, the access routers and the core routers must each perform specific functions.

The various access routers, as shown in Figure 4.4, must perform the following functions in order to obtain a split into two logical networks:

  • Upon receiving a packet, examine the fields in the packet header to determine the packet's service level.

  • Determine the rate limits associated with the specific service level, and enforce those limits.

  • Determine the correct PHB to be used for the service level, and change the Type of Service field in the IP header to the correct code point for the PHB.

  • Keep counters measuring the number of bytes and packets belonging to each service level, and collect any information needed to estimate network performance.
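A minimal sketch of these access-router edge functions — classification to a service level, marking the DS field, and per-level accounting — might look as follows. The service-level names, code points, and rate limits are illustrative assumptions, and rate-limit enforcement is reduced to a comment:

```python
from collections import defaultdict

# Hypothetical service-level table: DSCP to mark and the contracted rate.
SERVICE_LEVELS = {
    "premium": {"dscp": 0b110000, "rate_bps": 4_000_000},
    "default": {"dscp": 0b000000, "rate_bps": 6_000_000},
}

class EdgeRouter:
    """Access-router edge functions: mark the DS field and keep counters."""

    def __init__(self):
        self.bytes_seen = defaultdict(int)
        self.pkts_seen = defaultdict(int)

    def process(self, level: str, size: int) -> int:
        """Return the DSCP to write into the packet's DS field."""
        conf = SERVICE_LEVELS[level]
        self.bytes_seen[level] += size   # per-level accounting for SLA reports
        self.pkts_seen[level] += 1
        # Enforcement of conf["rate_bps"] (the policing step) would go here.
        return conf["dscp"]
```

The per-level byte and packet counters correspond to the last bullet above: they supply the raw data from which SLA compliance can later be estimated.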

The definition of service levels, how each service level is mapped onto a specific DiffServ header field code point, and the rate limits associated with each service level constitute the low-level policies for the access router supporting the DiffServ function.

Another policy item is the action to be taken on an IP packet when rate limits are exceeded. The packet could be delayed until it conforms with the rate limit, or it could be discarded. If the packet is delayed, it occupies buffers at the access router. The limit on how many buffers can be occupied by packets belonging to a specific service level is also part of the low-level policy for the access router.
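The delay-versus-discard choice can be sketched with a token bucket that tests conformance and, for nonconforming packets, either drops them or queues them subject to a buffer limit. All parameter values here are illustrative:

```python
class TokenBucketPolicer:
    """Token-bucket rate limiter with a configurable nonconformance action."""

    def __init__(self, rate_bps: float, burst_bytes: int,
                 max_queued: int, drop: bool):
        self.rate = rate_bps / 8.0       # refill rate in bytes per second
        self.tokens = float(burst_bytes)
        self.burst = burst_bytes
        self.max_queued = max_queued     # buffer limit for delayed packets
        self.drop = drop                 # policy: True = discard, False = delay
        self.queue = []
        self.last = 0.0

    def handle(self, now: float, size: int) -> str:
        # Accumulate tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return "forward"             # packet conforms to the rate limit
        if self.drop or len(self.queue) >= self.max_queued:
            return "drop"                # policing, or delay buffers exhausted
        self.queue.append(size)
        return "delay"                   # shaped: held until tokens accumulate
```

Note that even when the policy is "delay," the buffer limit (`max_queued`) forces a drop once the per-service-level buffers fill, which is exactly the second policy item described above.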

The core router needs to implement the queuing behavior that supports the different DiffServ PHBs within the network. Each PHB is associated with a specific scheduling behavior, such as defining the queuing priority level, an absolute limit on bandwidth to be used, or relative ratios in which bandwidth needs to be allocated. The definitions of the priorities, the rate limits, and the action to be taken when the limits are exceeded constitute the low-level policies for the core routers in the network.
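One way a core router could realize such relative bandwidth ratios is a deficit-round-robin-style scheduler whose per-PHB quanta are proportional to the 40/60 allocation described earlier. The queue names and quantum sizes below are illustrative assumptions:

```python
from collections import deque

class WeightedScheduler:
    """Deficit-round-robin sketch: two PHB queues sharing one link."""

    def __init__(self):
        self.queues = {"priority": deque(), "default": deque()}
        # Quantum (bytes of credit per round) proportional to bandwidth share.
        self.quantum = {"priority": 400, "default": 600}   # 40% / 60%
        self.deficit = {"priority": 0, "default": 0}

    def enqueue(self, phb: str, size: int):
        self.queues[phb].append(size)

    def dequeue_round(self):
        """One scheduling round; returns the list of (phb, size) sent."""
        sent = []
        for phb, q in self.queues.items():
            self.deficit[phb] += self.quantum[phb]
            while q and q[0] <= self.deficit[phb]:
                size = q.popleft()
                self.deficit[phb] -= size
                sent.append((phb, size))
            if not q:
                self.deficit[phb] = 0    # idle queues do not bank credit
        return sent
```

Over many rounds the two queues receive bytes in roughly the 40:60 ratio of their quanta, while an idle class's share is reclaimed rather than banked, so the link is never left underused.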

