- Physical Network Topology and Availability
- Layer 2 Availability: Trunking—802.3ad—Link Aggregation
- Layer 2 Trunking Availability Strategies using SMLT and DMLT
- Layer 2 Availability: Spanning Tree Protocol
- Layer 3—VRRP Router Redundancy
- Layer 3—IPMP—Host Network Interface Redundancy
- Layer 3—Integrated VRRP and IPMP
- Layer 3—OSPF Network Redundancy—Rapid Convergence
- Layer 3—RIP Network Redundancy
- About the Authors
Layer 2 Trunking Availability Strategies using SMLT and DMLT
In the past, server network resiliency leveraged IPMP and VRRP. In actual large-scale deployments, however, serious scalability issues emerged, primarily because network switches were not designed to process a steady stream of ping requests in a timely manner. Ping requests were traditionally used only occasionally, to troubleshoot network issues, so control plane processing of ping was given lower priority than routing updates and other control plane tasks. As the number of IPMP nodes increased, the network switch ran out of CPU processing resources and began dropping ping requests. As a result, IPMP nodes falsely detected router failures, which often produced a ping-pong effect of failing over back and forth between interfaces. One recent advance, introduced in Nortel Networks switches, is Split MultiLink Trunking (SMLT) and Distributed MultiLink Trunking (DMLT). This section describes several key tested configurations using Nortel Networks Passport 8600 core switches and the smaller Layer 2 Nortel Networks Business Policy Switch 2000. These configurations show how network high availability can be achieved without the scalability issues that have plagued IPMP-VRRP deployments.
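To make the probe load concrete: on Solaris, each IPMP node runs the in.mpathd daemon, which sends ICMP echo probes to its default router from a test address on every interface in the group, at a rate derived from the failure detection time. Multiplied across hundreds of nodes, these probes form the steady ping stream that overloaded switch control planes. The following is a minimal sketch of a two-interface IPMP group; the interface names (ce0, ce1), hostnames, and group name are hypothetical examples, while the /etc/default/mpathd values shown are the documented defaults.

```
# /etc/hostname.ce0 -- primary interface: data address, plus a
# non-failover test address that in.mpathd uses for router probes
server1 netmask + broadcast + group prodgrp up \
addif server1-test0 deprecated -failover netmask + broadcast + up

# /etc/hostname.ce1 -- standby interface with its own test address
server1-test1 netmask + broadcast + deprecated -failover \
group prodgrp standby up

# /etc/default/mpathd -- default tunables; the probe interval is
# derived from the failure detection time (in milliseconds)
FAILURE_DETECTION_TIME=10000
FAILBACK=yes
TRACK_INTERFACES_ONLY_WITH_GROUPS=yes
```

Shortening FAILURE_DETECTION_TIME speeds up failover but proportionally increases the probe rate, which is exactly the trade-off that aggravated the switch CPU exhaustion described above.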
SMLT and DMLT
SMLT is a Layer 2 trunking redundancy mechanism. It is similar to conventional trunking except that the trunk spans two physical devices. FIGURE 10 shows a typical SMLT deployment using two Nortel Networks Passport 8600 switches and a Sun server with dual GigaSwift cards, where the trunk spans both cards but each card is connected to a separate switch. SMLT, in effect, exposes one logical trunk to the Sun server even though the trunk actually terminates on two physically separate devices.
FIGURE 10 Layer 2 High Availability Design Using SMLT
FIGURE 10 shows a Layer 2 high availability design using Sun Trunking 1.3 and Nortel Networks Passport 8600 SMLT.
FIGURE 11 shows another integration point, where workgroup servers connect to the corporate network at an edge point. In this case, instead of integrating directly into the enterprise core, the servers connect to a smaller Layer 2 switch running DMLT, a scaled-down version of SMLT that is similar in functionality. The two switches are viewed as one logical trunking device: packets are load-shared across the links, and the switches ensure that packets arrive in order at the remote destination.
FIGURE 11 Layer 2 High Availability Design Using DMLT
FIGURE 11 illustrates a server-to-edge integration of a Layer 2 high availability design using Sun Trunking 1.3 and Nortel Networks Business Policy Switch 2000 wiring closet edge switches.
CODE EXAMPLE 1 shows a sample configuration of the Passport 8600.
CODE EXAMPLE 1 Sample Configuration of the Passport 8600
#
# MLT CONFIGURATION PASSPORT 8600
#
mlt 1 create
mlt 1 add ports 1/1,1/8
mlt 1 name "IST Trunk"
mlt 1 perform-tagging enable
mlt 1 ist create ip 10.19.10.2 vlan-id 10
mlt 1 ist enable
mlt 2 create
mlt 2 add ports 1/6
mlt 2 name "SMLT-1"
mlt 2 perform-tagging enable
mlt 2 smlt create smlt-id 1
#
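On the server side, a matching aggregation over the two GigaSwift interfaces is configured with the Sun Trunking software. The sketch below uses the nettr utility shipped with Sun Trunking; the instance numbers, member list, and policy value are illustrative assumptions, and the exact option syntax and install path should be verified against the Sun Trunking 1.3 documentation for your release.

```
# Aggregate two GigaSwift (ce) interfaces into one trunk.
# Instance numbers and hashing policy here are assumptions for
# illustration only -- confirm against the Sun Trunking 1.3 docs.
nettr -setup 1 device=ce members=0,1

# Display the resulting trunk configuration
nettr -conf device=ce
```

Note that with SMLT the server needs no special awareness that its trunk terminates on two switches; it configures an ordinary aggregation, and the Passport 8600 pair presents the split trunk as a single logical peer.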