WAN Technologies: Important Points of Interest, Part 1 of 3
Overview
A number of different WAN technologies have been developed over the last half-century. Some have disappeared in favor of better or more preferred methods; others have remained functional even after decades of use. This article is the first of three on WAN technologies in use on today's networks that should be familiar to network engineers. For each of these technologies, initially you really need to know just enough to pass your certification exams. However, since techniques vary from one organization to another, you should develop a good general knowledge of all of these technologies, advancing deeper as time and need allow.
In this article we'll focus on four main technologies:
- Fiber technologies, including Synchronous Optical Network (SONET), Dense Wavelength Division Multiplexing (DWDM), and Coarse Wavelength Division Multiplexing (CWDM)
- Frame Relay (FR)
- Asynchronous Transfer Mode (ATM)
- Multiprotocol Label Switching (MPLS)
Common Fiber WAN Technologies
Fiber-based WAN technologies are very common. We'll examine three: SONET, DWDM, and CWDM.
SONET
SONET has been around for over 20 years. It was designed to replace the older T-carrier Time-Division Multiplexing (TDM) networking technologies (T1, T3, and so on). SONET was laid out similarly to those older standards, with multiple tiers of service in synchronous transport signal (STS) frames, which have a base signal rate of 51.84 Mbps. Each of these frames can be carried in an Optical Carrier 1 (OC-1) signal. However, SONET is implemented using higher OC levels, including the following:
- OC-3 (155.52 Mbps)
- OC-12 (622.08 Mbps)
- OC-48 (2,488.32 Mbps)
- OC-192 (9,953.28 Mbps)
- OC-768 (39,813.12 Mbps)
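A quick way to sanity-check these rates: each OC-n level is simply n times the 51.84 Mbps STS-1 base rate mentioned above. A short Python sketch makes the pattern explicit:

```python
# Each OC-n line rate is n times the 51.84 Mbps STS-1 base signal rate.
STS1_MBPS = 51.84

def oc_rate_mbps(n: int) -> float:
    """Return the OC-n line rate in Mbps."""
    return n * STS1_MBPS

for level in (1, 3, 12, 48, 192, 768):
    print(f"OC-{level}: {oc_rate_mbps(level):,.2f} Mbps")
# OC-3: 155.52, OC-12: 622.08, OC-48: 2,488.32,
# OC-192: 9,953.28, OC-768: 39,813.12
```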
Dense Wavelength Division Multiplexing (DWDM)
DWDM was developed in response to the exponential growth in traffic on Internet service provider networks. SONET networks could offer high-speed links, but only by dedicating one fiber pair per circuit. DWDM equipment assigns specific wavelengths of a fiber to specific circuits, allowing multiple circuits to be multiplexed onto the same fiber pair. Currently available equipment can provide up to 25.6 Tbps of capacity per fiber.
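To see how a per-fiber figure like that can be reached, multiply channel count by per-wavelength rate. The combination below (256 wavelengths at 100 Gbps each) is just one plausible, hypothetical configuration, not a statement about any particular vendor's gear:

```python
# Hypothetical DWDM capacity calculation; channel count and per-wavelength
# rate are illustrative assumptions, not vendor specifications.
channels = 256                 # assumed wavelengths multiplexed onto one fiber
rate_per_channel_gbps = 100    # assumed line rate per wavelength

total_tbps = channels * rate_per_channel_gbps / 1000
print(f"{channels} channels x {rate_per_channel_gbps} Gbps = {total_tbps} Tbps")
# 256 channels x 100 Gbps = 25.6 Tbps
```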
Coarse Wavelength Division Multiplexing (CWDM)
CWDM devices came after DWDM devices and were designed to be cheaper over shorter distances. By definition, a CWDM device uses fewer than eight wavelengths per fiber, whereas DWDM uses eight or more. CWDM devices use much wider channel spacing than their DWDM cousins, which makes them more tolerant of wavelength drift.
Frame Relay
Frame Relay is an old technology (dating to the early 1980s) that has been widely used over the last 30+ years. In North America, Frame Relay is still seen on existing networks, but it's quickly losing ground to cheaper high-bandwidth alternatives (DSL/cable, VPN). Frame Relay was commonly used to connect branch and remote locations. It can deliver a range of connection speeds, typically from 64 kbps to 1.544 Mbps (T1); higher speeds are supported but less common. Frame Relay is usually implemented over a physical T-carrier connection (Fractional T1, T1), with Frame Relay operating at Layer 2.
Frame Relay can multiplex multiple connections over a single physical link. These connections are called virtual circuits (VCs). Frame Relay supports both permanent virtual circuits (PVCs) and switched virtual circuits (SVCs), but PVCs are much more common. Each VC is addressed at each endpoint with a Data-Link Connection Identifier (DLCI), which the Frame Relay provider uses to distinguish between VCs on that link. It's important to note that a DLCI is only locally significant between a Frame Relay endpoint and the Frame Relay provider. As long as the provider knows which connection goes to which client, it can switch a PVC from one FR client to another through its network.
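A toy lookup table can make the idea of local significance concrete. The link names and DLCI values below are invented for illustration; a real provider's provisioning data looks nothing like a Python dictionary, but the mapping logic is the same:

```python
# Toy model of DLCI local significance; all link names and DLCI values are
# hypothetical. Each (access link, DLCI) pair only has meaning on its own
# link; the provider maps the two ends together to form a PVC.
pvc_map = {
    # (local access link, local DLCI) -> (remote access link, remote DLCI)
    ("hq-link", 102): ("branch-link", 201),
}

def far_end(link: str, dlci: int):
    """Return the far end of the PVC for a frame arriving on (link, dlci)."""
    return pvc_map.get((link, dlci))

print(far_end("hq-link", 102))   # ('branch-link', 201)
# DLCI 102 at headquarters and DLCI 201 at the branch identify the same PVC;
# neither value needs to be unique anywhere beyond its own access link.
```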
Frame Relay also has a detailed traffic- and congestion-management mechanism that allows each connection (PVC) to be allocated a specific committed information rate (CIR) while still being able to burst up to the circuit line rate when bandwidth is available. This flexibility allows many smaller organizations to have a connection between offices that can support short-term higher-bandwidth applications without violating the organization's service agreement with the FR provider.
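One common way to reason about the CIR is per measurement interval: over a short interval Tc, the provider commits to carrying Bc = CIR x Tc bits, and anything above that rides as burst traffic when capacity is free. The numbers below are made up purely for illustration:

```python
# Simplified illustration of a Frame Relay CIR measured over an interval Tc.
# All values are made-up examples, not provider defaults.
cir_bps = 256_000           # committed information rate: 256 kbps
line_rate_bps = 1_544_000   # physical T1 line rate
tc_seconds = 0.125          # measurement interval Tc

bc_bits = cir_bps * tc_seconds          # committed bits per interval
max_bits = line_rate_bps * tc_seconds   # absolute ceiling per interval
burst_room_bits = max_bits - bc_bits    # room available for bursting

print(f"Committed per interval: {bc_bits:,.0f} bits")
print(f"Possible burst above CIR: {burst_room_bits:,.0f} bits (if capacity is free)")
```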
In 1990, FR gained an additional extension that included the ability to support global addressing, virtual circuit status messaging, and multicasting. This extension is called the FR Local Management Interface (LMI).
Asynchronous Transfer Mode (ATM)
Like Frame Relay, ATM has been around for a long time (since 1991). The original intention for ATM was to design a network that could provide different service types, depending on the type of traffic being transmitted. This goal differs from those of the earlier options, which were designed to manage a specific type of traffic: voice (T-carrier) or data (Frame Relay). With ATM, service providers had the ability to assign traffic to specific traffic classes that provided the appropriate path parameters. For example, one ATM PVC might be created for data traffic between two endpoints, with the ability to manage variable amounts of delay; another PVC might be created for voice between two endpoints, with the ability to maintain a very specific level of low delay along the path.
ATM is referred to as a cell-switched technology because it uses a fixed 53-byte cell for all traffic types. This 53-byte cell includes 5 bytes for the header, with the remaining 48 bytes used for payload. The type of traffic being transmitted alters the layout of the payload. A number of different ATM adaptation layers (AALs) can be selected and used, depending on the needs of each specific circuit being configured.
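Because every 48-byte payload carries a 5-byte header, ATM's fixed overhead (often called the "cell tax") is easy to quantify. A quick sketch, ignoring the additional per-AAL encapsulation overhead, which varies by adaptation layer:

```python
import math

# ATM cell layout as described above: 53 bytes total, 5 header + 48 payload.
CELL_BYTES = 53
HEADER_BYTES = 5
PAYLOAD_BYTES = CELL_BYTES - HEADER_BYTES   # 48

print(f"Payload efficiency: {PAYLOAD_BYTES / CELL_BYTES:.1%}")   # ~90.6%

# Cells needed to carry a 1,500-byte IP packet (AAL overhead ignored):
packet_bytes = 1500
cells_needed = math.ceil(packet_bytes / PAYLOAD_BYTES)
print(f"Cells for a {packet_bytes}-byte packet: {cells_needed}")  # 32
```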
Multiprotocol Label Switching (MPLS)
MPLS is a newer technology (late 1990s) that was developed for a number of reasons, including reducing the forwarding-lookup load on the routers throughout a network. It does this by providing a labeling mechanism that classifies traffic as it enters the network. The classified traffic is then forwarded along a path through the MPLS network toward its destination using only this label information. Such a label-switched path (LSP) is unidirectional; normal two-way communication requires two separate LSPs, one in each direction.
For example, IP traffic entering the MPLS network is initially classified and labeled based on its IP header information, with classification handled by a label edge router (LER). From that point on, as the packet travels through the MPLS network, the IP header is no longer examined; the core routers forward it based solely on the label and are referred to as label switching routers (LSRs). Once the packet reaches the last device in the MPLS network, the label is removed and the packet is forwarded out of that device. This final device is referred to as the egress node.
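The label operations themselves boil down to push, swap, and pop: the ingress LER pushes a label, each LSR swaps it for the next hop's label, and the egress node pops it. A minimal sketch of that flow follows; the labels, prefix, and router names are invented for illustration:

```python
# Minimal sketch of MPLS label handling; all labels, prefixes, and router
# names are hypothetical.
ingress_fib = {"10.1.0.0/16": 100}   # ingress LER: classify by prefix, push label

lsr_lfib = {                         # each core LSR swaps in -> out labels
    "LSR-A": {100: 200},
    "LSR-B": {200: 300},
}

def traverse(prefix: str, path: list) -> None:
    label = ingress_fib[prefix]            # push at the ingress LER
    print(f"LER push -> label {label}")
    for lsr in path:
        label = lsr_lfib[lsr][label]       # swap at each core LSR
        print(f"{lsr} swap -> label {label}")
    print("Egress node pop -> forward as plain IP")

traverse("10.1.0.0/16", ["LSR-A", "LSR-B"])
```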
To determine the best path through the MPLS network, two main protocols are used: the Label Distribution Protocol (LDP) and Resource Reservation Protocol with Traffic Engineering (RSVP-TE). An MPLS network may use one or both, depending on the services being delivered.
A number of different service offerings utilize MPLS. Examples include Layer 2 and Layer 3 virtual private networks (VPNs), as well as virtual private LAN service (VPLS).
Summary
Each of the technologies I've described here was developed with specific goals in mind, which means that each offers advantages and disadvantages in specific environments. This article is intended as a simple primer; for further information, check out the Pearson IT Certification CompTIA Network+ Resource Center. Parts 2 and 3 of this series will cover dial-up, DSL, ISDN, broadband cable, wireless data, and leased lines.