
Asynchronous Transfer Mode (ATM)

In this final article of the series, Kyle Cassidy examines the ever-popular Asynchronous Transfer Mode (ATM) and covers the wireless, wiring, and hardware requirements for each of the various bandwidth-delivery technologies. He also offers some baseline suggestions for choosing the delivery technology that's right for you.
This article is excerpted from The Concise Guide to Enterprise Internetworking and Security.

Asynchronous Transfer Mode is the progeny of both the telecom world and the computer-networking world. The successful combination of the ideologies, much less the technologies, of these two very different worlds has taken quite a while to mature. The ATM specification is still being refined, although most of the changes now refer to LAN-specific features such as ABR (Available Bit Rate), MPOA (Multiprotocol Over ATM), and LANE (LAN Emulation).

From the telecom point of view, the network was a way to link two individuals in different locations for short periods of time. You paid for the privilege of access to the telecom network, and you paid per-connection charges.

Contrast this to the data networking ideology, in which the media was shared by groups of people all working at once. The data was important; the network was just a resource allowing people to browse Web sites, send email, and transfer files. If it took 2 seconds or 20 to transfer the data, it did not matter as long as the data transferred uncorrupted.

Changes occurred to both the telecom and the data networking industries, however. The need for additional media formats, such as audio and video, has increased the bandwidth requirements. Real-time audio and videoconferencing over data networks has created a need for the same real-time quality of service guarantees that the telecom industry has enjoyed since the inception of digital telephony.

It's All About Timing

So, what is it that makes ATM so different from all the other telecom technologies? The most obvious is that it is asynchronous, as in Asynchronous Transfer Mode. But what does that mean, exactly?

Asynchronous, in the context of ATM, means that sources are not limited to sending data during a set time slot, which is the case with circuit switching, used in the old standby T1. ATM transmits data not in bits or frames, but in packets. Actually, in ATM parlance, the packets are called cells. Cells are fixed in length and are composed of two parts: the header and the payload.

ATM is not totally asynchronous, however. ATM cells are transmitted synchronously to maintain the clock between sender and receiver. The sender, however, is not limited to sending data in any specific time slot or channel. Rather, the sender transmits when it has something to send; when idle, it sends empty cells synchronously. In short, data is sent asynchronously and cells are sent synchronously. The synchronous nature of the cells allows both sides of the ATM link to maintain timing reference similar to DS1.
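The idle-cell behavior described above can be sketched as a toy sender loop. This is a minimal illustration only, not a real ATM implementation; the queue and the "idle" marker are invented names for this sketch:

```python
from collections import deque

IDLE = "idle"

def cell_stream(data_queue, ticks):
    """Emit exactly one cell per clock tick: a data cell when something
    is queued (asynchronous data), an unassigned idle cell otherwise
    (so the synchronous cell stream, and thus the clock, never stops)."""
    out = []
    for _ in range(ticks):
        if data_queue:
            out.append(data_queue.popleft())  # send real data now
        else:
            out.append(IDLE)                  # keep timing with filler
    return out

q = deque(["A", "B"])
print(cell_stream(q, 5))  # ['A', 'B', 'idle', 'idle', 'idle']
```

The receiver discards idle cells but still uses their arrival to stay in step with the sender's clock, which is the "data asynchronous, cells synchronous" point made above.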

SONET/SDH removed the need for the bit stuffing that the multiplexed, layered plesiochronous digital hierarchy required. Remember, plesiochronous services are not synchronized to the same clock source. Although they are arbitrarily close in frequency and precision, plesiochronous signals will drift apart from each other over long distances and temperature ranges. ATM would allow the enormous SONET/SDH bandwidth to be used efficiently.

Mitosis

The ATM cell is quite simple, which is part of the attraction to ATM. The fixed length of the ATM cell simplifies the transmission and reception of the cell compared to the variable-length packets of Frame Relay and LAN networks.

The ATM cell is 53 octets in length and is divided into two portions: the header, which is 5 octets, and the payload, which is 48 octets. You can see this displayed in Figure 1.

Figure 1 Lives of an ATM cell.

The following are the components of the ATM header:

  • Generic flow control (GFC) was originally allocated for local switch functions such as flow control. Local means that the value is not preserved from endpoint to endpoint, and it can be expected to change at each physical hop in the ATM network.

  • The virtual path identifier (VPI) identifies the virtual path through the myriad of ATM switches that a cell must pass through to make its journey across the ATM network. The VPI actually changes from node to node, because the VPI is local to each ATM switch.

  • The virtual channel identifier (VCI) is similar in concept to a virtual circuit, but it identifies a specific virtual channel on a virtual path. You can think of the VPI as identifying the road you're driving on, and the VCI as identifying the lane your car is in. The VCI allows many different virtual channels of data to be transmitted over the same virtual path. Many channels are reserved for overhead, administration, and maintenance of the ATM link. These reserved channels are similar in concept to the D channel for ISDN.

  • The payload type (PT) indicates what the cell carries, distinguishing user data from overhead, administration, and maintenance (OAM) information.

  • The cell loss priority (CLP) allows the ATM switch to prioritize cell traffic by defining which cells are okay to discard if there is a problem. This is very similar in concept to the Discard Enable bit of Frame Relay. If this bit is 1, the cell can be discarded. If it is 0, it should not be discarded, although setting this bit to 0 does not guarantee that the cell will not be discarded.

  • Header error control (HEC) is an 8-bit CRC computed over the first 4 octets of the header.

One thing to keep in mind is that VCIs and VPIs are not addresses. They are explicitly assigned at each segment (link between ATM nodes) of a connection when a connection is established, and they remain for the duration of the connection. Together, the VCI and VPI are used to multiplex (and demultiplex) data onto a physical link.
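The header fields described above can be packed and unpacked with a little bit arithmetic. The sketch below assumes the UNI header layout (GFC 4 bits, VPI 8 bits, VCI 16 bits, PT 3 bits, CLP 1 bit, HEC 8 bits) and the HEC rule from ITU-T I.432 (CRC-8 with polynomial x⁸ + x² + x + 1, XORed with 0x55); the function names are invented for this example:

```python
def hec(first4: bytes) -> int:
    """CRC-8 (poly x^8 + x^2 + x + 1) over the first 4 header octets,
    XORed with 0x55 as ITU-T I.432 specifies for the transmitted HEC."""
    crc = 0
    for byte in first4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

def pack_uni_header(gfc, vpi, vci, pt, clp):
    """Build a 5-octet UNI cell header: GFC(4) VPI(8) VCI(16) PT(3) CLP(1) HEC(8)."""
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
    first4 = word.to_bytes(4, "big")
    return first4 + bytes([hec(first4)])

def unpack_uni_header(header5):
    """Recover the fields from a 5-octet header."""
    word = int.from_bytes(header5[:4], "big")
    return {
        "gfc": word >> 28,
        "vpi": (word >> 20) & 0xFF,
        "vci": (word >> 4) & 0xFFFF,
        "pt":  (word >> 1) & 0x7,
        "clp": word & 0x1,
    }

h = pack_uni_header(gfc=0, vpi=1, vci=32, pt=0, clp=0)
print(len(h), unpack_uni_header(h))
```

Note how small the parsing job is: because the cell is fixed-length, a switch can extract the VPI/VCI with a few shifts and masks, which is exactly the hardware simplicity the article credits ATM with.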

Why 53 Octets?

I'm sure you're asking yourself, "Why on earth would anyone pick 53 octets as a standard size of anything?" ATM cells are standardized at 53 octets because it seemed the politically correct thing to do. The United States proposed a payload of 64 octets focusing on bandwidth use for data networks and efficient memory transfer (length of payload should be a power of 2 or at least a multiple of 4). Sixty-four octets fit both requirements.

The French, and eventually most of Europe, proposed a 32-octet payload focusing on voice applications. At cell sizes greater than 152 octets, there is a talker echo problem. Cell sizes between 32 and 152 result in a listener echo problem. Cell sizes of 32 or less overcome both problems.

In the end, the CCITT decided to split the difference, and it proposed and settled on 48 octets for the payload. Not wanting to impose more than 10 percent overhead for the header information, the committee agreed on 5 octets for the header length. Thus, the ATM cell is 48 octets of payload and 5 octets of header, totaling 53 octets in length.

ATM OSI Layers

The OSI model helps decompose protocols and specifications into physical, data link, and network layers. ATM does not fit well into the OSI model. As a matter of fact, you have a better chance of herding cats than getting a consensus on how to shoehorn ATM into the OSI model.

Many people agree that the ATM standards cover three distinct layers: the physical layer, the ATM layer, and the ATM adaptation layer (AAL).

The physical layer (corresponding to OSI physical layer, Layer 1) is usually assumed to be SONET/SDH. This is not the only possibility, however, because there are specifications for running ATM over DS1, DS3, and twisted-pair copper. The PHY specification deals with medium-related issues.

The ATM layer is responsible for creating cells and formatting the cell header (5 octets). Some argue that it also corresponds to the OSI physical layer (it deals with bit transport), and others say that the ATM layer corresponds to the OSI data link layer (formatting, addressing, flow control, and so on).

The AAL is responsible for adapting ATM's cell-switching capabilities to the needs of specific higher-layer protocols. The AAL is responsible for formatting the cell payload. Some argue that this layer corresponds to the OSI data link layer (data error control, above the physical layer); others say that it corresponds to the OSI transport layer (it's end to end). ATM is almost a physical layer itself, whereas TCP/IP is a higher layer that gets encapsulated onto a physical layer.

Transmitting a TCP/IP frame from one medium to another requires the IP frame to be decoded and then re-encapsulated onto the new medium. This becomes expensive in implementation costs, latency, and complexity. To transmit from a local Ethernet to an FDDI backbone to a WAN via DS1, the frame must be processed four times.

The attractiveness of ATM is that an ATM cell is an ATM cell. After an ATM cell is created, it does not get changed (except for the VPI, which is local to the ATM switch). ATM cells might get grouped together in the conversion to DS1 or DS3 frames, but the cell itself does not need to be processed.

ATM Adaptation Layers

The four ATM adaptation layers (AAL) that have been defined are as follows:

  • AAL1—Designed to support connection-oriented services that require constant bit rates and are sensitive to timing, delay, and error detection. Sequence numbers are associated with each cell, similar to TCP/IP. Because cells always arrive in order, this allows for easy determination of lost cells and a request for retransmission. One octet of the payload is used for the sequence number, which leaves 47 octets for data payload. Sample candidates for AAL1 are constant bit-rate services such as DS1 or DS3 transport.

  • AAL2—Designed to carry voice and video over ATM. AAL2 consists of variable-size packets encapsulated within the ATM payload, but it does not require a constant bit rate. AAL2 is otherwise similar to AAL1. Because of the variable-length data stream, three octets of the payload are used: one octet for the sequence number, 6 bits for the length indicator, and 10 bits for a CRC-10. This leaves only 45 octets for actual data payload.

  • AAL3/4—Intended for both connection-oriented and connectionless (AAL3, AAL4 respectively) variable bit-rate services. AAL3/4 was designed for computer data that is sensitive to loss but not necessarily timing or delay. AAL3/4 does not support real-time or timed connections. The final nail in the coffin for this specification is that it takes 4 octets of overhead, leaving only 44 octets for data payload.

  • AAL5—Designed to support variable bit-rate data services. AAL5 is essentially a raw cell, 48 octets of pure payload. Compared with AAL3/4, you lose error recovery and built-in retransmission, but this can be handled at upper protocol layers, such as TCP/IP. Because sequence numbers and CRCs did not need to be calculated, this simplified processing and implementation.
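The per-cell accounting in the list above reduces to simple arithmetic. The sketch below tabulates it, ignoring AAL5's per-packet trailer (which is carried in the final cell of a packet rather than in every cell); the dictionary and function names are invented for this example:

```python
CELL, HEADER = 53, 5  # octets: total cell size and header size

# Octets of AAL overhead taken out of the 48-octet payload, per the
# descriptions above (AAL5 carries 48 octets of pure payload per cell).
AAL_OVERHEAD = {"AAL1": 1, "AAL2": 3, "AAL3/4": 4, "AAL5": 0}

def efficiency(aal):
    """Return (data octets per cell, fraction of the 53-octet cell)."""
    data = CELL - HEADER - AAL_OVERHEAD[aal]
    return data, data / CELL

for aal in AAL_OVERHEAD:
    data, eff = efficiency(aal)
    print(f"{aal}: {data} data octets per cell ({eff:.1%})")
```

Run it and the "final nail in the coffin" for AAL3/4 is plain: 44 of 53 octets carry data, versus 48 of 53 for AAL5, on top of AAL5's simpler processing.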

Guaranteed Service Levels

One of the original design goals of ATM was the capability to efficiently provide bandwidth to both time- and delay-sensitive services, such as voice and video, and to loss-sensitive services, such as computer data. To guarantee those levels of quality of service (QoS), several service classes for ATM have been defined.

Constant Bit Rate (CBR)

The CBR service class is intended for real-time applications, those applications sensitive to delay and delay variation, as would be appropriate for voice and video applications. Time-division–multiplexed traffic is extremely sensitive to delay and delay variation. Any cells that are delayed beyond the value specified by cell transfer delay (CTD) are assumed to be of significantly less value to the application.

Real-Time VBR

The real-time VBR service class is intended for real-time applications that are sensitive to delay and delay variation, such as interactive compressed voice and video applications. Sources are expected to transmit at a rate that varies with time—that is, bursty traffic. Cells that are delayed beyond the value specified by CTD are assumed to be of significantly less value to the application.

Non–Real-Time VBR

The non–real-time VBR service class is intended for non–real-time applications that have bursty traffic. Those applications that are bursty are slightly less sensitive to delay, such as video playback, video training, and so on. Non–real-time VBR is used where interactivity is not an issue; some types of conversations are insensitive to delay, while others are very sensitive. An electronic mail message, for example, can be held up for 20 or 30 seconds along the way without terrible consequence. A telephone conversation, on the other hand, is very sensitive to delay; a 20-second wait between the time you speak and the time the listener hears you speak would make the medium unusable. For those cells that are transferred, VBR expects a bound on the cell transfer delay.

Unspecified Bit Rate (UBR)

The UBR service class is intended for delay-tolerant or non–real-time applications that are not sensitive to delay and delay variation, such as traditional computer communications. Sources are expected to transmit in short bursts of cells. UBR service is known as a "best-effort service" that does not specify bit rate or traffic parameters. There is no guaranteed QoS with UBR. UBR is subject to increased cell loss and the discard of whole packets.

Available Bit Rate (ABR)

ABR, like UBR, is also a best-effort service, but it differs in that it is a managed service that guarantees a minimum cell rate (MCR) and low cell loss.
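The CLP bit described earlier is what lets a switch enforce these service distinctions under congestion. The toy buffer policy below is an invented sketch, not how any particular switch works: when the buffer is full, discard-eligible (CLP=1) cells go first, and CLP=0 cells are protected for as long as possible:

```python
def admit(queue, cell, capacity):
    """Toy congestion policy: admit a cell to a bounded buffer,
    sacrificing CLP=1 (discard-eligible) cells first when full."""
    if len(queue) < capacity:
        queue.append(cell)
        return True
    if cell["clp"] == 1:
        return False                  # drop the new discard-eligible cell
    for i, queued in enumerate(queue):
        if queued["clp"] == 1:
            queue[i] = cell           # displace a CLP=1 cell already queued
            return True
    return False                      # buffer is full of CLP=0 cells

buf = []
admit(buf, {"clp": 1, "id": 1}, capacity=1)
admit(buf, {"clp": 0, "id": 2}, capacity=1)  # displaces the CLP=1 cell
print(buf)
```

As the CLP discussion noted, CLP=0 is a preference, not a guarantee: the last return shows that even high-priority cells are lost once the buffer holds nothing but CLP=0 traffic.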

Wireless

IEEE standard 802.11 includes two major physical-layer standards: direct-sequence (DS) spread spectrum and frequency-hopping (FH) spread spectrum. Both operate in 83.5MHz of unlicensed spectrum in the 2.4GHz band.

The Strange History of Frequency Hopping

Oddly enough, the idea of frequency hopping was patented by 1940s Hollywood heartthrob Hedy Lamarr as a way of preventing American torpedoes from being jammed by enemy ships. She and composer George Antheil hold U.S. patent number 2,292,387, filed June 10, 1941, describing the technology using piano tones. They called it a "secret communication system." Its most common application today is to secure cell phone conversations.

Although it is frustrating for the industry to deal with two 802.11 radio standards, there are sound reasons for them. FH systems provide greater scalability and better protection from radio-frequency interference, while DS systems provide about 20 percent better per-station performance and slightly greater transmission range.

DS has the lure of a fast-emerging upgrade to the existing 802.11 standard that will deliver a data rate of 11Mbps. Note that while vendors might tout these products as offering Ethernet speeds, don't confuse data rate with throughput. This technology offers throughput that is only about 60 percent of Ethernet's, although it is likely to improve in the future.

FH uses a predetermined method of rapid frequency switching to facilitate secure transmission.
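The "predetermined method" can be illustrated with a shared-seed sketch. Real 802.11 FH uses standardized hop patterns rather than a pseudo-random generator, and the 79-channel count assumed here matches the common 2.4GHz FH channelization; the function name is invented for this example:

```python
import random

NUM_CHANNELS = 79  # assumed 1MHz hop channels in the 2.4GHz band

def hop_sequence(seed, length):
    """Both radios derive the same channel order from a shared seed,
    so they hop together without ever transmitting the pattern."""
    rng = random.Random(seed)
    return [rng.randrange(NUM_CHANNELS) for _ in range(length)]

tx = hop_sequence(seed=7, length=8)
rx = hop_sequence(seed=7, length=8)
print(tx == rx)  # the two radios stay in step
```

An eavesdropper who does not know the pattern sees only brief bursts scattered across the band, which is the security property the paragraph above refers to.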
