Frame Relay Topologies and Congestion Control

With the Frame Relay design parameters covered, the next step is to understand the various Frame Relay topologies.

Partial-Mesh and Full-Mesh Frame Relay Designs

Frame Relay networks provide several virtual circuits that form the basis for connections between stations (routers) that are attached to them. The resulting set of interconnected devices forms a private Frame Relay group. These groups can be either fully interconnected with a complete mesh of virtual circuits, or only partially interconnected. In either case, each virtual circuit is uniquely identified at each Frame Relay interface by a DLCI.

From an architectural point of view, Frame Relay supports both partial-mesh and full-mesh topologies. Because of the permanent nature of the connections, a partial mesh contains physical connections to some sites but not to all, creating an any-to-some structure. A full mesh clearly requires more resources because any-to-any connectivity demands that every one of the n sites be connected to every other. The number of required connections can be calculated with the following formula:

  • Number of physical connections = n x (n – 1) / 2, where n is the number of connected routers.
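The formula above can be checked with a short Python snippet; the function name is illustrative:

```python
def full_mesh_pvcs(n: int) -> int:
    """Number of connections needed to fully mesh n routers: n * (n - 1) / 2."""
    return n * (n - 1) // 2

# A 5-router full mesh needs 10 PVCs; adding a sixth router raises that to 15.
print(full_mesh_pvcs(5))   # 10
print(full_mesh_pvcs(6))   # 15
```

Note how quickly the count grows: doubling the sites from 5 to 10 more than quadruples the number of virtual circuits, which is why large designs tend toward partial meshes.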

Regardless of the Frame Relay network topology, from an enterprise standpoint, two main configurable connection types exist to the Frame Relay network: point-to-point and point-to-multipoint configurations, which are addressed in detail in Chapter 16.

User and Frame Relay Switch Operations Under Congestion

The existence of virtual circuits and statistical multiplexing in Frame Relay requires sophisticated methods of dealing with congestion. The Frame Relay specifications provide guidelines and rules on how the user should react to forward explicit congestion notifications (FECNs) and backward explicit congestion notifications (BECNs). The Frame Relay switch at the ingress UNI must exercise caution in the amount of traffic that it permits to enter the network. To prevent severe congestion, measures are needed not only at the UNI but also at the switch's ingress point, which should know when to apply rate adaptation control.

The software at the ingress point should be informed and fast enough to implement a remedy before the traffic load becomes a problem. There are no formal rules for the number of buffers, traffic, and throughput; however, the unofficial rule is that the smaller the queue, the lower the delay and the better the response time.

Two main methods control congestion in Frame Relay networks using explicit notification:

  • Rate adaptation algorithm— The rate adaptation algorithm uses a system of counters and certain ratios of the number of bits to perform rate adjustments. This system is also called the leaky-bucket algorithm. The algorithm maintains a running count of the cumulative number of bits sent during a measurement interval. The counter decrements at a constant rate, to a minimum value of 0, and increments by 1 per bit sent, up toward the threshold value.[2] When the predefined threshold is exceeded, the switch sets the FECN and BECN bits of passing frames to 1, which notifies both parties that the direction is experiencing congestion.
  • Consolidated link-layer management (CLLM)— Another congestion method involves using the CLLM message. CLLM transmits management messages over DLCI 1007 (or DLCI 1023 in T1.618), which is reserved for it. Thus, DLCI 1007 (1023) notifies the edge switches that congestion has occurred, and the message contains a list of affected virtual connections. The edge switches then set the FECN and BECN bits of the appropriate frames or issue another CLLM to end devices that support CLLM.
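The counter behavior described for the leaky-bucket algorithm can be sketched in Python. This is a toy model, not T1.618 or any vendor implementation; the class, parameter names, and the chosen threshold and drain values are illustrative:

```python
class LeakyBucket:
    """Toy model of the rate adaptation (leaky-bucket) counter.

    The counter rises by 1 per bit admitted and drains at a constant
    rate, to a minimum of 0. When it crosses the threshold, the switch
    marks passing frames with FECN/BECN.
    """

    def __init__(self, threshold_bits: int, drain_bps: int):
        self.threshold = threshold_bits
        self.drain_bps = drain_bps
        self.level = 0  # running count of bits in the measurement interval

    def drain(self, seconds: float) -> None:
        # Counter decrements at a constant rate, never below 0.
        self.level = max(0, self.level - int(self.drain_bps * seconds))

    def admit(self, frame_bits: int) -> dict:
        # Counter increments by the number of bits admitted.
        self.level += frame_bits
        congested = self.level > self.threshold
        # Above threshold: set FECN toward the congested direction and
        # BECN on frames flowing back toward the sender.
        return {"fecn": congested, "becn": congested}

bucket = LeakyBucket(threshold_bits=12_000, drain_bps=1_536_000)
print(bucket.admit(8_000))   # below threshold: no marking
print(bucket.admit(8_000))   # threshold exceeded: FECN/BECN set
```

In a real switch the FECN and BECN bits are set in frames traveling in opposite directions, as Figure 15-4 shows; the single return value here simply flags that marking would begin.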

Figure 15-4 illustrates the first method, the rate adaptation algorithm. The first Frame Relay network switch (called the Frame Relay Cloud [FRC] switch) experiences congestion. For this illustration, assume that the router connected to edge switch A sends data through the FRC switch to both edge switch B and the router connected to edge switch B. Then assume that additional traffic arrives from the Frame Relay network at edge switch B. If the FRC switch and edge switch B are connected through a T1 and, at some point, the overall traffic exceeds 1.5 Mbps, the FRC switch notifies all switches of the congestion. For a short period of time, the buffers in the FRC switch overflow and frames are dropped, which is why the FRC switch sets the FECN and BECN bits to 1 in the manner shown in the figure. When the FRC switch reaches a certain threshold, it sets FECN = 1 in the frames that come from edge switch A and are forwarded by the FRC switch to edge switch B. At the same time, the FRC switch sets BECN = 1 in the frames that are transmitted from edge switch B to edge switch A to signal that the network is experiencing congestion in the opposite direction.


Figure 15-4 The Frame Relay Congestion Mechanism and Use of FECNs and BECNs

There are no standard criteria for setting BECNs and FECNs, so different providers can choose different criteria for setting these bits. In most cases, router A lowers its transmission rate if the incoming frames contain BECN = 1.

American National Standards Institute (ANSI) Annex A in T1.618 defines guidelines for the use of BECNs and FECNs by the user and the network. Under this scheme, the FRC switch does not set the FECN and BECN bits directly. Instead, the FRC switch generates a CLLM to edge switches A and B, and these switches then decide either to set the BECN or FECN bits or to generate another CLLM message to routers A and B. The CLLM messages are incompatible with the initial LMI specification because DLCI 1023 is used, but they are compatible with the Annex D specification.

Congestion and Windowing

Using windowing to manage congestion is suggested in the book ISDN and SS7: Architectures for Digital Signaling Networks by Ulysses D. Black. The basic approach resembles the windowing mechanism in the TCP/IP stack, but combines the FECNs and BECNs with the sliding window technique. Unlike TCP, the sliding window technique reduces and increases the window size by a factor of 0.125, depending on network conditions. In Cisco routers, windowing is configured through the parameter K, the maximum number of I-frames that are either outstanding for transmission or transmitted but not yet acknowledged. The value of K ranges from 1 to 127 frames, with a default of 7. The window numbering is based on 2 raised to the power n; if n = 3, for example, the windows are numbered from 0 to 7.
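One plausible reading of the 0.125 adjustment is a multiplicative scaling of the window. The sketch below is an illustrative policy under that assumption, not Black's algorithm or a Cisco implementation; the function name and the exact grow/shrink rules are hypothetical:

```python
def adjust_window(k: int, becn_seen: bool, k_max: int = 127) -> int:
    """Scale the I-frame window K up or down by a factor of 0.125,
    depending on whether BECN-marked frames were received.
    K stays within its configured bounds of 1..k_max (default 127)."""
    if becn_seen:
        k = min(k - 1, int(k * (1 - 0.125)))   # shrink under congestion
    else:
        k = max(k + 1, int(k * (1 + 0.125)))   # grow when the path is clear
    return min(max(k, 1), k_max)

k = 7                                   # Cisco default window
k = adjust_window(k, becn_seen=True)    # congestion signaled: shrink
k = adjust_window(k, becn_seen=False)   # congestion cleared: grow back
print(k)   # 7
```

The `min`/`max` guards ensure the window always moves by at least one frame per adjustment, since an 0.125 scaling of a small window would otherwise round to no change.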

Frame Relay Performance Criteria

Designing a Frame Relay network requires setting objectives for the future design, conducting capacity planning, and developing the ability to measure performance against different criteria. Section 4 of T1.606 contains several definitions of Frame Relay performance parameters. The following sections summarize these performance parameters.


Throughput

The term throughput is defined as the number of protocol data units (PDUs) successfully transferred (FCS indicates success) in one direction, per a predefined time period (measurement interval) over a virtual connection. For this definition, the PDU is considered to be all data between the flags of the Frame Relay frame (see Figure 14-4).

Transit Delay

The transit delay is a measurement of the time it takes to send a frame across the link between two points (a DCE, a Frame Relay access device [FRAD], or a router). The delay is a function of the access rate of the link, the distance, and the size of the frame. A rough estimate can be obtained by using the following equation:

  • Delay (seconds) = size (bits) / link access rate (bps)

Transit delay is measured between boundaries, which can be between two adjacent DCEs, two networks, and so forth. Based on the boundary, the measurement starts when the first bit of the PDU leaves the source (t1), and ends when the last bit of the PDU crosses the other party's boundary (t2). The transit delay can be measured by the following equation:

  • Transit delay = (t2) – (t1)

The virtual circuit transit delay is the sum of all delays across all boundaries of the virtual connection.
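The equations above, together with the summation rule for the virtual circuit, translate directly into a short calculation. The function names are illustrative:

```python
def serialization_delay(size_bits: int, access_rate_bps: int) -> float:
    """Delay (seconds) = size (bits) / link access rate (bps)."""
    return size_bits / access_rate_bps

def vc_transit_delay(per_hop_delays: list) -> float:
    """Virtual circuit transit delay: the sum of the (t2 - t1) delays
    measured across each boundary of the virtual connection."""
    return sum(per_hop_delays)

# A 1500-byte frame clocked onto a 64-kbps access link:
d = serialization_delay(1500 * 8, 64_000)
print(f"{d * 1000:.1f} ms")   # 187.5 ms

# Total delay across three boundaries of a virtual connection:
total = vc_transit_delay([d, 0.020, 0.005])
```

The example shows why access rate dominates on slow links: the same frame on a T1 (1.536 Mbps) serializes in under 8 ms.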

Residual Error Rate

Residual error rate (RER) is synonymous with the undetected error ratio: the ratio of the number of bits received incorrectly but undetected to the total number of bits sent. RER is measured through the exchange of Frame Relay service data units (SDUs), or FSDUs. This measurement can be calculated from the following formula:

  • RER = 1 – (total correct delivered SDUs / total offered SDUs)
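The RER formula is a one-liner in Python; the function name and the sample counts are illustrative:

```python
def residual_error_rate(delivered_ok: int, offered: int) -> float:
    """RER = 1 - (total correctly delivered SDUs / total offered SDUs)."""
    return 1 - delivered_ok / offered

# If 999,990 of 1,000,000 offered FSDUs arrive correctly, RER is about 1e-5:
rer = residual_error_rate(999_990, 1_000_000)
print(rer)
```

A perfect exchange (all offered SDUs delivered correctly) yields an RER of 0.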

RER is an important consideration when estimating future bandwidth demand. The future design must account for these parameters when evaluating the service offerings from different providers.

At the same time, RER must be correlated with the user's actual throughput and CIR. If the user consistently exceeds the CIR agreement, a high RER is expected. Conversely, if the design parameters are not exceeded, a lower RER is expected.

Other Performance Parameters

The following parameters are defined by the ITU-T, and are referred to as quality of service (QoS) parameters that affect network performance:

  • Delivered erroneous frames— The number of frames that are delivered when one or more bits in the frame are found to be erroneous.
  • Delivered duplicate frames— The number of frames delivered twice or more.
  • Delivered out-of-sequence frames— The number of frames that are not in the expected sequence.
  • Lost frames— The number of frames not delivered within the predefined time period.
  • Misdelivered frames— Frames delivered to the wrong destination.
  • Switched virtual call establishment delay and clearing delay— These refer respectively to the time required to establish and clear the call across the C-plane.
  • Premature disconnect— Describes the loss of a PVC.
  • Switched virtual call clearing failure— Describes a failure to tear down the switched virtual call.
