3.4 Fibre Channel
As the first network architecture to implement storage networking applications successfully, Fibre Channel has faced substantial challenges in product development, standardization, interoperability, and market acceptance. It has also achieved technological breakthroughs in the areas of gigabit transport and upper layer protocol support for SCSI-3, which ironically have made it vulnerable to competing storage network technologies. As the pioneer of storage networking, Fibre Channel has had to create its own vocabulary, and this, in turn, has made it difficult for customers to understand, deploy, and support. The basic lexicon of Fibre Channel is reviewed here.
3.4.1 Fibre Channel Layers
Fibre Channel standards are developed in the National Committee for Information Technology Standards (NCITS) T11 standards body, which has defined a multilayer architecture for the transport of block data over a network infrastructure. As shown in Table 3-1, Fibre Channel layers are numbered from FC-0 to FC-4.
Table 3-1 Fibre Channel Layered Architecture

Fibre Channel Layer | Layer Title                    | Comments
FC-4                | Upper layer protocol interface | SCSI-3, IP, VI, and so on
FC-3                | Common services                | Under development
FC-2                | Data delivery                  | Framing, flow control, service class
FC-1                | Ordered sets/byte encoding     | 8b/10b encoding, link controls
FC-0                | Physical interface             | Optical/electrical, cable plant
The upper layer, FC-4, establishes the interface between the Fibre Channel transport and upper level applications and operating system. For storage applications, FC-4 is responsible for mapping the SCSI-3 protocol for transactions between host initiators and storage targets. The FC-3 layer is still in standards development, and includes facilities for data encryption and compression. The FC-2 layer defines how blocks of data handed down by the upper level application are segmented into sequences of frames for handoff to the transport layers. This layer also includes class-of-service implementations and flow control mechanisms to facilitate transaction integrity. The lower two layers, FC-1 and FC-0, focus on the actual transport of data across the network. FC-1 provides facilities for encoding and decoding data for shipment at gigabit speeds, and defines the command structure for accessing the media. FC-0 establishes standards for different media types, allowable lengths, and signaling.
Collectively, the Fibre Channel layers fall within the first four layers of the OSI model: physical, data link, network, and transport. Fibre Channel assumes a single unpartitioned network and homogeneous address space for the network fabric. Although theoretically this address space can be quite large (15.5 million addresses in a switched fabric), a single network space has implications for large Fibre Channel SANs. Without network segmentation, the entire fabric is potentially vulnerable to disruption in the event of failures.
3.4.2 FC-0: Fibre Channel Physical Layer
As the first successful serial gigabit transport, Fibre Channel has defined the basic principles and methods required for data integrity over high-speed serial links. At the physical layer, these include standards for gigabit signaling, supported cable types, allowable cable distances, and physical interfaces. Because Gigabit Ethernet has borrowed heavily from the Fibre Channel physical layer standards, it is useful to understand what they provide.
Unlike SCSI parallel cabling, a serial network cabling scheme does not have a separate control line to signal the rate of data transmission so that the recipient can accurately capture data. In a serial implementation, this clock signaling must be embedded in the bit stream itself. Fibre Channel uses an FC-1-defined byte-encoding scheme and clock and data recovery (CDR) circuitry to recover the clock signal from the serial bit stream. The physical layer standards dictate a bit error rate of no more than 10⁻¹² for gigabit transmission, or a maximum of 1 bit error every 16 minutes over 1-Gbps media. To meet or exceed this rigorous standard, the physical interfaces and cabling must minimize the amount of jitter, or timing deviation, that may occur along the physical transport.
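The "1 bit error every 16 minutes" figure follows directly from the numbers above; this short sketch checks the arithmetic, assuming the 1.0625-Gbps line rate and the 10^-12 bit error rate cited in this section.

```python
# At a 10^-12 bit error rate, one errored bit is expected roughly every
# 10^12 transmitted bits. The values below come from this section; the
# arithmetic is purely illustrative.

line_rate_bps = 1.0625e9      # Fibre Channel 1G serial line rate
ber = 1e-12                   # maximum allowed bit error rate

bits_per_error = 1 / ber                           # 1e12 bits between errors
seconds_per_error = bits_per_error / line_rate_bps
minutes_per_error = seconds_per_error / 60

print(f"~{minutes_per_error:.1f} minutes per bit error")  # ~15.7 minutes
```

At 1.0625 Gbps this works out to roughly 15.7 minutes per errored bit, consistent with the "every 16 minutes" rule of thumb in the text.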
Deviations from the original clock signaling, or jitter, may be the result of natural propagation delays through fiber-optic or copper cabling as well as unnatural transients from poorly designed interfaces, laser optics, circuit boards, or power supplies. Jitter may be measured and represented in graphical form by an eye diagram on a test scope, as illustrated in Figure 3-8. The crossover points or intersections forming the eye represent signaling transitions to high or low voltages. Ideally, all transitions should occur at precisely the same interval. If this were the case, the eye would be perfectly formed and the CDR circuitry could recover all data bits with no bit errors whatsoever. In reality, some deviation will always be present. If the jitter is too extreme, the CDR will miss one or several bits, resulting in the corruption of data. This will in turn trigger recovery routines at the higher layers.
Figure 3-8 An eye diagram showing timing deviations in a gigabit stream.
If jitter reduction is essential for Fibre Channel's 1.0625-Gbps clock rate, it is even more essential for Gigabit Ethernet's faster 1.25 Gbps. The faster the clock, the greater the statistical occurrence of bit errors over the same time span. A faulty transceiver, substandard fiber-optic cabling or connectors, exceeding cable distance guidelines, improperly shielded copper components, or simply bad product design can introduce system instability at the physical layer.
For cable plant, Fibre Channel accommodates both copper and fiber-optic cabling. Copper cabling is typically twin axial as opposed to shielded twisted pair, and is deployed for intracabinet and intercabinet usage. Intracabinet copper cabling may be used within an enclosed 19-inch rack for connecting storage devices or HBAs to Fibre Channel hubs or switches. The maximum length of intracabinet copper is 13 m. Intercabinet copper cabling may be used externally to 19-inch racks, to a maximum of 30 m. Both varieties are problematic because any copper cable plant is susceptible to electromagnetic interference (EMI) and may create ground loop problems between devices.
For both Fibre Channel and Gigabit Ethernet, fiber-optic cabling is the preferred cable plant because of its immunity to EMI. Fiber-optic cable types are distinguished by "mode," or by the frequencies of light waves that the optical cable supports.
Multimode cabling is used with shortwave laser light and has either a 50-micron or 62.5-micron core with 125-micron cladding. The reflective cladding around the core restricts light to the core. As shown in Figure 3-9, a shortwave laser beam is composed of hundreds of light modes that reflect off the core-cladding boundary at different angles. This dispersion effect reduces the total distance at which the original signal can be reclaimed. In Fibre Channel configurations, multimode fiber supports 175 m with 62.5/125-micron cable, and supports 500 m with 50/125-micron cable.
Figure 3-9 Multimode fiber-optic cable.
Single-mode fiber is constructed with a 9-micron core and 125-micron cladding. Single mode is used to carry long-wave laser light, which has little of the dispersion effect of multimode lasers because the diameter of the core is matched to the wavelength of the light. With a much smaller diameter core and a single-mode light source, single-mode fiber supports much longer distances, currently as much as 10 km at gigabit speeds.
At either end of the cable plant, transceivers or adapters are used to bring the gigabit bit stream onto the circuit boards of HBAs or controller cards. Gigabit interface converters, or GBICs, connect the cabling to the device interface. Small-form factor GBICs are steadily replacing the older SC connectors, because they enable higher port density for switches. Optical transceivers may be permanently mounted onto the HBA, storage, or switch port, or may be removable to facilitate changes in media type or to service a failed unit.
3.4.3 FC-1: Fibre Channel Link Controls and Data Encoding
Suppose that the cable plant, transceivers, and interfaces all provided a stable physical layer transport for gigabit transmission. Turning bits of serial data into intelligible bytes is still an issue. If raw data bytes were dropped serially onto a gigabit transport, it would be impossible to tell where one byte ended and another began. Sending a stream of hex 'FF' bytes, for example, would create a sustained direct current (DC) voltage on the link, making it impossible to recover the embedded clock signaling needed to establish byte boundaries.
Fibre Channel standards have addressed this problem by using a byte encoding algorithm first developed by IBM. The 8b/10b encoding method converts each 8-bit data byte into two possible 10-bit characters. To avoid sustained DC states, each of the two 10-bit characters will have no more than six total ones or zeros. Of all the possible 10-bit characters that can be generated by standard 8-bit data bytes, about half will have an equal number of ones and zeros. The 8b/10b encoding scheme thus ensures a healthy mix of ones and zeros that allows recovery of the embedded clock signaling and thus recovery of data.
Because the 8b/10b encoder generates two different 10-bit characters for each byte, which one should be used for data transmission? This selection is made based on the running disparity of the character stream (Figure 3-10). If a 10-bit character has more ones than zeros, it has positive disparity. If it has more zeros than ones, it has negative disparity. An equal number of ones and zeros results in neutral disparity. The concept of running disparity is key to maintaining a more consistent distribution of ones and zeros in the bit stream. A 10-bit data character with positive disparity should be followed by a character with neutral disparity (which leaves the running disparity positive), or by a character with negative disparity (which would leave the running disparity negative). This alternation between positive and negative disparity patterns ensures that no sequential combination of 10-bit characters will result in persistent ones or zeros bit states.
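The selection rule above can be sketched in a few lines. The two K28.5 encodings below are the real values from the 8b/10b tables; the "D_example" entry is a made-up balanced placeholder, not a real table row, and the encoder is a toy model rather than a full 8b/10b implementation.

```python
# Toy model of disparity-based character selection: each symbol has an
# RD- and an RD+ 10-bit form, and the transmitter picks one based on the
# current running disparity (RD).

def disparity(bits: str) -> int:
    """+1 if more ones, -1 if more zeros, 0 if balanced."""
    ones, zeros = bits.count("1"), bits.count("0")
    return (ones > zeros) - (ones < zeros)

# (RD- form, RD+ form) for each symbol
CODES = {
    "K28.5":     ("0011111010", "1100000101"),  # real 8b/10b encodings
    "D_example": ("1101000110", "1101000110"),  # hypothetical, balanced
}

def encode(symbols, rd=-1):
    """Pick the RD- or RD+ form of each symbol, tracking running disparity."""
    out = []
    for name in symbols:
        neg, pos = CODES[name]
        bits = neg if rd < 0 else pos
        out.append(bits)
        d = disparity(bits)
        if d != 0:
            rd = d   # a nonneutral character sets RD to its own disparity
    return out, rd

stream, rd = encode(["K28.5", "D_example", "K28.5"])
# The second K28.5 is sent in its RD+ form because RD is positive by then.
```

Note how the two K28.5 forms are bit-for-bit complements: whichever one the running disparity selects, the ones/zeros balance of the stream is pulled back toward neutral.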
Figure 3-10 8b/10b encoding logic.
For all the 10-bit characters that can be generated by standard data bytes, none have more than four ones or zeros in sequence. Running disparity maintains this minimal occurrence for data characters. Some nonstandard 10-bit combinations, however, result in five ones or zeros in sequence. These characters are reserved as special characters and are inserted into the character stream as a means to establish boundaries between 10-bit characters. In Fibre Channel standards, the presence of a special "K28.5" character is monitored by the CDR circuitry. As soon as five ones or zeros in sequence are detected, the CDR can begin buffering the stream in 10-bit groups that can then be converted accurately to valid 8-bit data bytes.
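The alignment trick just described can be illustrated with a toy receiver: it hunts for the K28.5 "comma" pattern in a raw bit string and uses its position to fix the 10-bit character boundaries. The bit strings here are illustrative, not a captured stream.

```python
# Sketch of comma-based word alignment. Five-in-a-row runs appear only
# in special characters such as K28.5, so finding the comma pattern
# tells the receiver where 10-bit characters begin.

K28_5_NEG = "0011111010"          # K28.5, RD- form
COMMA = "0011111"                 # the comma pattern within that form

def align(bitstream: str) -> int:
    """Return the offset of the first comma, or -1 if absent."""
    return bitstream.find(COMMA)

# A stream with 3 junk bits, then K28.5 followed by arbitrary bits:
stream = "101" + K28_5_NEG + "0101010101"
offset = align(stream)                      # 3: characters start here
first_char = stream[offset:offset + 10]     # recovers K28.5 itself
```

Once the offset is known, the receiver simply slices every subsequent 10 bits as one character until alignment is lost again.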
Fibre Channel standards for the FC-1 layer leverage the 8b/10b encoding method to create a command syntax known as ordered sets. The 8b/10b encoding scheme and running disparity ensure that the embedded gigabit signaling can be recovered and that data bytes can be successfully retrieved. Ordered sets are composed of four 10-bit characters, or 40 bits that constitute a transmission word. The ordered set leads with the special K28.5 character to indicate that the transmission word is a link-layer command or a signal of a change in state. The three data characters following the K28.5 character define the function of the ordered set; for example, start of frame (SOF), end of frame (EOF), initialization, and class of service.
Gigabit Ethernet has borrowed the ordered set command and signaling structure from Fibre Channel, but as we see later, uses fewer commands. Fibre Channel ordered sets are divided into frame delimiters, primitive signals, and primitive sequences. Frame delimiters mark the frame boundaries and may include frame sequencing information for multiframe transmissions. Primitive signals include the IDLE ordered set, which is used to maintain CDR when no user data is present on the link. Primitive sequences are ordered sets that must occur at least three times on the link before any action is taken (for example, a loop initialization or LIP primitive). Fibre Channel standards define more than 20 ordered sets for frame delimiting, more than 10 for primitive signals, and more than 15 for primitive sequences. Because only a single instance of a primitive signal is required to initiate an action, the CDR mechanism for gigabit transmission must be very precise.
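The distinction between primitive signals (one instance triggers an action) and primitive sequences (three consecutive instances required) can be modeled directly. The recognizer below is a sketch; only the IDLE and LIP names come from this section.

```python
# Toy recognizer for the "three consecutive occurrences" rule: primitive
# sequences such as LIP act only after three copies in a row, whereas a
# primitive signal such as IDLE acts on a single instance.

PRIMITIVE_SEQUENCES = {"LIP"}

def actions(words):
    """Yield the ordered sets that would actually trigger an action."""
    run_name, run_len = None, 0
    for name in words:
        run_len = run_len + 1 if name == run_name else 1
        run_name = name
        if name in PRIMITIVE_SEQUENCES:
            if run_len == 3:          # fire once, on the third copy
                yield name
        else:
            yield name                # signals act immediately

triggered = list(actions(["IDLE", "LIP", "LIP", "IDLE", "LIP", "LIP", "LIP"]))
# The first LIP pair never reaches three in a row, so only the final run fires.
```

This also shows why CDR precision matters: a single mis-decoded word can break a run of primitive sequences, or worse, falsely complete one.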
3.4.4 FC-2: Fibre Channel Framing, Flow Control, and Class of Service
The data bytes that were encoded by FC-1 for reliable transmission on the physical media were handed down by FC-2 as a series of frames. The FC-2 layer receives blocks from the upper layer protocol (for example, FCP) and subdivides those into sequences of frames that can be reassembled on the other end. Frames are grouped into sequences of related frames. A database record, for example, may be written to disk as a single sequence of frames. The sequence is the smallest unit of error recovery in Fibre Channel. If a transmission word within a frame is corrupted and cannot be recovered, the entire sequence of frames must be retransmitted. At gigabit speeds, it is more efficient simply to retransmit an entire sequence of frames than to buffer and provide recovery constantly at the frame level. In the hierarchy of frame delivery at FC-2, multiple sequences of frames can occur within a single exchange. The exchange binding between two communicating devices maximizes utilization of the link between them and avoids constant setup and teardown of logical connections.
Fibre Channel framing allows for a variable-length frame with a payload of 0 to 2,112 bytes. Because the Fibre Channel maximum frame size does not map directly to Ethernet framing, issues can arise when Fibre Channel is tunneled over IP/Ethernet. The basic format of the Fibre Channel frame is shown in Table 3-2. The ordered sets used for the SOF and EOF delimiters indicate where the frame falls within a sequence of frames, as well as the class of service required. The header field contains the destination and source Fibre Channel addresses as well as the payload length. The cyclic redundancy check (CRC) is calculated before the data is run through the 8b/10b encoder, with the 4-byte CRC itself later encoded along with the rest of the frame contents. At the receiving end, the CRC is recalculated and compared against the frame's CRC to ensure data integrity.
Table 3-2 Fibre Channel Frame Format

SOF    | Header  | Data Field    | CRC     | EOF
1 word | 6 words | 0-2,112 bytes | 4 bytes | 1 word
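The attach-and-verify CRC flow described above can be sketched with Python's standard library. `zlib.crc32` implements the common CRC-32 polynomial; the exact Fibre Channel CRC details (bit ordering, initial value, placement relative to 8b/10b encoding) are glossed over here, so treat this as an illustration of the mechanism, not of the wire format.

```python
# Sketch of frame integrity checking: compute a CRC over header plus
# payload on transmit, recompute and compare on receive.

import zlib

def attach_crc(header: bytes, payload: bytes) -> bytes:
    """Append a 4-byte CRC covering header and payload."""
    body = header + payload
    return body + zlib.crc32(body).to_bytes(4, "big")

def check_crc(frame: bytes) -> bool:
    """Recompute the CRC at the receiver and compare."""
    body, crc = frame[:-4], frame[-4:]
    return zlib.crc32(body).to_bytes(4, "big") == crc

frame = attach_crc(b"\x00" * 24, b"some payload")   # 24-byte dummy header
ok = check_crc(frame)                               # intact frame passes
bad = check_crc(b"X" + frame[1:])                   # corrupt a byte: fails
```

A failed check at this layer is what triggers the sequence-level retransmission described earlier, rather than per-frame recovery.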
The SOF delimiter establishes the class of service that will be used for frame transmission, whereas the EOF delimiter may indicate when that class of service may be terminated. Class of service is used to guarantee bandwidth or to require acknowledgment of frame receipt for secure data transport. Storage applications may require different classes of service, but the vast majority of Fibre Channel transactions are performed with class 3 datagram service.
Class 1 service defines a dedicated connection between two devices (for example, a file server and a disk array) with acknowledgment of frame delivery. Class 1 service can be assumed in a point-to-point connection between two devices because there are no other participants to impose bandwidth demands. Class 1 service through a Fibre Channel fabric, however, requires the fabric switches to establish dedicated data paths between the communicating pair. A 16-port switch, for example, could only support 8 concurrent class 1 sessions. Consequently, class 1 is almost never deployed in SAN applications.
Class 2 service avoids the issue of connection-oriented, dedicated bandwidth, but provides acknowledgment of frame delivery. Frame acknowledgment imposes its own overhead, however, and so impacts the efficiency of link utilization. Like class 1, class 2 service is fully defined in standards but is infrequently used.
Ironically, although storage network applications revolve around mission-critical applications that require the highest degree of data integrity, the most commonly used class of service in Fibre Channel is both connectionless and unacknowledged in terms of frame delivery. Class 3 service in Fibre Channel is analogous to datagram service such as UDP/IP in LAN environments. Frames are streamed from initiator to target with no acknowledgment of receipt. In the early days of Fibre Channel adoption, customers balked at the idea of committing their mission-critical data to a datagram type of service. In practice, however, class 3 gained respectability simply because it worked. As a connectionless protocol, class 3 facilitates the efficient utilization of fabric resources because bandwidth is not hoarded by communicators as in class 1. And by eliminating acknowledgments, class 3 service imposes minimal protocol overhead on the link.
The ability of a datagram class of service to transmit and receive data reliably is predicated on a highly stable and properly provisioned infrastructure. The 10⁻¹² bit error rate mandated by Fibre Channel standards and the thoughtful allocation of bandwidth for storage network resources enable the use of class 3 service for a wide variety of storage applications. This has significant implications for storage networks based on Gigabit Ethernet, which shares the link integrity requirements of Fibre Channel. For contained switch environments such as data centers, a datagram type of service is viable for stable, high-performance data transfer. This is not the case for potentially congested or lossy implementations, such as wide area switched networks.
Other Fibre Channel classes of service include class 4 for virtual circuits and class 6 for acknowledged multicast applications. As with many other Fibre Channel features that may be supported in fabrics, these are still immature in terms of product implementation and have lacked the engineering focus that Gigabit Ethernet has enjoyed.
Class 3 service requires a flow control mechanism to ensure that a target is not flooded with frames and forced to discard them. Fibre Channel flow control is based on a system of credits, with each credit representing an available frame buffer in the receiving device. If, for example, a disk array has 20 frame buffers, a server could stream 20 frames of a sequence in a single burst before waiting for additional credits to be issued by the array. As the array absorbs the 20 frames, the first in sequence are passed to the FC-2 frame reassembly logic for reconstruction into data blocks for FC-4. As individual frames move up the assembly line, buffers are freed for additional inbound frames. The array issues a credit for each newly emptied buffer, allowing the server to send additional frames.
This frame-pacing algorithm based on credits prevents frame loss and reduces the frequency of sequence retransmission over the fabric. For class 3 service in a fabric, the credit relationship is not end to end between storage devices and servers, but between each device and the switch port to which it is attached. Providing adequate buffers on switch ports is essential for minimizing frame discards. For Gigabit Ethernet SANs, port buffering is also an issue, although link-level flow control is implemented differently.
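The credit mechanism in the two paragraphs above reduces to a simple counter, sketched here with the 20-buffer example from the text. This is a toy model of the behavior, not a protocol implementation.

```python
# Toy model of credit-based flow control: the sender may transmit only
# while it holds credits; each credit returned by the receiver (one per
# freed buffer) restores the right to send one more frame.

class CreditedLink:
    def __init__(self, credits: int):
        self.credits = credits        # receiver-advertised frame buffers

    def can_send(self) -> bool:
        return self.credits > 0

    def send_frame(self) -> None:
        if not self.can_send():
            raise RuntimeError("no credit: sender must wait")
        self.credits -= 1             # one receive buffer now occupied

    def receive_credit(self) -> None:
        self.credits += 1             # receiver freed a buffer

link = CreditedLink(credits=20)
for _ in range(20):
    link.send_frame()                 # a burst of 20 frames is allowed
blocked = not link.can_send()         # the 21st frame must wait
link.receive_credit()                 # one buffer drained at the receiver
link.send_frame()                     # one more credit, one more frame
```

Because the sender blocks rather than transmitting into a full receiver, frames are paced instead of discarded, which is exactly what makes unacknowledged class 3 service workable.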
3.4.5 FC-3: Common Services
The FC-3 layer has been a placeholder in Fibre Channel standards as the more basic functions of the other layers have been developed. Because FC-3 sits between the FC-4 upper layer protocol and the FC-2 framing layer, FC-3 would contain services that would be performed immediately prior to handoff to the lower layers. This would include services such as encryption and authentication, although there are currently no such services in Fibre Channel implementations. Arguably, there has not been a lot of incentive to develop such facilities for Fibre Channel, because Fibre Channel SANs presuppose a private, secure environment. Putting storage traffic over metropolitan and wide area networks (MANs and WANs), however, may raise security concerns. This is another area that highlights the advantages of storage traffic over IP, because encryption and authentication tools are readily available to safeguard sensitive storage data.
3.4.6 FC-4: Fibre Channel Upper Layer Protocol
The purpose of engineering a highly reliable physical plant, a rigorous byte encoding scheme, link-layer controls, efficient framing and sequence transmission, viable classes of service, and flow control is, of course, to service the upper layer applications behind which sit end users who are constantly creating and accessing stored data. Although the FC-4 layer standards include support for VI, IP, and other protocols, the most well-developed and most widely used FC-4 protocol is serial SCSI-3 (FCP).
The central task of FCP is to make Fibre Channel end devices appear as standard SCSI entities to the operating system. For host systems, the FCP function is embedded in the Fibre Channel HBA and the device driver supplied by the manufacturer. This allows Windows Disk Administrator, for example, to see Fibre Channel disks as SCSI-addressable storage resources. The operating system does not need to distinguish between storage resources that are direct-SCSI attached, ATA/IDE attached, or SAN attached. If, alternately, Fibre Channel as a storage networking solution had required changes to Windows, Solaris, or UNIX operating systems, it is doubtful that it ever would have been deployed. This is because of the much longer development and test cycles required for operating system revisions and the reluctance of customers to introduce additional complexity into their server environments. Just as FC-0 and FC-1 enabled reliable gigabit transmission of data at the physical and link levels, FCP has enabled a reliable protocol interface to the operating system and supported applications.
As shown in Figure 3-11, the upper layer protocol interface supports standard SCSI mapping for the operating system while maintaining Fibre Channel device address mapping for data destinations in the form of logical unit numbers (LUNs) on the target disks. The Fibre Channel frame header holds the 3-byte Fibre Channel network address, with identifying LUN information contained in the frame payload. This tiered mapping is, thankfully, transparent to the end user, whose primary interface is through the drive designation assignable through the operating system's file system/volume management interface.
Figure 3-11 Perspectives on the Fibre Channel storage target.
From the standpoint of the operating system, FCP translates standard SCSI commands into the appropriate SCSI-3 equivalents required for block data transfer over a serial network infrastructure. A SCSI I/O launched by the operating system to read blocks of data from disk, for example, would initiate an FCP exchange between the host and target using command frames known as information units (IUs). Within the exchange session, groups of frames comprising one or more sequences would be used to transport data from target to host. SCSI commands and responses between the operating system and FCP are implemented through the lower layers as serial SCSI FCP functions, as shown in Table 3-3.
Table 3-3 FCP Equivalents to Standard SCSI Functions (from American National Standards Institute [ANSI] T10 FCP-2)

SCSI Function                         | FCP Equivalent
I/O operation                         | Exchange (concurrent sequences)
Protocol service request and response | Sequence (related frames)
Send SCSI command request             | Unsolicited command IU (FCP_CMND)
Data delivery request                 | Data descriptor IU (FCP_XFER_RDY)
Data delivery action                  | Solicited data IU (FCP_DATA)
Send command complete response        | Command status IU (FCP_RSP)
REQ/ACK for command complete          | Confirmation IU (FCP_CONF)
Device drivers for HBAs must translate between conventional SCSI addressing and Fibre Channel device addresses. As a legacy from parallel SCSI, storage devices are identified by a bus/target/LUN triad. The bus is a SCSI chain hung from a specific SCSI port or SCSI adapter card. Multiple SCSI ports on a server require multiple bus designations. The target is a storage device, such as a disk. The logical unit identified by a LUN may represent a logical division of the disk: for example, a disk with two partitions that are accessible from the operating system as drives E: and F:. The device driver of the HBA or storage adapter card must translate this bus/target/LUN designation into a network-addressable identifier so that data can be passed to the appropriate storage target on the SAN. How this is implemented in IP storage environments is examined in the following chapters.
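The driver-level translation just described can be sketched as a lookup table. Everything here (the table entries, the 24-bit addresses, the function name) is invented for illustration; real drivers build this mapping dynamically from fabric discovery.

```python
# Hypothetical sketch of bus/target/LUN translation: the OS addresses
# storage by triad, while the SAN frame carries a 3-byte Fibre Channel
# destination address (D_ID) in the header and the LUN in the payload.

FABRIC_MAP = {
    # (bus, target) -> 24-bit Fibre Channel port address (illustrative)
    (0, 0): 0x010200,
    (0, 1): 0x010300,
}

def to_fabric(bus: int, target: int, lun: int):
    """Return (D_ID for the frame header, LUN for the FCP payload)."""
    return FABRIC_MAP[(bus, target)], lun

d_id, lun = to_fabric(0, 1, 2)   # a drive seen by the OS at bus 0, target 1, LUN 2
# d_id goes in the frame header; the LUN rides inside the FCP_CMND payload.
```

The split matters: the fabric routes only on the 3-byte address, while the LUN is interpreted by the target itself, mirroring the tiered mapping shown in Figure 3-11.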
3.4.7 Fibre Channel Topologies
The three topologies supported by Fibre Channel are significant for IP-based storage networks both in terms of legacy support of Fibre Channel SAN segments and for understanding common features that can assist IP storage solutions for faster time-to-market. Different generations of Fibre Channel end devices may be optimized for specific topology protocols. Accommodating these devices via IP storage switch ports will encourage the transition from Fibre Channel SANs to IP-based SANs. In addition, the stability and demonstrated interoperability of Fibre Channel end devices and the FCP protocol enables IP-based storage networks to leverage the intellectual effort that has been vested in these technologies to date.
As shown in Figure 3-12, Fibre Channel supports direct point-to-point connections between two devices (typically a server and a single storage array); a shared, arbitrated loop topology; and a switched fabric. Gigabit Ethernet can support an analogous point-to-point connection as well as a switched fabric, but in practical implementation has no shared media option. Fibre Channel point to point was commonly deployed for first-generation solutions, but because it supports only two devices, it does not quite qualify as a storage network. Fibre Channel arbitrated loop is similar in concept to Token Ring. Multiple devices (as many as 126 end nodes) can share a common medium, but must arbitrate for access to it before beginning transmission. A Fibre Channel fabric is one or more Fibre Channel switches in a single network. Each device has dedicated bandwidth (200 MBps full duplex for 1-Gbps switches), and a device population of as many as 15.5 million devices is supported. This large number is strictly theoretical, because in practice it has been difficult for Fibre Channel fabrics to support even a few hundred devices.
Figure 3-12 Fibre Channel topologies for point to point, loop, and fabric.
Loop and fabric devices may be supported on a single network. A loop hub with six devices, for example, can be attached to a fabric switch port. Each of the devices registers its presence with the fabric switch so that it can communicate to resources on other switch ports. Such devices are referred to as public loop devices. They can also communicate with each other on the same loop segment without switch intervention. However, they each must arbitrate for access to their shared loop before any data transaction can occur.
One caveat for loop devices on fabrics is the result of the evolution of Fibre Channel device drivers. Not all loop-capable HBAs or storage devices can support fabric attachment. Such devices are known as private loop devices. To support these older loop devices, the fabric switch must provide proxy registration for them so that they become visible to the rest of the network and accessible as storage resources. There are no specific Fibre Channel standards covering this private loop proxy feature, and consequently every switch vendor's implementation is proprietary.
Switched fabrics pose significant issues, many of which are still unresolved. Fabric switches provide a number of services to facilitate device discovery and to adjust for changes in the network infrastructure. Devices register their presence on the switch via a simple name server (SNS), which is essentially a small database with fields for the device's network address, unique World Wide Name (WWN), upper layer protocol support, and so on. When a server attaches to the fabric, it queries the SNS to discover its potential disk targets. This relieves the server from polling 15.5 million possible addresses to discover and establish sessions with storage resources. The SNS table in a stand-alone switch may be fairly small, with only 10 to 30 entries. When multiple switches are connected into a single fabric, however, they must exchange SNS information so that a server anywhere on the network can discover storage. The larger the fabric, the more difficult it becomes to update the collective SNS data and to ensure reliable device discovery.
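The register-then-query behavior of the name server can be captured in a few lines. All addresses, WWNs, and field names below are invented for illustration; a real SNS holds many more attributes and, as noted above, must be synchronized across every switch in the fabric.

```python
# Minimal sketch of simple name server (SNS) behavior: devices register
# their address, WWN, and FC-4 protocol support at fabric login, and an
# initiator queries by protocol instead of polling the address space.

sns = []   # one fabric-wide table; merging it across switches is the hard part

def register(port_id: int, wwn: str, fc4_type: str) -> None:
    sns.append({"port_id": port_id, "wwn": wwn, "fc4": fc4_type})

def query(fc4_type: str):
    """Return port IDs of all devices supporting the given FC-4 protocol."""
    return [e["port_id"] for e in sns if e["fc4"] == fc4_type]

register(0x010100, "20:00:00:e0:8b:00:00:01", "FCP")   # host HBA
register(0x010200, "21:00:00:20:37:00:00:02", "FCP")   # disk array
register(0x010300, "10:00:00:60:69:00:00:03", "IP")    # IP-over-FC node

targets = query("FCP")   # the server learns its candidate SCSI targets
```

One query replaces a scan of millions of possible addresses, which is precisely why distributing a consistent SNS across a multiswitch fabric becomes the scaling bottleneck.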
Another fabric issue for large fabrics is the ability to track changes in the network. Fabric switches provide a registered state change notification (SCN) entity that is responsible for alerting hosts to changes in the availability of resources. If, for example, a server has a session with a target array, it can be proactively notified if the array goes off-line or if the path to it through the fabric is broken. Because a Fibre Channel fabric is one large infrastructure, marginal components that trigger repeated SCNs can be disruptive to the entire network.
Management of Fibre Channel fabrics has evolved over time, but has been hindered by the challenges that a new technology faces. Out-of-band management with more familiar Simple Network Management Protocol (SNMP) protocols has enabled device and configuration management, although these are not as mature as the management platforms used by LAN and WAN networking. In-band management over the Fibre Channel links eliminates the need to have a parallel 10/100 Ethernet network for SNMP management, but is vulnerable to link failures. In-band management in Ethernet and WAN networks is predicated on redundant links. If both data and management traffic ride on the same network links, the failure of a single link would simply reroute traffic to available links in the meshed network. For these environments, provisioning redundant links is relatively inexpensive and simplifies network design and management. Achieving this level of redundancy in Fibre Channel networks is both expensive and awkward to implement. Without redundant links for in-band management, however, the loss of a data path may also be the loss of management traffic. Just when management information is needed the most, it would be unavailable.
Probably the most publicized issue for large Fibre Channel fabrics has been the lack of interoperability between vendor switch products. The standard for switch-to-switch connectivity (NCITS T11 FC-SW-2) defines the connectivity and routing protocol for fabric switches. Switches are joined to each other via expansion ports, or E_Ports, and share routing information through the Fabric Shortest Path First (FSPF) protocol, a variant of the more commonly used Open Shortest Path First (OSPF) protocol for LAN and WAN networks. Although FSPF itself has not presented an overwhelming engineering challenge, competitive interests among Fibre Channel switch vendors have retarded its implementation. Until recently, dominant vendors have been unwilling to accelerate interoperability, fearing that openness would result in loss of market share. This, in turn, has led to absurd fabric designs simply to achieve higher port counts that could easily be accommodated through vendor interoperability. Some SAN designs have attempted to provide a high port count by deploying stacks of 10 or more 16-port switches, with nearly half the switch ports sacrificed for interswitch links. The result is a conglomeration of cabling that still results in a blocking architecture if more than one switch-to-switch transaction is started. With switch interoperability, it would be possible to combine high-port-count director-class switches with departmental 16-port switches for a more efficient deployment (Figure 3-13).
Figure 3-13 An actual vendor example of a higher port count fabric.
One inherent issue for large fabrics is the fact that a fabric is a single network. OSPF in LANs and WANs allows for the subdividing of networks into nondisruptive areas. A disruptive occurrence within a single area does not propagate throughout the entire network. FSPF does not provide this facility. Consequently, as a Fibre Channel fabric grows in population (and importance), it becomes increasingly vulnerable to outages.
Taken collectively, the issues associated with Fibre Channel fabrics are not insurmountable, but overcoming them will require significant engineering resources. The Fibre Channel fabrics that join Fibre Channel end devices have not achieved the level of stability and interoperability already attained by Fibre Channel HBAs, storage arrays, and tape subsystems. This makes the Fibre Channel fabric itself a prime candidate for replacement, which is the stated goal of IP storage solutions.