The At-Distance Site
"At-distance" reflects the needs of many centralized computing locations that use storage equipment placed at remote locations.
The remote equipment can range from small RAID arrays to large disk RAID arrays, tape drives, and tape libraries. Even the central location often prefers to keep its tape library "at distance" from the central site, frequently in a tape-vaulting facility (such as Iron Mountain's). iSCSI permits "natural" access to such remote units, without undue gateways, routing, or conversion from one technology to another. (See Figure 2-9.) Part of this "natural" access is the ability of the servers, the storage controller, or some third-party equipment to create dynamic mirrors at remote locations, which can be "spun off" at any time and then backed up to tape. This permits remote backup without impacting online applications. The remote mirror can be located at the tape-vaulting site, or iSCSI can be used to send it to yet another remote location. This type of process, though possible at a local site, is especially valuable when performed at a secure remote site.
Figure 2-9 The at-distance environment.
Today this remote storage access is done with proprietary protocols, proprietary devices, and often expensive leased lines. In the future it will be done with standard IP protocols, primarily iSCSI, often using carrier-provided "IP tone" interconnects.
The Central Site
The central site will receive iSCSI storage requests from campus department servers, from desktops and laptops, and from satellite locations. Likewise, it will issue storage requests to remote locations for backup and disaster recovery. (See Figure 2-10.) The central environment is considered the high-end processing and storage environment. It has the highest speed requirements, not only on the processor itself but also on the I/O network it uses. Further, it has an overarching need for high reliability, availability, and serviceability (RAS).
Figure 2-10 The campus environment.
In the central site, iSCSI must perform as well as Fibre Channel and meet the same RAS requirements. These difficult but attainable requirements dictate that hosts use top-of-the-line iSCSI HBAs and that those HBAs be configured in tandem so that failover is possible. They also need to operate in parallel so that any required throughput can be achieved. The iSCSI protocol has factored in these requirements and supports a parallel technique known as Multiple Connections per Session (MC/S), which permits multiple host iSCSI HBAs to work as a team, not only for availability but also for maximum bandwidth. The same set of capabilities within the iSCSI protocol also permits iSCSI target devices to perform similar bandwidth and availability functions.
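The MC/S idea can be illustrated with a small sketch. This is not a real iSCSI stack; it is an invented model showing how an initiator might spread commands across the live connections of one session (for bandwidth) and keep going on the survivors when a connection fails (for availability):

```python
# Illustrative sketch only: commands are distributed round-robin across
# the connections of a single iSCSI session (MC/S). If a connection dies,
# its traffic automatically shifts to the remaining live connections.
from dataclasses import dataclass, field


@dataclass
class Connection:
    conn_id: int          # CID: identifies this connection within the session
    alive: bool = True


@dataclass
class Session:
    connections: list = field(default_factory=list)
    _next: int = 0        # round-robin cursor

    def send(self, command: str) -> int:
        """Dispatch a command over the next live connection; return its CID."""
        live = [c for c in self.connections if c.alive]
        if not live:
            raise RuntimeError("session failed: no live connections")
        conn = live[self._next % len(live)]
        self._next += 1
        return conn.conn_id
```

With two connections, commands alternate between CIDs 0 and 1; after marking connection 0 dead, every subsequent command flows over connection 1 without the session failing.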
The high-end environment will have both the FC and the iSCSI storage controllers needed to service it. And since Fibre Channel is already there and can't be ignored, the installation must be able to interconnect the two storage networking technologies and expect them to "work and play" well together. The installation will have the problem of how to begin and how to integrate the two networks. Customers will want to invest in iSCSI storage controllers and yet continue to capitalize on the FC SAN investments they already have.
Various vendors offer "bridge boxes" that convert iSCSI host connections to FC storage connections. Some boxes convert FC host connections to iSCSI storage connections. Both of these functions are accomplished via routers, gateways, and switches (switch routers).
The thing that will actually make all this interconnection capability work is the management software. Probably there will be storage network management software that can operate with all FC networks and similar software that can control the iSCSI network. Clearly, though, there is a need for storage management software that can manage a network made up of both FC and iSCSI.
Although such multi-network software is sophisticated, some vendors are already bringing it to market. Fortunately, the iSCSI protocol defines a set of discovery processes, shipped with each iSCSI device, that permit full iSCSI discovery. Used in conjunction with the FC discovery processes, they permit the interplay of iSCSI and FC SANs.
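One of these discovery processes is SendTargets: an initiator asks a discovery address, and the response is a list of text key=value pairs in which each TargetName key is followed by the TargetAddress portals where that target can be reached (the key format is defined in the iSCSI specification, RFC 3720). The following is a minimal parser for such a response; the target names and addresses in the test are invented examples:

```python
# Parse the text of an iSCSI SendTargets discovery response (RFC 3720).
# Each target is announced as TargetName=<iqn> followed by one or more
# TargetAddress=<address>:<port>,<portal-group-tag> keys.
def parse_send_targets(text: str) -> dict:
    """Return {target_name: [portal, ...]} from a SendTargets response."""
    targets = {}
    current = None
    for pair in text.split("\x00"):        # key=value pairs are NUL-separated
        if not pair:
            continue
        key, _, value = pair.partition("=")
        if key == "TargetName":
            current = value
            targets[current] = []
        elif key == "TargetAddress" and current is not None:
            targets[current].append(value)
    return targets
```

A management application would run this discovery against each known portal and accumulate the results into its picture of the iSCSI network.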
Since iSCSI and Fibre Channel share the SCSI command set, most existing LUN discovery and management software will continue to operate as it does with SCSI and Fibre Channel today. Therefore, there should not be significant changes to the SAN LUN management software.
The key problem is that the combined "SCSI device" discovery processes must be carried out when there are both FC and iSCSI connections, both to and from the hosts and to and from the storage controllers. When an FC network manager performs its discovery process and detects an FC device that happens to be available to an iSCSI host (via a gateway device of some kind), it is important that the iSCSI network manager also know about that device. An FC/iSCSI network manager therefore needs to combine the results of the FC discovery process with those of its iSCSI discovery process so that all appropriate devices can be offered to the host systems as valid targets.
In addition to knowledge of each other's hosts and storage controllers, there must be a melding of names and addresses so that iSCSI hosts can actually contact FC target storage controllers and vice versa. Luckily, a companion protocol/process called iSNS (Internet Storage Name Service) deals with this problem by mapping names and addresses between the FC and iSCSI views. In this way, with the appropriate surrounding management software, both networks can be seen as a seamless interconnected SAN.
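The merging described above can be sketched as a simple table lookup. This is a hypothetical illustration, not the iSNS wire protocol: FC targets are known by WWPN, iSCSI targets by IQN, and a name service records which WWPN and IQN refer to the same device (for example, one reached through a gateway) so the combined manager offers each device exactly once. All names below are invented:

```python
# Hypothetical sketch of combining FC and iSCSI discovery results.
# wwpn_to_iqn stands in for the name mapping an iSNS-style service keeps:
# an FC device that is also reachable via iSCSI appears under its IQN.
def merged_view(fc_targets, iscsi_targets, wwpn_to_iqn):
    """Union of both discovery results, with mapped FC devices folded
    into their iSCSI names so nothing is offered to hosts twice."""
    view = set(iscsi_targets)
    for wwpn in fc_targets:
        # Unmapped FC-only devices keep their WWPN in the combined view.
        view.add(wwpn_to_iqn.get(wwpn, wwpn))
    return view
```

With a mapping entry, a controller discovered on both fabrics collapses to a single name; without one, the FC device is simply listed alongside the iSCSI targets.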
To sum up, the high-end environment contains all aspects of the low-end (SoHo) and midrange environments, plus additional requirements for high availability and large bandwidth, along with campus and WAN (intranet) connections. It also requires seamless interconnect between FC and iSCSI networks.