Moving up to the midrange company environment, we find multiple server systems. Unlike desktop or laptop systems, which usually have more processing power than they can use, servers are heavily loaded, performance-critical systems that consume all the CPU cycles they have and often want more. They need access to storage with as few lost CPU cycles as possible. In an FC or direct-attach environment, these systems expend approximately 5% processing overhead to read and write data to and from a storage device. If iSCSI is to be competitive in the server environment, it needs a similar overhead profile. This requires that the processor overhead associated with TCP/IP processing be offloaded onto a chip or host bus adapter (HBA).
Offloading requires a TCP/IP offload engine (TOE), which can be completely incorporated in an HBA. All key TOE functions can be integrated on a single chip or placed on the HBA via the "pile-on" approach. The pile-on approach places a normal processor and many discrete components on the HBA (along with appropriate memory and other support chips) and includes normal TCP/IP software stacks. The integrated chip and the pile-on technique both permit the host processor to obtain iSCSI storage access without suffering the overhead associated with host-based TCP/IP processing.
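The scale of the problem can be sketched with a back-of-envelope calculation. A common rule of thumb holds that a software TCP/IP stack consumes roughly 1 Hz of CPU per 1 bit/s of sustained throughput; the figures below are illustrative assumptions based on that rule, not measurements of any particular HBA.

```shell
# Back-of-envelope illustration (not a measurement) of why TCP/IP offload
# matters, using the rough rule of thumb of 1 Hz of CPU per 1 bit/s of
# TCP throughput handled in host software.

throughput_bps=1000000000    # 1 Gb/s Ethernet line speed
cpu_hz=2000000000            # a hypothetical 2 GHz server processor

# Percentage of one CPU consumed by a host-resident TCP/IP stack:
host_pct=$(( throughput_bps * 100 / cpu_hz ))

# With a TOE absorbing, say, 95% of that work, the host keeps only 5% of it:
toe_pct=$(( host_pct * 5 / 100 ))

echo "software stack: ${host_pct}% of one CPU; with TOE: about ${toe_pct}%"
```

Under these assumed numbers, a software stack would burn half of a 2 GHz processor at line speed, while an offloaded adapter leaves the host in the same low-single-digit range as the FC overhead cited above.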
We will be seeing a number of HBAs of all types from a collection of vendors. These will include not only a TOE but, in many cases, full iSCSI offload as well. We will also see pile-on HBAs that can support 70% to 100% of line speed while operating at close to 100% CPU utilization on the HBA itself. The server customer will not care how hard the HBA is working, only that it can keep up with line speed and offload the iSCSI overhead (including TCP/IP) from the host processor.
The pile-on HBA approach will have higher latency than an HBA that has TCP/IP and iSCSI processing integrated onto a single chip. Even if the pile-on HBA can operate at line speed (1 Gb/s), the latency caused by this type of adapter is unlikely to permit its ongoing success in the market. That is because HBAs with full iSCSI and TOE chips will permit not only operation at line speed but also very low latency. We should consider the pile-on approach to be a time-to-market product that vendors will replace over time with faster and cheaper HBAs using iSCSI and TOE integrated chips.
The goal of iSCSI HBAs is to keep latency as close to that of Fibre Channel as possible (and it should be close when using integrated chips) while keeping costs significantly under those of Fibre Channel.
Some people have disputed the price argument, saying that Fibre Channel can easily lower its prices to match iSCSI's because an FC chip requires less silicon than an iSCSI TOE chip. This is of course an important consideration, but sales volume is the key, and iSCSI has the potential for high volume with a technology that operates in an Ethernet environment. This includes operating at gigabit speeds with normal Cat. 5 Ethernet cable attachments so that the customer doesn't have to install and manage a new cable type.
As stated previously, I do not believe that FC vendors will give up their high margins in the high-end market in order to fight iSCSI in the low-end and midrange markets. That will occur only when iSCSI is considered a threat in the high end, but by then iSCSI will have large volumes in the rest of the market and will be able to push the price envelope against Fibre Channel. Also remember that TCP/IP (and Ethernet) connections will always be needed on these systems anyway. Since Fibre Channel is always a "total cost adder," whereas iSCSI will have much of its cost absorbed by the host's existing requirement for an IP interconnect, the price advantage will clearly go to iSCSI.
There has been talk that FC vendors will attempt to move their 1Gb offerings into the midrange while keeping their 2Gb offerings at the high end. However, the total cost of ownership (TCO) to the midrange customer will still be higher than iSCSI because of the shortage of FC-trained personnel, the use of new special cables, and, as mentioned above, the fact that Fibre Channel is always a total cost adder.
The goal is for the midrange environment to be able to obtain iSCSI-block I/O pooled storage, with performance as good as that of Fibre Channel but at lower cost. However, the midrange customer will still face the dilemma of iSCSI versus NAS. The same consideration and planning should be done in this environment as in the small office environment. The only difference is in the capabilities and price of the competing offerings.
In addition to the normal NAS and iSCSI offerings in this environment, there will also be dual-dialect offerings. The difference is that iSCSI-offload HBAs and chips can be employed to reduce the iSCSI host overhead to the point where it is competitive with Fibre Channel and direct-attached storage. This is not currently possible with NAS.
The other consideration in midrange company environments is that they have desktops and laptops that feed the server systems and that will also, from time to time, need additional storage. Their users will want to get the additional storage and have it managed along with the server storage. This is similar to the needs of SoHo environments: Instead of spending time upgrading internal disk storage, users want to get their additional storage via the network they are already plugged into.
With the new copper 1000Mb/s Ethernet adapters, users can have both a high-speed interactive network and a high-speed storage network, all without changing the Cat. 5 Ethernet cable already installed throughout their company. iSCSI storage controllers can supply the needs of both servers and client desktops and laptops.
Still, the argument is often made that a NAS solution can address the needs of desktops and laptops. This is true, but at a higher cost. As pointed out earlier, in the small office environment many applications are being written to use databases. They generally use a "shared nothing" approach and therefore provide an information-sharing environment in which NAS is not required. Again, if files need to be shared, NAS is appropriate; otherwise, a block I/O interface best meets the requirements. iSCSI is the most cost-effective approach for non-shared pooled storage.
Many of these midrange companies will be building iSANs. These are logically the same as FC SANs but are made up of less expensive iSCSI equipment. It is less expensive because the entire Ethernet and IP equipment market is relatively low priced (at least when compared to Fibre Channel). Even iSCSI HBAs are cheaper than current FC components. Intel, for example, has declared that its HBA will be available at a street price of under $500. It is further expected that iSCSI HBAs and chips will have even lower prices as sales volumes go up.
One significant difference between the midrange and small office computing environments is that the I/O requirements of the various servers can be as demanding as those found in many high-end servers. Therefore, in the midrange one tends to see more use of iSCSI HBAs and chips in various servers and storage controllers, and a smaller reliance on software versions of iSCSI. (See Figure 25.)
Figure 25 The midrange environment.
High-end environments will have the same processor offload and performance requirements that midrange environments have. However, they will probably be more sensitive to latency, so it is expected that the pile-on type of HBA will not be very popular. Because of the never-ending throughput demand from high-end servers, it is in this environment that HBAs with multiple 1Gb Ethernet connections and 10Gb implementations will eventually find their most fertile ground.
Another important distinction is that the high-end environment will probably have some amount of FC equipment already installed. This means that the cohabitation of iSCSI and Fibre Channel will be required.
Because of the usefulness and flexibility of iSCSI-connected storage, and because high-end servers are probably already connected with Fibre Channel, it is expected that high-end environments will first deploy iSCSI in their campus, satellite, and "at-distance" installations. (See Figure 26.) The following sections will break out each of these subconfigurations and then address the central facility.
Figure 26 The high-end environment.
The campus is the area adjacent to the central computing site, within a few kilometers, where private LANs interconnect the buildings containing local department servers as well as the desktops and laptops spread throughout. The different department areas are analogous to the midrange and small office environments. Their general difference is that, with the use of iSCSI, they can exploit the general campus IP backbone to access the data, which may be located at the central computing location.
Often these department areas have policy or political differences with the organization that runs the central computing complex, and so they want their own independent server collections. Generally they want the flexibility that a storage area network (SAN) can provide (such as device pooling and failover capability), but they do not want to get into the business of managing an FC network.
In spite of their independence, these departments want access to the tape libraries at the central location. They want access to these robust backup devices, which they consider essential but which they do not want to service, manage, or maintain. (See Figure 27.) The departments also want to access disk storage at the central location as long as they do not have to abide by what they perceive as excessive centralized regulation and control requirements.
Figure 27 Campus and central system/storage.
Today, even if the FC cables could be pulled to the various campus locations, Fibre Channel has no security in its protocols, so the access control demands of the central computing location may be more than the departments want to put up with. iSCSI, on the other hand, has security built into the basic protocol (both at the TCP/IP layer and at the iSCSI layer), which reduces the need for invasive manual processes by anyone, including the disk storage administrator at the central location. iSCSI also permits the department servers to be rebooted as often as necessary while still reaching central storage, something that probably would not be tolerated if the servers were located within the main computing center.
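As one illustration of security built into the protocol, iSCSI supports in-band CHAP authentication between initiator and target. The sketch below uses `iscsiadm` from the open-source open-iscsi initiator for Linux, offered only as an example of the idea; the controller address, target name, and credentials are all hypothetical.

```shell
# Discover the targets offered by a central storage controller
# (the address is a hypothetical example):
iscsiadm -m discovery -t sendtargets -p 192.168.10.20

# Require CHAP authentication on the discovered node
# (target name and credentials are hypothetical):
iscsiadm -m node -T iqn.2001-04.com.example:storage.disk1 \
    -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T iqn.2001-04.com.example:storage.disk1 \
    -o update -n node.session.auth.username -v deptserver1
iscsiadm -m node -T iqn.2001-04.com.example:storage.disk1 \
    -o update -n node.session.auth.password -v dept-secret

# Log in; the remote volume then appears as a local block device:
iscsiadm -m node -T iqn.2001-04.com.example:storage.disk1 --login
```

Because the credentials travel inside the iSCSI login exchange itself, the department server can prove its identity without any manual intervention from the central storage administrator.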
Because of their needs and desires, campus departments are very likely to view iSCSI as key to their strategic computing direction. However, the campus environment is made up of more than just the department servers. It also has individual desktops and laptops distributed throughout that look like home office systems. A major difference, however, is that their users are not encouraged to modify them. Instead, every time they need additional storage, they have to justify it to either the department or the central computing location. Then the central or department "guru" who handles the system must schedule time to come out and do an upgrade. Since these gurus handle many different users, they take approaches that can be unpleasant for the end user, often causing the loss of data or of carefully constructed "desktop screens." Gurus are in a no-win scenario: They do not like taking end users' systems apart, especially since users can be abusive about procedures, scheduling, and so forth.
Some installations have been known to just upgrade the entire system whenever new storage or processor power is needed. In the past this was often a reasonable approach, since the need for processing power was keeping pace with the creation of storage. Now, as a rule, this is not the case. The 1-to-2 gigahertz (GHz) processors seem to have reached a plateau at which the productivity of the office worker does not benefit from the additional speed of the laptop or desktop. However, one can still generate a lot of storage requirements with these processors, and it is beginning to occur to many companies that replacing systems just to upgrade the storage is a waste of time and money. Further, it greatly disturbs employees when they lose their data, their settings, or their visual desktop. Even when things go right in the backup and restore stages of moving data from one system to another, the process is lengthy and tedious. Companies that believe time and productivity are money dislike these disruptions.
Both the end user and the guru will love iSCSI. To get additional iSCSI storage, the end user just has to be authorized to use a new logical volume, and the job is done. Often this can be accomplished over the phone.
Over time, desktops will be "rolled over" for newer versions, which will come equipped with the 10/100/1000BaseT (gigabit copper) IP adapter cards. 1000BaseT-capable adapter cards permit desktop performance of up to 1 Gb/s, which will greatly improve the performance of iSCSI storage. Note that most installations use Cat. 5 copper cables for 10/100Mb/s Ethernet connections, and these same Cat. 5 cables are adequate for gigabit speeds. Therefore, installations do not have to rewire in order to get gigabit iSCSI storage access for their ubiquitous desktop systems.
Since iSCSI also supports remote boot, one can expect many future desktop systems to support only storage connected via iSCSI. The desktops can then be upgraded as needed, independently of the data.
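To make the remote-boot idea concrete, a diskless desktop can fetch its entire system volume from a central iSCSI target at power-on. The sketch below is an iPXE boot script (iPXE is an open-source network boot firmware, used here purely as an illustration); the controller address and target name are hypothetical.

```shell
#!ipxe
# Obtain an IP address for the desktop's Ethernet adapter via DHCP
dhcp
# Boot from a central iSCSI volume: the empty fields take the
# defaults for protocol, port, and LUN (address and target name
# are hypothetical)
sanboot iscsi:192.168.10.20::::iqn.2001-04.com.example:desktop.boot1
```

With this arrangement, replacing the desktop hardware does not touch the user's data at all: the new machine simply boots from the same central volume.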
A remotely located office, known as a satellite, will have an environment similar to that of a campus. It often functions like a department or small office. Satellites have their own desktop systems and sometimes their own servers. They generally suffer from the lack of adequate "remote support," which often means slow response to their needs.
As in the small office, satellite users do not usually touch the system but instead get a guru to come to the remote location to fix things. With the use of iSCSI many satellite installations can have their storage-related needs handled via the phone. As they need more storage, they can call in the storage administrator, who enables more logical volumes for their use. This is possible since with iSCSI they are connected to the central location via a virtual private network (VPN).
A VPN is provided by a combination of carrier and user equipment. A carrier or ISP delivers some type of "IP tone" to the remote location, and the remote office uses encrypting firewalls and the like to secure access to a central computing facility, even across the Internet or other public infrastructures.
When the various satellite offices are located in a metropolitan area, a VPN becomes very attractive, since there will not be a large problem with "speed of light" latency issues. These network types are called metropolitan area networks (MANs). However, the greater the distance, the more local (iSCSI) storage will be deployed at the satellite location and the less central storage will be used for normal operations. These more remote locations will like the feature of local pooled storage that they get with iSCSI, without having to learn Fibre Channel.
When metropolitan area satellite offices need more iSCSI-based storage, they just ask the storage administrator at the central installation to logically attach more virtual volumes to the user's iSCSI access list. All this is possible without significant effort at the satellite location, assuming, of course, that adequate bandwidth exists between the central location and the satellite office.
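On a modern Linux-based storage controller, the administrator's side of this transaction, carving out a volume and adding the satellite's initiator to the access list, might look like the sketch below. It uses the LIO `targetcli` tool purely as an illustration; all device, volume, target, and initiator names are hypothetical.

```shell
# At the central location: expose an existing logical volume as a
# block backstore (device path and name are hypothetical)
targetcli /backstores/block create satellite_vol2 /dev/vg_central/satellite_vol2

# Publish it as a LUN under an existing iSCSI target
targetcli /iscsi/iqn.2003-01.org.example:central1/tpg1/luns \
    create /backstores/block/satellite_vol2

# Add the satellite office's initiator name to the access list,
# authorizing it to see the new volume
targetcli /iscsi/iqn.2003-01.org.example:central1/tpg1/acls \
    create iqn.2003-01.org.example:satellite-host1
```

Nothing on this list requires a site visit: the satellite office phones in its request, the administrator runs a few commands, and the new volume appears at the remote site over the existing VPN.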
In the past, satellite office connections required private or leased phone lines, but it is now becoming prevalent in many areas for carriers to offer "IP tone" at a much lower cost than leased lines. Thus, the customer is now more likely than before to have high-speed connections between the satellite office and the central office.
Satellite locations may also have local servers and storage requirements, and will want the flexibility offered by a SAN. They will find iSCSI a more cost-effective solution than Fibre Channel, especially since the network management can still be handled at the central location.
The satellite installation, like the campus environment, will also want to be able to use centralized tape units for backup without having them located at the satellite location. This also is an ideal exploitation of the capabilities of iSCSI. (See Figure 28.)
Figure 28 Satellite and central system/storage.