This chapter is from the book

3.5 Simplified SAN Application I/O Models for Verification

Now that the performance assessment of the template applications and host systems is complete, use the information gathered from the assessment to model the expected behaviors of the host systems. The verification model can be simple and should try to recreate the I/O behaviors of the system being modeled. Not every I/O behavior needs to be built into the model, because modeling everything is extremely complex and time-consuming. Instead, the verification model tries to emulate peak performance for the chosen I/O characteristics. The verification model can also test failure modes and evaluate SAN behaviors while exercising specific features of the SAN.

Modeling the NAS Server Replacement

The I/O model for the NAS server replacement SAN in Figure 3.1 (page 59) should emulate the archival processes that the NAS server currently services. This application simultaneously transfers several large files to the NAS server, and the model for the file transfers can be quite simple. The tester places a set of test files on one or more client host systems and then writes a simple set of scripts that transfer these files to and from the new SAN file server.

The tester then measures the transfers for bandwidth performance and checks for reliability. Performance should be evaluated from several points in the SAN: ideally, the throughput of the NAS replacement SAN is measured from the client, the server, and the fabric devices that make up the SAN.
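A client-side throughput check can be sketched with a timed dd write. This is a minimal sketch, not a tool from the chapter; the file path and size defaults below are illustrative stand-ins, and a real run would target the SAN file server mount instead of /tmp.

```shell
#!/bin/sh
# Hypothetical client-side bandwidth check: time a large
# sequential write and report MB/s. The default path and size
# are illustrative; point FILE at the SAN file server mount
# for a real measurement.

FILE=${1:-/tmp/santest.dat}
MB=${2:-64}

START=`date +%s`
dd if=/dev/zero of="$FILE" bs=1048576 count="$MB" 2>/dev/null
END=`date +%s`

# avoid division by zero on very fast transfers
ELAPSED=`expr $END - $START`
if [ "$ELAPSED" -eq 0 ]; then
    ELAPSED=1
fi
echo "Wrote ${MB}MB in ${ELAPSED}s: `expr $MB / $ELAPSED` MB/s"
```

Running the same script on the server and comparing against fabric-device counters gives the several measurement points mentioned above.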

Testing of the failure cases in the NAS replacement SAN includes these tasks:

  • Simulating device failures during data transfers

  • Powering off fabric devices

  • Rebooting host systems

  • Unplugging cables in a controlled manner to evaluate behaviors under failure or maintenance conditions

These tests provide a better understanding of the failure cases and may uncover problems in maintenance methods or the design.

Modeling the Data Warehouse ETL Consolidation SAN

A model for the storage consolidation SAN in Figure 3.4 (page 63) is more complex than the NAS replacement SAN test model. The systems in the storage consolidation SAN will use the fabric-attached storage for file creation in addition to reads and transfers, which differs from the dedicated data transfer use of the NAS replacement SAN. The I/O model must include file creation, reads, and writes. Modeling must also include an approximation of the timing of the processes.

The first step is the creation of a few simple scripts that create, read, and write files. These scripts can then be grouped together to simulate I/O behaviors of the systems being consolidated on the SAN. Example 3.3 shows a Perl script that randomly reads a file.

This simple script performs a specified number of random 1KB reads throughout a specified file. A similar script in Perl can randomly write updates to a file, as shown in Example 3.4.

The writer.pl script inserts an all-zero, 1KB update into a specified file at a random location. It is easy to modify the size and content of the update for customization.

Much simpler scripts can also create files. Because in most cases a new file is written sequentially with the application's typical I/O size, a file creation script can use the UNIX system tool dd. Example 3.5 shows a dd command that writes an 800MB file in 8KB blocks.

In Example 3.5, the parameters are:

  • Input file (if)

  • Output file (of)

  • Block size (bs)

  • Number of blocks to copy (count)

To create a file of any size with any I/O size, change the block size and the count.
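That rule can be wrapped in a small helper that computes the count from the desired file size and block size. This is a sketch; the script name (mkfile.sh) and the /tmp default are assumptions, and a real run would target the SAN file system (e.g. /fs1).

```shell
#!/bin/sh
# mkfile.sh (hypothetical helper): create a file of a given
# size with a given I/O size by computing the dd count.
# Defaults are illustrative stand-ins for a SAN file system path.

FILE=${1:-/tmp/modelfile.dat}
SIZE_MB=${2:-8}
BS=${3:-8192}

# count = total bytes / block size
COUNT=`expr $SIZE_MB \* 1048576 / $BS`

dd if=/dev/zero of="$FILE" bs="$BS" count="$COUNT" 2>/dev/null
echo "Created $FILE: $COUNT blocks of $BS bytes"
```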

Use a wrapper script to run the scripts or file creation command numerous times. Simulate CPU processing time with delays in the wrapper. A wrapper script that simulates a load operation in a data warehouse is shown in Example 3.6.

EXAMPLE 3.3. A random file reader script (reader.pl)

# reader.pl
# Perform random reads of a file

# The first argument to the script is the file name
# The second argument to the script is the number
# of reads to perform
$file = $ARGV[0];
$count = $ARGV[1];

# open the file to be read and find its size
open(FH, $file) || die "Can't open $file\n";
seek(FH, 0, 2);
$filesize = tell(FH);

# perform 1KB reads of the file at random offsets
# $count times
$i = 1;
while ($i <= $count) {
    $fpos = int(rand($filesize - 1024));
    seek(FH, $fpos, 0);
    read(FH, $dump, 1024);
    $i++;
}
close(FH);

printf "Done reading file $file\n";

EXAMPLE 3.4. A random file updater script (writer.pl)

# writer.pl
# Perform random updates of a file
$LOCK_SH = 1;
$LOCK_EX = 2;
$LOCK_NB = 4;
$LOCK_UN = 8;
# The first argument to the script is the file name
# The second argument to the script is the number of writes to perform
$file = $ARGV[0];
$count = $ARGV[1];

# Make a 1KB buffer of zeros
$buf="0" x 1024;

# open and lock the file for writing, and find its size
open(FH, "+<$file") || die "Can't open $file\n";
flock(FH, $LOCK_EX);
seek(FH, 0, 2);
$filesize = tell(FH);

# perform 1KB writes to the file at random offsets $count times
$i = 1;
while ($i <= $count) {
    $fpos = int(rand($filesize - 1024));
    seek(FH, $fpos, 0);
    print FH $buf;
    $i++;
}

flock(FH, $LOCK_UN);
close(FH);

EXAMPLE 3.5. Simple file creation using dd

dd if=/dev/zero of=/fs1/file01 bs=8192 count=100000 

EXAMPLE 3.6. Data warehouse load simulation wrapper

# Data warehouse load I/O model

# create 10 2GB files sequentially
dd if=/dev/zero of=/fs1/file01 bs=8192 count=250000
dd if=/dev/zero of=/fs1/file02 bs=8192 count=250000
dd if=/dev/zero of=/fs1/file03 bs=8192 count=250000
dd if=/dev/zero of=/fs1/file04 bs=8192 count=250000
dd if=/dev/zero of=/fs1/file05 bs=8192 count=250000
dd if=/dev/zero of=/fs1/file06 bs=8192 count=250000
dd if=/dev/zero of=/fs1/file07 bs=8192 count=250000
dd if=/dev/zero of=/fs1/file08 bs=8192 count=250000
dd if=/dev/zero of=/fs1/file09 bs=8192 count=250000
dd if=/dev/zero of=/fs1/file10 bs=8192 count=250000

# read and write previously created
# simulated catalog file at random
# 250000 times simultaneously in
# 10000 I/O chunks with 30 seconds
# of simulated calculations between chunks
i=1
while [ $i -le 25 ]
do
    reader.pl /fs1/simucat 10000 &
    writer.pl /fs1/simucat 10000 &
    i=`expr $i + 1`
    sleep 30
done

These tools simulate the I/O workload of the ETL systems on the storage consolidation SAN. Use the same I/O workload simulation for failure mode and maintenance evaluation by simulating failures and performing maintenance tasks while the model runs.

Model the I/O behaviors of the systems on a capacity-planning SAN for midsize data warehouse applications using the same set of tools. In addition, use a nonrandom read command, because data warehouse systems tend to scan large tables sequentially. Example 3.7 shows a dd command that performs a simple sequential read.

This command reads 8KB blocks of the file created in Example 3.5. In this case the command simply reads and discards the data because the data is not needed for anything else.

The four simple I/O workload components just described can be assembled to simulate the I/O behavior of the data warehouse systems in almost any mode. Simulation of the staging, loading, and querying of the data warehouse system requires several wrapper scripts in order to combine these I/O workload driver tools. The wrapper scripts would be variations on Example 3.6 and can also be very simple.
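One such variation might be a query-phase wrapper that mixes parallel sequential scans, in the style of Example 3.6. The sketch below is an assumption, not a script from the chapter, and its paths and sizes are scaled down to /tmp so it is self-contained; a real run would scan the load-phase files on the SAN file system (e.g. /fs1/file01).

```shell
#!/bin/sh
# Hypothetical query-phase wrapper: parallel sequential scans
# over previously created files. Paths and sizes are scaled-down
# stand-ins for the 2GB load-phase files on /fs1.

DIR=${1:-/tmp/dwtest}
mkdir -p "$DIR"

# stand-ins for the load-phase files
dd if=/dev/zero of="$DIR/file01" bs=8192 count=128 2>/dev/null
dd if=/dev/zero of="$DIR/file02" bs=8192 count=128 2>/dev/null

# query phase: scan both files sequentially, in parallel
dd if="$DIR/file01" of=/dev/null bs=8192 2>/dev/null &
dd if="$DIR/file02" of=/dev/null bs=8192 2>/dev/null &

# wait for both scans to finish
wait
echo "query phase complete"
```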

In a capacity-planning SAN where zone changes can be frequent due to unknown initial system configurations, evaluation of zoning changes is particularly interesting. Make changes to the capacity-planning SAN configuration while running the I/O model to determine the exact behavior of the systems, fabric devices, and storage devices.

Create an experimental SAN I/O model from the same components used for the capacity-planning SAN in order to explore a specific SAN performance characteristic or behavior. Running several copies of the sequential reader at the same time drives up bandwidth on the SAN, while multiple copies of the random reader and writer scripts create high IOPS loads. Additional combinations of the I/O workload components can simulate the interesting workloads found in most environments.
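A high-IOPS load can also be driven from plain shell with single-block random reads via dd skip=, standing in for multiple copies of reader.pl. This sketch is an assumption, scaled down to /tmp so it is self-contained; a real run would point at the SAN file system and use far larger counts.

```shell
#!/bin/sh
# Sketch of a high-IOPS driver: single-block random 1KB reads
# issued with dd skip=. Path and counts are illustrative
# stand-ins for a SAN file system target.

FILE=${1:-/tmp/iopstest.dat}
READS=${2:-50}
BLOCKS=256

# create the target file (256 x 1KB blocks)
dd if=/dev/zero of="$FILE" bs=1024 count=$BLOCKS 2>/dev/null

# issue $READS random 1KB reads at block-aligned offsets
i=1
while [ $i -le $READS ]
do
    R=`od -An -N2 -tu2 /dev/urandom`
    OFF=`expr $R % $BLOCKS`
    dd if="$FILE" of=/dev/null bs=1024 count=1 skip=$OFF 2>/dev/null
    i=`expr $i + 1`
done
echo "issued $READS random reads from $FILE"
```

Running several copies of this loop in the background approximates the multiple-random-reader load described above.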

EXAMPLE 3.7. Simple sequential read using dd

dd if=/fs1/file01 of=/dev/null bs=8192 count=100000 

Model a SAN for a new project in the same fashion as an experimental SAN. The SAN for a new project has more clearly defined performance expectations that facilitate a more accurate model of the expected I/O workload. The SAN does not have to be intentionally stressed, but it can be evaluated with an I/O model that creates the expected performance level for the host systems and applications that will be using the SAN.
