
## 3.4 Optimization Techniques

#### 3.4.1 Request Migration

By migrating one or more requests from a group with zero idle slots to a group with many idle slots, the system can minimize
the possible latency incurred by a future request. For example, in Figure 3.8, if the system migrates a request for *X* from *G*_{4} to *G*_{2}, then a request for *Z* is guaranteed to incur a maximum latency of one time period. Migrating a request from one group to another increases the
memory requirements of a display because the retrieval of data runs ahead of its display. Migrating a request from *G*_{4} to *G*_{2} increases the memory requirement of this display by three buffers. This is because when a request migrates from *G*_{4} to *G*_{2} (see Figure 3.8), *G*_{4} reads *X*_{0} and sends it to the display. During the same time period, *G*_{3} reads *X*_{1} into a buffer (say, *B*_{0}) and *G*_{2} reads *X*_{2} into a buffer (*B*_{1}). During the next time period, *G*_{2} reads *X*_{3} into a buffer (*B*_{2}) while *X*_{1} is displayed from memory buffer *B*_{0}. (*G*_{2} reads *X*_{3} because the groups move one cluster to the right at the end of each time period to read the next block of the active displays
occupying their servers.) During the next time period, *G*_{2} reads *X*_{4} into a memory buffer (*B*_{3}) while *X*_{2} is displayed from memory buffer *B*_{1}. This round-robin retrieval of data from clusters by *G*_{2} continues until all blocks of *X* have been retrieved and displayed.

**Figure 3.8 Load balancing.**

With this technique, if the distance from the original group to the destination group is *B*, then the system requires *B* + 1 buffers. However, because a request can migrate back to its original group once a request in the original group terminates
and relinquishes its slot (i.e., a time slot becomes idle), the increase in the total memory requirement can be reduced to
the point of becoming negligible.
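The buffer accounting above can be sketched as a small helper (the function name is ours; it simply encodes the *B* + 1 rule derived from the block-by-block walk-through):

```python
def migration_buffers(distance: int) -> int:
    """Peak number of buffers occupied when a request migrates
    `distance` groups ahead of its original group.

    In the first time period the intermediate and destination groups
    prefetch `distance` blocks; thereafter the destination group stays
    one block ahead of the display, so distance + 1 buffers are in use
    at the peak.
    """
    return distance + 1

# Migrating from G4 to G2 (Figure 3.8) is a distance of 2 groups,
# so the display needs 3 extra buffers, matching the example.
assert migration_buffers(2) == 3
```

Once the request migrates back to its original group, these buffers are released, which is why the long-run memory overhead can become negligible.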

#### 3.4.2 Object Replication

#### Full Replication (FR)

To reduce the startup latency of the system, one may replicate objects. The simplest way is to replicate entire objects in
the database so that all blocks have the same number of replicas. Let the original copy of an object *X* be its primary copy, *X*^{p}. All other copies of *X* are termed its secondary copies. The system may construct *r* secondary copies for object *X*, each denoted as *X*^{i}, where 1 ≤ *i* ≤ *r*. The number of instances of *X* is the number of copies of *X*, *r* + 1 (*r* secondary plus one primary). Assuming two instances of an object, by starting the assignment of *X*^{1} with a disk different than the one containing the first block of its primary copy *X*^{p}, the maximum startup latency incurred by a display referencing *X* can be reduced by one half. This also reduces the expected startup latency. The assignment of the first block of each copy of *X* should be separated by a fixed number of disks in order to maximize the benefits of replication. Assuming *D* disks, and that the primary copy of *X* is assigned starting with an arbitrary disk (say, *d*_{i} contains *X*_{0}), the assignment of secondary copies of *X* is as follows. The first block of copy *X*^{j} should start with disk *d*_{(i + j⌈D/(r+1)⌉) mod D}. For example, if there are two secondary copies of object *Y* (*Y*^{1}, *Y*^{2}) and its primary copy is assigned starting with disk *d*_{0}, then *Y*^{1} is assigned starting with disk *d*_{2} while *Y*^{2} is assigned starting with disk *d*_{4} when *D* = 6.


This also reduces the expected startup latency. The assignment of the first block of each copy of *O*_{i} should be separated by a fixed number of disks in order to maximize the benefits of replication. Let *d*_{i,j} denote the disk that stores the first block of the *j*^{th} replica of object *O*_{i}. Then, assuming *R*_{i} copies of *O*_{i}, from Eq. 3.2,

*d*_{i,j} = (*d*_{i,0} + *j* ⌈*D*/(*R*_{i} + 1)⌉) mod *D*, for 1 ≤ *j* ≤ *R*_{i}

The location of an object, *O*_{i}, with *R*_{i} replicas can be represented by the set *T*_{i}:

*T*_{i} = {*d*_{i,0}, *d*_{i,1}, ..., *d*_{i,R_{i}}}

For example, in a six-disk system, if there are two secondary copies of object *O*_{i} (*O*_{i}^{1} and *O*_{i}^{2}), and its primary copy *O*_{i}^{p} is assigned starting with disk *d*_{0}, then *O*_{i}^{1} is assigned starting with disk *d*_{2} while *O*_{i}^{2} is assigned starting with disk *d*_{4}. Thus, *T*_{i} = {0, 2, 4}.
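The even separation of copies can be sketched as follows (a minimal illustration; the function name is ours, and the stride of ⌈*D*/(*R* + 1)⌉ disks is inferred from the six-disk example):

```python
from math import ceil

def replica_start_disks(primary_disk: int, num_secondary: int, num_disks: int):
    """Starting disk of the primary and each secondary copy of an object.

    The R + 1 instances are separated by a fixed stride of
    ceil(D / (R + 1)) disks so they spread as evenly as possible over
    the D disks; mod D wraps the assignment around the disk array.
    """
    stride = ceil(num_disks / (num_secondary + 1))
    return [(primary_disk + j * stride) % num_disks
            for j in range(num_secondary + 1)]

# Six-disk example: primary on d0 with two secondary copies
# yields the set T = {0, 2, 4}.
assert replica_start_disks(0, 2, 6) == [0, 2, 4]
```

With only one secondary copy the two instances land half the array apart (disks 0 and 3 when *D* = 6), which is what halves the maximum startup latency.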

_{i}With two instances of an object, the expected startup latency for a request referencing this object can be computed as follows.
To find an available server, the system simultaneously checks two groups corresponding to the two different disks that contain
the first blocks of these two instances. A failure happens only if both groups are full, reducing the number of failures for
a request. The maximum number of failures before a success is reduced to due to two simultaneous searching of groups in parallel. Therefore, the probability of *i* failures in a system with each object having two instances is identical to that of a system consisting of disks with 2*N* servers per disk. A request would experience a lower number of failures with more instances of objects. With *j* instances of an object in the system, the probability of a request referencing this object to observe *i* failures is:

where . Hence, the expected startup latency of requests that reference an object with *j* instances is:
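Under the simplifying assumption that each group is full independently with probability *p* (our notation; each failure costs one time period of startup latency), the benefit of probing several groups in parallel can be sketched as:

```python
def prob_failures(i: int, j: int, p: float) -> float:
    """Probability of exactly i failures when j groups are probed per
    time period and each group is full independently with probability p.
    An attempt fails only if all j probed groups are full (prob. p**j)."""
    return (p ** j) ** i * (1 - p ** j)

def expected_failures(j: int, p: float, max_i: int = 10_000) -> float:
    """Expected number of failed time periods before a slot is found
    (truncated geometric-series sum; the tail is negligible here)."""
    return sum(i * prob_failures(i, j, p) for i in range(max_i))

# Doubling the number of instances sharply cuts the expected number of
# failures when groups are often full (e.g., p = 0.9):
one = expected_failures(1, 0.9)   # single copy: 0.9/0.1 = 9 periods
two = expected_failures(2, 0.9)   # two instances probed in parallel
assert two < one
```

This matches the closed form of a geometric distribution: the expected number of failures is *q*/(1 − *q*) with *q* = *p*^{j}, so more instances drive the startup latency down quickly.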

#### Selective Replication (*SR*)

Object FR greatly increases the storage requirement of an application. One important observation in real applications is that objects may have different access frequencies. For example, in a video-on-demand system, more than half of the active requests might reference only a handful of recently released movies [2]. [34] models the empirical distribution of video rental frequency using a Zipf distribution. By replicating frequently referenced objects more times than less popular ones, i.e., by selectively determining the number of replicas of an object based on its access frequency, we can significantly reduce the startup latency without a dramatic increase in the storage space requirement of an application.

The optimal number of secondary copies per object is based on its access frequency and the available storage capacity. The
formal statement of the problem is as follows. Assuming *n* objects in the system, let *S* be the total amount of disk space for these objects and their replicas. Let *R*_{j} be the optimal number of instances for object *j*, *S*_{j} denote the size of object *j*, and *F*_{j} represent the access frequency (%) of object *j*. The problem is to determine *R*_{j} for each object *j* (1 ≤ *j* ≤ *n*) while satisfying Σ_{j=1}^{n} *R*_{j} × *S*_{j} ≤ *S*.

There exist several algorithms to solve this problem [93]. A simple one, known as the Hamilton method, computes the number of instances per object *j* based on its frequency by calculating the quota for the object (*F*_{j} × *S*). It rounds the remainder of the quota to compute *R*_{j}. However, this method suffers from two paradoxes, namely the Alabama and Population paradoxes. Generally speaking, with these paradoxes, the Hamilton method may reduce the value of *R*_{j} when either *S* or *F*_{j} increases in value. The divisor methods provide a solution free of these paradoxes (see Figure 3.9). For further details and proofs of this method, see [93]. Using a divisor method named Webster (*d*(*R*_{j}) = *R*_{j} + 0.5), we classify objects based on their number of instances, so that objects in a class have the same number of instances. The expected startup latency in this system with *n* objects is:

*ℓ* = Σ_{i=1}^{n} *F*_{i} × *ℓ*(*R*_{i})

where *ℓ*(*R*_{i}) is the expected startup latency for an object having *R*_{i} instances.

**Figure 3.9 Divisor method to compute the number of replicas per object.**
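A divisor method such as Webster's is commonly implemented by handing out instances one at a time to the object with the highest priority *F*_{j}/*d*(*R*_{j}). The sketch below (function name and the greedy capacity check are ours) follows that scheme while honoring the storage constraint:

```python
def webster_replicas(freq, size, capacity):
    """Number of instances per object via the Webster divisor method,
    d(R) = R + 0.5, subject to a total storage capacity.

    freq[j]: access frequency of object j (fractions summing to 1),
    size[j]: size of object j, capacity: total disk space available.
    Every object keeps at least one instance (its primary copy)."""
    n = len(freq)
    inst = [1] * n                        # primary copies are mandatory
    used = sum(size)
    if used > capacity:
        raise ValueError("capacity cannot hold even the primary copies")
    while True:
        # Highest Webster priority among objects whose next replica fits.
        best, best_pri = None, 0.0
        for j in range(n):
            if used + size[j] <= capacity:
                pri = freq[j] / (inst[j] + 0.5)
                if pri > best_pri:
                    best, best_pri = j, pri
        if best is None:                  # no further replica fits
            return inst
        inst[best] += 1
        used += size[best]

# Three equally sized objects, skewed frequencies, room for three extra
# copies: the popular object receives all of them.
print(webster_replicas([0.7, 0.2, 0.1], [1, 1, 1], 6))  # → [4, 1, 1]
```

Because the priority *F*_{j}/(*R*_{j} + 0.5) only ever shrinks as an object gains replicas, growing *S* or *F*_{j} can never reduce an object's allocation, which is exactly the paradox-freedom property cited above.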

#### Partial Replication (PR)

FR and *SR* replicate all blocks of an object. Considering the size of large *SM* objects and a bounded amount of space available for replication, replicating only the first small portion of each object
several times can greatly reduce the extra space requirement while providing a much shorter startup latency. For example,
we can replicate only the first 10 blocks of an object *X* when the number of blocks of *X* is 100. The placement of blocks follows the same approach as in the previous replication techniques. The assignment of requests is
similar to that in FR and *SR*, but the system allocates a new request to the group that is currently accessing the disk where the first block of the
primary copy of the requested object resides whenever possible. In other words, when both the group serving the primary copy and a group serving a secondary copy have at least one empty slot, a new request should be assigned to the primary group. However, if the primary group is full and a secondary group has empty slots, the request goes to the secondary group. Only when both have no empty slots does a request experience a failure. This provides the same impact on the startup latency as
FR with two copies per object. The only difference is that a request assigned to a secondary copy should be relocated to the primary copy before it reaches the last block of the partially replicated copy (the tenth block in the previous example). The newly relocated
request retrieves the primary copy until the end of its display. For example, in Figure 3.10, a request for *X* arrives and is assigned to a group serving a secondary copy because the primary group is full. For the next seven time periods, the request is serviced there, until the primary group releases a slot when a display ends. Then, the request is relocated to the primary group and is serviced until the end of the display. With PR, if we replicate the first 10% of an object 10 times, it takes twice
the original storage requirement (the size of the primary copy). As discussed in the previous section, this can greatly
reduce the startup latency as if the system had ten full copies of the object, which would require ten times the original storage
requirement. However, there is a chance that a request assigned to the partially replicated copy cannot be relocated to the
primary copy before the display reaches the end of the partially replicated copy. Then, hiccups happen.

**Figure 3.10 Partial replication technique.**

If a disk drive can support *N* simultaneous displays and *s* is the average display time (service time) of objects, then the service rate of a disk drive (a group) is *N*/*s*. Hence, ideally, if we replicate the first *s*/*N* portion of an object, no hiccup happens. However, due to the statistical variation in the times when requests arrive at and leave
the system, there exists a probability that a request experiences a hiccup with this technique. To reduce this probability, we
should replicate a larger portion than *s*/*N*, resulting in a higher space requirement.
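As a worked example (our arithmetic; the function name is hypothetical), the ideal hiccup-free portion for the configuration used later in this chapter comes out as follows:

```python
def min_replicated_fraction(N: int, s: float) -> float:
    """Ideal fraction of an object to partially replicate.

    A group finishes one display every s/N seconds on average, so a
    request parked on a partial replica waits about s/N seconds before
    a slot frees on the primary group; as a fraction of the object's
    duration s, that is (s/N)/s = 1/N.
    """
    slot_interval = s / N        # average time until a slot frees
    return slot_interval / s     # as a fraction of the display time

# N = 16 displays per disk, one-hour (3600 s) clips: replicating the
# first 1/16 = 6.25% of each object suffices in the ideal case; the
# experiments below replicate 20% to absorb arrival/departure variance.
assert abs(min_replicated_fraction(16, 3600) - 0.0625) < 1e-12
```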

Request migration with temporary buffering can eliminate this hiccup problem. In Figure 3.11, assume that a request in *G*_{4} is accessing a partially replicated copy of *X* while *G*_{1} is accessing the primary copy of *X*. A hiccup happens when *G*_{1} does not have any idle slots by the time *G*_{4} reaches the last block of a secondary copy. Request migration utilizes buffers to avoid this potential hiccup situation.
For example, while *G*_{4} accesses the last block of the secondary copy, *X*_{9}, *G*_{0} reads the block *X*_{10} from the disk drive *d*_{0} and stores it into a temporary buffer in the same time period. From the next time period on, *G*_{0} retrieves the remaining blocks sequentially until the group *G*_{1} releases a time slot. Then the scheduler migrates the request to the group *G*_{1} and the temporary buffers are freed. Hence, hiccups do not happen even though *G*_{1} is full, as long as there exists at least one available slot among all the groups. Note that there is a tradeoff between
the amount of required buffers and the number of hiccups. Extra buffers increase system cost per display.

**Figure 3.11 Request migration in the partial replication technique.**

#### Partial Selective Replication (PSR)

This is a hybrid of the PR and *SR* approaches discussed in the previous subsections. By taking advantage of the skewed access frequencies of an application and the reduced
storage requirement of PR, this approach determines the number of partially replicated secondary copies of an object based
on its access frequency. Modifying the divisor method for partial replication is straightforward.

#### Comparison

We compare the four replication techniques using simulation studies. Assuming two different access frequency models, 1) uniform
and 2) skewed (Zipf), the average startup latency of each technique is quantified as a function of the available extra disk storage
space. In these experiments, we assumed that the entire database was disk resident. A server was configured with twelve Seagate
Cheetah ST39103LW/LC disks. Each disk has 9 GBytes of disk space and a data transfer rate of 80 Mb/s. We assumed that each video
clip was encoded using MPEG-2 with a display rate of 4 Mb/s. The database consisted of 30 one-hour video clips occupying a total of
54 GBytes of disk space. A video clip consisted of 3600 blocks with a time period of one second. For FR and *SR*, all 3600 blocks were replicated, while the first 720 blocks (20%) were replicated for PR and PSR.

We assumed a uniform distribution of access frequency for the 30 video clips, and also analyzed a skewed (Zipf) distribution of
access as a simplified model of real access frequencies (Figure 3.12). The bandwidth of each disk can support sixteen simultaneous displays (*N* = 16). Hence, the maximum throughput of this configuration (12 disks) is 192 simultaneous displays. We assumed that request
arrivals to the system followed a Poisson pattern with an arrival rate of *λ* = 0.05 req/sec for a 95% system utilization. Upon the arrival of a request, if the scheduler fails to find an idle slot in
the system, the request is rejected.
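The quoted arrival rate can be checked against the target utilization with the offered-load formula ρ = λ × *s* / capacity (our arithmetic and function name):

```python
def utilization(arrival_rate: float, service_time: float, capacity: int) -> float:
    """Offered load of a loss system: lambda * s / number of servers."""
    return arrival_rate * service_time / capacity

# 12 disks x 16 displays each = 192 concurrent displays; 1-hour clips.
rho = utilization(0.05, 3600, 192)
assert abs(rho - 0.9375) < 1e-9   # i.e., roughly the 95% quoted above
```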

**Figure 3.12 Two access frequency distributions.**

Figure 3.13 shows the average startup latency quantified from our simulations using both uniform and Zipf access distributions. The *x* axis represents the storage space of the system as a multiple of the database size. For example, 1.1 on the *x* axis means extra space equal to 10% of the *DB* size. The *y* axis shows the resulting average startup latency. Note that, when *x* = 1, the figure shows the average startup latency without any replication. Due to statistical variance in request arrivals, the system
could reach its maximum capacity, 192 simultaneous displays, and newly arrived requests were then rejected during such peak
times. The rejection rate was 2.5% on average across all experiments.

**Figure 3.13 Average startup latency (ρ = 0.95).**

As we increase the extra disk space for replication, the average startup latency decreases. PR and PSR provide
the shortest startup latency with both the uniform and Zipf distributions. In the best case, when *x* = 2, reductions of 79% and 77% in the average startup latency were observed with the uniform and Zipf distributions, respectively.
FR and PR decrease in downward steps because they cannot create additional replicas until the increase in extra space is enough
to replicate all objects in the database. However, *SR* and PSR decrease continuously as the extra space grows.

One observation is that the average startup latency can be significantly reduced even with a small amount of extra space.
In Figure 3.13.b, with only a 20% increase in the storage requirement, *SR*, PR, and PSR provide reductions of 45%, 59%, and 56%, respectively, compared to the case without replication (2.39 seconds). This
implies that the system does not require a huge amount of extra space to achieve a startup latency small enough to meet the latency
criteria of many applications. While PR and PSR provide the shortest startup latency, they require more memory because
of the increased number of buffers for request migration. Thus, for a cost-effective solution, *SR* works well without any increase in system cost, especially for applications with highly skewed access frequencies.