Establishing Expected System Availability
System availability is a statistical variable, characterized by a probability distribution. This distribution can be derived by processing a large number of measurements, but it is hard or impossible to predict before a solution has been implemented. For this reason, a more limited characteristic is often used: the expected value. The expected value of availability (Ae) is the mean value you would obtain if an infinite number of measurements were performed.
The expected value does not contain information about the spread of the measurement values. We only use it because it allows us to develop a model from which we can draw conclusions in later sections of this article.
Developing a Simple Model
Suppose that the functionality of a system depends on one single component. Assume that this component has a mean time to failure (MTTF), and it takes a mean time to recover (MTTR) from this failure. The expected availability of the system can then be expressed as shown in the following formula. For more information, refer to the Sun BluePrints OnLine article "High Availability Fundamentals," by Enrique Vargas.
Ae = MTTF / (MTTF + MTTR)
Since MTTF is orders of magnitude larger than MTTR, we can approximately rewrite this as follows:
Ae = (MTTF - MTTR) / MTTF
When T is the interval over which availability is measured (for example, one month in the previous section), multiplying numerator and denominator by T/MTTF yields:
Ae = ( T - (T/MTTF)*MTTR ) / T
If the component has a constant probability of failing over time, and MTTF is much larger than T, then T/MTTF represents the probability that the component fails during a time interval T. Denoting this probability by p, the formula becomes:
Ae = ( T - p * MTTR) / T
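As a sanity check on this chain of approximations, the following Python sketch compares the exact expression MTTF / (MTTF + MTTR) with the two approximations derived above; the MTTF, MTTR, and T values are illustrative choices of our own, not vendor figures.

    # Compare the exact availability with the two approximations above.
    # The values below are illustrative, not vendor figures.
    mttf = 50_000.0   # mean time to failure, in hours
    mttr = 4.0        # mean time to recover, in hours
    T = 30 * 24.0     # measurement interval: one month, in hours
    p = T / mttf      # probability of a failure during T

    exact = mttf / (mttf + mttr)
    approx1 = (mttf - mttr) / mttf
    approx2 = (T - p * mttr) / T

    print(f"exact   Ae = {exact:.6f}")    # 0.999920
    print(f"approx1 Ae = {approx1:.6f}")  # 0.999920
    print(f"approx2 Ae = {approx2:.6f}")  # 0.999920

For a component with an MTTF that is orders of magnitude larger than its MTTR, the three expressions agree to many decimal places.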
If the system's function depends on a number of independent components (independent in the sense that failure of one component does not change the failure probability of the others), where component i has a failure probability pi and a recovery time MTTRi, the expression can be generalized as follows:
Ae = (T - SUM(pi * MTTRi)) / T
Or, written more compactly:
Ae = 1 - De/T
where De = SUM(pi * MTTRi) is the expected value of system downtime.
The expression may look obvious, but we want to show that it is just an algebraic transformation of the classical formula MTTF / (MTTF + MTTR). It allows us to confirm, more formally, what was stated in the introduction of this article: the expected availability of a system is determined by two factors:
The first factor (pi) is a probability. For some components, like hard drives or central processing unit (CPU) boards, this value is known by the vendor. When the scope of the system widens to include software and procedures, the probabilities in the formula can hardly be estimated. For example, how can you determine the probability that an operator will bring the system down by mistake?
The second factor is the recovery time from failure: the time it takes, on average, for the system to be functional after a failure occurs. The MTTR is determined by the design of the system, the recovery procedures that are in place, the staff's familiarity with recovery procedures, whether spare parts are on site, and many other measures that may be taken. For more information, refer to the Sun BluePrints OnLine article "Network Storage Evaluations Using Reliability Calculations," by Selim Daoud.
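To make these two factors concrete, the following sketch evaluates Ae = 1 - De/T for a handful of failure scenarios; the probabilities and recovery times are hypothetical values chosen only to illustrate the formula.

    # Expected availability for a system whose function depends on several
    # independent components, following Ae = (T - SUM(pi * MTTRi)) / T.
    def expected_availability(T, scenarios):
        # scenarios: list of (p_i, MTTR_i) pairs, where p_i is the failure
        # probability over the interval T and MTTR_i is in the same unit as T.
        expected_downtime = sum(p * mttr for p, mttr in scenarios)  # De
        return (T - expected_downtime) / T

    T = 30 * 24.0  # one month, in hours
    scenarios = [
        (0.001, 8.0),   # hypothetical disk failure: p = 0.1%, 8 h to restore
        (0.002, 2.0),   # hypothetical power supply failure: p = 0.2%, 2 h to swap
        (0.010, 0.5),   # hypothetical operator error: p = 1%, 30 min to correct
    ]
    print(f"Ae over one month: {expected_availability(T, scenarios):.6f}")
    # Ae over one month: 0.999976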
In the preceding formula for expected availability, the impact of a failure is simply MTTR: the time to repair the broken component, or the time to restore the lost data. This is a huge oversimplification. All modern IT architectures employ redundancy and resiliency. For example, Sun Fire servers are fault resilient for all their components (including the central system interconnect), eliminating the need for a maintenance event when a hardware failure occurs.
The simple assumption in this section, that recovery from failure is synonymous with repair, therefore does not hold. To be practically useful, our model, and the failure impact parameters we derive from it, must be refined to account for redundancy and resilience.
Accounting for the Impact of Redundant Components on Expected Availability
The simple model we have used until now (a system with functionality that depends directly on a number of components) is insufficient to describe a modern IT architecture. Such an architecture always employs redundancy. In this section, we refine the expression for expected availability to account for redundant components. From it, we derive another metric to be added to our failure impact model.
Suppose a system is made up of two redundant components. Failure of one component is not fatal; the system can resume its function using only the remaining healthy component. Failure of both components before the first one is repaired, however, causes a total system outage. To understand the impact of redundant components on the expected availability of such a system, you need to distinguish time to recover from time to repair. Recovery time is the time required for the system to resume its function after a redundant component fails (the failover time). In contrast, time to repair is the time required to physically replace the failed component and restore the system to its redundant (or nominal) state.
The following formulas use MTTR to denote recovery time and BTN to denote back-to-nominal time (time to repair is a more straightforward term, but it can lead to endless confusion). The probability for one of the components to fail over a time interval T is represented by the following formula:
p1 = T/MTTF
A first failure may cause downtime during a time interval MTTR (the failover time), after which the system resumes its function. The probability for the second component to fail before the first component is repaired is represented by the following formula:
p2 = BTN/MTTF
We call the downtime incurred when there is a complete system outage (both components are broken) MTTR2. The expected system downtime is then represented by the following formula:
p1*MTTR + p1*p2*MTTR2 = p1*MTTR + (BTN/T)*p1*p1*MTTR2
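The helper below, a minimal sketch of our own, simply evaluates this expression for a pair of redundant components; all arguments must use the same time unit.

    # Expected downtime for a pair of redundant components over an interval T,
    # following p1*MTTR + p1*p2*MTTR2 with p1 = T/MTTF and p2 = BTN/MTTF.
    def redundant_pair_downtime(T, mttf, mttr_failover, btn, mttr2):
        p1 = T / mttf    # probability of a first failure during T
        p2 = btn / mttf  # probability of a second failure within the BTN window
        return p1 * mttr_failover + p1 * p2 * mttr2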
As an example, take two hard drives configured in RAID-1. When a disk breaks, BTN is the time required to get a new part on site, insert it into the system, and resynchronize the mirror.
MTTR is the downtime as a consequence of a failing drive. After a few Small Computer System Interface (SCSI) retries, the RAID-1 driver resumes its function. In this case, MTTR is, therefore, negligible.
MTTR2 is the downtime as a consequence of the failure of the entire disk mirror. This includes ordering and replacing two disks, recreating and synchronizing the mirror, restoring data from backup media, checking the restored data, and restarting the application.
In general, the second term, (BTN/T)*p1*p1*MTTR2, is extremely small, corresponding to the very low probability that both components will break within the BTN time window. However, in the case of complete failure (both disks in a mirror, both computers in a cluster, and both channels on a redundant system center plane), MTTR2 is usually uncomfortably large.
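Plugging hypothetical figures into the redundant_pair_downtime sketch above illustrates the point; the MTTF, BTN, and MTTR2 values are assumptions made up for this example, not measured disk data.

    # Hypothetical figures: disk MTTF of 500,000 hours, negligible failover
    # downtime, 24 hours back to nominal, and 48 hours to rebuild the mirror
    # and restore data from backup if both disks fail. Measured over one month.
    T = 30 * 24.0
    downtime = redundant_pair_downtime(T, mttf=500_000.0, mttr_failover=0.0,
                                       btn=24.0, mttr2=48.0)
    print(f"expected downtime:     {downtime * 3600:.4f} seconds per month")
    print(f"expected availability: {1 - downtime / T:.10f}")

The expected downtime works out to a fraction of a second per month, yet the outage, if it ever occurs, lasts two full days.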
In the presence of redundant components, expected downtime contains an extra term to account for multiple failures. This term is proportional to the BTN time and usually multiplies a very low probability with a large downtime impact. Since all critical IT infrastructure today contains a high degree of redundancy, you need to include BTN in any failure impact specification.
Automating Failover to Improve Expected Availability
When a component in a redundant system breaks, failover to a redundant component can be triggered automatically, or it can be performed manually. When failover is automated, its impact on expected availability can be estimated unambiguously. In the absence of an automated failover mechanism, the time to recover can increase dramatically: the impact of manual failover on expected availability depends entirely on how well systems are monitored and on which procedures are in place to react to anomalous situations.
At some point, every publication about availability mentions the importance of people and processes in a mission critical IT environment. We account for this by including the question 'Is failover automated?' for every failure scenario.
A positive answer means that failure detection and recovery are provided as part of the architecture, and it is therefore the vendor's responsibility to provide an upper bound on system recovery time. A negative answer means that the owner of the system recognizes that recovery time from this failure (and hence the expected availability of the overall system) will depend to a much larger extent on internal processes and people skills.
Employing Online Serviceability to Control Expected Availability
A redundant system can resume its function quickly without replacing the broken component; however, a physical replacement will have to occur, which may require additional downtime.
The duration of this downtime depends on a quality that is usually called serviceability. In practice, online serviceability allows you to repair components when you want to, instead of postponing repair until the next planned downtime (for example, the following weekend). Performing repairs while the system is online increases expected availability by decreasing the probability of multiple failures. For this reason, we include online serviceability as a binary value in our failure impact model.
Considering the Impact of Service Degradation
So far, we have identified a number of metrics with a direct link to expected availability: MTTR, BTN, automated failover, and online serviceability. In this section, we discuss a final metric, as visualized in the following graphic:
FIGURE 4 Measuring the Impact of Service Degradation on Availability
After failover to a redundant component, the system may not perform as it did before the failure. Service degradation is the most complex element to specify, and it includes performance degradation, which may be specified in terms of response time or throughput. To keep things simple, we assume that performance is directly proportional to system capacity. For example, when a system loses half of its CPUs after a failure, we assume a performance degradation of 50 percent. In almost all cases, this is a conservative assumption.
Data regression, another element of service degradation, measures how many data updates are lost when recovery involves restoring data from backup. We do not attempt to define this factor, but it must be included in the impact of a failure.
Service degradation does not appear in our formula for expected availability. To include it, we would need to refine the model again, from a binary model (whether the system is functional or not) to a model that considers the level of service. In our opinion, this would produce formulas that nobody would care to decipher.
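To pull the metrics of this section together, one possible way to record them per failure scenario is sketched below; the field names and example values are our own illustration, not a format prescribed by this article.

    # A minimal, illustrative failure impact record collecting the parameters
    # discussed in this section for a single failure scenario.
    from dataclasses import dataclass

    @dataclass
    class FailureImpact:
        scenario: str                   # e.g. "single disk failure in a RAID-1 mirror"
        recovery_time_hours: float      # MTTR: failover time until service resumes
        back_to_nominal_hours: float    # BTN: time until full redundancy is restored
        failover_automated: bool        # is detection and failover automatic?
        online_serviceable: bool        # can the part be replaced without downtime?
        capacity_after_failover: float  # fraction of nominal capacity, 0.0 to 1.0
        data_regression: str            # e.g. "none" or "updates since last backup lost"

    disk_failure = FailureImpact(
        scenario="single disk failure in a RAID-1 mirror",
        recovery_time_hours=0.0,
        back_to_nominal_hours=24.0,
        failover_automated=True,
        online_serviceable=True,
        capacity_after_failover=1.0,
        data_regression="none",
    )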