
Introduction to Technical Metrics

Technical metrics are a core component of SLAs; they quantify and assess the key technical attributes of delivered services.

Examples of technical metrics are shown in Table 2-1. They are separated into the two basic groups: high-level metrics, which deal with attributes that are highly relevant to end users and are easily understood by them, and low-level metrics, which deal with attributes of the underlying technologies. Note that you should be very specific when defining these terms in an agreement. Although many of these terms are in common use, their definitions vary.

Table 2-1 Examples of Technical Metrics

High-Level Technical Metrics

  Workload: Applied workload in terms understandable by the end user (such as end-user transactions/second)
  Availability: Percentage of scheduled uptime that the system is perceived as available and functioning by the end user
  Transaction Failure Rate: Percentage of initiated end-user transactions that fail to complete
  Transaction Response Time: Measure of response-time characteristics of a user transaction
  File Transfer Time: Measure of total transfer-time characteristics of a file transfer
  Stream Quality: Measure of the user-perceived quality of a multimedia stream

Low-Level Technical Metrics

  Workload: Applied workload in terms relevant to underlying technologies (such as database transactions/second)
  Availability: Percentage of scheduled uptime that the subsystem is available and functioning
  Packet Loss: Measure of one-way packet loss characteristics between specified points
  Latency: Measure of transit time characteristics between specified points
  Jitter: Measure of the transit time variability characteristics between specified points
  Server Response Time: Measure of response-time characteristics of particular server subsystems

Workload is an important characteristic of both high- and low-level metrics. It's not a measure of delivered quality; instead, it's a critical measure of the load applied to the system. For example, consider the workload of serving web pages. A text-only page might comprise only 10 KB, whereas a graphics-rich page could comprise several megabytes. If the requirement is to deliver a page to the end user within six seconds, massively different bandwidth and capacity are necessary. Indeed, content may need to be altered for low-speed connections to meet the six-second download time.
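The bandwidth implication of the six-second target can be sketched with a small calculation. This is an illustrative sketch only; the function name and the example page sizes (10 KB text, 2 MB graphics) are assumptions chosen to match the discussion above.

```python
def required_bandwidth_kbps(page_size_bytes: int, target_seconds: float) -> float:
    """Minimum sustained bandwidth (kilobits/second) to deliver a page in time."""
    bits = page_size_bytes * 8
    return bits / target_seconds / 1000.0

# Two page types, both against the six-second delivery target:
text_page = required_bandwidth_kbps(10_000, 6.0)         # ~10 KB text-only page
graphics_page = required_bandwidth_kbps(2_000_000, 6.0)  # ~2 MB graphics-rich page

print(f"text page:     {text_page:,.1f} kbps")      # roughly 13 kbps
print(f"graphics page: {graphics_page:,.1f} kbps")  # roughly 2,700 kbps
```

The two orders of magnitude between the results illustrate why workload specification must describe content, not just page counts.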

NOTE

In many situations, certain technical metrics aren't specified in the SLA. Instead, the supplier is asked to use best effort, which represents the classic Internet delivery strategy of "get it there somehow without concern for service quality." Today, best effort represents the commodity level for services. Best-effort services receive no special treatment; the only requirement is that sufficient resources exist to keep them from being starved out entirely, which would cause connections to time out after long periods of inactivity.

Discussions of all of the examples in Table 2-1 follow, to illustrate the basic concepts of technical metrics. Additional descriptions of these metrics, and other technical metrics, appear in Chapters 4 and 8–10.

High-Level Technical Metrics

These metrics deal with workload and performance as seen and understood by the end user.

Workload

The workload high-level technical metric is the measure of applied load in end-user terms. It's unreasonable to expect a service provider to agree to service levels for an unspecified amount of workload; it's equally unreasonable to expect an end user to willingly substitute obscurely related low-level workload metrics for understandable high-level ones. SLAs should therefore begin by specifying the high-level workload metrics; service providers can then work with the customer's technical staff to derive low-level workload metrics from them.

For transaction systems, the workload metric is usually specified in terms of the end-user transaction mix and volumes, which typically vary according to time of day and other business cycles. For existing systems, these statistics can be obtained from logs; for new systems or situations (such as a proposed major advertising campaign designed to drive prospective customers to a web site), the organization's marketing group or their consultants should work to produce the most accurate, specific estimates possible. These workload estimates for new systems should be used for load testing as well as for SLAs.

Transaction workload metrics must include end-user tolerance for transaction response time delays. If response time delays are too long, external customers will abandon the transaction. In legacy systems where external customers did not interact directly with the server systems, abandonment was not a factor in workload testing. Call-center operators handled any delays by talking to the customers, shielding them from the problem, if necessary. On the Web, customers see the delays without any shielding, and they may decide at any point to abandon the transaction—with immediate impact on the server system's workload.

Another effect of the direct connection between customers and web-serving systems is that there's no buffer between those customers and the servers. In a call center, the workload is buffered by external queues. Incoming calls go through an automatic call distribution system; callers are placed on hold until an operator is available. In an order-entry center, the workload is buffered by the stack of documents on the entry clerk's desk. In contrast, the web workload has no external buffer; massive spikes in workload hit the servers instantly. These spikes in workload are called flash load, and they must be specified in the workload metric and considered during load testing. Load specification for the Web should therefore be in terms of arrival rate, not concurrent users, as was the case for call centers and order-entry centers.

File-serving, web-page, and streaming-media workload metrics are similar to transaction metrics, but simpler. They're usually specified in terms of the size and number of files that must be transferred in a given time interval. (For web pages, the types of the files are usually specified. Dynamically-generated files are clearly more resource-intensive than stored static files.) The serving system must have the bandwidth to serve the files, and it must also be able to handle the anticipated number of concurrent connections. There's a relationship between these two variables; given a certain arrival rate, higher end-to-end bandwidth results in fewer concurrent users.
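The relationship between arrival rate, bandwidth, and concurrent connections can be made concrete with Little's Law (average concurrency equals arrival rate times average holding time). The numbers below are hypothetical; the function name is illustrative.

```python
def concurrent_connections(arrival_rate_per_s: float,
                           file_size_bytes: int,
                           bandwidth_bps: float) -> float:
    """Little's Law: average concurrency = arrival rate x average transfer time."""
    transfer_time_s = file_size_bytes * 8 / bandwidth_bps
    return arrival_rate_per_s * transfer_time_s

# 50 requests/second for a 1 MB file, at two per-user bandwidths:
slow = concurrent_connections(50, 1_000_000, 1_000_000)  # 1 Mbps per transfer
fast = concurrent_connections(50, 1_000_000, 8_000_000)  # 8 Mbps per transfer
print(slow, fast)  # 400.0 vs 50.0 simultaneous transfers
```

At the same arrival rate, eight times the end-to-end bandwidth cuts the concurrent-connection requirement by the same factor, which is why arrival rate is the more fundamental specification.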

Availability

Availability is the percentage of time that the system is perceived as available and functioning by the end user. It is a function of both the Mean Time Between Failures (MTBF) and the Mean Time To Repair (MTTR). Scheduled downtime might, in some organizations, be excluded from these calculations. In those organizations, a system can be declared 100 percent available even though it's down for an hour every night for system maintenance.

Availability is a binary measurement—the service is either available or it isn't. For the end user, and therefore for the high-level availability metric, the fact that particular underlying components of a service are unavailable is not a concern if that unavailability is concealed through redundant systems design.

Availability can be improved by increasing the MTBF or by decreasing the time spent on each failure, which is measured by the MTTR. Chapter 3, "Service Management Architecture," introduces the concept of triage, which decreases MTTR through quick assignment of problems to the appropriate specialist organization.
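The MTBF/MTTR relationship above follows the standard steady-state availability formula, MTBF / (MTBF + MTTR). The figures below are hypothetical, chosen only to show how halving MTTR (as triage aims to do) improves availability without touching MTBF.

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability as a fraction of scheduled uptime."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Example: 500 hours between failures, 2 hours to repair each failure.
base = availability(500, 2)
# Faster triage halves the repair time; MTBF is unchanged:
triaged = availability(500, 1)
print(f"{base:.4%} -> {triaged:.4%}")
```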

Transaction Failure Rate

A transaction fails if, having successfully started, it does not successfully complete. (Failure to start is the result of an availability problem.) As is true for availability, systems design and redundancy may conceal some low-level failures from the end user and therefore exclude the failures from the high-level transaction failure rate metric.

Transaction Response Time

This metric represents the acceptable delay for completing a transaction, measured at the level of a business process.

It's important to measure both the total time to complete a transaction and the elapsed time for each page of the transaction. The end user's perception of transaction time, which will be used to compare your system with your competitors', is based on total transaction time, regardless of the number of pages involved; the slowest individual page, however, is what drives end-user abandonment of a web transaction.
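Both views of the same measurement data can be captured trivially. The per-page timings below are hypothetical values for an imagined five-page transaction.

```python
# Hypothetical page timings (seconds) for a five-page web transaction:
page_times = [0.8, 1.2, 4.9, 1.0, 0.9]

total_time = sum(page_times)    # the figure end users compare against competitors
slowest_page = max(page_times)  # the figure that drives mid-transaction abandonment

print(f"total: {total_time:.1f} s, slowest page: {slowest_page:.1f} s")
```

An SLA that tracked only the 8.8-second total would miss the 4.9-second page where abandonment is most likely.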

File Transfer Time

The file transfer time metric is closely associated with specified workload and is a measure of success. The file transfer workload metric describes the work that must be accomplished in a certain period; the file transfer time metric shows whether that workload was successfully handled. Lack of end-to-end bandwidth, an insufficient number of concurrent connections, or persistent transmission errors (requiring retransmission) will influence this measure.
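A rough sketch of how the factors above interact: idealized transfer time is payload divided by end-to-end bandwidth, and persistent transmission errors inflate the payload through retransmission. The simple loss-inflation model and all numbers here are assumptions for illustration, not a formal throughput model.

```python
def transfer_time_s(file_size_bytes: int,
                    bandwidth_bps: float,
                    loss_fraction: float = 0.0) -> float:
    """Idealized transfer time, inflating the payload to cover retransmitted data."""
    effective_bytes = file_size_bytes / (1.0 - loss_fraction)
    return effective_bytes * 8 / bandwidth_bps

clean = transfer_time_s(10_000_000, 2_000_000)        # 10 MB over 2 Mbps, no loss
lossy = transfer_time_s(10_000_000, 2_000_000, 0.05)  # same transfer, 5% retransmitted
print(f"{clean:.1f} s vs {lossy:.1f} s")
```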

Stream Quality

The quality of multimedia streams is difficult to measure. Although underlying low-level technical metrics, such as frame loss, can be obtained, their relationship to the quality as perceived by an end user is very complex.

Streaming is a real-time service in which the content continues flowing even with variations in the underlying data transmission rates and despite some underlying errors. A content consumer may see a small blemish on a graphic because a packet is lost in transit—equivalent to static on your car radio. There is no rewinding and playing it again, as there might be with interactive services. Thus, packet loss is handled by just continuing with the streaming rather than retransmitting lost packets.

Occasional packet loss can still be tolerated and sometimes may not even be noticed. If packet loss increases, quality will begin to degrade until it falls below a threshold and becomes unacceptable. Years of development have been focused on concealing these low-level errors from the multimedia consumer, and the major existing technologies from Microsoft, Real Networks, Apple, and others have different sensitivities to these errors.

Nevertheless, quality must be measured. The telephone companies years ago established the Mean Opinion Score (MOS), a measure of the quality of telephone voice transmission. There are also international standards for evaluation of audio and video quality as perceived by human end users; examples are the International Telecommunication Union's ITU-T P.800-series and P.900-series standards and the American National Standards Institute's T1.518 and T1.801 standards. Simpler methods are also in use, such as measuring the percentage of successful connection attempts to the streaming server, the effective bandwidth delivered over that connection, and the number of rebuffers during transmission.
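The simpler methods mentioned above (connection success rate, rebuffer counts) reduce to straightforward arithmetic over probe results. The record layout and values below are hypothetical, not drawn from any particular streaming product or monitoring tool.

```python
# Hypothetical probe results for one monitoring interval:
attempts = [
    {"connected": True,  "rebuffers": 0},
    {"connected": True,  "rebuffers": 3},
    {"connected": False, "rebuffers": 0},  # failed connection attempt
    {"connected": True,  "rebuffers": 1},
]

connected = [a for a in attempts if a["connected"]]
success_rate = len(connected) / len(attempts)
avg_rebuffers = sum(a["rebuffers"] for a in connected) / len(connected)
print(f"connect success: {success_rate:.0%}, rebuffers/session: {avg_rebuffers:.2f}")
```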

Low-Level Technical Metrics

These metrics deal with workload and performance of the underlying technical subsystems, such as the transport infrastructure. Low-level technical metrics can be selected and defined by first understanding the high-level technical metrics and their implications for the performance requirements placed on underlying subsystems. For example, a clear understanding of required transaction response time and the associated transaction characteristics (the number of transits across the transport network, the size of each transit, and so on) can help set the objective for the low-level technical metric that measures network transit time (latency).

Workload and Availability

These low-level technical metrics are similar to those for the high-level discussion, but they're focused on performance characteristics of the underlying systems rather than on performance characteristics that are directly visible to end users. Their correlation with the high-level metrics depends on the particular system design and the degree of redundancy and substitution within that design.

Throughput, for example, is a low-level technical metric that measures the capacity of a particular service flow. Services with rich content or critical real-time requirements might need sufficient bandwidth to maintain acceptable service quality. Certain transactions, such as downloading a file or accessing a new web page, might also require a certain bandwidth for transferring rich content, such as complex graphics, within the specified transaction delay time.

Packet Loss

Packet loss has different effects on the end-user experience, depending on the service using the transport. The choice of a packet loss metric for a particular application must be carefully considered. For example, packet loss in file transfer forces retransmission unless the high-level transport contains embedded error correction codes. In contrast, moderate packet loss in streaming media may have no user-perceptible effect at all—unless bad luck results in the loss of a key frame.

The burst length must be included in packet loss metrics. Usually a uniform distribution of dropped packets over longer time intervals is implicitly assumed. For example, out of every 100 packets there could be two lost without violating an SLA calling for two percent packet loss. There may be a different perspective if you examine behavior over longer intervals, such as 1,000 packets. Up to 20 packets in a row could be lost without violating the SLA. However, losing 20 consecutive packets—creating a significant gap in data received—might drive quality levels to unacceptable values.
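The worked example above (2 lost out of every 100, versus 20 lost in a row out of 1,000) can be checked directly: both patterns produce the same two percent figure and so both satisfy a naive packet loss SLA, even though their perceived quality differs sharply.

```python
def loss_rate(received_flags: list) -> float:
    """Fraction of packets lost in a measurement window (False = lost)."""
    return received_flags.count(False) / len(received_flags)

# Uniform loss: 1 packet lost in every 50, evenly spread over 1,000 packets.
uniform = ([True] * 49 + [False]) * 20
# Burst loss: the same 20 packets lost, but consecutively.
burst = [True] * 980 + [False] * 20

print(loss_rate(uniform), loss_rate(burst))  # both 0.02: identical SLA result
```

This is why a packet loss metric should bound burst length as well as the long-run percentage.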

Latency

Latency is the time needed for transit across the network; it's critical for real-time services. Excessive latency quickly degrades the quality of web sites and of interactive sound and video.

Routes in the Internet are usually asymmetric, with flows often taking different paths coming and going between any pair of locations. Thus, the delays in each direction are usually different. Fortunately, most Internet applications are primarily sensitive to round-trip delays, which are much simpler to measure than one-way delays. File transfer, web sites, and transactions all require a flow of acknowledgments in the opposite direction to data flow. If acknowledgments are delayed, transmission temporarily ceases. The round-trip latency therefore controls the effective bandwidth of the transmission.

Round-trip latency is much simpler to measure than one-way latency because clock synchronization between separated locations is not necessary. That synchronization can be quite tricky if it is accomplished across the same network whose one-way delay is being measured; fluctuations in the very metric being measured can then easily affect the stability of the measurement apparatus itself. An external reference, such as Global Positioning System (GPS) satellite timing, is often used in such situations.
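The way round-trip latency controls effective bandwidth can be sketched with the standard window-limited throughput bound for acknowledgment-based protocols such as TCP: at most one window of data can be in flight per round trip. The 64 KB window and the RTT values are illustrative assumptions.

```python
def max_throughput_bps(window_bytes: float, rtt_seconds: float) -> float:
    """Window-limited throughput: at most one window in flight per round trip."""
    return window_bytes * 8 / rtt_seconds

# A 64 KB window over two different round-trip latencies:
lan = max_throughput_bps(65_536, 0.001)  # 1 ms RTT: ~524 Mbps ceiling
wan = max_throughput_bps(65_536, 0.100)  # 100 ms RTT: ~5.2 Mbps ceiling
print(f"{lan / 1e6:.1f} Mbps vs {wan / 1e6:.1f} Mbps")
```

The hundredfold increase in round-trip latency imposes a hundredfold drop in the achievable bandwidth, regardless of the link capacity underneath.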

Jitter

Jitter is the deviation in the arrival rate of data from ideal, evenly-spaced arrival; see Figure 2-3. Some packets may be bunched more closely together (in terms of inter-packet delays) or spread farther apart after crossing the network infrastructure. Jitter is caused by the internal operation of network equipment, and it's unavoidable: it is created wherever there are queues and buffering in a system. Extreme jitter is also created when packets are rerouted because of network congestion or failure.

Figure 2-3 Jitter

Interactive teleconferencing is an example of a service that is extremely sensitive to jitter; too much jitter can make the service completely useless. Therefore, a reduction in jitter, approaching zero, represents an increase in quality.

Buffering in the receiving device can be used to smooth out jitter; the jitter buffer is familiar to those of us who have a CD player in the car. Small bumps are smoothed out and the sound quality remains acceptable, but hitting a pothole usually causes more disturbance than the buffer can overcome. The dejitter buffer allows for latency that is typically one or two times that of the expected jitter; it's not a cure for all situations. The time spent in the dejitter buffers is an important contributor to total system latency.
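The jitter measurement and the buffer-sizing rule of thumb above can be sketched as follows. The arrival timestamps are hypothetical values for packets sent at a uniform 20-millisecond spacing, and the two-times-jitter buffer follows the rule of thumb in the text.

```python
# Hypothetical arrival timestamps (ms) for packets sent every 20 ms:
arrivals_ms = [0, 21, 39, 62, 80, 99]

gaps = [b - a for a, b in zip(arrivals_ms, arrivals_ms[1:])]
jitter_ms = max(abs(g - 20) for g in gaps)  # worst deviation from ideal spacing

# Dejitter buffer sized at twice the observed peak jitter; this delay
# is added to total system latency, as noted above.
dejitter_buffer_ms = 2 * jitter_ms
print(f"peak jitter: {jitter_ms} ms, buffer: {dejitter_buffer_ms} ms")
```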

Server Response Time

Similar to the high-level technical metric transaction response time, this measures the individual response time characteristics of underlying server systems. A common example is the response time of the database back-end systems to specific query types. Although not directly seen by end users, this is an important part of overall system performance.
