
Enterprise Quality of Service (QoS): Part I - Internals

This two-article series works to clear the confusion surrounding QoS by explaining what it is, how it is implemented, and how to use it in an enterprise. This month's article details the basics surrounding the "what" and "how" of implementation, as well as the internals of QoS.
Enterprise customers are realizing that, as a result of deploying new real-time and mission-critical applications, the traditional "best effort" IP network service model is unsuitable. The main concern is that badly behaved flows adversely affect other flows that share the same resources, and it is difficult to tune resources so that the requirements of all deployed applications are met.

Quality of Service (QoS) can be thought of as a performance and availability specification for a service. More concretely, QoS refers to the ability of network and computing systems to provide different levels of service to selected applications and their associated network flows. Customers deploying mission-critical and real-time applications have an economic incentive to invest in QoS capabilities so that acceptable response times are guaranteed within certain tolerances.

This article, Part I of a two-part series, explains QoS functional components and mechanisms, and provides the technical background helpful for understanding the trade-offs between alternative QoS solutions.

Next month, Enterprise Quality of Service (QoS) Part II: Enterprise Solution focuses on enterprise networks, detailing what corporations can do to prioritize traffic in an optimal manner so that certain applications receive priority over less important ones.

The Need for QoS

In order to understand the need for QoS, let's look at what has happened to enterprise applications over the past decade. In the late 1980s and early 1990s, client-server was the dominant architecture. The main principle involved a thick client and a local server, where 80% of the traffic would flow between the client and a local server and only 20% of the client traffic would need to traverse the corporate backbone. In the late 1990s, with the rapid adoption of Internet-based applications, the architecture changed to a thin client, with servers located anywhere and everywhere. This had one significant implication: the network became a critical shared resource, where priority traffic was dangerously impacted by nonessential traffic. A common example is the difference between downloading images and processing sales orders. Different applications have different resource needs. This section describes why different applications have different QoS requirements, and why QoS is becoming a critical capability for enterprise data centers and for service providers whose customers drive the demand for QoS.

Classes of Applications

There are five classes of applications, each with different network and computing requirements:

  • Data transfers
  • Video/voice streaming
  • Interactive video/voice
  • Mission-critical
  • Web-based

These classes are important in classifying, prioritizing, and implementing QoS. The following sections detail these five classes.
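To make the classification role concrete, a flow classifier can be sketched in a few lines. This is an illustrative sketch only, not a production design: the port-to-class mapping below is hypothetical, and real QoS deployments classify on many more fields (addresses, DSCP markings, application signatures).

```python
# Hypothetical mapping from well-known destination ports to the
# application classes described in this article (illustrative only).
PORT_TO_CLASS = {
    21: "data_transfer",    # FTP
    25: "data_transfer",    # SMTP email
    554: "streaming",       # RTSP streaming media
    80: "web",              # HTTP
}

def classify(dst_port: int) -> str:
    """Map a flow's destination port to an application class.

    Unrecognized traffic falls back to best-effort handling.
    """
    return PORT_TO_CLASS.get(dst_port, "best_effort")
```

Once a flow is classified, the per-class tolerances described in the following sections determine how its packets should be prioritized.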

Data Transfers

Data transfers include applications such as FTP, email, and database backup. Data transfers tend to have zero tolerance for packet loss, and high tolerance for delay and jitter. Typical acceptable response times range from a few seconds for FTP transfers to hours for email. Bandwidth requirements on the order of Kbyte/sec are acceptable, depending on the file size, which keeps response times to a few seconds. Depending on the characteristics of the application (for example, the size of a file), disk I/O transfer times can add cumulatively to the delays caused by network bottlenecks.

Video / Voice Streaming

Video/voice streaming includes applications such as Apple's QuickTime Streaming or RealNetworks' streaming video and voice products. Video/voice streaming tends to have low tolerance for packet loss, and medium tolerance for delay and jitter. Typical acceptable response times are on the order of a few seconds, because the server can pre-buffer multimedia data on the client to a certain degree. This buffer drains at a constant rate on the client side while simultaneously receiving bursty streaming data from the server with variations in delay. As long as the buffer can absorb all variations (without draining empty), the client sees a constant stream of video and voice. Typical bandwidth requirements are on the order of Mbyte/sec, depending on frame rate, compression/decompression algorithms, and image size. Disk I/O and the central processing unit (CPU) also contribute to delays: large Moving Picture Experts Group (MPEG) files must be read from disk and run through compression/decompression algorithms.
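The buffer-draining behavior described above can be illustrated with a small simulation. This is a sketch with made-up numbers, not a model of any real player: arrivals are bursty but average the playback rate, the buffer drains at a constant rate, and a larger pre-buffer absorbs more jitter before playback stalls.

```python
import random

def playout_underruns(arrivals_kb, drain_kb, prebuffer_kb):
    """Count ticks on which the playout buffer drains empty (playback stalls)."""
    buffer_kb = prebuffer_kb
    underruns = 0
    for arrival in arrivals_kb:
        buffer_kb += arrival            # bursty data arriving from the server
        if buffer_kb >= drain_kb:
            buffer_kb -= drain_kb       # constant-rate playback drain
        else:
            buffer_kb = 0               # buffer ran dry: playback stalls
            underruns += 1
    return underruns

random.seed(1)
# Arrivals average 100 KB/tick (equal to the drain rate) but jitter by +/-50%.
arrivals = [random.randint(50, 150) for _ in range(1000)]
no_prebuffer = playout_underruns(arrivals, drain_kb=100, prebuffer_kb=0)
with_prebuffer = playout_underruns(arrivals, drain_kb=100, prebuffer_kb=500)
```

With perfectly smooth arrivals there are no underruns at all; with jittery arrivals, pre-buffering trades a few seconds of startup delay for fewer stalls, which is exactly why a few seconds is an acceptable response time for this class.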

Interactive Video/Voice

Interactive video/voice tends to have low to medium tolerance for packet loss, and low tolerance for delay and jitter. Typical bandwidth requirements are tremendous, growing quadratically with the number of simultaneous participants in a full-mesh conference. Due to the interactive nature of the data being transferred, tolerances for delay and jitter are very low: as soon as one participant moves or talks, all other participants need to see and hear this change immediately. Response-time requirements range from 250 to 500 ms. These response times are compounded by the bandwidth requirements, with each stream requiring a few Mbit/sec. In a conference of five participants, each participant pumps out a voice and video stream while simultaneously receiving the other participants' streams.
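The five-participant example can be made concrete with a back-of-the-envelope calculation, assuming a full-mesh conference in which every participant sends one stream to every other participant; the 2 Mbit/sec per-stream rate is an illustrative figure in the "few Mbit/sec" range cited above, not a measured value.

```python
def mesh_streams(participants: int) -> int:
    # Full mesh: each participant sends one stream to every other participant.
    return participants * (participants - 1)

def aggregate_mbps(participants: int, stream_mbps: float) -> float:
    # Total bandwidth crossing the network, all streams combined.
    return mesh_streams(participants) * stream_mbps

# Five participants at an assumed 2 Mbit/sec per stream.
five_way_mbps = aggregate_mbps(5, stream_mbps=2.0)
```

Doubling the participant count roughly quadruples the aggregate bandwidth, which is why conferencing load grows so much faster than the number of users.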

Mission-Critical Applications

Mission-critical applications vary in bandwidth requirements, but tend to have zero tolerance for packet loss. Depending on the application, bandwidth requirements are on the order of Kbyte/sec. Response times are on the order of 500 ms to a few seconds. Server resource requirements (for example, CPU, disk, and memory) vary depending on the application.

Web-Based Applications

Web-based applications tend to have low bandwidth requirements (unless large image files are associated with the requested web page) and growing CPU and disk requirements, due to dynamically generated web pages and web transaction-based applications. Response-time requirements range from 500 ms to 1 second.

Different classes of applications have different network and computing requirements. The challenge is to align the network and computing services to the application's service requirements from a performance perspective.


The two most common approaches used to satisfy the service requirements of applications are:

  • Over provisioning
  • Managing and controlling

Over provisioning means allocating resources to meet or exceed peak load requirements. Depending on the deployment, over provisioning can be viable if it is simply a matter of upgrading to faster local area network (LAN) switches and network interface cards (NICs), or adding memory, CPU, or disk. However, over provisioning may not be viable in certain cases, for example when dealing with relatively expensive long-haul wide area network (WAN) links, or with resources that are under-utilized on average and busy only during short peak periods.

Managing and controlling means allocating network and computing resources deliberately. Better management attempts to optimize the utilization of existing resources such as limited bandwidth, CPU cycles, and network switch buffer memory.
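One classic "manage and control" mechanism is the token bucket, which bounds both a flow's average rate and its burst size. The sketch below is a minimal illustration of the idea, not any particular vendor's implementation; the caller supplies timestamps, and rates are in arbitrary units (for example, bytes per second).

```python
class TokenBucket:
    """Token-bucket rate limiter: packets are admitted only while tokens
    (earned at `rate` per second, capped at `burst`) are available."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate        # tokens added per second
        self.burst = burst      # bucket capacity (maximum burst)
        self.tokens = burst     # start with a full bucket
        self.last = 0.0         # timestamp of the last refill

    def allow(self, now: float, packet_size: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size   # admit: spend tokens
            return True
        return False                     # over the limit: drop or queue
```

For example, a bucket with `rate=100` and `burst=200` admits an initial 200-unit burst, then refuses further traffic until tokens accrue again: conforming flows pass untouched, while a misbehaving flow is throttled instead of starving its neighbors.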
