
Performance Tools for Optimizing Linux: Process-Specific CPU

Contents

  1. 4.1 Process Performance Statistics
  2. 4.2 The Tools
  3. 4.3 Chapter Summary

After using the system-wide performance tools to figure out which process is slowing down the system, you must apply the process-specific performance tools to figure out how that process is behaving. Linux provides a rich set of tools for tracking the important statistics of a process's and an application's performance.

After reading this chapter, you should be able to

  • Determine whether an application's runtime is spent in the kernel or in application code.

  • Determine what library and system calls an application is making and how long they are taking.

  • Profile an application to figure out what source lines and functions are taking the longest time to complete.

4.1 Process Performance Statistics

The tools to analyze the performance of applications are varied and have existed in one form or another since the early days of UNIX. It is critical to understand how an application is interacting with the operating system, CPU, and memory system to understand its performance. Most applications are not self-contained and make many calls to the Linux kernel and different libraries. These calls to the Linux kernel (or system calls) may be as simple as "what’s my PID?" or as complex as "read 12 blocks of data from the disk." Different system calls will have different performance implications. Correspondingly, the library calls may be as simple as memory allocation or as complex as graphics window creation. These library calls may also have different performance characteristics.
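The system-call side of this split is easy to see in practice. As a quick illustration (this sketch assumes the strace tool, covered later in this chapter, is installed), even a trivial command issues a number of system calls, which strace's -c option can count and time in aggregate:

```shell
# Summarize the system calls made by a trivial command.
# -c prints an aggregate table (call name, count, time share)
# instead of tracing each call individually.
strace -c true
```

The resulting table typically includes calls such as execve, mmap, and close, each with the share of kernel time it consumed.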

4.1.1 Kernel Time Versus User Time

The most basic split of where an application may spend its time is between kernel and user time. Kernel time is the time spent in the Linux kernel, and user time is the amount of time spent in application or library code. Linux has tools such as time and ps that can indicate (appropriately enough) whether an application is spending its time in application or kernel code. It also has commands such as oprofile and strace that enable you to trace which kernel calls are made on behalf of a process, as well as how long each of those calls took to complete.
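The time command reports this split directly. In the sketch below, dd copies data from /dev/zero to /dev/null, work that happens almost entirely inside the kernel, so the sys figure should dominate the user figure:

```shell
# dd copying from /dev/zero to /dev/null does almost all of its
# work in the kernel, so 'sys' time should dwarf 'user' time.
time dd if=/dev/zero of=/dev/null bs=1M count=512
```

A CPU-bound program with no I/O would show the opposite pattern: nearly all of its time reported as user, with only a sliver of sys.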

4.1.2 Library Time Versus Application Time

Any application with even a minor amount of complexity relies on system libraries to perform complex actions. These libraries may cause performance problems, so it is important to be able to see how much time an application spends in a particular library. Although it might not always be practical to modify the source code of the libraries directly to fix a problem, it may be possible to change the application code to call different or fewer library functions. The ltrace command and oprofile suite provide a way to analyze the performance of libraries when they are used by applications. Tools built into the Linux loader, ld, help you determine whether the use of many libraries slows down an application’s start time.
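As a sketch of the loader-related tools (assuming a glibc-based system; /bin/ls is simply a convenient example binary), ldd lists the shared libraries a binary will pull in at startup, and the loader's LD_DEBUG=statistics mode reports how much of that startup cost went to loading and relocating them:

```shell
# List the shared libraries a binary loads at startup.
ldd /bin/ls

# Ask the glibc dynamic loader to report startup statistics
# (time spent loading objects and processing relocations).
# The report is written to stderr before the program runs.
LD_DEBUG=statistics /bin/ls > /dev/null
```

The more libraries ldd reports, the more work the loader must do before main() ever runs, which is exactly the startup slowdown described above.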

4.1.3 Subdividing Application Time

When the application itself is known to be the bottleneck, Linux provides tools that enable you to profile the application to figure out where its time is spent. Tools such as gprof and oprofile can generate profiles of an application that pin down exactly which source lines are consuming the most time.
