Interpreting Results and Other Notes

Before delving into how each tool works and what the returned information means, I have to offer a disclaimer: most performance-monitoring tools, when run in an ad hoc fashion, can yield equally ad hoc and misleading results. It is quite common for a tool's documentation to state that scripting it, that is, collecting samples at regular intervals over a longer period, yields better long-term results.
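As a minimal sketch of what such scripting might look like, the loop below samples the 1-minute load average from /proc/loadavg at a fixed interval and logs each reading with a timestamp. The interval, sample count, and choice of /proc/loadavg as the metric are my own illustrative assumptions, not prescribed by any particular tool; the same pattern applies to wrapping vmstat, iostat, or any other monitor.

```shell
#!/bin/sh
# Hypothetical sampling script: record the 1-minute load average
# every INTERVAL seconds, SAMPLES times. Many samples taken over
# time smooth out the short bursts that a single interactive run
# of a monitoring tool might misrepresent as sustained load.
INTERVAL=2
SAMPLES=5
i=0
while [ "$i" -lt "$SAMPLES" ]; do
    # First field of /proc/loadavg is the 1-minute load average (Linux).
    load=$(cut -d' ' -f1 /proc/loadavg)
    printf '%s %s\n' "$(date +%s)" "$load"
    i=$((i + 1))
    sleep "$INTERVAL"
done
```

Redirect the output to a file from cron or a long-running shell session, and you can later average the readings or graph them to see whether a slowdown was a momentary spike or a genuine trend.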

To better understand this, consider the difference between real performance and perceived performance. Even a system administrator can be tricked into "freaking out" over a momentary loss of performance. A prime example is a quick compile job: if you happen to run a monitoring tool at the same time a programmer kicks off a compile, it might appear that the system is being taxed when, in fact, it is not under a sustained load. You probably already realize that systems occasionally face short burst loads that do no real harm to overall performance. But do all users or staff members realize this? Most likely they do not; remember, a little user education never hurts.
