In this chapter we discuss a set of in-process metrics for the testing phases of the software development process. We provide real-life examples based on implementation experiences at the IBM Rochester software development laboratory. We also revisit the effort/outcome model as a framework for establishing and using in-process metrics for quality management.
There are certainly many more in-process metrics for software testing than are covered here; it is not our intent to provide comprehensive coverage. Furthermore, not every metric we discuss is universally applicable. We do recommend, however, that the few metrics that are basic to software testing (e.g., the test progress curve, defect arrival density, and critical problems before product ship) be integral parts of all software testing.
It cannot be overstated that it is the effectiveness of the metrics that matters, not the number of metrics used. There is a strong temptation for quality practitioners to establish more and more metrics. However, ill-founded metrics are not merely useless; they are counterproductive and add cost to the project. Therefore, we must take a rigorous approach to metrics. Each metric should be examined against the basic principles of measurement theory and should demonstrate empirical value. For example, the concept, the operational definition, the measurement scale, and validity and reliability issues should be well thought out. At a macro level, an overall framework should be used to avoid an ad hoc approach. In this chapter we discuss the effort/outcome framework, which is particularly relevant for in-process metrics. We also recommend the Goal/Question/Metric (GQM) approach in general for any metrics (Basili, 1989, 1995).
Recommendations for Small Organizations
For small organizations that don't have a metrics program in place and that intend to practice a minimum number of metrics, we recommend these metrics as basic to software testing: test progress S curve, defect arrival density, and critical problems or showstoppers.
For any project or organization, we strongly recommend the effort/outcome model for interpreting software testing metrics and managing in-process quality. Metrics related to the effort side of the equation are especially important in driving improvement of software testing.
Finally, the practice of conducting an evaluation on whether the product is good enough to ship is highly recommended. The metrics and data available to support the evaluation may vary, and so may the quality criteria and the business strategy related to the product. Nonetheless, having such an evaluation based on both quantitative metrics and qualitative assessments is what good quality management is about.
At the same time, to enhance success, one should take a dynamic and flexible approach: tailor the metrics to the needs of a specific team, product, and organization. There must be buy-in by the team (development and test) for the metrics to be effective. Metrics are a means to an end (the success of the project), not an end in themselves. A project team that has intellectual control and a thorough understanding of the metrics and data it uses will be able to make the right decisions. As such, the use of specific metrics cannot be mandated from the top down.
While good metrics can serve as a useful tool for software development and project management, they do not automatically lead to improvement in testing and in quality. They do foster data-based and analysis-driven decision making and provide objective criteria for actions. Proper use and continued refinement by those involved (e.g., the project team, the test community, the development teams) are therefore crucial.