
Dimensions of a Good Test Report

If you're a professional tester, you've undoubtedly encountered the question, "How's your testing coming along?" Michael Kelly can't be the only person who flubbed the answer with a mumble and a shrug. To prevent future incoherence, he developed a heuristic for knowing and instantly reporting the status of his testing, and he shares that framework in this article.

How quickly can you answer the following questions?

  • What’s the status of your testing?
  • What are you doing today?
  • When will you be finished?
  • Why is it taking so long?
  • Have you tested _______ yet?

If you think you can successfully answer all of those questions, you’re done. Read no further. However, if you think you might stumble over a couple of them, then read on. In this article, we’ll take a look at some of the aspects of "good" test reporting and two different examples of test reports.

Good test reporting is difficult. Difficulties include tailoring test reports to your audience, clarifying confusion about what testing is, explaining how testing is actually done, and understanding which testing metrics are meaningful and when they’re meaningful. In addition, people to whom you’re providing your report frequently have assumptions about testing that you may not share. Some great examples of these assumptions:

  • Testing is exhaustive.
  • Testing is continuous.
  • Test results stay valid over time. (My personal favorite.)

Aspects of a Good Test Report

Have you ever found yourself at a loss for words when someone asks for an ad hoc testing status? There you were, diligently testing, and someone asks how things are going, and then, bam—like some out-of-whack Rube Goldberg machine where the mouse flips through the air, misses the cup, and smacks against the wall—your mind freezes up. You mumble something about the technique you’re using and the last bug you found, and the person you’re reporting your progress to just stares at you like you’re speaking some other language.

That has happened to me more than once. But now I have a framework for thinking about my testing that lets me answer questions about my status with confidence and tailor my test report appropriately for the audience. In its simplest form, a test report should address a range of topics: mission, coverage, risk, techniques, environment, status, and obstacles. This holds true for both written reports and the dreaded verbal report.
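To make the framework concrete, here’s a hypothetical one-minute verbal report that touches each topic in order. (The product and every detail in it are invented for illustration.)

"My mission this week is to find important problems in the new checkout flow. So far I’ve covered the main purchase functions and the most common data variations, focusing on the risk of incorrect order totals. I’m mostly using scenario testing on our standard test environment. I’m roughly on schedule, with three issues logged so far, but I’m blocked on payment testing until the sandbox account is fixed."

Notice that even a report this short answers most of the questions at the top of this article.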


Mission

A test report should cover what you’re attempting to accomplish with your testing. Are you trying to find important problems? Assess product quality or risk? Or are you trying to audit a specific aspect of the application (such as security or compliance)? In general, if you have a hard time articulating your status, it might be because you don’t have a clear mission. Having a clear mission makes it much easier to know your status, as you have a precise idea of what you’re supposed to be doing.


Coverage

A test report should include the dimensions of the product you’re covering. Depending on your audience and the necessary completeness of your report, it may also include dimensions that you’re not covering. A great heuristic (and the one I use the most) for remembering the different aspects of coverage is James Bach’s "San Francisco Depot" (SFDPO):

  • Structure
  • Function
  • Data
  • Platforms
  • Operations

Michael Bolton talks about SFDPO in his Better Software article on Elemental Models.


Risk

It’s not enough to say what you’re covering; you should also indicate why you’re covering it. That’s where risk comes in. What kinds of problems could the product have? Which problems matter most? You may find it helpful to make a mental (or physical) list of interesting problems and design tests specifically to reveal them.


Techniques

Once you’ve talked about what you’re testing and why you’re testing it, you might want to indicate how you’re looking for those problems, if you think the audience is interested. The easiest way to do that is to name specific test techniques: scenario testing, stress testing, claims testing, combinatorial testing, fault injection, random testing—the list goes on and on. Wikipedia has an excellent list of techniques with links to more information on each of them.


Environment

The environment where your testing is taking place may include configurations (hardware, software, languages, or settings), who you’re testing with, and what tools or scripting languages you’re using. If you’re using standard or well-understood environments for testing, you might just note exceptions or additions to the standard.


Status

Perhaps the most important aspect of a test report is your status. In this section of the report you answer questions like these:

  • How far along are you?
  • How far did you plan to be?
  • What have you found so far?
  • How much more do you have to do?

Tailor your status report to the needs of your audience: If they’re concerned about potential risks, cover the negative. If they want to measure progress, cover the positive. In the same way, you can determine whether to present a status report that’s summary or detailed.


Obstacles

A follow-up to your test status may include the obstacles to your testing. This can be simple:

"I didn’t do X because of Y."

or detailed:

"If we had X we could do 20% more of Y and 10% more of Z, but that might also mean that we don’t get around to testing W until Friday."

It can be helpful to think of questions like these:

  • Do I have any issues I need help with?
  • Is there anything I can’t work around?
  • Are there any tools that would allow me to test something that I can’t test right now?