
Using Bugs To Bring Developers and Testers Closer Together

Date: Nov 18, 2004


Experienced software tester Michael Kelly offers suggestions for how testers can gather and share information with developers, to improve collaboration and build better software with less struggle.

While I'm not sure that the common saying "Testers and developers think differently" is accurate, I think it's safe to say that testers and developers certainly have different motivators for their work—and, more importantly, different pressures and perspectives that guide the directions their work takes. Testers are motivated by management to find and report problems within the system as quickly as possible, while developers are motivated to complete code as quickly and accurately as possible in order to move on to the next problem.

Somewhere in this jumble of speed versus quality, someone should be motivating testers to find better, more meaningful bugs, but I rarely see that motivation happening. And someone should be motivating developers to improve the quality of the code they create or modify, but that prompting often takes a backseat to "More code, faster."

Because of these different motivations of testers and developers, and the time pressures that everyone on the team is under, communication suffers. It's easy for a tester to think of a simple test that the developer should have run to find a particular problem before releasing the code—but the tester doesn't appreciate the pressures imposed on the developer. And it's easy for the developer to be dismayed by an overwhelming number of low-value or meaningless defects being reported—but the developer doesn't appreciate the expectations and metrics against which management measures the tester.

As more defects are submitted, developers have less time to work on them, and communication may break down. Members of the project team commonly start to rely on short, assumptive descriptions and comments entered into a defect-tracking system as the fastest and most effective means of communication. Little time is spent face to face, where the most effective communication can take place and dialogues can offer insights to both the tester and the developer.

In this article, I'll discuss how recent projects used the following simple techniques to fix problems faster and improve communication between testers and developers:

- Share test scripts between teams
- Distribute the ability to execute smoke tests
- Perform runtime analysis together
- Use log files to isolate problems
- Use defect-tracking systems effectively
- Speak face to face

Not all of these techniques added "face time" to our project communications, but all of them helped everyone involved to gain a better understanding of the pressures and constraints under which both developers and testers work.

Share Test Scripts Between Teams

Problems encountered by test scripts can take a long time, or be very difficult, to reproduce. In past projects, I've submitted defects that took hours to reproduce because of the sequence of events in the scripts being executed. More often than not, this setup requirement made it impossible for the developer to reproduce the issue based on the information in the log. If you make all of your test scripts available to the entire team, however, you give developers the ability to look at the script code, look at the script logs, re-run the scripts and watch them execute, re-run them in local environments with debug information written to logs, or re-run them in conjunction with other tools.

In addition to sharing the test scripts, provide the development team with a remote automated test-script execution box. Most enterprise tools allow for distributed test execution. If you provide one of your test lab machines to run your scripts, developers can execute the tests they need while simultaneously using their own computers to keep developing. This technique allows developers to work on your problem; without the testing box, the developers might not be able to run a full test due to time and equipment constraints. To make this strategy most effective, reference your scripts in the defects you submit. The development team can then run the scripts without checking with you first, removing a manual step.

Sharing test scripts with developers also enables everyone on the team to use the same tools used to develop the scripts. When team members use the same tools, a likely side-effect is developers taking the time to offer improvements to the scripts. After running one of my scripts, a developer once told me that he already had a unit-test script that did something similar "behind the GUI." Together we reviewed both scripts, and ultimately we transferred the data from my regression script to work with his unit-test script. His unit-test script executed in a fraction of the time of my regression script, and the results were easier to read. The more feedback from developers you get on your scripts, the more powerful they'll become.

In addition, the more you collaborate with developers, the more likely they'll be to fix your problem; after all, it may have been their improvement to the script that found the bug. By distributing the ability to execute any of your scripts, you increase communication between developers and testers.

Distribute the Ability To Execute Smoke Tests

Every time a developer, integrator, or build-master (depending on your choice of terminology) creates a build, there's potential for something to go wrong: Something is left out, a file doesn't end up where it was supposed to be, the wrong version is compiled, something goes wrong when the code is moved into the target environment, and so on. On a team that does daily builds or even multiple builds a day, these issues crop up from time to time. The problem with a bad build is that you won't necessarily know it's bad until you get in there and do some testing.

The solution is to create a series of tests that exercise the entire system from end to end. These tests, taken as a whole, are commonly called smoke tests. I believe the term comes from a rudimentary form of testing applied to electronic equipment, in which power is applied and the tester checks for sparks, smoke, or other dramatic signs of fundamental failure. A smoke test doesn't have to be exhaustive, but it should be capable of exposing major problems. If the smoke test fails, you can assume that the build is not stable enough to be tested more thoroughly.
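
As a minimal sketch of what such a test might look like, here's a short script (Python, for illustration) that exercises a few critical pages of a hypothetical web application end to end. The URLs and expected strings are placeholders, not part of any real system:

```python
#!/usr/bin/env python3
"""Minimal smoke-test sketch: hit a few critical pages end to end.

The URLs and expected strings are hypothetical placeholders; substitute
whatever exercises your own system from end to end.
"""
import sys
import urllib.request

# (url, string that must appear in the response body)
CHECKS = [
    ("http://app.example.com/login", "Sign In"),
    ("http://app.example.com/search?q=smoke", "Results"),
    ("http://app.example.com/reports/daily", "Report"),
]

def run_smoke_test() -> bool:
    ok = True
    for url, expected in CHECKS:
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                body = resp.read().decode("utf-8", errors="replace")
                if resp.status != 200 or expected not in body:
                    print(f"FAIL {url}: status={resp.status}")
                    ok = False
                else:
                    print(f"PASS {url}")
        except Exception as exc:  # connection refused, timeout, etc.
            print(f"FAIL {url}: {exc}")
            ok = False
    return ok

if __name__ == "__main__":
    # A nonzero exit code signals a bad build to whatever ran us.
    sys.exit(0 if run_smoke_test() else 1)
```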

In a web environment, smoke tests can even check the status of services. Web applications often use third-party services; checking all of them quickly can be difficult. If you build into your smoke tests a series of checks for those services, you can kill two birds with one stone. I often include a "service" smoke test with the "regular" smoke testing; the service smoke test can be executed on its own for a quick web-service status update. On some projects, I've even automated such service checks to run hourly and send email when services go down.
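
Here's a sketch of such a standalone service check, run hourly with an email alert. The service URLs and mail settings are hypothetical placeholders to replace with your own:

```python
#!/usr/bin/env python3
"""Hourly web-service status check sketch; emails the team on failure.

Service URLs and SMTP settings are hypothetical placeholders.
"""
import smtplib
import time
import urllib.request
from email.message import EmailMessage

SERVICES = {
    "payment gateway": "http://services.example.com/payment/ping",
    "address lookup": "http://services.example.com/address/ping",
}

def check_services() -> list[str]:
    """Return the names of services that failed to respond."""
    down = []
    for name, url in SERVICES.items():
        try:
            with urllib.request.urlopen(url, timeout=15) as resp:
                if resp.status != 200:
                    down.append(name)
        except Exception:
            down.append(name)
    return down

def send_alert(down: list[str]) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"Service smoke test: {len(down)} service(s) down"
    msg["From"] = "smoketest@example.com"
    msg["To"] = "team@example.com"
    msg.set_content("Down: " + ", ".join(down))
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    while True:
        down = check_services()
        if down:
            send_alert(down)
        time.sleep(3600)  # check again in an hour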

If you don't have a smoke test, create one. If it's not automated, automate it. Automated smoke tests are particularly powerful: they can run as soon as the build finishes, anyone on the team can execute them, and nobody burns manual testing time discovering that a build is bad.

The sooner the smoke test runs after the build, the faster feedback gets from testing to development. The easiest way to ensure that your smoke test is executed is to include it in the build process. If your build process is automated, depending on the available tools, the smoke test can often be added with something as simple as a batch file. If your build process is manual, add a step at the end so that the person who performs the build also executes the smoke test to verify the build results.
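
If the build is driven from the command line, the smoke test can be chained onto the end of it. Here's a sketch of such a wrapper; the ant build command and the smoke-test script name are assumptions standing in for whatever your project actually uses:

```python
#!/usr/bin/env python3
"""Build-then-smoke-test wrapper sketch.

The build command and smoke-test script name are hypothetical; the
point is that a failed smoke test fails the build step itself.
"""
import subprocess
import sys

def run(cmd: list[str]) -> int:
    print("running:", " ".join(cmd))
    return subprocess.call(cmd)

if __name__ == "__main__":
    if run(["ant", "build"]) != 0:  # hypothetical build command
        sys.exit("build failed; skipping smoke test")
    if run(["python", "smoke_test.py"]) != 0:
        sys.exit("smoke test failed; build is not fit for testing")
    print("build and smoke test passed")
```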

Make the smoke test available to both testers and developers through a central interface such as a project web site or a test-management tool. If everyone has the ability to execute the smoke test and the results are simple to interpret, testers won't be pressured to provide this service for everyone else on the team. Getting team members other than testers to run the smoke test may take a little patience and prompting the first couple of times, but after that most people will prefer not to rely on someone else for such a simple task.

NOTE

Of course, this general availability means that the results must be easy to interpret.

By this step, you've begun giving developers a window into the mind of a tester.

Distributing the ability to execute smoke tests also increases communication between developers and testers. Not only does this practice get everyone using the same tools; it can get developers and testers collaborating in script development and maintenance. Seeing what the test team developed, developers may offer help in integrating selected unit tests that check essential back-end functionality, and help in creating interfaces to check the status of web services.

Perform Runtime Analysis Together

Runtime analysis is a thankless job. Whether you're a developer or just a really geeky tester (that's me), odds are that no one else on the project team appreciates or even understands your efforts in performing runtime analysis. Having used several of the runtime analysis tools on the market, I feel that anyone doing this type of testing should be sainted. It's very difficult to find and interpret the meaningful data that these tools generate. Even if you can find the data and figure out what it's telling you, most often no one knows what actually needs to be fixed.

As Goran Begic states in his article "An introduction to runtime analysis with Rational PurifyPlus," runtime analysis provides information on aspects of an application's execution such as memory usage and memory errors, application performance and bottlenecks, and code coverage.

Let's consider an example. One of my projects had a problem with pages taking a long time to load (more than 60 seconds). We ran numerous performance tests and couldn't find the problem. The developers looked briefly at the problem, but they had deadlines, and after a couple of days they decided that the problem could be solved later... when they had more time to deal with it. Our traditional performance tests couldn't isolate the problem at a sufficient level of detail. What to do? Using a simple code-coverage tool in conjunction with one of our simple regression scripts, the testing team was able to isolate the problem to a specific method: a call that was being executed 4,000,000 times when the page loaded. Armed with that information, the developers fixed the problem the next day, decreasing page-load time to three seconds. The team now executes runtime analysis on a regular basis.
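
Most profilers and coverage tools report per-function call counts, which is exactly the statistic that exposed those 4,000,000 calls. As a self-contained illustration (not the tool or code from that project), here's how Python's cProfile surfaces a hot method; render_page() and lookup() are hypothetical stand-ins for application code driven by a regression script:

```python
#!/usr/bin/env python3
"""Sketch: using a profiler's call counts to isolate a hot method.

render_page() and lookup() are hypothetical stand-ins for real
application code exercised by a regression script.
"""
import cProfile
import pstats

def lookup(i: int) -> int:
    return i % 97

def render_page() -> int:
    # A loop like this is how one call ends up executing millions of times.
    return sum(lookup(i) for i in range(1_000_000))

if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.enable()
    render_page()
    profiler.disable()
    # Sort by call count: the offending method jumps to the top.
    pstats.Stats(profiler).sort_stats("ncalls").print_stats(5)
```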

I've never worked with a developer who was actually tasked with performing runtime analysis. The developer was always doing it to solve a problem discovered by some other method of testing, or in response to something I found while performing my second-rate runtime analysis. I've found that the most effective way to make sure that runtime analysis gets done is to start performing the analysis yourself. As a tester, you don't need to become a runtime analysis expert; all you need to do is learn the basics about some runtime-analysis tools; learn a little about the technologies you're testing (common problems, bottlenecks, and performance problems); and find some time to actually do some testing.

Of all the techniques described in this article, runtime analysis seems to be the most effective in increasing developer/tester communication (your mileage may vary). In my experience as a tester, once you find something (or even if you only think you may have found something), you should bring over a developer and show him or her what you have. Suddenly, to the developer, you're no longer a technology-blind tester who doesn't know anything about development, and the developer will likely be interested in helping you to understand what you're seeing. Once a developer knows that a tester has the desire and the aptitude to learn, the developer typically is willing to spend as much time as available helping the tester to understand the applicable technologies. From the developer's point of view, explaining the technologies once, early in the project, saves him or her from having to answer many small questions later on, when under greater time pressures. At the very least, the tester will have a basic understanding from which to ask smarter and more meaningful questions.

At the same time that the developer is helping the tester, the developer may in turn look to the tester for help in learning the testing tools; this is an opportunity for the tester to share information on the possible risks and long-term effects of the problems found, if they're not fixed immediately. Together, tester and developer uncover and refine performance requirements and simultaneously learn new skills.

By working with developers on runtime analysis, testers can learn more about the technologies, code, and technical issues that developers face, and developers can learn more about what risks concern the testers. This technique leverages tools that both teams can share—and some of the tools specific to each team—to get everyone working together.

Use Log Files To Isolate Problems

Another effective technique—probably the simplest of all—is to leverage log files as a means of capturing bugs and debugging. Often, when a problem happens behind the scenes of an application, you don't see the problem on the user interface. For example, most Java exceptions don't appear onscreen. To see those errors, you need to view the source code, the log files, or the Java console. If developers give testers access to the execution log files for the application, the testers can use scripts to parse through the log files, looking for abnormalities and exceptions. For example, as a regular part of script execution on web applications, I parse the source code and logs while the script is running. Once developers know that the testing scripts are looking for this sort of information, the developers may be more willing to take the time to output results in a common format for testers to parse.
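
Here's a sketch of such a log parser; the severity markers and the Java-style exception pattern are assumptions to adapt to your own application's log format:

```python
#!/usr/bin/env python3
"""Log-parsing sketch: scan an application log for exceptions while
test scripts run. The path and patterns are hypothetical placeholders.
"""
import re
import sys

# Patterns that indicate trouble behind the user interface.
PATTERNS = [
    re.compile(r"\b\w+(Exception|Error)\b"),  # Java-style exceptions
    re.compile(r"\bSEVERE\b|\bFATAL\b"),      # logger severity markers
]

def scan(path: str) -> int:
    """Print and count the suspicious lines in one log file."""
    hits = 0
    with open(path, encoding="utf-8", errors="replace") as log:
        for lineno, line in enumerate(log, start=1):
            if any(p.search(line) for p in PATTERNS):
                hits += 1
                print(f"{path}:{lineno}: {line.rstrip()}")
    return hits

if __name__ == "__main__":
    # e.g. python parse_logs.py /var/log/app/server.log
    sys.exit(1 if scan(sys.argv[1]) else 0)
```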

Building testability into the application is a partnership between testers and developers. Most developers want to help in any way they can, assuming that they're given time to do so. It's up to testers to let developers know what they need. When I'm testing, and I find a new problem in a log, I ask around to see whether anyone else knows of similar problems. Often, someone else will make a suggestion and then even help me to update the log parser to look for that related error. After a short time with this sort of collaboration, developers often make their errors easier to parse and make the logs more accessible to testers.

Use Defect-Tracking Systems Effectively

More than likely, your project team uses some form of automated defect-tracking or bug-tracking system. Developers should tell testers which specific information in a defect ticket they find most helpful. When entering a defect, testers should provide the right type and amount of information: Attach a screenshot, source code, steps taken or a script/test case that can reproduce the bug, and any relevant log files. Try not to include suggestions about what might be the problem; you may be wrong, and including your conclusions may make the developer feel as if you don't think he or she can figure out the problem. Have long or complex bug reports reviewed by a second person to ensure that the information is clear. (I prefer to get feedback from another tester first, rather than get negative feedback from a developer who's missing some piece of information and, because of that, thinks of me as incompetent.)
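
One way to make "the right type and amount of information" concrete is a checklist the team agrees on and runs before submission. Here's a sketch; the field and attachment names are hypothetical and should match your own tracking system:

```python
#!/usr/bin/env python3
"""Sketch: completeness check for a defect report before submission.

Field and attachment names are hypothetical; adapt them to your
tracking system.
"""

REQUIRED = ["summary", "steps_to_reproduce", "expected", "actual"]
ATTACHMENTS = ["screenshot", "log_file", "script_or_test_case"]

def missing_items(report: dict) -> list[str]:
    """Return the required fields and attachments the report lacks."""
    missing = [f for f in REQUIRED if not report.get(f)]
    missing += [a for a in ATTACHMENTS
                if a not in report.get("attachments", [])]
    return missing

if __name__ == "__main__":
    report = {
        "summary": "Search results page throws NullPointerException",
        "steps_to_reproduce": "1. Log in  2. Search for an empty string",
        "expected": "Validation message",
        "actual": "Stack trace in server log; blank page",
        "attachments": ["screenshot", "log_file"],
    }
    gaps = missing_items(report)
    print("missing:", gaps if gaps else "nothing - ready to submit")
```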

Before assigning priority to defects, testers and developers should work out a prioritization scheme. When more pressing defects need attention, developers get annoyed by lots of reports of little problems that the customer is unlikely to encounter. If defects aren't prioritized properly, developers may also ignore or miss serious problems while sorting through less-significant defects. By prioritizing the issues, you ensure that critical bugs get fixed immediately and small problems get attention when time is available.
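
As an example of what such a scheme might look like (the levels and definitions here are only an illustration to negotiate with your own team, not a standard):

```python
#!/usr/bin/env python3
"""Sketch of a shared defect-priority scheme.

The levels and definitions are illustrative only; agree on your own
scheme with the developers before assigning priorities.
"""
from enum import Enum

class Priority(Enum):
    P1 = "Blocks testing, crashes, or corrupts data; fix immediately"
    P2 = "Major function broken but a workaround exists; fix this build"
    P3 = "Minor function broken or wrong; fix when time is available"
    P4 = "Cosmetic issue the customer is unlikely to encounter"

if __name__ == "__main__":
    for p in Priority:
        print(p.name, "-", p.value)
```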

If you're a tester, and you have a problem getting your point across using the bug-tracking tool, go and talk with the developer if at all possible. If he or she is within walking distance, start walking. If you're in different locations, use the phone. If you're in different time zones, schedule a regular time to interface with the developer every week or so (she comes in early and you stay late, or something comparable).

NOTE

While defect-tracking or bug-tracking tools are great for organizing issues and managing what goes into each release, by removing face-to-face contact they can become a barrier to communication between developers and testers. However necessary these tools may be, it's important that we understand that such tools are not a substitute for more effective means of communication.

Speak Face to Face

There's no substitute for talking face to face with someone—asking questions, reading body language, and building a rapport all help in creating open and effective project communication. Many times, when I encounter an issue that I think might be a defect, I immediately show the problem to a developer before I even write the defect ticket. This strategy helps in many ways.

For all of this action to occur, the developer needs to be available and willing to help. In one of my past projects, I went through this process several times with a particular developer, and we eventually developed a way of logging bugs together. We also started holding quick developer/tester sessions in which we would further investigate a bug or try to find similar bugs in other areas of the application. Once I had the developer's involvement in the test process, he could easily pull in other developers and architects to provide information as we were testing (something that I could never get to happen as quickly on my own).

Engaging the developer directly helps him or her better understand what you're trying to accomplish. Once the developer sees how you look for bugs, he or she can attempt to make bugs of that type more visible when they occur; for example, by adding error messages to the log files. Most developers have no idea of the odd ways in which testers look for bugs.

NOTE

Several of the developers who reviewed this article commented that when they see a bug produced by a tester, it gives them more insight into the severity of the bug and allows them to glean some better ways to test their code before it even gets to the tester.

Finally, keep in mind that you may be disturbing or distracting the developer during a high-pressure time in the release cycle. If you approach developers frequently, establish some ground rules so that you don't annoy them or reduce their productivity at a critical point in the project.

Next Steps

Now it's your turn. This article has presented several tools and techniques for testers who want to work more closely with developers. Pick one method and try it. If you're good with tools, look at some of the links in the references at the end of this article. If you're less technical but still want a better developer/tester relationship, simply start a dialogue with the developers and see what you can do to make life easier for them. Occasionally call a developer over and ask for help, even if you don't need it. Finally, if all else fails, I've never had a problem getting a developer to skip out of work early on a Friday to grab a drink (project schedule permitting, of course).

References

For more information on some of the techniques discussed in this article, check out the following:

Goran Begic, "An introduction to runtime analysis with Rational PurifyPlus," IBM developerWorks.
