Manual Testing

Manual testing is human-present testing. A human tester uses her brain, her fingers, and her wit to create the scenarios that will cause software either to fail or to fulfill its mission. Human-present testing offers the best chance of creating realistic user scenarios, using real user data in real user environments, while still allowing for the possibility of recognizing both obvious and subtle bugs.

Manual testing is the best choice for finding bugs related to the underlying business logic of an application. Business logic is the code that implements user requirements; in other words, it is the code that customers buy the software for. Business logic is complex and requires a human in the loop to verify that it is correct, a task that automation is too often ill-suited to accomplish.

Perhaps it will be the case that developer-oriented techniques will evolve to the point that a tester is unnecessary. Indeed, this would be a desirable future for software producers and software users alike, but for the foreseeable future, tester-based detection is our best hope of finding the bugs that matter. There is simply too much variation, too many scenarios, and too many possible failures for automation to track it all. It requires a "brain in the loop." This is the case for this decade, the next decade, and perhaps a few more after that.

I wish it were just that easy, but historically the industry has not been good at manual testing. It's too slow, too ad hoc, not repeatable, not reproducible, not transferable, and there isn't enough good advice out there for testers to get good at it. This has given manual testing a poor reputation as the ugly stepchild of development. It's unfortunate that this is the case, but such is the hand we are dealt.

It's time we put the best technology available into the process of manual testing. That technology is exploratory testing, the subject of this book. I want the industry to get past the idea of ad hoc manual testing and work toward a process for exploratory testing that is more purposeful and prescriptive. It should be a process in which manual testing requires careful preparation yet leaves room for intelligent decision making during testing. Manual testing is too important to treat with any less respect.

We may look to a future in which software just works, but if we achieve that vision, it will be the hard work of the manual testers of this time that makes it possible.

Scripted Manual Testing

Many manual testers are guided by scripts, written in advance, that dictate which inputs to select and how the software's results are to be checked for correctness. Sometimes scripts are specific: Enter this value, press this button, check for that result, and so forth. Such scripts are often documented in spreadsheet tables and require maintenance as features get updated through either new development or bug fixes. The scripts serve a secondary purpose of documenting the actual testing that was performed.
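
As a hypothetical illustration (the feature, steps, and expected results here are invented for this example, not drawn from any particular project), one such scripted test case might look like this:

    Step  Action                                           Expected Result
    1     Launch the application and open the login form   Login form is displayed
    2     Type "jdoe" into the Username field              The text appears in the field
    3     Type an incorrect password and click Sign In     An "invalid credentials" error is shown
    4     Type the correct password and click Sign In      The home screen is displayed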

Often, scripted manual testing is too rigid for some applications or test processes, and testers take a less formal approach. Instead of documenting every input, a script may be written as a general scenario that gives some flexibility to the testers while they are running the test. At Microsoft, the folks who manually test Xbox games often do this. An input might be "interact with the mage," without specifying exactly what type of interaction to perform. Thus scripted testing can be as rigid or as flexible as necessary, but for the flexibility to work, testers need very specific advice for how to handle choice and uncertainty, and this is more the domain of exploratory testing.

In this book, we are only interested in the flexible type of scripted testing.

Exploratory Testing

When the scripts are removed entirely (or as we shall see in later chapters, their rigidness relaxed), the process is called exploratory testing. Testers may interact with the application in whatever way they want and use the information the application provides to react, change course, and generally explore the application's functionality without restraint. It may seem ad hoc to some, but in the hands of a skilled and experienced exploratory tester, this technique can prove powerful. Advocates argue that exploratory testing allows the full power of the human brain to be brought to bear on finding bugs and verifying functionality without preconceived restrictions.

Testers using exploratory methods are also not without a documentation trail. Test results, test cases, and test documentation are generated as tests are being performed instead of being documented ahead of time in a test plan. Screen capture and keystroke recording tools are ideal for recording the result of exploratory testing. Just because it's manual testing doesn't mean we can't employ automation tools as aids to the process. Indeed, even those who "handcraft" furniture do so with the assistance of power tools. Handcrafting test cases should be no different. Manual testers who use debug builds, debuggers, proxies, and other types of analysis tools are still doing manual testing; they are just being practical about it.

Exploratory testing is especially suited to modern web application development using agile methods. Development cycles are short, leaving little time for formal script writing and maintenance. Features often evolve quickly, so minimizing dependent artifacts (like pre-prepared test cases) is a desirable attribute. If a test case has a good chance of becoming irrelevant, why write it in the first place? Are you not setting yourself up for spending more time maintaining test cases than actually doing testing?

The drawback to exploratory testing is that testers risk wasting a great deal of time wandering around an application looking for things to test and trying to find bugs. The lack of preparation, structure, and guidance can lead to many unproductive hours and retesting the same functionality over and over, particularly when multiple testers or test teams are involved. Without documentation, how do testers ensure they are getting good coverage?

This is where guidance comes into play. Exploratory testing without good guidance is like wandering around a city looking for cool tourist attractions. It helps to have a guide and to understand something about your destination (in our case, the software) so that your exploration can be more methodical. Looking for beaches in London is a waste of time. Looking for medieval architecture in Florida is equally so. Surely what you are testing is just as important to your strategy as how you test it.

There are two types of guidance for exploratory testers to help in the decision-making process: exploratory testing in the small, which aids in local decision making while running tests; and exploratory testing in the large, which helps testers design overall test plans and strategies. Both are summarized here and covered in detail in Chapter 3, "Exploratory Testing in the Small," and Chapter 4, "Exploratory Testing in the Large." Finally, a third class of exploratory testing that combines elements of exploration with scripted manual testing is discussed in Chapter 5, "Hybrid Exploratory Testing Techniques."

Exploratory Testing in the Small

Much of what a manual tester does is about variation. Testers must choose which inputs to apply, what pages or screens to visit, which menu items to select, and the exact values to type into each input field they see. There are literally hundreds of such decisions to make with every test case we run.

Exploratory testing can help a tester make these decisions. When a tester uses an exploratory testing strategy to answer these sorts of questions, I call this exploratory testing in the small because the scope of the decision is small. A tester is looking at a certain web page or dialog box or method and needs focused advice about what to do for that specific situation. This is necessarily a localized decision-making process that testers will perform dozens of times in a single test case and hundreds of times over the course of a day of testing.

The problem is that many testers don't know what to do in the variety of "small" situations that they encounter. Which value do you enter into a text box that accepts integers? Is the value 4 better (meaning more likely to find a bug or force a specific output) than the value 400? Is there anything special about 0 or about negative numbers? What illegal values might you try? If you know something about the application—for example, that it is written in C++ or that it is connected to a database—does that change the values you might try? What, indeed, is the sum total of exploratory testing wisdom that we can use to help us make the right small decisions as we test?
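
As a minimal sketch of how a tester (or a small helper script) might enumerate candidate values for an integer text box, consider the following Python fragment. The specific boundaries and "interesting" values are assumptions based on common practice, not a definitive list:

    # candidate_values.py - a minimal sketch of "small" input decisions for an
    # integer text box; the chosen boundaries are illustrative assumptions.

    def candidate_integers(min_allowed=0, max_allowed=100):
        """Return values worth trying against a field that is supposed to
        accept integers in the range [min_allowed, max_allowed]."""
        legal = [
            min_allowed,                        # lower boundary
            min_allowed + 1,                    # just inside the lower boundary
            (min_allowed + max_allowed) // 2,   # a nominal mid-range value
            max_allowed - 1,                    # just inside the upper boundary
            max_allowed,                        # upper boundary
        ]
        illegal = [
            min_allowed - 1,                    # just below the legal range
            max_allowed + 1,                    # just above the legal range
            -1, 0,                              # sign and zero cases (if outside the range)
            2**31 - 1, -2**31,                  # 32-bit extremes, interesting if the backend is C/C++
            2**63,                              # overflows a 64-bit signed integer
        ]
        malformed = ["", " ", "abc", "4.5", "1e9", "0x10", "'; DROP TABLE users;--"]
        return legal, illegal, malformed

    if __name__ == "__main__":
        legal, illegal, malformed = candidate_integers(1, 999)
        print("legal:", legal)
        print("illegal:", illegal)
        print("malformed:", malformed)

The point is not the helper itself but the checklist it encodes: boundaries, values just outside them, type-confusing strings, and values that probe the implementation language or the data store behind the field.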

Chapter 3 is devoted to passing along some of this wisdom. I'll be the first to admit that most of it is not mine. I've been lucky enough to work around some of the best software testers to grace this planet. From IBM to Ericsson to Microsoft, Adobe, Google, Cisco, and many more names far less recognizable, I've gathered what I think is a large portion of this advice, and I reproduce it here. Much of this information was embodied in How to Break Software, so readers of that book can consider this an update to the body of knowledge published there. But whereas that book was focused on finding bugs, the purpose of this book is much broader. Here we are interested in more than finding bugs. We want to force software to exhibit its capabilities, gain coverage of the application's features, interfaces, and code, and find ways to put it through its paces to determine its readiness for release.

Exploratory Testing in the Large

There is more to testing, however, than making all the small decisions correctly. In fact, it is possible to nail all the small decisions and still not have an overall set of tests that confirms (or rejects) release readiness. A set of test cases, taken as a whole, is definitely more than the sum of its individual parts. Test cases are interrelated, and each test case should add to the others and make the entire set of test cases better in some substantive, measurable (or at least arguable) way.

This points to the need for a strategy that guides test case design and exploration. Which features should a single test case visit? Are there certain features or functions that must be tested together? Which feature should be used first, and how do we decide which subsequent features to test? If there are multiple testers on a project, how can we make sure their strategies complement each other and they don't end up testing the same things? How does an exploratory tester make these larger scope decisions about overall test cases and testing strategy?

I call this exploratory testing in the large because the scope of the decisions to be made encompasses the software as a whole instead of a single screen or dialog. The decisions made should guide how an application is explored more than how a specific feature is tested.

In Chapter 4, I use a tourism metaphor to guide exploratory testing in the large. Think about it this way: As a tourist visiting a new city, you will use in-the-large advice to choose which restaurant to visit, but you will use in-the-small advice to choose what meal and drink to order. In-the-large advice will help plan your entire day and advise you on how to plan your entire stay, the landmarks you visit, the shows you see, and the restaurants at which you dine. In-the-small advice will help you navigate each of these events and plan the subtle details that a larger plan will always leave out. By perfecting the combination of the two, you've entered the world of an expert exploratory software tester.

Combining Exploration and Scripting

It isn't necessary to view exploratory testing as a strict alternative to script-based manual testing. In fact, the two can co-exist quite nicely. Having formal scripts can provide a structure to frame exploration, and exploratory methods can add an element of variation to scripts that can amplify their effectiveness. The expression opposites attract is relevant in the sense that because formal scripts and exploratory methods are at opposite extremes of the manual testing spectrum, they actually have a lot to offer each other. If used correctly, each can overcome the other's weaknesses, and a tester can end up in the happy midpoint of a very effective combination of techniques.

The best way that I have found to combine the two techniques is to start with formal scripts and use exploratory techniques to inject variation into them. This way, a single script may end up being translated into any number of actual exploratory test cases.

Traditional script-based testing usually starts from user stories or documented end-to-end scenarios that we expect our eventual users to perform. These scenarios can come from user research, data from prior versions of the application, and so forth, and are used as scripts to test the software. Adding exploratory elements to traditional scenario testing widens the scope of the script to allow variation, investigation, and optional user paths.

An exploratory tester who uses a scenario as a guide will often pursue interesting alternative inputs or investigate some potential side effect that is not included in the script. However, the ultimate goal is to complete the scenario, so these testing detours always end up back on the main user path documented in the script. The detours from the script can be chosen based on structured ways of modifying specific steps in the script or by exploratory excursions off the script and then back again. Chapter 5 is dedicated entirely to script-based exploratory testing because it is one of the key tools in the manual tester's arsenal of techniques.
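
As a rough sketch of the idea (the scenario steps and detours below are hypothetical, not taken from the book's case studies), a script can be treated as an ordered sequence of steps, each offering optional exploratory alternatives, so that one script yields many test cases:

    # scripted_exploration.py - a rough sketch of turning one scripted scenario
    # into many exploratory variants; all steps and detours are hypothetical.
    import random

    # The baseline script: each step carries the scripted action plus optional
    # exploratory alternatives for that step.
    SCRIPT = [
        {"step": "open product page",   "alternatives": ["open via search", "open via deep link"]},
        {"step": "add item to cart",    "alternatives": ["add two items", "add, remove, then re-add"]},
        {"step": "enter shipping info", "alternatives": ["use a PO box", "use a very long street name"]},
        {"step": "check out",           "alternatives": []},  # always end back on the main path
    ]

    def exploratory_variants(script, max_variants=5, seed=None):
        """Generate test cases that follow the script but may detour at each step."""
        rng = random.Random(seed)
        variants = []
        for _ in range(max_variants):
            case = []
            for step in script:
                choices = [step["step"]] + step["alternatives"]
                case.append(rng.choice(choices))  # stay on script or take a detour
            variants.append(case)
        return variants

    if __name__ == "__main__":
        for i, case in enumerate(exploratory_variants(SCRIPT, max_variants=3, seed=42), 1):
            print(f"variant {i}: " + " -> ".join(case))

Each generated variant still ends on the scripted goal of checking out, but the intermediate detours give every run a chance to exercise paths the original script never mentions.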

The techniques in Chapters 3 through 5 have been applied in a number of case studies and trials throughout Microsoft, and the results are presented in Chapter 6, "Exploratory Testing in Practice," as experience reports written by the testers and test leads involved in these projects. Chapter 6 examines how the exploratory testing techniques were applied to several different classes of software from operating system components to mobile applications to more traditional desktop and web software. Also, special tours written specifically for a particular project are described by their authors.

The remainder of the book contains essays on building a testing career and on the future of testing, followed by past and current essays and papers written while I was a professor at Florida Tech and an architect at Microsoft. Since I have now left Microsoft, this book may be the only place that the latter material can be found.
