Reviews and Team Culture

While individual participants can always benefit from a peer review, a broad review program can succeed only in a culture that values quality. "Quality" has many dimensions, including freedom from defects, satisfaction of customer needs, timeliness of delivery, and the possession of desirable product functionality and attributes. Members of a software engineering culture regard reviews as constructive activities that help both individuals and teams succeed. They understand that reviews are not intended to identify inferior performers or to find scapegoats for quality problems.

Reviews can result in two undesirable attitudes on the part of the work product's author. Some people become lax in their work because they're relying on someone else to find their mistakes, just as some programmers expect testers to catch their errors. The author is ultimately responsible for the product; a review is just an aid to help the author create a high-quality deliverable. Sometimes when I'm reading a draft of an article or book chapter I've written, I hear a little voice telling me that a section is incorrect or awkwardly phrased. I used to tell myself, "I'll give it to the reviewers and see what they think." Big mistake: the reviewers invariably disliked that clumsy section. Now whenever I hear that little voice, I fix the problem before I waste my reviewers' time.

The other extreme to avoid is the temptation to perfect the product before you allow another pair of eyes to see it. This is an ego-protecting strategy: you won't feel embarrassed about your mistakes if no one else sees them. I once managed a developer who refused to let anyone review her code until it was complete and as good as she could make it—fully implemented, tested, formatted, and documented. She regarded a review as a seal of approval rather than as the in-process quality-improvement activity it really is.

Such reluctance has several unfortunate consequences. If your work isn't reviewed until you think it's complete, you are psychologically resistant to suggestions for changes. If the program runs, how bad can it be? You are likely to rationalize away possible bugs because you believe you've finished and you're eager to move on to the next task. Relying on your own deskchecking and unit testing ignores the greater efficiency of a peer review for finding many defects.

At the same time, the desire to show our colleagues only our best side can become a positive factor. Reviews motivate us to practice superior craftsmanship because we know our coworkers will closely examine our work. In this indirect way, peer reviews lead to higher quality. One of my fellow consultants knows a quality engineer who began to present his team with summaries of defects found during reviews, without identifying specific work products or authors. The team soon saw a decrease in the number of bugs discovered during reviews. Based on what he knew about the team, my colleague concluded that authors created better products after they learned how reviews were being used on the project and knew what kinds of defects to look for. Reviews weren't a form of punishment but stimulated a desire to complete a body of work properly.

The Influence of Culture

In a healthy software engineering culture, a set of shared beliefs, individual behaviors, and technical practices define an environment in which all team members are committed to building quality products through the effective application of sensible processes (Wiegers 1996a). Such a culture demands a commitment by managers at all levels to provide a quality-driven environment. Recognizing that team success depends on helping each other do the best possible job, members of a healthy culture prefer to have peers, not customers, find software defects. Having a coworker locate a defect is regarded as a "good catch," not as a personal failing.

Peer reviews have their greatest impact in a healthy software culture, and a successful review program contributes strongly to creating such a culture. Prerequisites for establishing and sustaining an effective review program include:

  • Defining and communicating your business goals for each project so reviewers can refer to a shared project vision

  • Determining your customers' expectations for product quality so you can set attainable quality goals

  • Understanding how peer reviews and other quality practices can help the team achieve its quality goals

  • Educating stakeholders within the development organization—and, where appropriate, in the customer community—about what peer reviews are, why they add value, who should participate, and how to perform them

  • Providing the necessary staff time to define and manage the organization's review process, train the participants, conduct the reviews, and collect and evaluate review data

The dynamics between the work product's author and its reviewers are critical. The author must trust and respect the reviewers enough to be receptive to their comments. Similarly, the reviewers must show respect for the author's talent and hard work. Reviewers should thoughtfully select the words they use to raise an issue, focusing on what they observed about the product. Saying, "I didn't see where these variables were initialized" is likely to elicit a constructive response, whereas "You didn't initialize these variables" might get the author's hackles up. The small shift in wording from the accusatory "you" to the less confrontational "I" lets the reviewer deliver even critical feedback effectively. Reviewers and authors must continue to work together outside the reviews, so they all need to maintain a level of professionalism and mutual respect to avoid strained relationships.

An author who walks out of a review meeting feeling embarrassed, personally attacked, or professionally insulted will not voluntarily submit work for peer review again. Nor do you want reviews to create authors who look forward to retaliating against their tormentors. The bad guys in a review are the bugs, not the author or the reviewers, but it takes several positive experiences to internalize this reality. The leaders of the review initiative should strive to create a culture of constructive criticism in which team members seek to learn from their peers and to do a better job the next time. To accelerate this culture change, managers should encourage and reward those who initially participate in reviews, regardless of the review outcomes.

Reviews and Managers

The attitude and behavior that managers exhibit toward reviews affect how well the reviews will work in an organization. Although managers want to deliver quality products, they also feel pressure to release products quickly. They don't always understand what peer reviews or inspections are or the contribution they make to shipping quality products on time. I once encountered resistance to inspections from a quality manager who came from a manufacturing background. He regarded inspections as a carryover from the old manufacturing quality practice of manually examining finished products for defects. After he understood how software inspections contribute to quality through early removal of defects, his resistance disappeared.

Managers need to learn about peer reviews and their impact on the organization so they can build the reviews into project plans, allocate resources for them, and communicate their commitment to reviews to the team. If reviews aren't planned, they won't happen. Managers also must be sensitive to the interpersonal aspects of peer reviews. Watch out for known culture killers, such as managers singling out certain developers for the humiliating "punishment" of having their work reviewed.

Without visible and sustained commitment to peer reviews from management, only those practitioners who believe reviews are important will perform them. Management commitment to any engineering practice is more than providing verbal support or giving team members permission to use the practice. Figure 2–1 lists eleven signs of management commitment to peer reviews.

Figure 2–1. Eleven signs of management commitment to peer reviews

To persuade managers about the value of reviews, couch your argument in terms of what outcomes are important to the manager's view of success. Published data convinces some people, but others want to see tangible benefits from a pilot or trial application in their own organization. Still other managers will reject both logical and data-based arguments for reviews and simply say no. In this case, keep in mind one of my basic software engineering cultural principles—"Never let your boss or your customer talk you into doing a bad job"—and engage your colleagues in reviews anyway (perhaps quietly, to avoid unduly provoking your managers).

A dangerous situation arises when a manager wishes to use data collected from peer reviews to assess the performance of the authors (Lee 1997). Software metrics must never be used to reward or penalize individuals. The purpose of collecting data from reviews is to better understand your development and quality processes, to improve processes that aren't working well, and to track the impact of process changes. Using defect data from inspections to evaluate individuals is a classic culture killer. It can lead to measurement dysfunction, in which measurement motivates people to behave in a way that produces results inconsistent with the desired goals (Austin 1996).

I recently heard from a quality manager at a company that had operated a successful inspection program for two years. The development manager had just announced his intention to use inspection data as input to the performance evaluations of the work product authors. Finding more than five bugs during an inspection would count against the author. Naturally, this made the development team members very nervous. It conveyed the erroneous impression that the purpose of inspections is to punish people for making mistakes or to find someone to blame for troubled projects. This misapplication of inspection data could lead to numerous dysfunctional outcomes, including the following:

  1. To avoid being punished for their results, developers might not submit their work for inspection. They might refuse to inspect a peer's work to avoid contributing to someone else's punishment.

  2. Inspectors might not point out defects during the inspection, instead telling the author about them offline so they aren't tallied against the author. Alternatively, developers might hold "pre-reviews" to filter out bugs unofficially before going through a punitive inspection. This undermines the open focus on quality that should characterize inspections. It also skews any metrics you're legitimately tracking across multiple inspections.

  3. Inspection teams might debate whether something really is a defect, because defects count against the author, and issues or simple questions do not. This could lead to glossing over actual defects.

  4. The team's inspection culture might develop an implicit goal of finding few defects rather than revealing as many as possible. This reduces the value of the inspections without reducing their cost, thereby lowering the team's return on investment from inspections.

  5. Authors might hold many inspections of small pieces of work to reduce the chance of finding more than five bugs in any one inspection. This leads to inefficient and time-wasting inspections. It's a kind of gamesmanship, doing the minimum to claim you have had your work inspected but not properly exploiting the technique.

These potential problems underscore the risks posed to an inspection program by using inspection data to evaluate individuals. Such evaluation criminalizes the mistakes we all make and pits team members against each other. It motivates participants to manipulate the process to avoid being hurt by it. If I were a developer in this situation, I would encourage management to have the organization's peer review coordinator (see Chapter 10) summarize defects collected from multiple inspections so the defect counts aren't linked to specific authors. If management insisted on using defect counts for performance appraisal, I would refuse to participate in inspections. Managers may legitimately expect developers to submit their work for review and to review deliverables that others create. However, a good manager doesn't need defect counts to know who the top contributors are and who is struggling.
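
As a minimal sketch of how a peer review coordinator might produce such author-free summaries (the record fields, sample values, and the summarize function here are illustrative assumptions, not a prescribed format), the aggregation could look like this:

    from collections import Counter

    # Hypothetical inspection records; field names and values are illustrative.
    inspections = [
        {"author": "dev_a", "defect_type": "logic", "severity": "major"},
        {"author": "dev_a", "defect_type": "interface", "severity": "minor"},
        {"author": "dev_b", "defect_type": "logic", "severity": "major"},
    ]

    def summarize(records):
        """Aggregate defect counts by type and severity, discarding author identity."""
        return {
            "total": len(records),
            "by_type": dict(Counter(r["defect_type"] for r in records)),
            "by_severity": dict(Counter(r["severity"] for r in records)),
        }

    print(summarize(inspections))  # the output contains no author field

Because the summary carries only totals by defect type and severity, managers can still track process trends without being able to attribute defect counts to any individual author.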

When inspection metrics were introduced into one organization, a manager happily exclaimed, "This data will help me measure the performance of my engineers!" After the inspection champion explained the philosophy of software measurement to him, the manager agreed not to see the data from individual inspections. He publicly described the inspection process as a tool to help engineers produce better products. He told the engineers he would not view the individual inspection measures because he was interested in the big picture, the overall efficiency of the software engineering process. This manager's thoughtful comments helped defuse resistance to inspection measurement in his organization.

Why People Don't Do Reviews

If peer reviews are so great, why isn't everybody already doing them? Factors that contribute to the underuse of reviews include lack of knowledge about reviews, cultural issues, and simple resistance to change, often masquerading as excuses. If reviews aren't a part of your organization's standard practices, understand why so you know what must change to make them succeed.

Many people don't understand what peer reviews are, why they are valuable, the differences between informal reviews and inspections, or when and how to perform reviews. Education can solve this problem. Some developers and project managers don't think their projects are large enough or critical enough to need reviews. However, any body of work can benefit from an outside perspective.

The misperception that testing is always superior to manual examination also leads some practitioners to shun reviews. Testing has long been recognized as a critical activity in developing software: entire departments are dedicated to it, testing effort is scheduled into projects, and resources are allocated for it. Organizations that have not yet internalized the benefits of peer reviews lack an analogous cultural imperative and a supporting infrastructure for performing them.

A fundamental cultural inhibitor to peer reviews is that developers don't recognize how many errors they make, so they don't see the need for methods to catch or reduce their errors. Many organizations don't collect, summarize, and present to all team members even such basic quality data as the number of errors found in testing or by customers. Authors who submit their work for scrutiny might feel that their privacy is being invaded, that they're being forced to air the internals of their work for all to see. This is threatening to some people, which is why the culture must emphasize the value of reviews as a collaborative, nonjudgmental tool for improved quality and productivity.

Previous unpleasant review experiences are a powerful cultural deterrent. The fear of management retribution or public ridicule if defects are discovered can make authors reluctant to let others examine their work. In poorly conducted reviews, authors can feel as though they—not their work—are being criticized, especially if personality conflicts already exist between specific individuals. Another cultural barrier is the attitude that the author is the most qualified person to examine his part of the system ("Who are you to look for errors in my work?"). Similarly, a common reaction from new developers who are invited to review the work of an experienced and respected colleague is, "Who am I to look for errors in his work?"

Traditional mechanisms for adopting improved practices are having practitioners observe what experienced role models do and having supervisors observe and coach new employees. In many software groups, though, each developer's methods remain private, and developers don't have to change the way they work unless they wish to (Humphrey 2001). Paradoxically, many developers are reluctant to try a new method unless it has been proven to work, yet they don't believe the new approach works until they have successfully used it themselves. They don't want to take anyone else's word for it.

And then there are the excuses. Resistance often appears as NAH (not applicable here) syndrome (Jalote 2000). People who don't want to do reviews will expend considerable energy trying to explain why reviews don't fit their culture, needs, or time constraints. One excuse is the arrogant attitude that some people's work does not need reviewing. Some team members can't be bothered to look at a colleague's work. "I'm too busy fixing my own bugs to waste time finding someone else's." "Aren't we all supposed to be doing our own work correctly?" "It's not my problem if Jack has bugs in his code." Other developers imagine that their software prowess has moved them beyond needing peer reviews. "Inspections have been around for 25 years; they're obsolete. Our high-tech group uses only leading-edge technologies."

Protesting that the inspection process is too rigid for a go-with-the-flow development approach signals resistance to a practice that is perceived to add bureaucratic overhead. Indeed, a strictly go-with-the-flow development process suggests that long-term quality isn't a top priority for the organization. Such a culture might have difficulty adopting formal peer reviews, although informal reviews might be palatable.

Overcoming Resistance to Reviews

To establish a successful review program, you must address existing barriers in the categories of knowledge, culture, and resistance to change. Lack of knowledge is easy to correct if people are willing to learn. My colleague Corinne found that the most vehement protesters in her organization were already doing informal reviews. They just didn't realize that a peer deskcheck is one type of peer review (see Chapter 3). Corinne discussed the benefits of formalizing some of these informal reviews and trying some inspections. A one-day class that includes a practice inspection gives team members a common understanding about peer reviews. Managers who also attend the class send powerful signals about their commitment to reviews. Management attendance says to the team, "This is important enough for me to spend time on it, so it should be important to you, too" and "I want to understand reviews so I can help make this effort succeed."

Dealing with cultural issues requires you to understand your team's culture and how best to steer the team members toward improved software engineering practices (Bouldin 1989; Caputo 1998; Weinberg 1997; Wiegers 1996a). What values do they hold in common? Do they share an understanding of—and a commitment to—quality? What previous change initiatives have succeeded and why? Which have struggled and why? Who are the opinion leaders in the group and what are their attitudes toward reviews?

Larry Constantine described four cultural paradigms found in software organizations: closed, open, synchronous, and random (Constantine 1993). A closed culture has a traditional hierarchy of authority. You can introduce peer reviews in a closed culture through a management-driven process improvement program, perhaps based on one of the Software Engineering Institute's capability maturity models. A management decree that projects will conduct reviews might succeed in a closed culture, but not in other types of organizations.

Innovation, collaboration, and consensus decision-making characterize an open culture. Members of an open culture want to debate the merits of peer reviews and participate in deciding when and how to implement them. Respected leaders who have had positive results with reviews in the past can influence the group's willingness to adopt them. Such cultures might prefer review meetings that include discussions of proposed solutions rather than inspections, which emphasize finding—not fixing—defects during meetings.

Members of a synchronous group are well aligned and comfortable with the status quo. Because they recognize the value of coordinating their efforts, they are probably already performing at least informal reviews. A comfort level with informal reviews eases implementation of an inspection program.

Entrepreneurial, fast-growing, and leading-edge companies often develop a random culture populated by autonomous individuals who like to go their own ways. In random organizations, individuals who have performed peer reviews in the past might continue to hold them. The other team members might not have the patience for reviews, although they could change their minds if quality problems from chaotic projects burn them badly enough.

However you describe your culture, people will want to know what benefits a new process will provide to them personally; the instinctive question is, "What's in it for me?" A better way to react to a proposed process change is to ask, "What's in it for us?" Sometimes when you're asked to change the way you work, your immediate personal reward is small, although the team as a whole might benefit in a big way. I might not get three hours of benefit from spending three hours reviewing someone else's code. However, the other developer might avoid ten hours of debugging effort later in the project, and we might ship the product sooner than we would have otherwise.
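
To make the team-level arithmetic concrete, here is a minimal sketch using the purely hypothetical figures from the paragraph above (illustrative numbers, not measured data):

    # Hypothetical effort figures from the example above.
    review_hours_spent = 3     # my time spent reviewing a colleague's code
    debug_hours_avoided = 10   # debugging effort the author avoids later

    team_net_benefit = debug_hours_avoided - review_hours_spent
    print(f"Net team saving: {team_net_benefit} hours")  # Net team saving: 7 hours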

Table 2–1 identifies some benefits various project team members might reap from reviewing major life-cycle deliverables. Of course, the customers also come out ahead. They receive a timely product that is more robust and reliable, better meets their needs, and increases their productivity. Higher customer satisfaction leads to business rewards all around.

Table 2–1. Benefits from Peer Reviews for Project Roles

Developer

  • Less time spent performing rework
  • Increased programming productivity
  • Confidence that the right requirements are being implemented
  • Better techniques learned from other developers
  • Reduced unit testing and debugging time
  • Less debugging during integration and system testing
  • Exchange of information about components and the overall system with other team members

Development Manager

  • Shortened product development cycle time
  • Reduced field service and customer support costs
  • Reduced lifetime maintenance costs, freeing resources for new development projects
  • Improved teamwork, collaboration, and development effectiveness
  • Better and earlier insight into project risks and quality issues

Maintainer

  • Fewer production support demands, leading to a reduced maintenance backlog
  • More robust designs that tolerate change
  • Conformance of work products to team standards
  • More maintainable and better documented work products that are easy to understand and modify
  • Better understanding of the product from having participated in design and code reviews during development

Project Manager

  • Increased likelihood that product will ship on schedule
  • Earlier visibility of quality issues
  • Reduced impact from staff turnover through cross-training of team members

Quality Assurance Manager

  • Ability to judge the testability of product features under development
  • Shortened system-testing cycles and less retesting
  • Ability to use review data when making release decisions
  • Education of quality engineers about the product
  • Ability to anticipate quality assurance effort needed

Requirements Analyst

  • Earlier correction of missing or erroneous requirements
  • Fewer infeasible and untestable requirements because of developer and test engineer input during reviews

Test Engineer

  • Ability to focus on finding subtle defects because product is of higher initial quality
  • Fewer defects that block continued testing
  • Improved test design and test cases that smooth out the testing process

Arrogant developers who believe reviews are beneath them might enjoy getting praise and respect from coworkers as they display their superior work during reviews. If influential resisters come to appreciate the value of peer reviews, they might persuade other team members to try them, too. A quality manager once encountered a developer named Judy who was opposed to "time-sapping" inspections. After participating under protest, Judy quickly saw the power of the technique and became the group's leading convert. Because Judy had some influence with her peers, she helped turn developer resistance toward inspections into acceptance. Judy's project team ultimately asked the quality manager to help them hold even more inspections. Engaging developers in an effective inspection program helped motivate them to try some other software quality practices, too.

In another case, a newly hired system architect who had experienced the benefits of inspections in his previous organization was able to overcome resistance from members of his new team. The data this group collected from their inspections backed up the architect's conviction that they were well worth doing.
