Why Problems Hide
Problems remain hidden in organizations for a number of reasons. First, in many firms people fear being marginalized or punished for speaking up, particularly for admitting that they might have made a mistake or contributed to a failure. Second, structural complexity in organizations may act like dense "tree cover" in a forest, making it difficult for sunlight to reach the ground. Multiple layers, confusing reporting relationships, convoluted matrix structures, and the like all make it hard for messages to reach key leaders. Even if messages do make their way through the dense forest, they may become watered down, misinterpreted, or mutated along the way. Third, the existence and power of key gatekeepers may insulate leaders from hearing bad news, even if the filtering of information takes place with the best of intentions. Fourth, an overemphasis on formal analysis and an underappreciation of intuitive reasoning may cause problems to remain hidden for far too long. Finally, many organizations do not train employees to spot problems. Issues surface more quickly when people have been taught how to hunt for potential problems, which cues to attend to as they do their jobs, and how to communicate their concerns to others.
Cultures of Fear
Maxine Clark founded Build-a-Bear Workshop, a company that aims to "bring the teddy bear to life" for children and families, and she continues to serve as its chief executive. Clark's firm does so by enabling children to create customized and personalized teddy bears in its stores. Kids choose what type of bear they want. Store associates stuff, stitch, and fluff the bears for the children, and then the kids choose precisely how they want to dress and accessorize the teddy bear. If you have young children or grandchildren, you surely have heard of Clark's firm.
Clark has built an incredibly successful company, growing it to over $350 million in sales over the past decade. She has done so by delivering a world-class customer experience in her stores. Clark credits her store associates, who constantly find ways to innovate and improve. How do the associates do it? For starters, they tend not to fear admitting a mistake or surfacing a problem. Clark's attitude toward mistakes explains her associates' behavior. She does not punish people for making an error or bringing a problem to light; she encourages it.
Clark credits her first-grade teacher, Mrs. Grace, for instilling this attitude toward mistakes in her long ago. As many elementary school teachers do, Mrs. Grace graded papers using a red pencil. However, unlike most of her colleagues, Mrs. Grace gave out a rather unorthodox award at the end of each week. She awarded a red pencil prize to the student who had made the most mistakes! Why? Mrs. Grace wanted her students engaged in the class discussion, trying to answer every question, no matter how challenging. As Clark writes, "She didn't want the fear of being wrong to keep us from taking chances. Her only rule was that we couldn't be rewarded for making the same mistake twice."15
Clark has applied her first-grade teacher's approach at Build-a-Bear by creating a Red Pencil Award. She gives this prize to people who have made a mistake but who have discovered a better way of doing business as a result of reflecting on and learning from that mistake. Clark has it right when she says that managers should encourage their people to "experiment freely, and view every so-called mistake as one step closer to getting things just right."16 Of course, her first-grade teacher had it right as well when she stressed that people would be held accountable if they made the same mistake repeatedly. Failing to learn constitutes the bad behavior that managers should deem unacceptable. Clark makes that point clear to her associates.17
Many organizations exhibit a climate in which people do not feel comfortable speaking up when they spot a problem, or perhaps have made a mistake themselves. These firms certainly do not offer Red Pencil Awards. My colleague Amy Edmondson points out that such firms lack psychological safety, meaning that individuals share a belief that the climate is not safe for interpersonal risk-taking. Those risks include the danger of being perceived as a troublemaker, or of being seen as ignorant or incompetent. In an environment of low psychological safety, people believe that others will rebuke, marginalize, or penalize them for speaking up or for challenging prevailing opinion; people fear the repercussions of admitting a mistake or pointing out a problem.18 In some cases, Edmondson finds that frontline employees do take action when they see a problem in such "unsafe" environments. However, they tend to apply a Band-Aid at the local level, rather than raising the issue for a broader discussion of what systemic problems need to be addressed. Such Band-Aids can do more harm than good in the long run.19 Leaders at all levels harm psychological safety when they establish hierarchical communication protocols, make status differences among employees highly salient, and fail to admit their own errors. At Build-a-Bear, Maxine Clark's Red Pencil Award serves to enhance psychological safety, and in so doing, helps ensure that most problems and errors do not remain hidden for lengthy periods of time.
Structural Complexity
In the start-up stage, most companies have very simple, flat organizational structures. As many firms grow, their structures become more complex and hierarchical. To some extent, such increased complexity must characterize larger organizations. Without appropriate structures and systems, a firm cannot continue to execute its strategy as it grows revenue. However, for too many firms, the organizational structure becomes unwieldy over time. The organization charts become quite messy with dotted-line reporting relationships, matrix structures, cross-functional teams, ad hoc committees, and the like. People find it difficult to navigate the bureaucratic maze even to get simple things accomplished. Individuals cannot determine precisely where decision rights reside on particular issues.20
Amidst this maze of structures and systems, key messages get derailed or lost. Information does not flow effectively either vertically or horizontally across the organization. Vertically, key messages become garbled or squashed as they ascend the hierarchy. Horizontally, smooth handoffs of information between organizational units do not take place. Critical information falls through the cracks.
The 9/11 tragedy demonstrates how a complex organizational structure can mask problems.21 Prior to the attacks, a labyrinth of agencies and organizations worked to combat terrorism against the U.S. These included the Central Intelligence Agency, the Federal Bureau of Investigation, the Federal Aviation Administration, and multiple units within the Departments of State and Defense. Various individuals within the federal government discovered or received information pertaining to the attacks in the days and months leading up to September 11, 2001. However, some critical information never rose to the attention of senior officials. In other cases, information did not pass from one agency to another, or disparate pieces of information were never properly integrated. Individuals did not always know whom to contact to request critical information, or whom they should inform about something they had learned. On occasion, senior officials downplayed the concerns of lower-level staff, who in turn did not know where else to express their unease. Put simply, the right information never made it into the right hands at the right time. The dizzying complexity of the organizational structures and systems within the federal government bears some responsibility. The 9/11 Commission concluded:
- "Information was not shared, sometimes inadvertently or because of legal misunderstandings. Analysis was not pooled. Effective operations were not launched. Often the handoffs of information were lost across the divide separating the foreign and domestic agencies of the government. However the specific problems are labeled, we believe they are symptoms of the government's broader inability to adapt how it manages problems to the new challenges of the twenty-first century. The agencies are like a set of specialists in a hospital, each ordering tests, looking for symptoms, and prescribing medications. What is missing is the attending physician who makes sure they work as a team."22
Powerful Gatekeepers
Each organization tends to have its gatekeepers, who control the flow of information and people into and out of certain executives' offices. Sometimes, these individuals serve in formal roles that explicitly require them to act as gatekeepers. In other instances, the gatekeepers operate without formal authority but with significant informal influence. Many CEOs have a chief of staff who serves as a gatekeeper. Most recent American presidents have had one as well. These individuals may serve a useful role. After all, someone has to ensure that the chief executive uses his or her time wisely. Moreover, the leader needs protection against information overload; the chief executive can easily get buried in reports and data. If no one guards the schedule, the executive could become bogged down in meetings that are unproductive, or at which he or she is not truly needed.23 Former President Gerald Ford commented on the usefulness of having someone in this gatekeeper function:
- "I started out in effect not having an effective Chief of Staff and it didn't work. So anybody who doesn't have one and tries to run the responsibilities of the White House, I think, is putting too big a burden on the President himself. You need a filter, a person that you have total confidence in who works so closely with you that, in effect, is almost an alter ego. I just can't imagine a President not having an effective Chief of Staff."24
Trouble arises when the gatekeeper intentionally distorts the flow of information. Put simply, the gatekeeper function bestows a great deal of power on an individual. Some individuals, unfortunately, choose to abuse that power to advance their agendas. In their study of the White House Chief of Staff function, Charles Walcott, Shirley Warshaw, and Stephen Wayne concluded:
- "In performing the gatekeeper's role, the Chief of Staff must function as an honest broker. Practically all of the chiefs and their deputies interviewed considered such a role essential. James Baker (President Reagan's Chief of Staff) was advised by a predecessor: 'Be an honest broker. Don't use the process to impose your policy views on the President.' The President needs to see all sides. He can't be blindsided."25
Gatekeepers do not always intentionally prevent executives from learning about problems and failures. In some cases, they simply make the wrong judgment as to the importance of a particular matter, or they underestimate the risk involved if the problem does not get surfaced at higher levels of the organization. They may think that they can handle the matter on their own, when in fact they do not have the capacity to do so. They might oversimplify the problem when they try to communicate it to others concisely. Finally, gatekeepers might place the issue on a crowded agenda, where it simply does not get the attention it deserves.
Overemphasis on Formal Analysis
Some organizations exhibit an intensely analytical culture. They apply quantitative analysis and structured frameworks to solve problems and make decisions. Data rule the day; without a wealth of statistics and information, one cannot persuade others to adopt his or her proposals. While fact-based problem-solving has many merits, it entails one substantial risk. Top managers may dismiss intuitive judgments too quickly in these environments, citing the lack of extensive data and formal analysis. In many instances, managers and employees first identify potential problems because their intuition suggests that something is not quite right. These early warning signs come not from a large dataset, but from an individual's gut. By the time the data emerge to support the conclusion that a problem exists, the organization may be facing much more serious issues.26
In highly analytical cultures, my research suggests that employees also may self-censor their intuitive concerns. They fear that they cannot meet the burden of proof required to surface the potential problem they have spotted. In one case, a manager told me, "I was trained to rely on data, going back to my days in business school. The data pointed in the opposite direction of my hunch that we had a problem. I relied on the data and dismissed that nagging feeling in my gut."27
In the Rapid Response Team study, we found that nurses often called the teams when they had a concern or felt uncomfortable, despite the lack of conclusive data suggesting that the patient was in trouble. Their hunches often proved correct. In one hospital, the initiative's leader reported to us that "In our pilot for this program, the best single predictor of a bad outcome was the nurse's concern without other vital sign abnormalities!" Before the Rapid Response Team process, most of the nurses told us that they would have felt very nervous voicing their worries simply based on their intuition. They worried that they would be criticized for coming forward without data to back up their judgments.
Lack of Training
Problems often remain hidden because individuals and teams have not been trained how to spot problems and how to communicate their concerns to others. The efficacy of the Rapid Response Team process rested, in part, on the fact that hospitals created a list of "triggers" that nurses and other personnel could watch for when caring for patients. That list made certain cues highly salient to frontline employees; it jump-started the search for problems. The hospitals also trained employees in how to communicate their concerns when they called a Rapid Response Team. Many hospitals employed a technique called SBAR to facilitate discussions about problems. The acronym stands for Situation-Background-Assessment-Recommendation. The SBAR methodology provides a way for health care personnel to discuss a patient's condition in a systematic manner, beginning with a description of the current situation and ending with a recommendation of how to proceed with testing and/or treatment. The Institute for Healthcare Improvement explains the merits of the process:
- "SBAR is an easy-to-remember, concrete mechanism useful for framing any conversation, especially critical ones, requiring a clinician's immediate attention and action. It allows for an easy and focused way to set expectations for what will be communicated and how between members of the team, which is essential for developing teamwork and fostering a culture of patient safety."28
The commercial aviation industry also provides extensive checklists for its pilots to review before, during, and after flights to enhance safety. In addition, it trains flight crews in the cognitive and interpersonal skills required to identify and address potential safety problems in a timely and effective manner. The industry coined the term CRM—Crew Resource Management—to describe the set of principles, techniques, and skills that crew members should use to communicate and interact more effectively as a team. CRM training, which is employed extensively throughout the industry, helps crews identify potential problems and discuss them in an open and candid manner. Through CRM training, captains learn how to encourage their crew members to bring forth concerns, and crew members learn how to raise their concerns or questions in a respectful, but assertive, manner.29
Aviation experts credit CRM with enhancing flight safety immeasurably. In one famous incident in 1989, United Airlines Flight 232 experienced an engine failure and a breakdown of all the plane's hydraulic systems. By most accounts, no one should have survived. However, the crew managed to execute a remarkable crash landing that enabled 185 of the 296 people onboard to survive. Captain Alfred Haynes credited CRM practices with helping them save as many lives as they did.30