Risk Perception and Choice
Whereas risk assessment focuses on objective losses such as financial costs, risk perception is concerned with the psychological and emotional factors associated with risk. Research has demonstrated that the perception of risk has an enormous impact on behavior, regardless of the objective conditions.
In a set of path-breaking studies begun in the 1970s, decision scientists and psychologists such as the University of Oregon's Paul Slovic, Carnegie Mellon University's Baruch Fischhoff, and others began studying people's concerns about various types of risks. They found that people viewed hazards of which they had little personal knowledge or experience as highly risky, and especially dreaded the possibility that such hazards might materialize. In the case of unfamiliar technologies with catastrophic potential, such as nuclear power, people perceived the risks as much higher than did the experts.2
Research also found that people often perceive the world of low-probability, high-consequence events quite differently from experts, and that this difference shapes their decision making and choice behavior. For years, however, the disparity was simply ignored by expert analysts, who made little effort to communicate the inventory, hazards, vulnerability, and losses associated with risks in ways that the public could accept and act upon. Sometimes important underlying assumptions were not made explicit; at other times complex technical issues were not explained well; and often little effort was made to help the public appreciate why experts could disagree with one another. Rarely were public perceptions even considered.
In recent years, however, the scientific and engineering communities have devoted increased attention to the psychological factors that shape how individuals make decisions about risks from natural and technological hazards. Rather than simply urging policy makers and organizational leaders to take action on the basis of traditional risk-assessment models, experts are increasingly incorporating salient human emotions such as fear and anxiety into those models.
Researchers have discovered that people are generally not well prepared to interpret low probabilities when reaching decisions about unlikely events. In fact, evidence suggests that people may not even want data on the likelihood of a disastrous event when the information is available to them. One study found, for instance, that when faced with several hypothetical managerial decisions that are risky, individuals rarely ask for data on the probabilities of the alternative outcomes. When one group was provided limited information about the choices they were facing and given an opportunity to find out more about their risks, fewer than one in four requested information on the probabilities, and none sought precise likelihood data. When another group was presented with precise probability data, fewer than one in five drew upon the concept of probability when making their choices between alternative courses of action.3
If people do not think probabilistically, how then do they make their choices in the face of risk? Extensive research on decision making now confirms that individuals' risk perceptions are affected by judgmental biases.4 One important form of bias in the case of extreme events such as large-scale disasters is a tendency for people to estimate the risk they face on the basis of their own experience, regardless of what the experts may have communicated. If an event is particularly recent or vivid, people tend to ignore information on the likelihood of its recurrence and focus their attention on the consequences should another similar disaster occur.5 Following the terrorist attacks with hijacked aircraft on September 11, 2001, many of those living in the United States refused to fly because they believed that the chances of ending up on a hijacked aircraft were dangerously high—even though the actual likelihood was extremely low given the tightened security measures introduced in the wake of 9/11.
More generally, researchers have found that people tend to assess low-probability, high-consequence events by focusing on one end of the likelihood spectrum or the other: For some people, such events will surely happen; for others, they will surely not happen; few fall in between. For very unlikely events, however, people crowd toward the "will not happen" end of the spectrum. It is for this reason that there is a general lack of public interest in voluntarily purchasing insurance against natural disasters and in investing in loss-protection measures. People underestimate both the probability of a disaster and the accompanying losses, and they are often myopic when it comes to proper planning for disasters. If a disaster does occur, people then tend to overinvest in seeking to prevent a recurrence; protective measures are thus undertaken only when it is too late. A study of homeowners in California, for example, showed that most purchased earthquake insurance only after personally experiencing an earthquake. When asked about the likelihood of another quake occurring in their area, they correctly responded that it was lower than prior to the disaster because the stress on the fault had been reduced. And yet that is when they finally decided to acquire the insurance.6
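The underestimation logic described above can be sketched with a toy expected-value calculation. All figures below (probabilities, loss, premium) are hypothetical illustrations, not numbers drawn from the California study or any other source cited in the text; they simply show how sliding a perceived probability toward the "will not happen" end of the spectrum can make insurance look wasteful even when it is roughly actuarially fair.

```python
def expected_loss(p_disaster: float, loss: float) -> float:
    """Expected annual loss: probability of the disaster times the loss if it occurs."""
    return p_disaster * loss

# Hypothetical figures for illustration only.
true_p = 0.01          # actuarial annual probability of a damaging quake
perceived_p = 0.0      # the "will surely not happen" end of the spectrum
loss = 200_000         # uninsured damage to the home, in dollars
premium = 2_500        # annual earthquake-insurance premium

# Judged against the true probability, the premium is of the same order
# as the expected loss, so insurance is a reasonable purchase for anyone
# even mildly risk-averse.
print(expected_loss(true_p, loss))       # 2000.0

# Judged against a perceived probability of zero, any premium at all
# looks like pure waste, and the policy goes unbought.
print(expected_loss(perceived_p, loss))  # 0.0
```

The same arithmetic run in reverse illustrates the post-disaster overreaction: inflating the perceived probability well above its actuarial value makes almost any protective expenditure appear justified.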