Contents

  • 3.1 Crisis and Opportunity
  • 3.2 Crisis Communications, or Communications Crisis?
  • 3.3 Equifax
  • 3.4 Conclusion

3.2 Crisis Communications, or Communications Crisis?

When planning for data breaches, many organizations emphasize the technical aspects of the response effort: modifying firewall rules on the fly, cleaning spyware and rootkits off endpoint systems, preserving evidence. This is part of the organization’s crisis management strategy, which addresses the “reality of the crisis.”13

If there is one area that is overlooked more than any other in data breach planning, it is crisis communications. Time and time again, we see organizations turn data breaches into reputational catastrophes due to classic communications mistakes.

“Crisis communications is managing the perception of that same reality,” explains Fink. “It is telling the public what is going on (or what you want the public to know about what is going on). It is shaping public opinion.”14 In a data breach crisis, a poor or nonexistent communications strategy can cause far more long-lasting damage than any actual harm caused by the breach itself. While a full exploration of effective crisis communications is outside the scope of this book, we will point out clear communications mistakes in the data breaches we study and share commonly accepted “rules of thumb” that can help your crisis communications go more smoothly.

When a data breach occurs, communications with key stakeholders such as customers, employees, shareholders, and the media are often developed on the fly. Sometimes multiple staff members talk to the press, leading to mixed messages. Other times, the organization goes radio silent, and the public is left with no answers, no reassurance, and a sense of distrust. In the next sections, we will break down why crisis communication is so important and provide readers with clear strategies for a strong response.

3.2.1 Image Is Everything

When a data breach crisis occurs, organizations face a significant threat to their image. “Image” is the perception of an organization in the mind of a stakeholder. Far from being a superficial matter, an organization’s image is vital.

A damaged image can impact customer relations, as well as investor confidence and stock values. Image is also critical for defining the organization’s relationship with law enforcement, regulators, and legislators. In a data breach, damage to an organization’s image can trigger consumer lawsuits, cause increased fines and settlement costs, and even affect the content of laws that are passed as a result of the crisis. It can impact hiring, morale, and employee retention. If image repair is fumbled, key executives may be forced to step down as a result of a breach, as Equifax’s CEO shockingly discovered.

The impact of a data breach on an organization’s image depends on many factors. Image repair expert William L. Benoit says that a threat to one’s image occurs when the relevant audience believes that:15

  1. An act occurred that is undesirable.

  2. You are responsible for that action.

Data breaches can damage the relationship between stakeholders and the organization. There is a risk that the organization will be perceived as responsible for the undesirable act (the breach). This, in turn, creates a threat to the organization’s image.

3.2.2 Stakeholders

Fundamentally, a corporate image is the result of a relationship that the organization develops with each stakeholder. To use Equifax as an example, key stakeholders include:

  • Consumers

  • Shareholders

  • Employees

  • Regulators

  • Board of Directors

  • Legislators

  • And more

These categories of stakeholders have different concerns in the wake of a breach.

3.2.3 The 3 C’s of Trust

A data breach can injure the relationship between stakeholders and the organization. Specifically, it damages trust. Military psychologist Patrick J. Sweeney conducted a study of enlisted soldiers in 2003 and found that three factors were central to trust:16

  • Competence - Capable of skillfully executing one’s job

  • Character - Strong adherence to good values, including loyalty, duty, respect, selfless service, honor, integrity, and personal courage

  • Caring - Genuine concern for the well-being of others

As we will see, these three factors apply as well in the context of trust between stakeholders and an organization.

3.2.4 Image Repair Strategies

Throughout this book, we will see that breached organizations work hard to preserve and repair their images. Here, we will introduce a model for analyzing different strategies, in order to evaluate their effectiveness.

Benoit lists five categories of image repair strategies:17

  1. Denial - The accused denies that the negative event happened or that he or she caused it.

  2. Evasion of Responsibility - The accused attempts to avoid responsibility, such as by claiming the event was an uncontrollable accident or that he or she did not have the information or ability to control the situation.

  3. Reducing Offensiveness - The accused attempts to reduce the audience’s negative feelings through one of six variants:

    • Bolstering - Highlighting positive actions and attributes of the accused

    • Minimization - Convincing the audience that the negative event was not as bad as it appears

    • Differentiation - Emphasizing differences between the event and similar negative occurrences

    • Transcendence - Placing the event in a different context

    • Attacking one’s accuser - Discrediting the source of accusations

    • Compensation - Offering remuneration in the form of valued goods and services

  4. Corrective Action - The accused makes changes to repair damage and/or prevent similar situations from occurring in the future.

  5. Mortification - The accused admits that he or she was wrong and asks for forgiveness.

All of these image repair strategies can be, and have been, employed in data breach responses, some to greater effect than others.

3.2.5 Notification

Notification is perhaps the most critical part of data breach crisis communications, and it can have an enormous impact on public perception and image management. Key questions include:

  • When should you notify key stakeholders? Rarely, if ever, are all the facts about a data breach known up front. On the one hand, a quick notification can signal that you care and are acting in good faith. On the other hand, it may be the case that by waiting, you find out more information that reduces the scope of the notification requirements. There is no “right” time, and crisis management teams have many tradeoffs to consider.

  • Who should be notified? There are internal notifications (e.g., upper management, legal). In some cases, it may be appropriate to bring in law enforcement. Certain states require notification to an attorney general or other parties. Depending on the type of data exposed, it may be necessary to alert consumers or employees.

  • How should you notify? Paper mailings, email notification, a web announcement, or phone calls are all common options. Your notification requirements vary depending on the type of data exposed, the number of data subjects affected, the geographic location of the data subjects, and other factors. Notification can be expensive, and often cost is a limiting factor. Today, many organizations take a multipronged approach, which includes email or paper individual notifications, supported by a website FAQ and a call center where consumers can get more information.

  • What information should be included in a notification? On the one hand, you want to build trust and appear transparent. It’s also important to give data subjects enough information to reduce their risk, whenever that is possible. At the same time, current laws are not in line with the public’s expectations of privacy. Typically, information that is not specifically regulated (such as shoppers’ purchase histories or web surfing habits) is not explicitly mentioned in data breach announcements, even if it is likely that the information has been exposed.

In this section, we highlight some of the key challenges that breach response teams face when determining when, whom, and how to notify.

3.2.5.1 Regulated vs. Unregulated Data

Data breach investigations are typically conducted to evaluate the risk that data regulated by a breach notification law or contractual clause was inappropriately accessed or acquired. Modern breach response teams are often led by an experienced attorney who acts as the “breach coach,” guiding the investigation and coordinating the participants. Digital forensic investigators take direction from the attorney, gathering and analyzing the evidence that the attorney needs to determine whether a notification statute or clause has been triggered.

Data breach notification laws emerged in the United States during a simpler time. Many state laws were created in response to the 2005 ChoicePoint breach (discussed in more detail in Chapter 4, “Managing DRAMA”), when financial fraud had captured the media’s attention. Credit monitoring and identity theft protection emerged during this period as well and became a part of the cookie-cutter breach response process.

State breach notification laws do not require organizations to make a full confession to consumers, detailing every single data element that may have been stolen. Rather, the laws are designed to protect a specific, limited subset of “personal information.” Recall from Chapter 1 that most of the time, “personal information” includes:18

  • [a]n individual’s first name or first initial and last name plus one or more of the following data elements: (i) Social Security number, (ii) driver’s license number or state-issued ID card number, (iii) account number, credit card number or debit card number combined with any security code, access code, PIN or password needed to access an account.

What about web surfing history, purchase history, “lifestyle interests,” salary information, and more? “As long as it doesn’t contain any of the data elements that would trigger notification such as Social Security Number or financial account information, then no, it would not trigger a notification obligation,” says data breach attorney and certified computer forensic examiner M. Scott Koller, of Baker Hostetler. Even in cases where regulated data elements are involved, breached organizations are not required to notify subjects about other, nonregulated elements that may have been accessed. “In my practice, I generally will include additional information so [affected persons] have a better sense of what occurred,” says Koller. “For example, if a real estate agent was breached, I would say that the information includes name, address, Social Security Number, and other information submitted with your application.” Koller cites mailing address as a common piece of information that may not be protected by statute but is often included in notification letters.
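The statutory definition quoted above boils down to a simple rule: notification is triggered only when a name is paired with at least one of the enumerated sensitive elements, and financial account numbers count only when combined with a code needed to access the account. The following is a minimal sketch of that rule, not legal advice; the field names are hypothetical, and real trigger analysis is done by counsel against the specific statute at issue:

```python
# Sketch of the "personal information" trigger described above.
# Field names are hypothetical; real statutes vary by state.

SENSITIVE_ALONE = {"ssn", "drivers_license", "state_id"}
FINANCIAL = {"account_number", "credit_card", "debit_card"}
ACCESS_CODES = {"security_code", "access_code", "pin", "password"}

def notification_triggered(exposed: set[str]) -> bool:
    """Return True if the exposed fields meet the statutory definition."""
    # "First name or first initial and last name" -- simplified here
    # to a single first_name field for illustration.
    has_name = "first_name" in exposed and "last_name" in exposed
    if not has_name:
        return False
    if exposed & SENSITIVE_ALONE:
        return True
    # Account and card numbers count only when paired with a code
    # needed to access the account.
    return bool(exposed & FINANCIAL) and bool(exposed & ACCESS_CODES)
```

Under this rule, a breach exposing `{"first_name", "last_name", "purchase_history"}` triggers nothing, which illustrates Koller’s point: purchase histories and similar unregulated data fall entirely outside the notification obligation.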

3.2.5.2 Left Out

Digital forensic analysis is often a painstaking, time-intensive, and expensive process. Reconstructing a picture of precisely what data elements were accessed, and when, can involve hundreds if not thousands of hours of labor, particularly if the organization did not retain good logs. Even breached organizations have limited budgets (and so do their insurers, who may be footing the bill). And again, there is the time pressure that comes from crisis communications needs.

As a result, data breach investigations often do not include the full range of an attacker’s activities. Rather, investigations normally focus on the regulated data elements and leave out systems that are not needed for complying with data breach notification requirements. Computers that don’t contain regulated data elements may not be included in digital evidence preservation at all.

For example, at Equifax, intruders reportedly first gained access to personal information in May 2017, after exploiting a vulnerability in a public-facing Equifax web server. Once the attackers gained a foothold, they explored the company’s internal network. They crawled through the network for more than two months before they were finally discovered on July 29. Bloomberg Technology later published an investigative report that revealed that criminals “had time to customize their tools to more efficiently exploit Equifax’s software, and to query and analyze dozens of databases to decide which held the most valuable data. The trove they collected was so large it had to be broken up into smaller pieces to try to avoid tripping alarms as data slipped from the company’s grasp through the summer.”19

Unregulated data such as web surfing activity, shopping history, or social connections may be stolen by an attacker, but data brokers would not be required to report that to the public, or even check to see whether anything was stolen in the first place. Equifax likely held extensive volumes of this type of data because it offers digital marketing services, including “Data-driven Digital Targeting” designed to track consumers and target advertisements. Exactly which Equifax databases did the attackers access? The public will likely never know. Equifax, like other data brokers, has amassed troves of sensitive consumer and business data, but only a small percentage is regulated by state and federal data breach notification laws.

Attorneys, forensics firms, the media, and the public are all focused on the potential exposure of SSNs and the risk of identity theft, just as they were a decade ago—but it is increasingly clear that technology and data analytics have changed the game. “There’s a trend toward expanding what qualifies as ‘personal information,’ and that trend has continued year after year,” says Koller. “So far, expansion is where people are sensitive . . . people are sensitive to medical information, sensitive to biometric information, usernames and passwords, because there’s harm to that.” In the coming years, data breach responders will need to stay up-to-date on the changing regulatory requirements, as well as key stakeholders’ (often unspoken) expectations.

3.2.5.3 Overnotification

Overnotification occurs when an organization alerts people to a potential data breach when notification was not truly necessary. Since a data breach announcement can cause reputational, financial, and operational damage, overnotification is obviously something to avoid. When it occurs, it is usually due to a lack of evidence, often because log data was missing or inaccessible.

Think of all the “megabreaches” you’ve read about in the news. Headlines announce that hundreds of thousands of patient records or millions of credit card numbers were exposed. Behind the scenes, there is often no proof that hackers actually acquired all of that data. Instead, the organization simply wasn’t logging access to sensitive information, and as a result there was no way for investigators to tell what data had actually been acquired and what remained untouched. Absent evidence, some regulations require organizations to assume that a breach occurred.

Today, cheap and widely available tools exist that will create a record of activities, such as every time a file is uploaded (or downloaded), every time a user logs in (or out), or every time a user views customer records. These log files can be absolutely invaluable in the event of a suspected breach.

Imagine that you are faced with a case where a hacker broke into a database server that housed 50,000 customer records. Upon reviewing the log files, your investigative team finds that only three customer records were actually accessed by the criminal. Instead of sending out 50,000 customer notifications, you send out three. Worth it? Definitely!
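The scenario above comes down to filtering an access log for the attacker’s activity. Here is a minimal sketch of that kind of scoping analysis, assuming a hypothetical CSV-style audit log with timestamp, user, and record ID columns (the log format and account names are invented for illustration):

```python
import csv
from io import StringIO

# Hypothetical audit log. In the scenario above, the attacker operated
# from a known compromised account; we extract the distinct customer
# records that account touched.
AUDIT_LOG = StringIO("""\
timestamp,user,record_id
2018-03-01T02:11:09,svc_backup,cust-00017
2018-03-01T02:12:44,attacker01,cust-00391
2018-03-01T02:13:02,attacker01,cust-00391
2018-03-01T02:15:58,attacker01,cust-07210
2018-03-01T02:19:31,attacker01,cust-11002
2018-03-01T08:30:00,alice,cust-00044
""")

def records_accessed_by(log, user: str) -> set[str]:
    """Return the distinct customer records a given account viewed."""
    return {row["record_id"] for row in csv.DictReader(log) if row["user"] == user}

compromised = records_accessed_by(AUDIT_LOG, "attacker01")
print(len(compromised))  # 3 distinct records -> 3 notifications, not 50,000
```

The key design point is that the log must record access at the level of individual records; a log that only captures logins could prove *who* got in, but not *which* data was viewed, and the organization would be back to assuming the worst.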

Every organization’s logging and monitoring system is unique and should be tailored to protect its most sensitive information assets. This reduces the risk of overnotification and can save an organization from a full-scale disaster.

3.2.5.4 Delays in Notification

Breach response teams are under enormous pressure to decide who needs to be notified as quickly as possible. As the public becomes savvier and more aware of the potential harm that can be caused by data breaches, they are less tolerant of delayed notifications. Even a lag of as little as a week can incur consumer wrath.

In the case of Equifax, the company reportedly spent six weeks investigating its data breach and preparing notifications. Forensic investigators, law enforcement agents, data breach attorneys, and other professionals involved in data breach management know that six weeks is a common notification window (certainly well within HIPAA’s 60-day period, for example)—but this was not your average breach. The theft of 145.5 million SSNs meant that organizations throughout the United States could no longer rely on SSNs as a means of authenticating consumers. (Of course, as outlined in Chapter 5, “Stolen Data,” much of the data was already stolen anyway, but until the Equifax breach occurred, most U.S. citizens maintained a healthy denial.) From the public’s perspective, every day that Equifax waited to disclose was one more day that affected individuals did not have the opportunity to protect themselves from potential harm.

When the notification delay stretches to years, you have a lot of explaining to do, and the delay may be far more damaging than the breach itself—as Yahoo discovered in 2016 when its data breach was finally uncovered.

“If a breach occurs, consumers should not be first learning of it three years later,” said Senator Mark Warner of Virginia, in response to Yahoo’s breach notification. “Prompt notification enables users to potentially limit the harm of a breach of this kind, particularly when it may have exposed authentication information such as security question answers they may have used on other sites.”20 This reflected a notable advancement in the public’s demonstrated understanding of data breaches: By the end of 2016, many people recognized that the compromise of their account credentials from one vendor could enable attackers to gain access to other accounts as well.

3.2.6 Uber’s Skeleton in the Closet

Woe to the company that keeps a data breach secret—and then eventually is unmasked.

Uber is one such company. In 2016, Uber fell victim to cyber extortion—and made a bad choice. An anonymous hacker (who called himself “John Dough”) emailed the company, claiming to have found a vulnerability and accessed sensitive data. It turned out that he had gained access to the company’s cloud-based repository at GitHub, where he found credentials and other data that enabled him to break into Uber’s Amazon web servers, which housed the company’s crown jewels—source code and data on 57 million customers and drivers (including approximately 600,000 driver’s license numbers).

The hacker politely but firmly demanded a payoff for the discovery of the “vulnerability.” At the time, Uber had a bug bounty program, managed by the specialty company HackerOne. After verifying the hacker’s claims, Uber discussed payment for the hacker’s report. Rob Fletcher, Uber’s product security engineering manager, informed John Dough that the bug bounty program’s typical top payment was $10,000. The hacker demanded more.

“Yes we expect at least 100,000$,” the hacker wrote back. “I am sure you understand what this could’ve turned out to be if it was to get into the wrong hands, I mean you guys had private keys, private data stored, backups of everything, config files etc. . . . This would’ve heart [sic] the company a lot more than you think.”21

Uber acquiesced and arranged for payment of $100,000. It turned out that there were actually two hackers—the original “John Dough,” based in Canada, and a second person—a 20-year-old man in Florida who had actually downloaded Uber’s sensitive data. According to reports, “Uber made the payment to confirm the hacker’s identity and have him sign a nondisclosure agreement to deter further wrongdoing. Uber also conducted a forensic analysis of the hacker’s machine to make sure the data had been purged.”22

Internally, the case was managed by Uber’s CSO, Joe Sullivan, and the company’s internal legal director, Craig Clark. Reportedly, Uber’s CEO at the time, Travis Kalanick, was briefed. Uber’s team made the decision that notification was not required, and the case was closed—or so they thought.

3.2.6.1 Housecleaning

The case probably would have stayed closed forever, but in 2017, Uber’s CEO resigned amid a growing scandal that revealed pervasive unethical and in some cases illegal behavior at the company. The new CEO took the reins in September 2017. The company’s board initiated an internal investigation of the security team’s activities, enlisting the help of an outside law firm. As part of this investigation, the unusual $100,000 “bug bounty” payment was uncovered—and investigated. The company also hired the forensics firm Mandiant to take an inventory of the affected data.

Cleaning the skeletons out of the closet was very important for Uber’s new leadership team. In order to rebuild trust with key stakeholders and the public, they needed to demonstrate openness and honesty. Any scandals that remained hidden could come back to haunt the new leadership team—a risk it could not afford to take. This was especially important given Uber’s rocky financial footing; were the company to be sold, the breach might well have come out during a cyber diligence review later. Exposing Uber’s dirty secrets all at once gave Uber’s new team the opportunity to control the dialogue, point the finger at the old management, and continue on with a clean(ish) slate. As a result, Uber’s data breach case was cracked wide open.

On November 21, 2017, Uber’s new CEO, Dara Khosrowshahi, released a statement disclosing the company’s “2016 Data Security Incident.” In this statement, he revealed that the names and driver’s license numbers for 600,000 drivers had been downloaded, in addition to “personal information of 57 million Uber users around the world.” Khosrowshahi specifically called out Uber’s failure to notify data subjects or regulators as a problem, and announced that the company’s CSO Joe Sullivan and attorney Craig Clark had been fired, effective immediately.23

“None of this should have happened, and I will not make excuses for it,” he wrote. “While I can’t erase the past, I can commit on behalf of every Uber employee that we will learn from our mistakes.”24

3.2.6.2 Fallout

Angry riders and drivers immediately took the company to task on social media—not just for the breach itself, but for the way it was handled. Days later, two class-action lawsuits were filed against the ride-sharing company. Washington State, as well as Los Angeles and Chicago, filed their own lawsuits. Attorneys general from around the country began investigating, and in March 2018, Pennsylvania’s state attorney general announced that he was suing Uber for violating the state data breach notification law.

“The fact that the company took approximately a year to notify impacted users raises red flags within this committee as to what systemic issues prevented such time-sensitive information from being made available to those left vulnerable,” said U.S. Senator Jerry Moran (R-KS).25

Uber’s chief information security officer, John Flynn, was called to testify before Congress about the breach. A large part of his testimony was in defense of the bug bounty program, which had come under fire due to its role in the cover-up. “We recognize that the bug bounty program is not an appropriate vehicle for dealing with intruders who seek to extort funds from the company,” Flynn said. “The approach that these intruders took was separate and distinct from those of the researchers in the security community for whom bug bounty programs are designed. . . . [A]t the end of the day, these intruders were fundamentally different from legitimate bug bounty recipients.”

3.2.6.3 Effects

The Uber case rocked the boat for third-party breach response teams, who frequently base disclosure decisions on a risk analysis. Many breach coaches and security managers would have reached the same conclusions as Sullivan and Clark. After all, the hacker had signed an NDA, and the company had conducted a forensic analysis of his laptop. For many attorneys, this would have been considered sufficient evidence to conclude that there was a low risk of harm.

Deferring to outside counsel may have helped. There is no public evidence that Sullivan and Clark called upon an outside cybersecurity attorney for legal assistance in this case. Involving outside counsel allows internal staff to defer to an experienced third party with regard to disclosure decisions, providing significant protection for the internal team in the event that the decision is later questioned. Given the complex state of cybersecurity regulation and litigation, it is always safest to involve outside counsel. Had Uber’s investigative team chosen to involve outside counsel, they may well have reached a different conclusion.26

As shocking as the Uber disclosure was, one has to question whether it was truly outside the norm. It’s safe to say that if Uber had not chosen to report the 2016 breach, it most likely never would have been revealed. How many companies today have similar skeletons in the closet that may never be uncovered?
