Praise from readers
"A superb book on how to prevent and minimize technological disasters."
P. Roy Vagelos, M.D., Retired Chairman and CEO,
Merck & Co., Inc.
"If you want to know how serious technological disasters can be, how poorly we tend to handle them, and what can be done to reduce or eliminate the dangers associated with them, this is the book for you."
Russell L. Ackoff, Professor Emeritus of Management Science
at The Wharton School, University of Pennsylvania
"A thorough compendium of technological disasters, complete with detailed descriptions, analyses of what happened, what went wrong, and why. This lucid book candidly addresses human and societal failings that need to be corrected if future disasters are to be prevented."
Severo Ornstein, Internet Pioneer
and Founder of Computer Professionals for Social Responsibility
"Minding the Machines provides us with insights that are greatly needed to cope with the major technological disasters that are endemic to our times."
David A. Hounshell, David M. Roderick Professor of Technology and Social Change, Carnegie Mellon University
"An excellent, balanced, and highly readable book emphasizing human, social, and organizational elements universally present in technological disasters."
Carver Mead, Gordon and Betty Moore Professor Emeritus of Engineering and Applied Science
at the California Institute of Technology,
1999 Lemelson-MIT Prize Winner
"This book presents a systematic analysis of the root causes of technological disasters, accompanied by many riveting examples. More importantly, the authors provide the reader with an enlightening discussion on how we can prevent them."
David J. Farber, The Alfred Fitler Moore Professor of Telecommunication Systems
in the School of Engineering and Applied Sciences
and Professor of Business and Public Policy
at The Wharton School, University of Pennsylvania
A complete blueprint for preventing technological disasters in the 21st century.
Why do technological disasters occur, and how can we prevent them? How do we design technological systems that enhance human life rather than imperil it? How do we live with the technology we have created?
In Minding the Machines, William M. Evan and Mark Manion offer a systematic and provocative guide to preventing technological disasters. They reveal the hidden patterns and commonalities beneath more than 30 of the worst technological tragedies of recent history, and identify powerful preventive measures that address every key area of risk.
Minding the Machines throws light on:
* Technological disasters: theories and root causes. From systems theory to terrorism and counter-terrorism measures
* Strategic responses to key risk factors. Attacking the four key causes of disaster
* Technical design failures, and the organizational failures connected to them. How communications failures lead to system failures, and what to do about it
* Socio-cultural failures: the lessons of Bhopal. Two comparable Union Carbide plants: one safe in West Virginia, one murderous in India
* The responsibilities of institutions, the responsibilities of individuals. What corporate managers, engineers, scientists, and government officials can do
* Participatory technology: the central role of the citizen. Why citizens must play a far more active part in decisions about technology
In Minding the Machines, two leading experts in technological risk assessment analyze more than 30 disasters, from the Titanic sinking to the Exxon Valdez oil spill, the Challenger shuttle disaster to the Chernobyl nuclear catastrophe, and the Love Canal toxic waste contamination to the Bhopal poison gas release. They present lessons learned and preventive strategies for all four leading causes of technological disasters: technical design factors, human factors, organizational systems factors, and socio-cultural factors. They also identify appropriate roles for every participant in technological systems, from corporations to regulators, engineering schools to individual citizens.
Technological disasters can kill thousands, and destroy the organizations in which they occur. In recent decades, much has been discovered about the causes and prevention of technological disasters, but many organizations have not learned the lessons or implemented appropriate preventive strategies.
(NOTE: Each chapter contains a Conclusion and References.)
List of Tables.
List of Figures.
Invitation to Our Readers.
I. INTRODUCTION.
1. Technological Disasters: An Overview.
Dangerous Technologies. Selected Examples of Technological Disasters. Causes of Technological Disasters. Strategies for Prevention. Who Should Be Concerned.
2. Natural and Human-Made Disasters.
Natural Disasters. Human-Made Disasters. Comparison of Natural and Human-Made Disasters. Endnotes.
II. THE PREVALENCE OF TECHNOLOGICAL DISASTERS.
3. The Year 2000 (Y2K) Debacle: An Ironic Failure of Information Technology.
The Overall Impact of Y2K. Anticipation of the Problem. The Causes of the Problem. The Scope of Y2K. The Cost of Y2K.
4. Theories of Technological Disasters.
A Systems Approach to Technological Disasters. Feedback Mechanisms and the Design of Engineering Systems. Perrow's Theory of “Normal Accidents” (NAT). High Reliability Theory (HRT). A Sociotechnical Systems Analysis of Technological Disasters.
5. The Root Causes of Technological Disasters.
Technical Design Factors. Human Factors. Organizational Systems Factors. Socio-Cultural Factors. Terrorism in the Nuclear-Information Age. Terrorism and Counter-Terrorism. Organizational Systems Factor Counter-Measures.
III. TECHNOLOGICAL DISASTERS SINCE THE INDUSTRIAL REVOLUTION.
6. Three Industrial Revolutions and Beyond.
Three Technological Revolutions. The First Industrial Revolution. The Second Industrial Revolution. The Third Industrial Revolution. A Fourth Industrial Revolution?
7. A Matrix of Technological Disasters.
Testing Three Hypotheses about the History of Technological Disasters.
IV. ANALYSIS OF CASE STUDIES OF TECHNOLOGICAL DISASTERS.
8. Twelve Exemplary Case Studies of Technological Disasters.
USS Princeton Explosion. Titanic Sinking. Aisgill Train Wreck. Johnstown Flood. DC-10 Crash. Tenerife Runway Collision. Santa Barbara Oil Spill. Love Canal Toxic Waste Contamination. Apollo I Fire. Three Mile Island. Challenger Disaster. Bhopal Poison Gas Release.
9. Lessons Learned from the Case Studies of Technological Disasters.
Specific Lessons Learned. General Lessons Learned.
V. STRATEGIC RESPONSES TO TECHNOLOGICAL DISASTERS.
10. The Responsibilities of Engineers and Scientists.
The Role of Engineering Schools. The Role of Engineering Societies. The Role of Science and Scientists.
11. The Role of Corporations in the Management of Technological Disasters.
Corporate Management versus Mismanagement. Case Studies in Crisis Management. Crisis Management Theory. Endnotes.
12. The Role of the Legal System in Technology Policy Decisions.
The Executive Branch. The Legislative Branch. The Administrative Branch. The Judicial Branch. The Legal Profession. Relative Effectiveness of U.S. Legal Subsystems in Technology Policy Decisions.
13. Assessing the Risks of Technology.
Probabilistic Risk Assessment. Risk-Cost-Benefit Analysis. Technology Assessment.
14. Technology Decisions and the Democratic Process.
Technocratic versus Democratic Assessments of Risk. Participatory Technology. Mechanisms for Citizen Participation. Toward an Alliance of Citizens' Organizations.
Name Index.
We live in an age of breathtaking technological innovation. Two developments of the 20th Century, the computer and the Internet, have revolutionized our everyday lives, transforming the way millions of people communicate, do their work, fall in love, and even buy birthday gifts. But while technological innovations have transformed and enhanced our lives in myriad ways, they have also created the potential for technological disasters of unimaginable consequences. We are vulnerable in ways we have never been vulnerable before. And yet, to turn the clock backward and eliminate machines from our lives is impossible. We are therefore faced with the challenge of minding the machines: of anticipating and preventing technological disasters. At the same time, we are faced with the challenge of seeing to it that technology's designers develop a stronger sense of social responsibility, a concern for human security and well-being. How do we evaluate our own risk assessment procedures? How do we ensure that policy makers, experts, and others involved in the risk assessment process act not only according to cost-benefit ratios but also with a commitment to social responsibility?
In the pages that follow, we present case studies of technological disasters that have occurred in all corners of the globe. Our purpose in examining these case studies is to develop an array of strategies (professional, organizational, legal, and political) that can help prevent technological disasters. What emerges from these case studies is both illuminating and deeply troubling. Hard as it is to believe, some of these case studies illustrate disasters that were anticipated. Potential problems were recognized long before lives were lost or property was damaged. For example, studies of the Challenger Shuttle tragedy identified memoranda written by engineers warning about the possible failure of the O-rings if the shuttle were launched in below-freezing temperatures. In fact, the record shows that engineers working on the O-ring design were well aware of the problem a full year before the tragedy. During a teleconference the night before the scheduled launch, several engineers explicitly recommended to management against launching the shuttle. What went wrong with the decision-making process of management that led to launching the Challenger shuttle? The answer has far less to do with technology and far more to do with the value judgments of the parties involved, the structure of organizations, and the inadequacies of human communication. Why did the people making the decisions disregard the recommendations of their own engineers? What risk evaluation procedures were in place at the time? How can the lessons from the Challenger shuttle be applied to the future design and assessment of technology?
Other case studies in our book focus upon technological disasters that were unanticipated. In these situations, there were inadequate provisions for training workers to cope with crises, building-in fail-safe mechanisms to counter human errors, and preparing a well-thought-out emergency plan to meet hazardous and unexpected developments. For example, the poison gas release at the Union Carbide plant in Bhopal, India, resulted in the death of thousands. A worker flushing some pipelines with water failed to insert a metal disk to seal valves in the pipeline leading to a storage tank. The tank contained a highly poisonous chemical, methyl isocyanate. Water from the pipeline leaking into the tank reacted violently with the methyl isocyanate, causing increased pressure and temperature in the tank. The ensuing chemical reaction caused the release of tons of toxic chemicals into the air surrounding the town. The result was catastrophic. In retrospect, certain questions haunt us: What design flaws and human errors made this catastrophe possible? What flaws in the risk assessment procedures made it possible for human beings not to consider that such problems might arise?
We distinguish between anticipated and unanticipated disasters. We study these tragic events in an effort to learn the lessons of history, and do everything we can to protect ourselves from similar events in the future.
As we move into the 21st Century, this is one of the greatest challenges that confront us. How do we design technology that will enhance human life? How do we live with the technology we have created? How do we mind the machines?
William M. Evan and Mark Manion