The Ethics of Free Software
- Introduction: The Failure of Proprietary Software
- Legal and Monetary Issues
- Educational Value
- Possible Objections
- Free Software Successes
The Failure of Proprietary Software
"Using Windows NT [closed software], which is known to have . . . failure modes, on a warship is similar to hoping that luck will be in our favor." - Anthony DiGorgio, Engineer, United States Atlantic Fleet Technical Support Center 19
It was a fairly normal day for the USS Yorktown in September, 1997. The Aegis Missile Cruiser was participating in maneuvers off the coast of Cape Charles, Virginia. No enemy ships were in sight.
Suddenly, the ship's entire propulsion system inexplicably failed. The USS Yorktown was dead in the water, yet its engines were in perfect mechanical order. [19]
What, then, caused the problem that required the 80,000-horsepower ship to be towed into port for over 48 hours of maintenance? The culprit was software: a crew member entered a zero into a database field, and the resulting divide-by-zero error cascaded through the ship's Windows NT network until the propulsion system shut down. [19]

This failure was but one symptom of a serious problem affecting the very methods used to write the code that runs nine out of every ten of the world's microcomputers. The USS Yorktown incident is only one example of what can happen when trust is placed in inherently untrustworthy systems and design methodologies. Systems can crash on a daily or even hourly basis [6], and data loss is an unfortunate reality when such events occur. [1] Critical systems can stop functioning, costing millions of dollars in lost revenue.

Software has a tremendously broad reach in today's society. Almost every person in the United States is affected by software, either directly or indirectly. Everything from surface mail and phone calls to airline flights and Internet-based commerce is handled by software. When the software behind these activities is of higher quality, the people who use it benefit because it performs better and is more reliable.
You are happier when your airline flights are on time: you are less likely to miss an important meeting or a connecting flight to your destination. Internet orders are more likely to arrive correctly, meaning that your birthday gift to Aunt Edna will be timely. To put it another way, poor software quality causes a great deal of harm. Errors in billing, credit reports, tax records, and the like are but a few examples of problems attributed to software glitches, and any of them can cause significant harm to people or companies. An individual can be unfairly targeted by collection agencies and lose time clearing up the mess; a company may waste money defending itself against accusations over things it never did. A wrongly rejected loan could mean that a growing family cannot get the larger home it needs. All of this can result from low-quality software, and it does happen.
Software developers also benefit from better design methods. They can produce software in a shorter amount of time, and the computerized tools that assist them in development make their job easier.
Therefore, if it can be shown that the free software model produces higher-quality software, then it follows that free software yields greater productivity and more benefit for mankind, and that it is the ethical choice for software developers. Below, several attributes of free software are discussed, along with the reasons they help to improve software quality.
One of the most important aspects of the design of any large and important project is peer review. In large projects such as operating systems, hundreds or even thousands of people contribute to what ends up being millions of lines of code. A single missing or mistyped character in that code can be enough to allow a security breach or cause a system crash. Humans are not perfect, and while automated scanning programs can catch some simple mistakes, many mistakes can be caught only by another person reading the code.
With closed software, in general, only employees of the company writing the program have access to its source code. This makes it impossible for others outside the company to find problems in the code before they cause damage. Worse, when there is a problem, the users of a program can do nothing to help track it down.
With free software such as Linux, the source is freely available for download. People are encouraged to look at it, to be critical, and to try to find bugs. And thousands of people do look at the code. The end result is that far more people are proofreading and fixing the code, so the program has fewer bugs and is more stable. [8]
When no source code is available, many bad things can happen, from security problems to devastating system crashes. The failure of the USS Yorktown is a prime example. Do we really want to deny the military, airlines, or any other industry in which lives are at stake the ability to fix system problems in the field? If something breaks, they cannot fix it, because they have no source code for the software. This is not the only situation in which vital government systems have been hampered by closed software. On December 15, 1998, a bug in Microsoft's proprietary Exchange mail server took down two critical servers in the United States House of Representatives. Ironically, the bug struck just when e-mail traffic was at its peak, days before a critical vote on the future of the President, so House members could not receive valuable feedback from their constituents. [4]
Both of these situations were caused by bugs in proprietary code, and in both, the people running the computers were completely helpless. No amount of effort on their part could have fixed the problem; the lack of source code prevented it. A computer glitch is bad enough, but when people are denied the ability to fix it themselves, or to hire someone else to fix it in a timely manner, the consequences can be very serious. Lives aboard a crippled ship could be lost. Vital government business could grind to a halt as the communications lines between the people and their representatives are severed.
Free software promises a better world. In the free software world, anyone can fix a problem with the software in use. Even if a company does not have the expertise in-house to fix the problem, contractors are plentiful, and they can make quick fixes. In situations where even an hour of downtime can mean literally millions of dollars in lost revenue, having a fix in a matter of hours or even minutes as opposed to days or weeks can make a tremendous difference.
Users Are Developers
The idea that users are developers is completely foreign to the closed-source world, but it is of tremendous importance in the free software community. It matters in two separate areas: debugging and development.
With proprietary software, when a bug is found, even if it is serious, it often takes a long time for it to be fixed, if it ever is. A lot of things can go wrong. If the original manufacturer of the software program no longer exists, and the source to the program is not available, it is generally impossible to fix the problem. If this program is essential to someone's business and no longer works, the company has a big problem.
This is only a small part of the problem. If your software vendor is unable or unwilling to fix problems, you are stuck with broken software, and there's nothing that can be done about it.
For software vital to the operation of a business, such as billing and financial systems, these consequences alone demonstrate the ethical problems with keeping the source code to software a secret.
Free software presents a better alternative. When you find a bug in your software, you can fix it yourself instead of depending on the original vendor, or somebody else can be hired to fix it. This reduces a difficult, possibly insurmountable problem to something that can often be fixed in a matter of hours or even minutes, a clear benefit of free software.
Another important aspect of this is that the users of a given free program can add their own features to it. If you're not satisfied with how something works, you can improve it! This capability is especially useful if the software vendor does not exist any longer or is unwilling to implement your desired changes. You get a better product, and the people using it are more productive.
The benefits are even greater than that, though. When people find bugs in free software or add new features on their own, they are encouraged to submit their changes back to the maintainers. This means that every user of a free program is a potential contributor to it! What's more, when people modify a program (adding new features or changing it to fit their taste), those modifications receive free peer review. When the people adding features cooperate with the process, which is almost always the case, others using the software can see the code and spot problems. As a result, free software products can have bugs found and fixed faster, new features implemented sooner, and better reliability than closed-source products. [11] People who make changes for their own use can have those changes peer-reviewed, increasing the quality of their software. The result is that software is less likely to crash, systems are down less often, and the software better meets the needs of those who use it, all of which contributes to an increase in utility.
Security is one of the most complex areas of software development, requiring expert programmers to write secure code and to find security problems in existing code. With closed software, the number of people able to review the security of the code is limited to a minuscule fraction of the programmers who could do so.
There are many instances of security problems in closed-source software that never existed, or have long ago been fixed, in free software. These problems often cause serious loss of important data, which can easily lead to devastating consequences. For instance, on July 8, 1997, the United States Coast Guard's database server crashed, and 115 employees spent some 1,800 staff-hours restoring the data. [3] In the first week of March 1998, crackers exploited a security hole in the proprietary Windows NT operating system to crash thousands of systems; some individual sites suffered over one hundred failures. [17] These are just a few examples of crackers exploiting bugs in closed software, bugs that are not present in free software.
The cause, however, goes deeper than the general issues of free software discussed above. Under a doctrine called "security through obscurity," people try to keep their data secure or encrypted not necessarily by writing secure software, but by tightly restricting access to the source. The theory is that if those attempting to crack systems do not know how the algorithms work, they will not be able to breach security.
This argument fails quite easily, however. In [10], Bruce Perens states, "Security through obscurity is always a bad idea." While that statement is perhaps too bold, its sense is correct, and the reasoning behind it can be analyzed in terms of utilitarian ethics.
Many popular security systems and encryption algorithms are presently compromised, yet their manufacturers continue selling them. This is often because outsiders cannot review the code, and the manufacturers themselves may not even possess the knowledge necessary to find or fix the problems.
Software developers can conceal security holes in their software [10], intentionally or unintentionally. The end result is that people believe their data is secure when it may not be, and the consequences can mean a loss of utility. A simple hypothetical example will suffice: if privileged information about people's criminal records leaks from a courtroom computer, careers and lives could be destroyed on the basis of that information. Other security breaches could seriously affect national security, even causing loss of life in some situations.
When algorithms are free software, their security can be openly evaluated. This places the power to decide whether a given algorithm is appropriate in the hands of its users, not its author. Users of such algorithms can have a high degree of confidence that they are sufficient for their needs, thus increasing utility by decreasing not only the chances of compromise, but also the extra effort and worry needed to deal with one.