What Makes the Internet (In)Secure?
Now that you have a basic understanding of the structure, support organizations, and rules of the electronic community that you have chosen to live in, you are in a position to assess the general security that it provides to its residents. Unfortunately, we must conclude that, without taking any kind of precautions of our own, the Internet is not an inherently safe place for you to be, especially if you have any kind of valuable information assets that you want to protect.
Inherent Insecurity of the Technology
The goal of the original funders and designers of the Internet (and its predecessor, the ARPANET) was to develop a technology that enabled computers of any type to communicate with other computers of any other type (today, this means that a Microsoft Windows XP workstation can communicate with a Sun Solaris Web server without any problems). On their own, the primary TCP/IP protocols provide this capability, allowing one system to reliably transmit a stream of data to another system on the Internet. The effort put into the development of the Internet and associated protocols was geared toward creating a high-speed, high-capacity infrastructure designed to get packets through to their destination in a reliable manner, and security was not a primary design goal. The result of this is that the Internet is a fairly trusting medium that puts the onus of security on its end users rather than embedding it as a core service.
One of the biggest issues in Internet security is that trust is implicitly assumed at many levels. Examples of this include the following:
When an application receives an answer (in the form of an IP address) to a DNS query, it assumes that the answer is valid and will connect to the IP address that it was told to use.
Most application programs will take any data that is handed to them and assume that the data is formatted properly by a valid source.
Most TCP/IP protocol "stack" developers have taken into consideration problems that might arise in the Internet, such as packets getting lost, but they tend to assume that any packets that do arrive will follow the rules of the protocol standard that they are implementing.
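The DNS example above can be made concrete. The following minimal sketch (the helper names are hypothetical, not from any particular application) shows how typical client code accepts whatever answer the resolver returns and connects to it, with no independent check that the answer is genuine:

```python
import socket

def resolve(hostname):
    # The application trusts whatever DNS answer arrives first;
    # nothing here verifies that the responder was a legitimate server.
    return socket.gethostbyname(hostname)

def connect_to(hostname, port):
    # ...and then connects to that address without question.
    return socket.create_connection((resolve(hostname), port), timeout=5)
```

Virtually all Internet client software follows this pattern: resolve, then connect, trusting both steps.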
The good-natured trust that is pervasive throughout the Internet comes directly from the philosophy of the original developers. The early Internet community was made up of researchers, students, and technologists who were looking for ways to share information and computing resources, not build critical infrastructures that would carry highly sensitive and private information. Credit card theft and denial-of-service attacks were simply not on the list of major concerns when the technology was developed.
Because of the recent attention that hacking and computer crime have received, many system and software developers are beginning to become more suspicious and less trusting in their work. As awareness improves, the security of new products will become stronger; however, it is important to remember that a lot of "legacy" code running on the Internet does very little error checking and makes a lot of assumptions about the source and the quality of the data it is processing.
Lack of Authentication
There is no provision for an authentication service at the lower layers of the protocol stack. This is part of the trust issue we have discussed. It is especially critical for management and name-resolution protocols such as DNS, which effectively directs traffic for the overwhelming majority of connections on the Internet. In effect, anyone can answer a DNS query; whoever gets an answer back to the requester first can masquerade as, or redirect, any network session desired. A solution to this problem in the form of public-key infrastructures and digital signatures has been developed and proposed; however, it could be some time before it is widely implemented.
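To see why "first answer wins" is so dangerous, consider how little a typical stub resolver checks before accepting a response. Per the DNS message format (RFC 1035), a query and its response are matched mainly by a 16-bit transaction ID. The sketch below (an illustration, not production code) builds a minimal query and shows that simple ID comparison, which an attacker can guess or race:

```python
import struct

def build_query(txid, name):
    """Build a minimal DNS query for an A record.
    Header: ID, flags (recursion desired), QDCOUNT=1, then zero counts."""
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def response_matches(query, response):
    # The main check a simple resolver performs: same 16-bit transaction ID.
    # Anyone who guesses it and answers first is believed.
    return query[:2] == response[:2]
```

A 16-bit value offers only 65,536 possibilities, which is why forged answers that arrive before the real one can succeed.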
Higher-layer protocols have done a somewhat better job of authentication. For example, the popular telnet and FTP protocols require usernames and passwords to gain access to systems, although these credentials are passed "in the clear" on the network, making them vulnerable to monitoring, or "sniffing."
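A short illustration of why clear-text credentials are so exposed: given a captured FTP session (the byte stream below is fabricated for the example), recovering the username and password requires nothing more than a text search, not cryptanalysis:

```python
def extract_ftp_credentials(captured):
    """Pull USER/PASS lines out of a captured clear-text FTP stream."""
    creds = {}
    for line in captured.decode("ascii", errors="replace").splitlines():
        if line.upper().startswith("USER "):
            creds["user"] = line[5:].strip()
        elif line.upper().startswith("PASS "):
            creds["password"] = line[5:].strip()
    return creds

# A fabricated capture of an FTP login exchange:
stream = b"220 ftp ready\r\nUSER j_doe\r\nPASS s3cret\r\n230 Logged in\r\n"
```

Anyone with access to the physical network segment can run exactly this kind of extraction against live traffic.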
Recent advances in security standards might begin to change some of this. A new authentication protocol (IEEE 802.1x) allows enterprise network designers to require authentication before a device can even transmit packets into a network. This is a promising development; however, it could be some time before vendors broadly support the protocol and it is deployed in many networks.
There is a humorous cartoon of a dog surfing the Internet and saying to an onlooking cat, "On the Internet, nobody knows you're a dog." It is pretty easy to maintain your anonymity on the Internet. Online usernames often are picked by the users themselves, allowing anyone who wants to be j_doe to assume that identity. Mail anonymizers also can be used to relay email in such a way that the original source is hidden to the recipient. Without good record keeping by all of the computers on the Internet, it can be quite difficult to track down an attacker who has "hopped through" several systems before attacking his target.
Lack of Privacy
Many of the privacy issues raised in the debate over the Internet are really the result of business practices (such as selling client lists) or careless configurations of applications and operating systems. These are not so much weaknesses in the fundamental technology as a "nature of the beast" problem. Many companies do not respect the privacy of user data, or they have concentrated on getting their online services up and running, focusing on functionality first and figuring that they will get the security right "later." Unfortunately, once the day-to-day operations of their Internet site get underway, "later" never happens.
Some inherent privacy issues are associated with the Internet. The primary one is that most data is transported "in the clear," meaning that anyone who can gain access to the physical network can read the traffic. This is not always as easy as some people think; however, it is not impossible. The biggest threat here is that many of the operating system and application authentication mechanisms that do exist employ simple password schemes, with the password transmitted in the clear along with the subsequent data.
Lack of Centralized Security Management and Logging
Many products have security features built into them or are dedicated to a specific security function (such as a firewall). Unfortunately, the capability to centrally manage and monitor these systems is very limited. For example, many operating systems and applications maintain extensive logs that can tell an administrator if users or programs are doing things that they shouldn't be doing. The problem is, if the network is even moderate in size, it can be very difficult to review these logs in any useful way. Yes, there are solutions to these problems (such as log file analyzers and real-time log parsers), but relatively few IT administrators have implemented them.
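The kind of work a log file analyzer automates can be sketched in a few lines. The log format below is a simplified, hypothetical one (real formats vary widely by operating system and application); the point is that even a trivial rule, such as flagging repeated failed logins from one source, is impractical to apply by eye across thousands of lines:

```python
from collections import Counter

def flag_repeated_failures(log_lines, threshold=5):
    """Count failed-login lines per source address and flag any
    source at or above the threshold."""
    failures = Counter()
    for line in log_lines:
        if "FAILED LOGIN" in line:
            src = line.rsplit("from", 1)[-1].strip()
            failures[src] += 1
    return [src for src, n in failures.items() if n >= threshold]
```

Real-time log parsers apply rules like this continuously, which is why they scale where manual review does not.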
Day-to-Day Security Is Hard!
Anyone who has ever tried to secure something (be it a building, a room, or a network) knows that, in the end, no matter how many locks and cameras and guards you deploy, the user community can make or break the security. Most people will tolerate only so much extraneous "stuff" that they need to do to use their computers. If an administrator insists that all users have a 15-character password containing a mix of letters, numbers, and punctuation marks, it is highly likely that the passwords will end up taped to monitors and keyboards all over the building. So, perhaps it makes more sense to enforce "strong password" rules only on power users and administrators, who have more online privileges than the typical user.
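For concreteness, here is what the policy described above looks like when enforced in code, a minimal sketch of a strength check (the function name and exact rules are illustrative, not from any particular product):

```python
import string

def meets_policy(password, min_length=15):
    """Check the rule described above: minimum length plus at least
    one letter, one digit, and one punctuation mark."""
    return (len(password) >= min_length
            and any(c in string.ascii_letters for c in password)
            and any(c in string.digits for c in password)
            and any(c in string.punctuation for c in password))
```

The trade-off in the text applies directly: the stricter this function is, the more likely users are to write the result down.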
Similarly, if users are not educated about the dangers of opening unexpected email attachments, then the administrator might have to decide between rather draconian email-filtering systems or risking massive Trojan horse and virus infection.