A Brief History of (Internet) Time: From the Beginnings of Malicious Code to Their Likely Future
Computers and computing have taken various forms throughout history. The earliest “computing” devices were probably tally sticks dating back to 35,000 BC, used to track numbers with a series of marks or notches cut into the stick. Much better known historically is the humble abacus (dating back to approximately 2700 BC), used to perform various arithmetic operations.
The earliest analog computers were mechanical devices used to perform various calculations. The Antikythera mechanism is an ancient mechanical calculator used to compute astronomical positions. It was discovered in a shipwreck and dates back to 150–100 BC.
The concept of the “programmable” computer is credited to Charles Babbage. In the early 1800s, people who worked with numerical tables and performed calculations were referred to as “computers,” meaning “one who computes.” Babbage wanted to automate these calculations and eliminate human error, so in 1822 he began work on his difference engine. His first design called for a machine 8 feet tall weighing more than 15 tons; unfortunately, the work was never completed. He later developed an improved design, but never lived to see it constructed. His dream was finally realized between 1989 and 1991, when Difference Engine No. 2 was built and tested using Babbage's plans and 19th-century manufacturing tolerances. It performed its first calculation at the London Science Museum, returning results to 31 digits.
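The engine earned its name from the “method of differences” it mechanized: a polynomial can be tabulated using nothing but repeated addition, because the n-th differences of a degree-n polynomial are constant. A minimal sketch of the idea (illustrative only, not a model of the machine's actual mechanism):

```python
# Method of differences: tabulate a polynomial using additions alone,
# the technique Babbage's difference engine mechanized.

def difference_table(values, order):
    """Build the initial column of differences from degree+1 samples."""
    diffs = [values[0]]
    row = values
    for _ in range(order):
        row = [b - a for a, b in zip(row, row[1:])]
        diffs.append(row[0])
    return diffs

def tabulate(initial_diffs, count):
    """Extend the table by `count` entries using only addition."""
    diffs = list(initial_diffs)
    out = []
    for _ in range(count):
        out.append(diffs[0])
        # Each difference absorbs the one below it -- pure addition,
        # which is what gears and levers can do.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return out

# Tabulate f(x) = x^2 + x + 41, a polynomial Babbage used in demonstrations.
f = lambda x: x * x + x + 41
seed = [f(x) for x in range(3)]               # degree 2 needs 3 samples
table = tabulate(difference_table(seed, 2), 8)
# table reproduces f(0)..f(7) without a single multiplication
```

Once the machine is “seeded” with a handful of initial values, every subsequent table entry falls out of cascaded additions, which is why the design could be realized in purely mechanical hardware.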
In the time leading up to World War II, electronic circuits and vacuum tubes began to replace gears and levers, ushering in the age of digital computers. Memorable milestones from this era include the Colossus and ENIAC (Electronic Numerical Integrator And Computer). Both were built by hand using circuits containing relays or valves (vacuum tubes), and often used punched cards or punched paper tape for input and as the main (non-volatile) storage medium.
The overwhelming size and cost of these machines led to some famous (or infamous) predictions that are rather embarrassing in retrospect. One of my favorites from that era is:
"I think there is a world market for about five computers."—Thomas J. Watson, chairman of the board of IBM, 1943
The first generation of computers was characterized by vacuum tubes. In the late 1950s, the first transistors began replacing vacuum tubes, leading to the second generation of computers. Transistors brought several benefits to computer design:
- First, they reduced the size, power requirements, and heat output while dramatically increasing the mean time between failures. This allowed for the creation of much more powerful computers in a much smaller space.
- They also lowered the cost of production and maintenance. No longer did it require a team of workers just to run around replacing burned-out vacuum tubes.
- Another characteristic of second generation computers was the use of printed circuit boards. These computers were generally large, centralized systems accessed via dumb terminals. This was not true networking; although there were many users, they were all sharing one central computer system.
These mainframes and minicomputers served as low-cost computing centers for industry, business, and universities.
The third generation of computers came about with the development of the integrated circuit (or microchip), and later the microprocessor. The microprocessor led to the development of the microcomputer in the late 1970s and early 1980s. These were small, low-cost computers that could be owned by individuals and small businesses. Although Apple is credited with developing the first computers designed and marketed for home users, the Altair 8800 was one of the earliest computers based on the microprocessor. It lacked any sort of keyboard or monitor and was programmed via a series of switches on the front panel.
The Beginnings of Internetworking
Early computer architecture was based on a centralized mainframe computer with remote terminals connecting and sharing the resources of one massive system. No network could “talk” to any other; thus these networks were isolated pools of information and resources that could not be easily shared. In the 1960s, several research programs began looking into ways to connect separate physical networks. This work led to the development of packet-switching networks such as ARPANET (Advanced Research Projects Agency Network). ARPANET went online in 1969 under a contract let by the Advanced Research Projects Agency (ARPA), initially connecting four major computers at universities in the western U.S. (UCLA, the Stanford Research Institute, UC-Santa Barbara, and the University of Utah). The Internet was designed in part to provide a communications network that would keep working even if some of the sites were destroyed by nuclear attack. If the most direct route was not available, routers would direct traffic around the network via alternate routes. This early “Internet” was used primarily by computer experts, engineers, scientists, and librarians.
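The route-around-failure idea can be sketched in a few lines. This is a toy model, not ARPANET's actual routing protocol, and the fully meshed link table below is hypothetical rather than the historical topology: model the sites as a graph and search for any surviving path when a link goes down.

```python
# Toy packet-routing model: if the direct link fails, find an
# alternate path through the remaining links (breadth-first search).
from collections import deque

def find_route(links, src, dst):
    """Return a shortest path from src to dst, or None if unreachable."""
    frontier = deque([[src]])
    visited = {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], ()):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # destination cut off entirely

# Hypothetical link table using the four original site names
links = {
    "UCLA": {"SRI", "UCSB", "Utah"},
    "SRI":  {"UCLA", "UCSB", "Utah"},
    "UCSB": {"UCLA", "SRI"},
    "Utah": {"UCLA", "SRI"},
}

direct = find_route(links, "UCLA", "Utah")   # the direct UCLA-Utah link
links["UCLA"].discard("Utah")                # simulate that link failing
links["Utah"].discard("UCLA")
detour = find_route(links, "UCLA", "Utah")   # traffic reroutes via SRI
```

The point of the exercise is resilience: as long as any chain of links survives, the search still finds a route, which is exactly the property the network's designers wanted.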
Various communications and application protocols were developed over the next several years to provide advanced functionality. The Internet matured in the 1970s as the Internet protocol suite (TCP/IP) was developed. After the ARPANET had been up and running for several years, ARPA looked for another agency to hand the network off to; ARPA's primary mission was funding cutting-edge research and development, not running a communications utility. Eventually, in July 1975, the network was turned over to the Defense Communications Agency, also part of the Department of Defense. In 1983, the U.S. military portion of ARPANET was split off as a separate network, the MILNET, which was later divided into further segmented networks based on the classification of the information being handled. On January 1, 1983, the ARPANET converted en masse to TCP/IP for its messaging; by 1984, the Internet had grown past 1,000 hosts.