- What Is MPLS?
- Why Is MPLS Needed?
- How Is MPLS Done?
- When and Where Is MPLS Used?
- Who Is Doing MPLS?
- The Label Switching Paradigm
- A Quick Introduction to MPLS
- Evolution of Internet Network Models
- Basics of the Internet
- Internetworking Technology Basics
- More Basics: Graph Theory and Modeling Language
- The Promise of MPLS
- The Promise of the Promise of MPLS
Basics of the Internet
MPLS cannot be understood outside the context of the Internet and its associated networking technologies. Since the Internet began over 30 years ago, it has become the most important global infrastructure for exchanging information. To understand how the Internet evolved, this section presents a quick history and takes a look at its current direction.
A Short History of the Internet
In 1969, the Internet began as the experimental data network called the Advanced Research Projects Agency Network (ARPANET). The Department of Defense (DOD) used the network as a testing ground for research and emerging network technologies, primarily for military purposes. The original network connected four universities: UCLA, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah. It was viewed as a success and was expanded by adding computers and connectivity throughout the U.S. In the following year, the ARPANET host computers began using the first host-to-host protocol: Network Control Protocol (NCP). Also in 1970, AT&T installed the first transcontinental connection. It was a 56-Kbps line between UCLA and Bolt, Beranek, and Newman (BBN).
In the early 1970s, computer scientists started developing network applications and protocols to enhance the use of this internetwork. In 1972, the Telnet protocol was specified for the ARPANET. Telnet allowed a user to log in to remote computers. The following year, the File Transfer Protocol (FTP) was released. This application standardized the transfer of files between computers on the internetwork. In the late 1970s, e-mail and Usenet user groups were standardized and came into frequent use.
In the 1980s, the TCP/IP protocol suite became the only set of protocols used on the ARPANET. This was an important decision because it set the stage for the Internet as a set of networks that could successfully communicate and interoperate. The early 1980s also saw the dramatic rise in the deployment of the personal computer (PC) and host applications. The Berkeley version of the UNIX operating system (OS) included TCP/IP-based network software. UNIX PCs and minicomputers could FTP and Telnet to share and distribute files and applications over the Internet. In 1982, the Exterior Gateway Protocol (EGP) routing specification (RFC 827) was released. EGP was the first routing protocol used for gateways between networks. The Internet Activities Board (IAB) was established in 1983. This group was later renamed to the Internet Architecture Board; it is now the guiding organization for development activities within the Internet.
A major step in expanding the Internet occurred in the mid-1980s, when the National Science Foundation (NSF) connected the six primary supercomputing centers. This internetwork was called the NSFNET "backbone." The backbone was expanded by the NSF by creating regional networks that allowed universities and other institutions connectivity and access to the Internet. In 1987, the NSF granted Merit Network, Inc. the right to operate and manage the future development of the NSFNET backbone. Merit Network worked with International Business Machines (IBM) and MCI Telecommunications Corporation to research and develop newer, faster networking technologies. By 1987, there were over 10,000 hosts connected to the Internet. By 1989, the number of hosts had exploded to over 100,000!
Also in 1989, the NSFNET backbone was upgraded to "T1" trunks. This allowed backbone traffic to run at 1.544 megabits per second. Less than four years later, the backbone was upgraded to "T3" trunks (45 megabits per second). The NSFNET backbone was replaced in the mid-1990s by an even newer network architecture called the Very High-Speed Backbone Network System (vBNS). This system has a hierarchical layout that uses network service providers (NSPs), regional networks, and network access points (NAPs). As this book was being written, there was discussion beginning on the design and deployment of Internet 2 (the sequel!).
An important Internet service called Gopher was developed at the University of Minnesota in 1991. It made accessing information stored in files much easier by providing lists of files, accessible through hierarchically arranged menus. In the Gopher client/server model, the client could use a text viewer interface to read individual files.
Two years later in 1993, the European Laboratory for Particle Physics (CERN), located in Switzerland, released the World Wide Web (WWW) software into the public domain. The WWW had been developed at CERN by Tim Berners-Lee and others as a way of exchanging research and other information over the Internet. The Web introduced essential technologies, including the Hypertext Transfer Protocol (HTTP), Hypertext Markup Language (HTML), hypertext and hypermedia links, and the Uniform Resource Locator (URL) addressing scheme. With the development and release of the Mosaic graphical browser in the same year from the NCSA, the Internet got exposure to a much wider audience.
As of this writing, the Internet has millions of computers allowing tens (or perhaps hundreds) of millions of users to exchange information throughout the world.
Latest Internet Directions
In the new millennium, the Internet continues to grow, with the user population doubling every few months. By January 2001, the number of hosts had passed 100,000,000! Change will come not only in the number of users but also in the number and types of devices plugging into this matrix. The use of personal digital assistants (PDAs) and wireless telephones that can access the Internet is growing at an unprecedented pace. Technology is rapidly being upgraded to meet these new scaling and performance requirements. There is a definite place for new technologies such as MPLS. The rise of the optical core and the use of associated frameworks such as GMPLS may figure prominently in the new Internet.