
Multiple Technologies and Protocols

Not long ago, mission-critical systems all ran on mainframe technology. With the advent of the PC, local area networks (LANs), and related technologies, many business applications moved from the protected realm of the mainframe data center to the free-flowing desktops of users.

On the host side, UNIX-based midrange computers and workstations arrived, handling scientific applications more gracefully and serving as the breeding ground for the Internet. In place of mainframes, IT professionals can now choose specialized fault-tolerant or symmetric multiprocessing systems, or even low-end PC-based servers and workstations. Today's hottest gadgets include Palm PCs and other devices that connect to PCs and networks, allowing users to carry data in their pockets.

Once, if your system talked SNA, IBM's proprietary networking communication protocol, you could be understood by virtually all important systems and components. Today, you may need to be fluent in TCP/IP and other protocols. Even your options for implementing networks can be overwhelming: Ethernet, Fast Ethernet, Gigabit Ethernet, ATM, ISDN, frame relay, xDSL, and many others.

Computing systems are no longer restricted to running on yesterday's computing platforms of choice: IBM's MVS or VSE. They can now run on UNIX, Linux, OS/400, Mac OS, Windows, OS/2, NetWare, Palm, Java-based devices, and many other alternative platforms. Even within these platform families, many versions and releases exist, and they're not necessarily compatible with each other. For example, UNIX has more than 40 variants, even without counting the multiple distributions of Linux. The Windows family alone includes Windows 3.1, Windows for Workgroups, Windows 95, Windows 98, Windows NT, Windows CE, and now Windows 2000 and Windows ME.

When choosing an architecture or system deployment strategy, you are no longer bound to host-based configurations in which a large, powerful central computer does all the processing and users interact through dumb terminals. In today's client/server architectures, each computing resource can be a client, a server, or both at various times. Viewed this way, the mainframe is regarded as a fat server, and the dumb terminal becomes a thin client. Alternatively, you can choose a fat client (a powerful PC) that communicates with a thin server, or something in between, or a newer web-centered or n-tier architecture. Each of these approaches presents unique deployment, management, and availability challenges.
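The fat-client/thin-server split described above can be sketched in a few lines of code. In the following sketch (the port assignment, message format, and function names are illustrative assumptions, not anything from the article), the server is deliberately "thin": it accepts bytes and applies one trivial transformation, while the client carries the application logic of composing the request and interpreting the reply.

```python
# A minimal sketch of a thin server and a (comparatively) fat client
# talking over TCP on the loopback interface. Names, the message
# format, and the use of an OS-assigned port are all assumptions
# made for illustration.
import socket
import threading

def run_thin_server(sock: socket.socket) -> None:
    """Accept one connection and echo the request back uppercased.

    The server holds no application logic beyond this single
    transformation; that is what makes it 'thin'.
    """
    conn, _addr = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())

def demo() -> bytes:
    # Bind to port 0 so the OS assigns a free port.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    port = server.getsockname()[1]

    t = threading.Thread(target=run_thin_server, args=(server,))
    t.start()

    # The client plays the fat-client role: it prepares the request
    # and interprets the reply; the server only transforms bytes.
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(b"hello, server")
        reply = client.recv(1024)

    t.join()
    server.close()
    return reply

if __name__ == "__main__":
    print(demo())
```

Moving the transformation logic from `run_thin_server` into the client, or splitting it across an intermediate process, is essentially the fat-client and n-tier trade-off the paragraph describes.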
