Reinventing the Mainframe
The 1990s witnessed the world of mainframes coming full circle. During the 1970s many companies centralized their large processors in what became huge data centers. Favorable economic growth in the 1980s allowed many businesses to expand both regionally and globally, and many IT departments in these firms decentralized their data centers to distribute control and services locally. Driven by the need to reduce costs in the 1990s, many of these same companies then recentralized into even larger data centers.
The same business pressures that drove mainframes back into centralized data centers also forced data center managers to operate their departments more efficiently than ever. One method they employed to accomplish this was to automate parts of the computer operations organization. Terms such as automated tape libraries, automated consoles, and automated monitoring became common in the 1990s.
As data centers gradually began operating more efficiently and more reliably, so did the facilities that supported them. Environmental controls for temperature, humidity, and electrical power became more automated and integrated into the overall management of the data center. The function of facilities management had certainly been around in one form or another in prior decades, but during the 1990s it emerged and progressed as a highly refined function for IT operations managers.
Mainframes themselves also went through some significant refinements during this time frame. One of the most notable advances again came from IBM. An entirely new architecture that had been in development for over a decade was introduced as System/390 (S/390), with its companion operating system OS/390. Having learned their lessons about memory constraints in years past, this time around IBM engineers planned to stay ahead of the game by designing a 48-bit memory addressing field into the system. This gave OS/390 the capability to one day address up to approximately 281 trillion bytes of memory.
Will this finally be enough memory to satisfy the demands of the new millennium? Only time will tell, of course. But there are currently systems in development by other manufacturers with 64-bit addressing schemes. I personally feel that 48-bit memory addressing will not be the next great limiting factor in IT architectures; more likely it will be either the database or network arenas that will apply the brakes.
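The sizes implied by these addressing schemes follow directly from the width of the address field: an n-bit address can distinguish 2^n byte locations. A quick sketch (in Python, not part of the original text; the function name is my own) makes the 48-bit versus 64-bit comparison concrete:

```python
def addressable_bytes(bits: int) -> int:
    """Maximum number of distinct byte addresses with an n-bit address field."""
    return 2 ** bits

# 48-bit addressing, as in S/390's design goal:
print(f"{addressable_bytes(48):,}")  # 281,474,976,710,656 (~281 trillion bytes)

# 64-bit addressing, as in the schemes then in development elsewhere:
print(f"{addressable_bytes(64):,}")  # 18,446,744,073,709,551,616 (~18 quintillion bytes)
```

The roughly 65,000-fold gap between the two (2^16) shows why 64-bit designs were seen as the longer-term answer even as 48 bits looked ample at the time.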
IBM's new architecture also introduced a different hardware technology for its family of processors, based on complementary metal-oxide semiconductor (CMOS) circuitry. While neither as new nor as fast as other circuitry IBM had used in the past, it proved relatively inexpensive and extremely low in heat generation. Heat had often plagued IBM and others in their pursuit of the ultimate fast and powerful circuit design. CMOS also lends itself to parallel processing, which more than compensates for its slightly slower speeds.
Other features designed into the S/390 architecture helped prepare it for some of the emerging technologies of the 1990s. One was the more robust use of high-speed fiber-optic channels for I/O operations. Another was the integrated interface for IBM's version of UNIX, which it called AIX (Advanced Interactive eXecutive). A third noteworthy feature of the S/390 was its increased channel port capacity, intended to handle anticipated increases in remote I/O activity, primarily due to higher Internet traffic.