

In Summary

  • Early system designs had to strike a balance between performance and the need to limit complexity. For example, the growing hardware complexity of superscalar RISC implementations was contained for a time by studying how instructions were executed and how that process could be made more efficient.

  • The initial goal of the design team from HP and Intel was to create an advanced architecture that could move ahead of the competition by establishing a new benchmark for speed, reliability, and ease of transition for legacy systems.

  • The design team for what would become the Itanium architecture reached four main conclusions, which formed the basis of the architecture's design: the new architecture must be explicitly designed to execute multiple operations in every machine cycle; a hard limit of roughly four instruction executions per cycle prevents a RISC-based microprocessor from scaling further; compilers must explicitly schedule code to sustain execution of multiple instructions per machine cycle; and the new architecture must be scalable well beyond the existing limit.

  • Several factors in the business world pushed Itanium's development as well, particularly the growing need to manage and access ever-larger databases at tremendous speed, allowing businesses to analyze more data in a cost-effective fashion.

  • Itanium's power will be utilized in a number of ways. The earliest adopters of the technology have been engineers and scientists who run simulation applications that were previously solely the realm of very large and expensive supercomputers. This trend is now extending into mission-critical applications in every enterprise.
