One of the big debates in the 1980s and 1990s was whether the CISC or RISC approach to instruction set design was correct. The idea behind RISC was to provide a small set of simple instructions that could be combined to perform complex operations, while the CISC camp wanted complex instructions that each did useful work on their own.

The embodiment of the CISC design was the VAX. Writing VAX assembly was not much different from writing high-level code. In later VAX systems, many of these instructions were microcoded, meaning they were decomposed into simpler internal operations that were then run on the real hardware.

After the VAX, Digital created the Alpha, a chip at the opposite extreme. The Alpha had a very small instruction set, but it ran incredibly fast. For many years, it was the fastest microprocessor that money could buy. Even now, several machines on the TOP500 supercomputer list are Alpha-based, in spite of the fact that the chip hasn't been in active development for five years.

In the early years, RISC did very well. Compiler writers loved the chips: the instruction sets were small enough to remember in full, and it was easier to map complex language constructs onto sequences of RISC instructions than to try to map them onto CISC instructions.

The first problems in the RISC philosophy became apparent with improvements in the way division was handled. Early RISC chips didn’t have a divide instruction; some didn’t even have a multiply instruction. Instead, these were created from sequences of more primitive operations, such as shifts. This wasn’t a problem for software developers; they would just copy the sequence of instructions to accomplish a divide from the architecture handbook, put it in a macro somewhere, and then use it as if they had a divide instruction. Then someone worked out a more efficient way of implementing a divide instruction.
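The kind of primitive sequence such a handbook would supply can be sketched in C. This is a hypothetical illustration, not code from any particular architecture manual: restoring division built only from shifts, comparisons, and subtractions, producing one quotient bit per iteration.

```c
#include <assert.h>
#include <stdint.h>

/* Unsigned division from primitive operations (shift, compare,
 * subtract), as a stand-in for a missing hardware divide instruction.
 * Classic restoring division: one quotient bit per loop iteration. */
static uint32_t soft_divide(uint32_t dividend, uint32_t divisor)
{
    assert(divisor != 0);           /* division by zero is undefined */
    uint32_t quotient = 0, remainder = 0;

    for (int bit = 31; bit >= 0; bit--) {
        /* Shift the next dividend bit (MSB first) into the remainder. */
        remainder = (remainder << 1) | ((dividend >> bit) & 1);
        if (remainder >= divisor) {
            remainder -= divisor;   /* the divisor "goes into" it */
            quotient |= (uint32_t)1 << bit;
        }
    }
    return quotient;                /* e.g. soft_divide(100, 7) == 14 */
}
```

Wrapped in a macro, a sequence like this could be dropped in wherever a divide was needed, which is exactly why the missing instruction didn't bother developers; the cost only became visible once hardware dividers got faster.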

The next generation of CPUs with a hardware divide instruction could execute the operation in fewer cycles, while chips without one still took the same number of cycles to run the substitute sequence. This trend has been taken even further with Intel's latest Core microarchitecture, which fuses some sequences of simple x86 operations into a single operation that is executed internally.

Some components of the RISC philosophy live on. It's still widely regarded as a good idea for instruction sets to be orthogonal, for example, because providing multiple ways of doing the same thing is a waste of silicon. The idea of a simple instruction set is being eroded, however. Even modern PowerPC and SPARC chips that are marketed as RISC processors wouldn't be recognized as RISC by those who invented the term.
