Memory Pages

The CPU sends data to memory in order to empty its registers and make room for more calculations. In other words, the CPU has some information it wants to get rid of, and sends that information out to the memory controller. The memory controller shoves it into whichever capacitors are available and keeps track of where it put everything. Each bit is assigned a memory address for as long as the controller is in charge of it (no pun intended).

When the CPU wants to empty a register, it waits for one of its internal electrical pulses (a processor clock tick). When the pulse arrives, it sends out a data bit, usually to a memory cache. Very quickly, a stream of bits builds up into groups in multiples of eight (an 8-bit byte, a 16-bit pair of bytes, and so on). The cache waits for the slower pulses of the motherboard clock, and then sends each bit over to the memory controller. The controller then directs each electrical charge into a memory cell. The cell might be a capacitor, in which case it has to be recharged. Or it might be a transistor, in which case a switch opens or closes. Regardless of how wide a register or an address is, each bit ends up in its own cell, somewhere in the memory chip.

Page Ranges

Typically, memory is divided into blocks. At the main memory level, a block of memory is referred to as a memory page. A page is a related group of bytes (and their bits). It can vary in size from 512 bits to several kilobytes, depending on the way the operating system is set up. Understand that physical memory is fixed, with the amount of memory identified in the BIOS. However, the operating system dictates much of how the memory is being used. For example, a 32-bit operating system will structure memory pages in multiples of thirty-two bits; a 64-bit operating system will use pages that are multiples of sixty-four.

DRAM cells are usually accessed through paging. The controller keeps track of the electrical charges, their locations, and the state (condition) of each capacitor or transistor in each memory chip. This combination of states and locations is the actual address.
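To make the row-and-column bookkeeping concrete, here is a minimal sketch in Python of how a controller might split a flat address into a page (row) number and a column offset. The 2 KB page size and all of the names here are illustrative assumptions, not values taken from any particular chip:

    PAGE_SIZE = 2048  # bytes per page -- an assumed size for this example

    def decode(address):
        """Split a flat byte address into (page, offset within the page)."""
        page = address // PAGE_SIZE    # which row (page) holds the data
        offset = address % PAGE_SIZE   # which column within that page
        return page, offset

    print(decode(5000))  # -> (2, 904): byte 5000 sits in page 2, column 904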

Pages are similar to named ranges in a spreadsheet. Without ranges, a spreadsheet formula must include every necessary cell in a calculation. We might have a formula something like =SUM(A1,B1,C1,D1,E1). Now suppose we assign cells C1, D1, and E1 to a range, and call that range "LastWeek." We can now change the formula to include the range name: =SUM(A1,B1,LastWeek). The range name stands for the whole set of cells.

A named range is analogous to the memory controller giving a unique name to part of a row of charges. This range of charges is called a page address. A memory page is some part of a row in a grid. A page address means that the controller doesn't have to go looking for every single capacitor or transistor containing particular data bits.

Do you remember that cheap little printing toy we talked about earlier—the one with the rubber letters and the rail? One way to think of memory addressing is as if we were trying to locate every single piece of rubber in the rail. The memory controller has to ask, "Get me letter 1, at the left end. Now get me letter 2, next to letter 1. Now get me letter 3, the third one in from the left," and so on. But suppose we don't worry about each letter, and think instead of the whole rail. Now the controller has only to ask, "Get me everything in the rail right now." This is more like memory paging.

Burst Mode

A burst of information occurs when a sub-system stores up pieces of information and then sends them all out at once. Back in World War II, submarines were at risk every time they surfaced to send radio messages to headquarters. To reduce the time on the surface, radio operators would record a message at slow speed, and then play it back during the transmission in a single high-speed burst. To anyone listening, the message sounded like a quick stream of unintelligible noise.

"Bursting" is a rapid data-transfer technique that automatically generates a series of consecutive addresses every time the processor requests only a single address. In other words, although the processor is asking for only one address, bursting creates a block of more than that one. The assumption is that the additional addresses will be located adjacent to the previous data in the same row. Bursting can be applied both to read operations (from memory) and write operations (to memory).

On a system bus, burst mode is more like taking control of the phone line and not allowing anyone else to interrupt until the end of the conversation. However, memory systems use burst mode to mean something more like caching: The next-expected information is prepared before the CPU actually makes a request. Neither process is really a burst, but rather an uninterrupted transmission of information. Setting aside the semantics, burst mode takes place for only limited amounts of time, because otherwise no other sub-systems would be able to request an interruption.

Fast Page Mode (FPM)

Dynamic RAM's line of speed improvements began with Fast Page Mode (FPM), back in the late 1980s. Even now, many technical references refer to FPM DRAM or EDO memory (discussed next). In many situations, the CPU transfers data to and from memory in bursts of consecutive addresses. Fast page mode simplifies the process by providing an automatic column counter. Keep in mind that addresses are held in a matrix, and that a given row is a page of memory. Each bit in the page also has a row-column number (an address).

In plain DRAM, the controller not only had to find a row of bits (the page), it also had to go up and "manually" look at each column heading. Fast Page Mode automatically increments the column address when the controller selects a memory page. The controller can then access the next cell without having to fetch another column address. Fast page mode lets the controller assume that the reads or writes following a CPU request will fall in the next three columns of the page's row. This is somewhat like having a line of letters all ready to go in the toy stamp.

Using FPM, the controller doesn't have to waste time looking up the page address for at least three more accesses: It can read-assume-assume-assume. The three assumptions are burst cycles. The process saves time, and increases speed when reading or writing bursts of data.
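The difference can be sketched by counting steps. In the hypothetical fragment below, plain DRAM re-selects the row for every access, while FPM selects it once and lets the column counter do the rest; the structure and step names are assumptions for illustration only:

    def plain_dram(row, cols):
        steps = []
        for col in cols:
            steps.append(f"select row {row}")   # page looked up every time
            steps.append(f"read column {col}")
        return steps

    def fast_page_mode(row, cols):
        steps = [f"select row {row}"]           # page looked up once
        for col in cols:
            steps.append(f"read column {col}")  # read-assume-assume-assume
        return steps

    print(len(plain_dram(12, [1, 2, 3, 4])))      # 8 steps
    print(len(fast_page_mode(12, [1, 2, 3, 4])))  # 5 steps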

NOTE

Fast Page Mode is capable of processing commands in as little as 50 ns. Fifty nanoseconds is fifty billionths of a second, which used to be considered very fast. Remember that the controller first moves to a row, then to a column, and then retrieves the information. The row and column numbers together form a matrix address.

The Data Output Buffer

Suppose the CPU wants back 16 bits of data (two bytes). Figure 3.6 illustrates what happens next. Note that the controller has stored the data in what it calls Page 12, in the cell range 1–16. It passes through the memory chip, looking for Page 12, bit 1 (Cell A12). It then moves each bit into the data output buffer cell at the top of each column. Remember: The controller doesn't have to look again at the page number for bits 2, 3, or 4. It's already read "Page 12," and assumes-assumes-assumes. For the fifth bit, it quickly re-reads the page address, and then goes and gets bits 5, 6, 7, and 8. Notice that in two reads, the controller has picked up one byte: half of the 16 bits the CPU requested.

After the controller completes its pass through the entire page (four reads: the complete 16-bit number), it validates the information and hands it back to the CPU. The controller then turns off the data output buffer (above the columns, in Figure 3.6). This takes approximately 10 nanoseconds. Finally, each cell in the page is prepared for the next transmission from the CPU. The memory enters a 10 ns wait state while the capacitors are pre-charged for the next cycle. In other words, that part of the row is given a zero charge (wiped out) and prepared for the next transmission.

Figure 3.6 Memory controller retrieves cell data.
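The four-read rhythm of that walkthrough can be sketched in a few lines. Everything here (the page number, the four-column burst, the bit count) simply restates the example above in code form:

    PAGE = 12
    BITS_WANTED = 16
    COLS_PER_READ = 4                 # read-assume-assume-assume

    page_reads = 0
    collected = 0
    while collected < BITS_WANTED:
        page_reads += 1               # re-read the page address...
        collected += COLS_PER_READ    # ...then assume the next three columns
    print(f"page {PAGE}: {page_reads} page-address reads, {collected} bits")
    # -> page 12: 4 page-address reads, 16 bits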

NOTE

Understand that FPM has a 20 ns wait state: 10 ns to turn off the data output buffer, plus 10 ns to recharge specific cells in a page.

Extended Data Output (EDO) RAM

FPM evolved into Extended Data Out (EDO) memory. The big improvement in EDO was that column cell addresses were merely deactivated, not wiped out. The data remained valid until the next call from the CPU. In other words, Fast Page Mode deactivated the data output buffer (10 ns), and then removed the data bits in the column cells (10 ns). EDO, on the other hand, kept the data output buffer active until the beginning of the next cycle, leaving the data bits alone. One less step means a faster process.

EDO memory is sometimes referred to as hyper-page mode, and allows a timing overlap between successive reads/writes. Remember that the data output buffers aren't turned off when the memory controller finishes reading a page. Instead, the CPU (not the memory controller) determines the start of the deactivation process by sending a new request. The result of this overlap is that EDO eliminates the 10 ns per-cycle delay of fast page mode, generating faster throughput.
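Using only the figures given in the text (10 ns to turn off the buffer, 10 ns to pre-charge), a back-of-the-envelope comparison shows where EDO's advantage comes from; the cycle count below is an arbitrary assumption for illustration:

    BUFFER_OFF_NS = 10   # FPM pays this; EDO leaves the buffer active
    PRECHARGE_NS = 10    # both designs still pre-charge the cells

    cycles = 1_000_000
    fpm_wait = cycles * (BUFFER_OFF_NS + PRECHARGE_NS)  # 20 ns per cycle
    edo_wait = cycles * PRECHARGE_NS                    # 10 ns per cycle

    print(f"FPM: {fpm_wait / 1e6:.0f} ms of wait states")  # 20 ms
    print(f"EDO: {edo_wait / 1e6:.0f} ms of wait states")  # 10 ms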

Here's another way to look at it. When you delete a file, the operating system has two ways to go about the process. It can either write a series of zeroes over every bit of data pertaining to that file, everywhere they exist, or it can simply cancel the FAT index reference. Obviously it's a lot faster to just cancel the first letter of the file's index name than it is to spend time cleaning out every data bit. Utility software applications allow you to "undelete" a file by resetting the first letter of a recoverable file. These applications also provide a way to wipe out a disk by writing all zeros to the file area. In the latter case, nobody can recover the information. FPM is like writing all zeros to a disk, and EDO is like changing only the first letter of the index name.

Both FPM and EDO memory are asynchronous. (The "a" in front of synchronous is a prefix, and it generally means "not" or "the opposite.") In asynchronous memory, the memory controller doesn't work in step with any other clock. DRAM is asynchronous memory. In asynchronous mode, the CPU and the memory controller have to wait for each other to be ready before they can transfer data.
