
Understanding the GeForce4 Family


nVidia's GeForce4 chipsets, introduced in February 2002, have replaced the GeForce3 Ti (Titanium) chipsets introduced in fall 2001 at the top of nVidia's line. With so many chipsets in the GeForce4 family, and so many vendors producing video cards based on them, it's easy to be confused when it's time to look at a video card upgrade.

In this article, you'll learn about the distinct differences between the Ti and MX branches of the GeForce4 family and how to choose the best member of the family for your needs and budget.

This article makes frequent mention of the GeForce3 and GeForce3 Ti series: the GeForce4 Ti series isn't so much a brand-new design as a refinement and improvement of technologies originally developed for those chips.

One Family Name, Two GPU Technologies

For years, nVidia has used the same family name for groups of graphics processors that were introduced with similar 3D technology but differed in memory size, graphics core size, graphics processor speed, and memory speed; the numerous members of the GeForce2 and GeForce3 families are examples. The GeForce4 family, however, is a different matter. It has two branches:

  • the top-of-the-line GeForce4 Ti series
  • the business and casual-gamer GeForce4 MX series

Longtime followers of the GeForce saga will recognize MX as nVidia's term for the low-end chipsets in a given series. In previous GeForce families, however, the MX series used the same basic 3D display technologies as the other members of the family. This time, it's different: while the GeForce4 MX shares some features of the GeForce4 Ti, its 3D technology is more similar to that of the GeForce2 MX series it's designed to replace.

The nVidia NV25 graphics chip is at the heart of the GeForce4 Ti series, while the GeForce4 MX series uses the nVidia NV17 graphics chip.

Comparing the GeForce4 Ti to the GeForce3 Series

The GeForce4 Ti series replaces the GeForce3 Ti series at the top of nVidia's line of graphics processing units (GPUs), although the GeForce3 Ti 200 remains part of the current nVidia product line. Compared to the GeForce3 Ti series, the GeForce4 Ti offers the following technology improvements, most of which affect the chip's 3D performance:

  • Lightspeed Memory Architecture II
  • nFiniteFX II 3D architecture
  • Accuview antialiasing
  • nView multiple display support

Lightspeed Memory Architecture II: Setting the Stage for Faster Everything

The GeForce3 series introduced nVidia's Lightspeed Memory Architecture (LMA), which used a four-part crossbar memory controller and four memory partitions to enable memory accesses from as little as 64 bits to as much as 256 bits at a time. Variable-sized memory accesses let the video card process 3D graphics more efficiently: some scenes require that the entire screen be redrawn, while others change just a few pixels, so the crossbar memory controller can use all four 64-bit memory partitions to redraw the entire screen or just a single 64-bit partition for a minor screen change. Since only some of the memory partitions are in use most of the time, memory latency (the amount of time a system spends waiting for memory to be ready for a read or write operation) is greatly reduced.
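The variable-width access idea described above can be sketched in a few lines of Python. This is only an illustrative model of the concept, assuming the four 64-bit partitions the text describes; the function and constant names are invented for the example and are not nVidia's hardware interface.

```python
# Illustrative sketch (not nVidia's hardware): an access engages only as
# many independent 64-bit partitions as it needs, leaving the remaining
# partitions free to service other requests in parallel.

PARTITION_WIDTH_BITS = 64   # width of one memory partition
NUM_PARTITIONS = 4          # four partitions = 256 bits maximum per access

def partitions_needed(access_bits: int) -> int:
    """Number of 64-bit partitions a single access ties up."""
    needed = -(-access_bits // PARTITION_WIDTH_BITS)  # ceiling division
    return min(needed, NUM_PARTITIONS)

# A minor screen change (one 64-bit write) occupies a single partition,
# while a full-screen redraw can stream through all four at once.
print(partitions_needed(64))   # 1
print(partitions_needed(256))  # 4
```

The point of the sketch is the latency argument from the paragraph above: a narrow access leaves three of the four partitions idle and available, so independent requests wait less often.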

LMA II, the version of Lightspeed Memory Architecture found in the GeForce4 Ti series, has the following major features:

  • A faster crossbar memory controller than on the GeForce3 series
  • Four separate caches: a vertex cache, a primitive cache, dual texture caches (also present in the GeForce3), and a pixel cache
  • Z-occlusion culling, a refinement of the GeForce3-series method for determining which surfaces of a polygon won't be visible, so they are never rendered
  • Fast Z-clear, a faster method of zeroing out the Z-buffer (depth buffer) between frames

While the GeForce4 Ti also benefits from very fast core speeds (see the table below) and the advanced nFiniteFX II rendering engine, LMA II is what enables cards based on this chipset to do so much, so fast.

nFiniteFX II: a Faster, Smarter Sequel

Like the GeForce3 series' nFiniteFX engine, nFiniteFX II features advanced programmable vertex and pixel shaders. Vertices are the corners of 3D polygons; a vertex shader processes the relationship of each vertex to the adjoining vertices as a 3D object moves. While the GeForce3 series graphics chip had only one vertex shader, the GeForce4 Ti series has two, for faster and more realistic 3D rendering. A pixel shader provides realistic lighting and texture effects for the surfaces of the 3D polygons controlled by the vertex shader; the pixel shaders in the GeForce4 Ti are 50% faster than those in the GeForce3 Ti. Since the vertex and pixel shaders can be programmed by software developers, incredibly realistic effects are possible, such as hair or fur detail and variable lighting effects rendered at full speed. For an example of nFiniteFX II in action, see the Wolfman demo at the nVidia website.
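To make the idea of per-pixel lighting concrete, here is a toy Python version of the classic diffuse (Lambertian) lighting calculation, the kind of math a pixel shader evaluates for every pixel on a surface. This is generic graphics math for illustration only, not nVidia's shader instruction set, and the function names are invented.

```python
# Toy per-pixel lighting: brightness from the angle between the surface
# normal and the direction toward the light. A hardware pixel shader
# runs this kind of calculation for millions of pixels per frame.

import math

def normalize(v):
    """Scale a 3D vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def diffuse_intensity(normal, light_dir):
    """Diffuse brightness: dot(normal, light), clamped at zero."""
    n = normalize(normal)
    l = normalize(light_dir)
    dot = sum(a * b for a, b in zip(n, l))
    return max(0.0, dot)

# A surface facing the light is fully lit; one facing away is dark.
print(diffuse_intensity((0, 0, 1), (0, 0, 1)))   # 1.0
print(diffuse_intensity((0, 0, 1), (0, 0, -1)))  # 0.0
```

Because shaders are programmable, developers can replace a formula like this with arbitrary per-pixel math, which is what makes effects such as fur and variable lighting possible.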

Accuview Antialiasing

No matter how realistic the 3D lighting, texture, or movement effects might be in a computer-rendered scene, one dead giveaway that you're looking at a computer simulation instead of real life has always been the jaggies, the sawtooth edges on the border between different colors on a 3D object or between the object and the background. While antialiasing technology has progressed from smoothing the jaggies on foreground objects only to full-scene antialiasing in recent 3D graphics chips, there's always been a balancing act between high visual quality and high performance. Traditionally, you could have one or the other: enable high-quality antialiasing, and full-motion 3D graphics looked as if they were wading through molasses, and might look a little out of focus at the same time. Disable antialiasing, and full-motion performance came back, but so did the jaggies.

Early forms of antialiasing worked on a pixel level, but fine details were lost. Later forms of antialiasing, such as the supersampling antialiasing used by ATI and the multisampling antialiasing used by 3dfx and nVidia, work with subpixels, the red, green, and blue elements of every pixel. Subpixel-based antialiasing produces smooth edges without the blurring inherent in traditional antialiasing. Microsoft uses a type of subpixel antialiasing for its ClearType display option in Windows XP. While ClearType is designed specifically for LCD panels, subpixel antialiasing in 3D graphics is useful for gamers with any type of display.

The GeForce3 introduced a new antialiasing method called Quincunx AA, which cuts the number of samples needed from the four used by high-quality (and slow!) 4x supersampling down to just two, combined with a proprietary smoothing technique.

The GeForce4 Ti and MX both feature nVidia's new Accuview Antialiasing (Accuview AA) technology. Accuview AA supports GeForce3-style Quincunx AA and conventional 2x and 4x supersampling, plus a new 4XS mode that uses more subpixels for even smoother edges without loss of fine detail. Accuview AA also supports combining anisotropic filtering (used for 3D objects that extend from foreground to background) with bilinear filtering (which improves the appearance of small textures stretched across a large polygon) and trilinear filtering (which combines bilinear filtering with MIP mapping, mixing low-res and high-res textures in a 3D object extending from foreground to background). Because of the high speed of the nVidia GeForce4 GPU chipset, you can enable high-quality antialiasing settings and still enjoy true full-motion video in your favorite games.
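The brute-force 4x supersampling mentioned above is easy to sketch: render at twice the resolution in each direction, then average each 2x2 block of samples into one displayed pixel. The Python below shows only this basic idea on a grayscale grid; Quincunx and Accuview use cleverer sample patterns that this sketch does not model.

```python
# Minimal sketch of 4x supersampling: each displayed pixel is the
# average of a 2x2 block of samples rendered at double resolution.
# This smooths jagged edges at the cost of rendering 4x the samples.

def downsample_2x(image):
    """Average each 2x2 block of a grid of grayscale samples (0..1)."""
    out = []
    for y in range(0, len(image), 2):
        row = []
        for x in range(0, len(image[0]), 2):
            block_sum = (image[y][x] + image[y][x + 1] +
                         image[y + 1][x] + image[y + 1][x + 1])
            row.append(block_sum / 4)
        out.append(row)
    return out

# A hard black/white diagonal edge becomes a softer gray transition.
edge = [
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
]
print(downsample_2x(edge))  # [[0.25, 1.0], [1.0, 0.25]]
```

The cost is visible in the loop structure: four samples computed for every pixel shown, which is exactly the performance penalty that two-sample schemes like Quincunx were designed to avoid.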

Multiple Displays with nView

Before nVidia's arch-rival ATI introduced the Radeon VE Dual Display edition, dual-display video cards were extremely rare. While the Radeon VE used a crippled version of the regular Radeon's powerful 3D graphics engine, the ability to connect a CRT and an LCD panel, or two CRTs, made the Radeon VE a hit and encouraged ATI to make dual-display support standard in its mid-range Radeon 7500 and top-of-the-line Radeon 8500 video cards and chipsets. The Radeon VE also supports TV-out, an increasingly popular feature for gamers looking for a big-screen deathmatch.

In the GeForce3 series, nVidia, just as ATI had done previously, relegated dual-display support to its low-end MX series chips. The GeForce4 Ti series, for the first time in nVidia's history, combines nVidia's fastest and most sophisticated 3D graphics chip with the business and gaming convenience of dual-display support plus TV-out. The NV25 chip contains dual 350MHz integrated RAMDACs, enabling dual CRTs. Up to two TMDS transmitters can be added to the NV25 chip to enable digital flat panels to be connected, although current card models feature a single DB-15 analog VGA connection and a DVI-I analog/digital flat panel connection. Check with the card vendor for specific details of the display types supported on a particular card.

Borrowing from Old and New: The GeForce4 MX Series

The GeForce4 MX series, which uses the nVidia NV17 chip, has an odd combination of features. Like the GeForce4 Ti series, the GeForce4 MX series supports Accuview antialiasing. The GeForce4 MX also has a simplified version of LMA II, using only two memory controllers instead of the four on the Ti series. These features enable the GeForce4 MX 460 (the fastest of the GeForce4 MX family) to tie the GeForce4 Ti 4400 in memory bandwidth. The GeForce4 MX series also supports nView for multiple displays, but at that point, the similarities end.

The GeForce4 MX processors support only 64MB of RAM, compared to a maximum of 128MB of RAM on the GeForce4 Ti series. And the GeForce4 MX processors actually have less 3D support than the GeForce2 MX processors they are replacing!

Here's why: the GeForce4 MX, based on the NV17 chip, is essentially a desktop version of the new GeForce4 Go, a graphics chip based on the nVidia NV17M chip. The GeForce4 Go, like the older GeForce2 Go, is designed for use in notebook computers, with special features such as power management that desktop graphics chips lack. Both the GeForce4 MX and the GeForce4 Go chips contain nVidia's Video Processing Engine technology, which includes a powerful hardware MPEG-2 decoder that reduces the load on the CPU during DVD movie playback and supports full-screen playback at all popular screen resolutions.

While lots of notebook computer users like to watch a DVD after a long day's worth of meetings, they aren't generally hardcore game players. Thus it's not surprising that the NV17M and its NV17 descendant use the nVidia Shading Rasterizer (NSR), first introduced with the nVidia GeForce2 GTS, for 3D shading effects. NSR supports fixed-function per-pixel shading, a far cry from the programmable shader operations supported by the nFiniteFX II engine used in the GeForce4 Ti series. Both the GeForce3 series (based on the nVidia NV20 chip) and the GeForce4 Ti series support the more sophisticated 3D effects in Microsoft DirectX 8, but the GeForce4 MX is limited to the same DirectX 7 support found in the GeForce2 series.

Comparing the GeForce4 Family to the GeForce3 Series

When nVidia introduced its enhanced GeForce3 Ti and GeForce2 Ti series chips in the fall of 2001, we compared the existing GeForce3 and GeForce2 chips to the new Ti models in the article GeForce, the Next Generation.

At that time, cards based on some of the older GeForce2 and GeForce3 series chips were actually better deals than their replacements. The GeForce4 Ti's advanced features surpass those of every other GeForce chip, but the same can't be said of the GeForce4 MX chips, as you can see from the following table.

New GeForce4 Ti- and MX-Series Chips Versus Existing GeForce3 Chips

| New Chip         | Old Chip        | Core Clock Rate | Memory Clock Rate | Fill Rate         | Features                          | Typical Card & Street Price     |
|------------------|-----------------|-----------------|-------------------|-------------------|-----------------------------------|---------------------------------|
| GeForce4 Ti 4600 |                 |                 | 650MHz (325MHz*2) | 1,200 Mpixels/sec | DDR, dual display                 | VisionTek Xtacy model #3001522  |
| GeForce4 Ti 4400 |                 |                 |                   | 1,100 Mpixels/sec | DDR, dual display                 | PNY Verto model #VCGF4TI44Z     |
|                  | GeForce3 Ti 500 |                 |                   | 960 Mpixels/sec   | DDR, DVI-I, VGA, and TV-out ports | VisionTek Xtacy #6964           |
| GeForce4 Ti 4200 |                 |                 |                   | 900 Mpixels/sec   | DDR, dual display                 | (estimated street price)        |
|                  | GeForce3 Ti 200 |                 |                   | 700 Mpixels/sec   | DVI-I and VGA ports               | VisionTek Xtacy model #72145    |
| GeForce4 MX 460  |                 |                 |                   | 600 Mpixels/sec   | DDR memory, dual display          | (estimated street price, $180)  |
| GeForce4 MX 440  |                 |                 |                   | 540 Mpixels/sec   | DDR memory, dual display          | VisionTek Xtacy model #30001520 |
| GeForce4 MX 420  |                 |                 |                   | 500 Mpixels/sec   | SDR memory, dual display          | VisionTek Xtacy model           |

As long as cards based on the GeForce3 Ti series remain on the market, there's little reason to buy video cards based on the more expensive GeForce4 MX 460 or 440 chips unless you prize dual-display capabilities over performance; the GeForce3 Ti 200 is faster and has much better 3D graphics. The best price-performance deal of all the new chips looks to be cards based on the GeForce4 Ti 4200, which is just a hair slower than the GeForce3 Ti 500, is much faster than the GeForce3 Ti 200, and offers dual display as well as the best 3D graphics yet from nVidia. This chip, along with the GeForce4 MX 460, has been slow to come to market, with the first products not shipping until April or May 2002, compared to February and March 2002 for the other new chips. Most observers agree that the delay in rolling out the Ti 4200 enabled card vendors to sell through their inventories of GeForce3 Ti-series cards.
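The fill rates quoted in the table follow from simple arithmetic: peak fill rate is the core clock times the number of pixel pipelines, and peak memory bandwidth is the effective memory clock times the bus width in bytes. A short sketch follows; note that the 300MHz core clock, four pixel pipelines, and 128-bit memory bus used for the Ti 4600 example are commonly cited figures stated here as assumptions, not values taken from the table.

```python
# Back-of-the-envelope GPU throughput arithmetic.

def fill_rate_mpixels(core_mhz, pipelines):
    """Peak fill rate in megapixels per second: clock * pipelines."""
    return core_mhz * pipelines

def bandwidth_gb_per_s(effective_mem_mhz, bus_width_bits):
    """Peak memory bandwidth in GB/sec: MHz * bytes per transfer / 1000."""
    return effective_mem_mhz * (bus_width_bits / 8) / 1000

# Assumed GeForce4 Ti 4600 figures: 300MHz core, 4 pixel pipelines,
# and 650MHz effective DDR (325MHz*2) on a 128-bit bus.
print(fill_rate_mpixels(300, 4))     # 1200 (Mpixels/sec, matches the table)
print(bandwidth_gb_per_s(650, 128))  # 10.4 (GB/sec)
```

The same arithmetic explains the MX 460 / Ti 4400 bandwidth tie mentioned earlier: with the same effective memory clock and bus width, the bandwidth formula yields the same result regardless of how many crossbar controllers sit in front of the memory.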

Why the Ti 4200 Is So Much Cheaper than the Ti 4400/4600 Series Cards

While most video card customers don't pay much attention to card construction, construction explains much of the price gap between the GeForce4 Ti 4600 and 4400 cards, currently the kings of the graphics-performance castle, and the Ti 4200. To achieve the very high memory speeds of the 4600/4400 series, nVidia has adopted a high-speed memory chip design that attaches to the video card with the same BGA (ball grid array) mounting method used on the newest motherboard chipsets from major vendors. BGA mounting enables the memory to use lower voltages than the previous edge-mounted TSOP (thin small outline package) standard and to run at lower temperatures. Thus, BGA memory doesn't need the cooling blocks or passive heatsinks found on many TSOP-based video cards to run reliably. However, BGA memory must be mounted on a thicker, more costly circuit board than normal; the 4600 and 4400 use an eight-layer board.

The Ti 4200, by contrast, uses GeForce3-style construction to hold down costs: a conventional six-layer board with conventional TSOP memory enables the Ti 4200 to sell at a street price under $200.

Third-Party Enhancements to the Basic Design

Although the basic features of any video card based on the GeForce4 Ti series chips are sensational enough, some vendors have gone further than the basic nVidia reference design followed by other card vendors. Here are two examples:

  • Video-in: There's no provision for video-in on the GeForce4 Ti series, but some video cards based on the Ti 4600, such as the sole VisionTek 4600-based card and the ASUS V8460Ultra/Deluxe, have added a video-in chip to make video capture possible.
  • Improved cooling: The graphics core and memory chips on any GeForce4-series card get hot, but eVGA has a solution, offering cards with both standard nVidia reference fans and the second generation of its unique Asymmetric Cooling System (ACS2) heat pipe, which provides a high-performance fan that's only 11.5mm thick. Look for eVGA's 128-A4-NV81-S1 (GeForce4 Ti 4400) and 128-A4-NV83-S1 (GeForce4 Ti 4600), both of which cool the memory chips as well as the GPU. eVGA also makes a version of its GeForce4 MX 440 card, 064-A4-NV70-S1, which uses ACS cooling for the GPU only.


While the GeForce4 Ti 4600 is somewhat faster than its nearest rival, the ATI Radeon 8500, when antialiasing is not used, the improved performance of nVidia's new Accuview AA feature shows up when antialiasing is enabled. In tests performed by Tom's Hardware, for example, the GeForce4 Ti 4600 was about 33% faster at 1280x1024 resolution and 32-bit color running the popular Quake III demo 001 benchmark without antialiasing enabled. The gap widened to over 200% in favor of the GeForce4 Ti 4600 when high-quality 2x antialiasing was enabled on the ATI card and either 2x or the proprietary Quincunx antialiasing was enabled on the nVidia-based card. Other benchmarks confirm the trend: the GeForce4 Ti 4600 is the fastest gaming video chipset on the market, both with and without antialiasing enabled.


The GeForce4 Ti series is the most powerful gaming GPU yet, offering a high-speed memory architecture and better-quality antialiasing with less sacrifice in performance than previous nVidia or even current ATI graphics processors. The GeForce4 MX series, on the other hand, is primarily aimed at low-end customers. Business customers will appreciate the series' dual-display feature, but casual gamers on a budget might be better served by an ATI Radeon 7500-based solution or by the remaining GeForce3 Ti products while they're available.

For More Information

Technical details on the two GeForce4 chip families, Ti and MX, are available at the nVidia website.


Benchmark tests and comparisons of the Ti-series chipsets against other nVidia and ATI chips are available at:



Some of the leading vendors of nVidia-based video cards in the US market include:

ASUS http://usa.asus.com

VisionTek http://www.visiontek.com

PNY http://www.pny.com

eVGA http://www.evga.com

To learn more about GeForce3/GeForce4 antialiasing techniques, see this writeup on Tom's Hardware.


Digit-Life provides an extremely detailed chart comparing the technical specifications of the GeForce4 Ti 4600, ATI Radeon 8500, and GeForce3 Ti 500 as part of its review of the GeForce4 Ti 4600.


Learn more about nVidia GeForce and ATI Radeon chipsets in our previous articles:

ATI Radeon Everywhere?

The New Breed of Radeon from ATI

GeForce - the Next Generation

Putting the Vroom in Gaming: The Latest Video Card Chipsets

Copyright © 2002 Pearson Education. All rights reserved.
