Accurate Versus Conservative
Another categorization applied to garbage collectors is whether they're accurate or conservative. The difference between the two is that an accurate collector can identify all pointers, whereas a conservative collector can identify things that might be pointers. Often there's some overlap. For example, a collector may be able to identify all pointers on the heap, but have to scan the stack conservatively.
The Boehm GC is conservative because it has to interoperate with C. In C, you can store pointers anywhere: in structures allocated with malloc(), in variables with an integer type, and so on. The collector assumes that anything pointer-aligned is a pointer, unless explicitly told otherwise.
In contrast, a Java VM knows the layout of every object, so it can see which fields are pointers and which aren't. Objective-C is somewhere in the middle: the compiler provides layout information for objects but not for structures, so the collector has to scan some things conservatively, but can scan objects accurately.
There are two advantages of accurate collectors. The first is a simple performance benefit: They don't have to scan values that don't contain pointers, and because they're never confused about the difference between integers and pointers, they can free objects even if an integer happens to contain a value that's the same as the object's address.
The more interesting benefit is that they can move objects. If you've ever seen code using realloc() in C, you've probably also seen bugs caused by it. The realloc function returns a new pointer and invalidates the old one. If your pointer is the only one referring to the allocated memory, that's fine. If the object is aliased, you have a problem, because you've just turned the other pointer into a dangling reference.
An accurate garbage collector lets you move objects, which can be useful for changing their size. Some Smalltalk VMs let you add instance variables to classes and restructure all existing instances using this technique, but it's mainly useful for heap compaction. With most memory-management schemes, heap fragmentation is a problem. The simplest case of heap fragmentation happens when you have chunks of free space that are too small to satisfy an allocation. For example, if you allocate 1,024 one-word chunks and then free every other one, allocating a two-word chunk has to acquire more memory, even though you have 512 words of memory that you should be able to reuse.
Fragmentation isn't such a problem on modern systems, where you have lots of memory, and where allocations of similar sizes typically come from pools, but it can be a problem for swapping. If you have an object in a memory page that's used frequently, that entire page needs to be kept in memory, even if all the other objects are rarely accessed. The amount of memory that can be paged out without affecting performance is therefore much smaller.