
Garbage Collection: Why, When, and How?

Over the last decade, garbage collection has gone from being a sign of decadence among programmers to a must-have feature for a language. David Chisnall examines how various collectors work, along with their costs and benefits.
Recently I've spent a little bit of time adding Apple-compatible garbage collection and automatic reference counting to the GNUstep Objective-C runtime, so I thought I'd take this opportunity to spend some time discussing the advantages and disadvantages of garbage collection (GC) in general, and in Objective-C in particular.

These days, garbage collection is pretty much ubiquitous. C and C++ are the only two popular languages that don't support it, and even they can handle it with libraries like the Boehm-Demers-Weiser garbage collector (often called Boehm GC), although they don't get all of the advantages. The presence or absence of garbage collection is often seen as one of the key features differentiating high-level languages from low-level ones.

If I'd written this article a decade ago, I'd have started by explaining what GC is and why you'd want it. These days, it's more common to encounter programmers who have never used a language without garbage collection than ones who have never used a language with it.

Memory Management

If you're writing code in an assembly language for a system with no operating system, then you just have a big blob of memory that you can partition as you wish. Most low-level languages provide some abstract data type to manage this space, with two operations: get a block of memory of a specific size, and return a block for reallocation. Algol-family languages typically use this interface, with functions like malloc() and free(). Early implementations of these functions used a heap as the data structure for managing free memory; therefore, dynamically allocated memory was referred to as "coming from the heap" or, later, just as "heap memory."
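In C, this two-operation interface is exactly the standard malloc()/free() pair:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* The two-operation interface: request a block of a given size,
 * and hand it back to the allocator when you're done with it. */
void heap_demo(void)
{
    char *buffer = malloc(32);      /* get a block from the heap */
    if (buffer == NULL)
        return;                     /* allocation can fail */

    strcpy(buffer, "heap memory");
    printf("%s\n", buffer);

    free(buffer);                   /* return the block for reuse */
}
```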

This sort of interface is fine for simple uses, but it becomes problematic when you add in aliasing. When two bits of code hold pointers or references to the same dynamic allocation, which one is responsible for freeing it?

This conflict has two traditional solutions. One option is to enforce a strict ownership policy. Every object has exactly one owning pointer, and some other number of non-owning pointers. The programmer must ensure that the non-owning pointers don't persist longer than the owning pointer. Some software patterns make this solution easier, but it's still problematic.
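As an illustration of the ownership policy (the struct names here are invented for the example), one structure can own a string and be responsible for freeing it, while another holds a non-owning alias that must not outlive the owner:

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative names: a document owns its title string, while a
 * window merely borrows a pointer to it. */
struct document {
    char *title;        /* owning pointer: freed by document_destroy() */
};

struct window {
    const char *title;  /* non-owning alias: must not outlive the document */
};

struct document *document_create(const char *title)
{
    struct document *doc = malloc(sizeof *doc);
    doc->title = malloc(strlen(title) + 1);
    strcpy(doc->title, title);
    return doc;
}

void document_destroy(struct document *doc)
{
    free(doc->title);   /* only the owner frees the shared string */
    free(doc);
}
```

A struct window can alias doc->title freely, but the programmer must guarantee that the window is gone before document_destroy() runs; nothing in the language enforces it, which is exactly why this solution remains problematic.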

The other solution is reference counting, which is the traditional approach in Objective-C. When you want to retain a pointer to an object, you send it a -retain message, which increments its reference count. When you no longer need this reference, you send it a -release message. When an object's reference count reaches 0, it's freed. All pointers are equal.
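The mechanics can be sketched in a few lines of C (a toy model, not the actual Objective-C runtime): each object carries a count, retain increments it, and release decrements it, freeing the object when it hits zero.

```c
#include <stdlib.h>

/* Toy reference-counted object, loosely modelled on
 * Objective-C's -retain and -release. */
struct object {
    int refcount;
};

struct object *object_new(void)
{
    struct object *obj = malloc(sizeof *obj);
    obj->refcount = 1;          /* the creator holds the first reference */
    return obj;
}

struct object *retain(struct object *obj)
{
    obj->refcount++;            /* another pointer now keeps it alive */
    return obj;
}

void release(struct object *obj)
{
    if (--obj->refcount == 0)
        free(obj);              /* last reference gone: free the object */
}
```

All pointers are equal in this model: any holder retains, any holder releases, and whichever release brings the count to zero frees the object.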

The reference counting approach has two problems. The first is the problem of returning temporary objects: it's fairly common for programmers to ignore return values. With simple reference counting, you'd need to make sure you explicitly released every returned object, even ones you never used. Objective-C fixes this issue with the addition of autorelease pools. You can add an object to an autorelease pool, and it will have its reference count decreased at some point in the future, typically at the end of the current run loop. This solution is simple to use, but it has the disadvantage that the lifetime of temporary objects is artificially extended.
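An autorelease pool can be modelled in the same toy C style (again an illustration, not the real implementation): the pool simply records objects and defers their release until it is drained.

```c
#include <stdlib.h>

struct object {
    int refcount;
};

struct object *object_new(void)
{
    struct object *obj = malloc(sizeof *obj);
    obj->refcount = 1;
    return obj;
}

void release(struct object *obj)
{
    if (--obj->refcount == 0)
        free(obj);
}

/* A toy autorelease pool: objects added to it are not released
 * immediately, but only when the pool is drained, typically at
 * the end of a run-loop iteration. */
struct pool {
    struct object *objects[64];
    int count;
};

struct object *autorelease(struct pool *p, struct object *obj)
{
    p->objects[p->count++] = obj;   /* defer the release */
    return obj;
}

void pool_drain(struct pool *p)
{
    for (int i = 0; i < p->count; i++)
        release(p->objects[i]);     /* counts drop here, not earlier */
    p->count = 0;
}

/* A function can now safely return a temporary: the caller may
 * ignore the result, and the pool will still reclaim it later. */
struct object *make_temporary(struct pool *p)
{
    return autorelease(p, object_new());
}
```

Note how the temporary stays alive until pool_drain() runs: that is precisely the artificially extended lifetime mentioned above.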

The other major problem is garbage cycles. For example, in a typical model-controller-view application, the view needs a reference to the controller to be able to send it user events, and the controller needs a reference to the view to be able to update it when the model changes. If each holds a reference to the other, neither will ever be freed, because their reference counts will be 1 when no other references to either exist. Objective-C applications typically solve this problem with the non-retained delegate pattern, a slightly ugly hack in which views (by convention) don't increment the reference count of the controller, so they're freed when the controller is no longer referenced. This technique works well for trivial cycles, but not for more involved ones.
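The non-retained delegate pattern can be mimicked in the same toy C model: the controller retains its view, but the view's back-pointer is deliberately not retained, so releasing the controller tears down both objects.

```c
#include <stdlib.h>

/* Toy model of the view/controller pair: `strong` is a retained
 * reference, `weak` is a non-retained delegate-style back-pointer. */
struct node {
    int refcount;
    struct node *strong;
    struct node *weak;
};

struct node *node_new(void)
{
    struct node *n = calloc(1, sizeof *n);
    n->refcount = 1;
    return n;
}

struct node *retain(struct node *n)
{
    n->refcount++;
    return n;
}

void release(struct node *n)
{
    if (--n->refcount == 0) {
        if (n->strong)
            release(n->strong);  /* drop our retained reference */
        free(n);
    }
}

void demo(void)
{
    struct node *controller = node_new();
    struct node *view = node_new();

    controller->strong = retain(view);  /* controller retains its view */
    view->weak = controller;            /* view does NOT retain back */

    release(view);        /* controller still holds it: count is 1 */
    release(controller);  /* frees controller, which frees the view */
}
```

If the view retained the controller as well, the two releases in demo() would leave both counts at 1, and neither object would ever reach zero: a garbage cycle.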

In the general case, it's not possible for static analysis to prove the absence of cycles, especially in the presence of loadable libraries, although in practice careful coding can make them rare.
