
How Core Animation Changed Cocoa Drawing

OS X is a descendant of the NeXTSTEP operating system introduced in the late 1980s. In the years since, NeXT and Apple have evolved the display model considerably. David Chisnall looks at how things changed between Display PostScript and Core Animation, and where they might end up in the future.

Cocoa has been the standard toolkit for creating applications on OS X since the system's release, and is a direct descendant of the OpenStep frameworks used previously on NeXT workstations. With 10.5, Apple introduced the Core Animation framework, which made a lot of visual effects easy to implement; under the hood, it also made some fundamental changes to the Cocoa drawing model. In this article, I'm going to look at how this model has evolved and what the changes mean.

From PostScript to Quartz

The original NeXT workstations licensed Display PostScript from Adobe. PostScript is a stack-based, Turing-complete language designed for printing; it can implement any algorithm, and it has sophisticated built-in support for drawing Bézier curves. Display PostScript extended it with a few other features, such as support for multiple contexts (so each window could run an independent PostScript program) and event handling.

The OpenStep specification required Display PostScript and provided a set of classes for interacting with it. You could write PostScript programs and send them to the display server to run. A competing system of the time, Sun Microsystems' NeWS, was built on the same idea. This approach was much more responsive than X11 over a remote connection, because entire view objects could run on the display server and send only high-level events back to the program. In practice, this functionality was rarely used with Display PostScript on NeXT systems.

It turned out, in fact, that hardly anyone was making real use of the DPS layer on NeXT to run programs on the display server (with the exception of one infamous proof-of-concept remote exploit). Everyone was simply treating it as a canvas for drawing commands.

With OS X, Apple (having bought NeXT) had the opportunity to replace DPS with something a bit more modern. One of the motivating factors for this, no doubt, was the desire to stop paying a license fee to Adobe for every copy sold.

One of the advantages of Display PostScript was the fact that the display server could cache drawing commands, rather than bitmap images. If you have a screen resolution of 1024×768 in 24-bit color, then you need 2.25MB of RAM for the frame buffer. Not a huge amount by modern standards, but close to the limit of video memory for most machines in the 1990s. Now consider what happens when you move one window across another.

There are two common solutions to this problem. The simplest is to ask the application to redraw each part of the bottom window as it is exposed. This is not ideal: it involves several context switches between the display server and the application, and a lot of interprocess communication. The other option is for the display server to keep a buffer for each window and simply composite them. This is conceptually simpler, but requires a lot more RAM. If each window is almost as big as the screen, then you need 2.25MB of RAM for the frame buffer, plus roughly 2MB for each window. On a system with 8 to 32MB of RAM, you quickly hit a limit on the number of windows you can have on screen at once.

Display PostScript had a third option. It could keep copies of the PostScript programs used to generate the window contents—typically much smaller than the rendered images—and just run them again. This was quite fast, didn't use much memory, and generally worked well.

By the time OS X was introduced, RAM was a lot cheaper. The iMacs of the era had 8MB of dedicated video memory and 64MB of main memory—about as much as the most expensive, fully upgraded workstation NeXT ever produced. The graphics card was not a simple frame buffer; it could handle things like OpenGL rendering, including textures.

To take advantage of this, Apple made a few changes to the display server architecture. The first change was to switch from a PostScript to a PDF model. Unlike PostScript, which is a full programming language, PDF is just a display language. It does not contain things such as loops or conditionals. This simplified the drawing code a lot, because it just had to handle drawing, not flow control.

The second change was to move to a buffered rendering model. Instead of storing the PDF display lists, the new window server (Quartz) stored a bitmap image for each window. Initially this was just a shared image, but in later versions it was a texture on the GPU. These buffers were then composited by the window server.

There are a few advantages to this. One of the most obvious is that it makes things like transparency and the “genie” effect for miniaturization easy. Each window is just a texture on the GPU; it can be drawn on any polygon and with any blending function. The other big advantage is less obvious and relates to the lower-level parts of the operating system.

With Display PostScript, the window server did all of the drawing. Simple and complex drawing commands were equally easy to do in the client; they were just function or method calls that appended a few PostScript commands to the stream. The resulting program ran on the display server. This made process accounting very difficult. Two programs could be using equal amounts of CPU time but one could be responsible for 90 percent of the time the window server spent drawing, yet both would receive equal priorities in the operating system's scheduler. Moving the drawing into the client processes made this much simpler.

The final change introduced with Quartz was to remove the well-defined binary communication interface between clients and servers. If you communicate with the window server on OS X, you go via the QuartzCore framework, which talks to the display server over a private protocol. This is in contrast with something like X11, where a sufficiently motivated individual can write his or her own client library talking directly to the window server, or can go via one of the existing libraries such as XCB and Xlib. If the Quartz window server gains new capabilities, Apple can update the QuartzCore functions to make use of them. Developers of frameworks such as GTK+, Qt, or GNUstep on X11 can do the same thing, but people (including toolkit authors) using Xlib directly have to modify their code to take advantage of new extensions.
