A Half-Way Step to Apple’s Source Code: An Interview with David Chisnall
David Chisnall is a prolific writer of code, books and articles. While a PhD student at Swansea University, Chisnall wrote portions of A Practical Guide to Red Hat Linux, Second Edition and numerous articles for InformIT. Not shy of deadlines and multitasking, he completed his PhD thesis just two weeks after finalizing the complete draft of The Definitive Guide to the Xen Hypervisor. Chisnall is also very active in the open source community; he is a founding member and core developer of the Étoilé project, which aims to build an open source user environment for desktop and mobile computing systems, and is a contributor to GNUstep, a cross-platform, object-oriented framework for desktop application development. His involvement in GNUstep has given him a different perspective in his approach to his upcoming book, a developer's handbook to Cocoa, Apple's commercial implementation of the APIs on which GNUstep is based.
I caught up with Chisnall as he was getting ready to spend his Friday night participating in his new-found passion: salsa dancing.
Linda Leung: GNUstep is the Free Software Foundation's implementation of NeXT Software's OpenStep standard (Apple acquired NeXT in 1996, and its commercial implementation of OpenStep is Cocoa). Given your role as a contributor to GNUstep, how are you approaching the writing of your next book, Cocoa Programming Developer's Handbook, due out later this year?
David Chisnall: One of the major disadvantages when writing about a proprietary API is that you have to guess how it really works. When I was writing the Xen book, if I wanted to know how something worked, I could just read the source code. With Cocoa, this is not possible, because no one outside of Apple has access to it. GNUstep is a convenient half-way step between these two. It doesn't show you how something works in Cocoa, but it shows how it could work, which is often enough. In places where the API documentation is ambiguous in Cocoa, typically someone has tested what it really does and has implemented that in GNUstep.
Being familiar with the GNUstep code means that I can usually find the relevant bits there. In a couple of places in the book, I showed examples from the GNUstep code to demonstrate how something worked. I'd really recommend this to anyone programming with Cocoa: if you want a deeper understanding, look at the GNUstep code. Like any long-running project, some of the older code is very crufty, but a lot of it is very readable.
LL: How far is GNUstep able to keep up with the changes that Apple makes to Cocoa and is there ever a point when the GNUstep community says it's not worth making the updates to GNUstep to catch up?
DC: OpenStep was a collaboration between NeXT and Sun, which sadly lost momentum when Sun decided to focus on Java. This was a shame, because it was a beautiful set of APIs. The GNUstep project languished for a bit before OS X started to become popular. People looked at this weird-looking Objective-C language and didn't see why it was worth learning. With a lot more developers learning Cocoa, GNUstep has started to become more popular.
The GNUstep project itself only aims to follow the Foundation and AppKit frameworks. [Foundation defines the "nuts and bolts" classes for Objective-C programming, and Application Kit includes higher-level controls such as windows, buttons, menus, and text fields.] If these are the only bits of Cocoa you use, then your application can often be run on Windows/UNIX with GNUstep with just a recompile. There are quite a few other frameworks that also work but aren't part of the core GNUstep releases. You can find an implementation of CoreData in the GNUstep repository, for example. The GNUstep Application Project maintains a version of the AddressBook framework. In a branch in the Étoilé repository there is a partial (work-in-progress) implementation of CoreGraphics. Hopefully the last one won't be there much longer — at last week’s Étoilé hackathon we talked to the GNUstep AppKit maintainer about moving this upstream to GNUstep.
One of my recent projects has been getting the compiler and Objective-C runtime support up-to-date. Objective-C in GCC is an unmaintainable mess, and no one appears interested in working on it. More recently, Chris Lattner at Apple started an open source (BSD-licensed) C/C++/Objective-C front end for LLVM [Low Level Virtual Machine]. I've been working on code generation for this for the GNU runtime. I've also been writing a framework for Étoilé that implements all of Apple's public APIs for the runtime. Prior to Leopard, the runtime library interactions were very ad-hoc and required poking structures that should really be private. The GNU runtime is in the same state. With Leopard, Apple implemented a very clean set of public APIs. I've re-implemented these on top of the GNU runtime, and added various other bits that are needed to support features like declared properties. If you compile with Clang and link against this framework, you now get support for most of Objective-C 2.0.
LL: How did you get involved in Étoilé?
DC: I got my first Mac just before I started my PhD. I was on the same grant as Nicolas Roard, who is a GNUstep developer (and an unrepentant Smalltalk fanboy), who persuaded me that it was worth learning Cocoa. I didn't want to be tied to a single-platform API again, so I started to get involved in GNUstep.
At the time, GNUstep was going through a bit of an identity crisis. It wasn't sure if it wanted to be a set of developer tools, or a full-blown desktop environment. If it did want to be a full-blown environment, what kind? Did it want to recreate OPENSTEP (NeXT’s specific implementation of OpenStep) or OS X, or be something new? The Étoilé project was formed by those of us who wanted to create something new. For the first year I mainly contributed by arguing about the kind of UI we should be building. After that, I had a bit more time to write code.
LL: How many downloads of Étoilé have there been since it was made available, and how are developers using it?
DC: The first question is difficult to answer. We don't release binaries, and we don't have a way of tracking source downloads. You can find our stuff in the package repositories for a few *NIX systems. FreeBSD, in particular, has a very good set of Étoilé ports, and I know someone has been working hard on Arch Linux packaging.
Our latest release, 0.4, is part of our developer-focused release cycle. We are hoping to get 0.5 out around the end of the summer (so expect it around Christmas), which will be our first user-focused release. So far we've been working on creating the core technologies we need to be able to build the things that we really want. We've almost finished that, so expect some exciting demos later this year — we had some at the hackathon, but they need a bit more polishing before they're ready for the public.
LL: How has Étoilé changed since it was first released? How have users improved on it so far and which other areas need improving?
DC: There were a few ideas I had during my PhD as almost throw-away "wouldn't it be nice if..." or "it should be possible to..." things. One of them was automatic serialization, where you can store an object on disk, or send it over the network, without having to write any code. This is trivial in a language like Java, but much harder in Objective-C. Strictly speaking it's impossible in the general case, but getting it working most of the time, and providing a fall-back mechanism for when it can't work automatically, is possible — and it now works.
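The shape of that scheme (walk an object's state generically, and fall back to a custom hook when the generic path can't work) can be sketched in a few lines. This is a conceptual illustration in Python, not Étoilé's Objective-C implementation; every name in it is invented for the sketch:

```python
def serialize(obj):
    """Generically serialize an object by walking its attributes.

    Objects whose state can't be captured automatically provide their
    own custom_serialize() hook, mirroring the fall-back described above.
    """
    if hasattr(obj, "custom_serialize"):  # manual fall-back path
        return obj.custom_serialize()
    if isinstance(obj, (int, float, str, bool, type(None))):
        return obj                        # primitives serialize as-is
    if isinstance(obj, list):
        return [serialize(x) for x in obj]
    # Generic path: record the class name and every instance variable.
    return {"__class__": type(obj).__name__,
            "ivars": {k: serialize(v) for k, v in vars(obj).items()}}

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

class FileHandle:
    """State (an OS-level descriptor) that can't be captured automatically."""
    def __init__(self, path):
        self.path = path
    def custom_serialize(self):
        return {"__class__": "FileHandle", "path": self.path}
```

The generic path handles Point without any extra code; FileHandle, whose real state lives in the operating system, supplies its own fall-back.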
Because everything is an object in Objective-C, this includes messages. All communications between objects are messages, which are the equivalent of method or function calls in C/C++. The automatic serialization code works for them too. This means that we can record, not only the state of an object, but every change to it. I wrote this core code, and Quentin Mathé [another member of the Étoilé team] has been working hard building the CoreObject framework on top. CoreObject records the entire history (including branches) of complex documents, automatically. It is incredibly easy to use, and means that no Étoilé program using it needs a "save" button; everything you do will be automatically saved to disk (or even to a remote machine, streaming the changes over the network). This was something I wanted from the start, but the hackathon last week was the first time I saw it really working.
Quentin has also been doing some fantastic stuff with EtoileUI. This provides a higher layer of abstraction than AppKit. If you're using CoreObject, you can often just say "give me a user interface displaying this data" and have it just work. You can also use it to modify the user interface in very complex ways at run time. This borrows a lot of ideas from various Smalltalk implementations, but it's still very impressive when you see it working.
Now that we have Clang in a useable state, I've started working on some ideas that I had a long time ago but needed compiler modification. The Object Planes concept started as something quite nebulous, but Damien Pollet helped me a lot in creating a concrete definition. He also has a PhD student working on implementing it in Smalltalk — hopefully we should be showing it off a bit more at ESUG this year. The basic idea is that you draw a line around a set of objects and say "these are related." When they send messages to each other, it all works normally. When they send messages to objects outside, this message can be intercepted and rewritten. For example, you could say "these objects are in a separate thread" and whenever you send a message to them, all of its arguments are copied into the receiving plane and the message is added to a message queue, rather than being executed immediately. Or you can use them for persistence, dramatically simplifying some of the current code in CoreObject. To do this, you need the message sender to be available to the code doing the dispatch. I had modifications to Clang to support this committed yesterday.
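The interception Chisnall describes can be pictured as a proxy sitting on the plane boundary. This Python sketch (all names invented; the real design targets the Objective-C runtime and a modified Clang) copies the arguments of an incoming message and queues it instead of executing it immediately:

```python
import copy
import queue

class Plane:
    """A 'plane' of related objects. Messages between members run
    normally; messages arriving from outside are intercepted, their
    arguments copied into the plane, and the call queued for later."""
    def __init__(self):
        self.members = set()
        self.inbox = queue.Queue()

    def add(self, obj):
        self.members.add(obj)
        return PlaneBoundary(self, obj)  # outsiders only get the boundary

    def run_pending(self):
        """Drain the queue, executing deferred messages inside the plane."""
        while not self.inbox.empty():
            target, name, args = self.inbox.get()
            getattr(target, name)(*args)

class PlaneBoundary:
    """The handle outside code holds; it rewrites incoming messages."""
    def __init__(self, plane, target):
        self._plane, self._target = plane, target

    def __getattr__(self, name):
        def deferred(*args):
            safe_args = copy.deepcopy(args)  # copy args across the boundary
            self._plane.inbox.put((self._target, name, safe_args))
        return deferred

class Counter:
    def __init__(self):
        self.total = 0
    def add(self, n):
        self.total += n
```

Swap the queue drain into a worker thread and this becomes the "these objects are in a separate thread" example from the interview; swap it for a log and it becomes persistence.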
LL: You're very active in the open source community on technologies that at some level are tied to certain vendors. For example, Xen was developed by Ian Pratt, a senior lecturer at Cambridge University who later founded XenSource (which was acquired by Citrix in 2007). GNUstep has a history with Apple. What should be the ideal relationship between vendors and the open source community?
DC: When I first announced on the GNUstep lists that I was working on Clang, there was a bit of hostility to the idea. As a BSD-licensed compiler, Apple could easily decide to keep all of its changes private and leave the community in the cold. I mentioned this to Chris Lattner, and his reply was:
"Of course we could, but why would we? Apple isn't in the compiler business."
A lot of people have tried to pin down exactly what business Apple is in, and there are constant debates over whether it’s a hardware or a software company. My take is that it’s a systems integration company. Nothing in the iPhone, for example, is particularly innovative, but the way it all works together is far better than anything else on the market.
Apple and Étoilé both benefit from Clang being open source. The benefit to us is obvious; we get a compiler that parses all of Objective-C 2.0, and just have to write the code-generation parts that are specific to the GNU implementation. The benefit to Apple is that it doesn’t have to do all of the work itself. A simple example is a bug I fixed a few months ago with incorrect termination of constant strings. This was causing Clang to emit code that sometimes worked perfectly, but sometimes failed in strange ways. It took me a while to track it down, but then the fix was trivial (just changing two characters). Apple did not have to pay anyone to fix the bug. I'm by no means the only non-Apple contributor to Clang (or LLVM, on which it is built). The total cost for Apple to write and maintain a compiler would be massive.
A few companies look at BSD licensed things — or open source in general — and say "but our competitors could take this!" It's true, but then they have the same choice. They can either maintain their own, private, in-house fork, or they can contribute patches. If a company takes Clang and only contributes a single bug fix, it's still a net win for Apple; it has one less bug than if Clang was an internal-only project.
I think this is a much better way of encouraging corporate involvement in open source than legal bludgeons like the GPL. The BSD license is easy for even a non-lawyer to read and understand, so there is no confusion when using BSD-licensed code.
LL: I want to get your thoughts on some virtualization issues. Some security experts say virtualization is a big security risk. As Joanna Rutkowska demonstrated at RSA 2008, malware could be used to take control of a machine running virtualization software, and of all the other machines controlled by the hypervisor. Do you think the virtualization industry is close to closing that vulnerability?
DC: I think this kind of vulnerability is overblown. If there's a vulnerability in your hypervisor, an attacker can gain access to all of your virtual machines. Similarly, if there is a vulnerability in your OS, an attacker can gain control of all of your machines that run that OS.
In the end, there are degrees of security. I was talking to someone a while ago who did some work with a certain three-letter agency on virtualization. They had two machines on every user's desk, one for classified and one for unclassified work. They wanted to replace these with one machine running two VMs. After some experimentation, they found that malware installed on both VMs could cooperate and send a few characters per minute from the classified machine to the other one by sending network packets at a certain rate, encoding the leaked data in the interrupt frequency.
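The channel itself is easy to picture: the sender modulates how often it transmits, and the receiver recovers bits from the observed timing. A minimal simulation in Python (delays are just numbers here, and the constants are invented; a real attack would measure actual packet or interrupt timing, with noise):

```python
# Simulated covert timing channel: bits encoded as inter-packet delays.
SLOW, FAST = 0.2, 0.05  # seconds between packets for a 0-bit vs a 1-bit

def encode(bits):
    """Return the inter-packet delays a sender would use for these bits."""
    return [FAST if b else SLOW for b in bits]

def decode(delays, threshold=0.1):
    """Recover bits by thresholding the observed delays."""
    return [1 if d < threshold else 0 for d in delays]
```

At these rates the bandwidth is a few characters per minute, which matches the anecdote: useless for most attackers, valuable to an espionage agency.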
If your adversary is a foreign government with millions spent on espionage, this is the kind of thing you need to worry about. For the rest of us, it isn't. There is no such thing as a secure system, just one that is more effort to break into than it's worth. Virtualization will never be more secure than running two separate physical machines, but very often the other benefits outweigh this cost.
LL: The industry is talking about pushing desktops — both consumer and business — into an external cloud using virtualization. The advantage for consumers and small businesses is that they won't have to worry about managing desktops, such as upgrading operating systems and issuing patches. Does this scenario excite or scare you?
DC: When someone has a good definition of what "the cloud" is, I'll have some opinions on it. At the moment, it seems like Autonomic Computing: a few good ideas and a lot of terrible ones under the same marketing umbrella. A question I've been asking for the last few years is "what do you want to take with you?" Is it your data, your applications, the application UI, or just some form of remote access? When I can get two people to give me the same answer to this question, I'll know better what form the cloud should take.
LL: Is Sun's Open xVM project competitive with Xen, and if so, how does it stack up against Xen?
DC: Asking if xVM competes with Xen is a bit like asking if Ubuntu competes with Linux. Sun's Open xVM is a massive marketing umbrella. It includes VirtualBox, a very nice desktop virtualization system that I use for Étoilé development. It also includes Sun’s Xen-based system.
There are two important components to a Xen-based virtualization setup. One is the Xen hypervisor; the other is an operating system that runs in a more-privileged mode and provides administration and device access. There are currently three operating systems that can do this: Linux, NetBSD, and Solaris. Sun's xVM is a package which combines Xen and Solaris in this way.
At the moment, Solaris has one really compelling feature over Linux and NetBSD for this: ZFS. If you use ZFS for your VM storage you get snapshots, redundancy, per-sector error checking, and all of ZFS's other nice benefits. For this reason alone, xVM looks like one of the best ways of running Xen at the moment.
There might be some bias here. I've been a fan of Sun for a long time. For almost three decades it has been building great technology and marketing it spectacularly badly. Any company with better engineering than marketing is attractive to people in the technology world, and Sun is the company that would build a better mousetrap, market it as a "cat replacement," and wonder why all of its customers were complaining when their fingers were injured trying to stroke it. It will be interesting to see whether this changes after the Oracle acquisition.
LL: In a podcast interview with FLOSS Weekly you lamented the demise of two Apple creations: Newton soup, a persistent store for all documents on the Apple Newton, and OpenDoc, a cross-platform technology that replaces conventional applications with user-assembled groups of software components. Why do you miss these Apple technologies, and are there others that you miss?
DC: No one ever notices a good UI, but everyone notices a bad one. The Newton is a perfect example of a good UI; you only notice it when you use it and then switch to other systems that don't do all of the nice things it did. It had lots of trivial things that all added together to make something that was a joy to use. For example, there was no copy-and-paste; just drag-and-drop. To drag between two full-screen applications, you dragged to the edge of the screen and a little tab appeared.
One of my favorite experiences is coming up with an idea and discovering someone has already implemented it and that it worked well. I came up with the idea of doing copy-and-paste like this about a year ago (not to take too much credit for this, it's effectively a generalization of a feature of the Mac OS 8 Finder), and someone told me that someone at Apple had invented it 15 years earlier.
Newton Soup and OpenDoc are both similar in that they blur the lines between applications. Jef Raskin, father of the Mac and UI guru, hated applications. He saw them as the ultimate modal user interface, and his arguments are quite compelling. I could talk about this for a really long time, but instead I'll just recommend that anyone interested read The Humane Interface. Étoilé is adopting a lot of ideas from this book — we try to only steal the best ideas, and Jef Raskin had a lot of these.
There aren't really any others I miss, probably because I didn't use Macs much before OS X. Both Pink and Copland (two Apple research operating systems that failed to become Mac OS X) had some nice ideas — EtoileUI's event model is very similar to the one in Pink — but neither survived long enough to be missed.
LL: Someone once said to me that programming is like art. There are programmers who develop apps that run OK, and there are programmers who can write code that looks like a work of art. Do you agree with this, and if so, what distinguishes a well-written mobile app from a poorly-written one?
DC: Writing a mobile application is not very different from writing a desktop application a few years ago. A typical mobile system has at least a 200MHz CPU (now often 600MHz-1GHz), a decent GPU, and a reasonable amount of RAM. The main difference is the screen size, and this is more of a user interface problem. It's possible to have beautifully-written code that presents a horrible user interface, and vice versa.
I think of programming as more of a craft than an art; the programmer is the modern equivalent of a blacksmith or a clockmaker. As with these crafts, it is possible to get the job done badly. Anyone can make a door hinge, or a mechanical clock, but a door hinge that opens smoothly and never squeaks or sticks, or a clock that keeps good time is much harder. In both cases, it requires an in-depth understanding of the materials you are working with.
For a programmer, the materials are harder to grasp, because they aren't physical. A good programmer is one who can think at several levels of abstraction at once. You have to think about how your code will run, how it interacts directly with the CPU. This requires studying computer architectures and spending a bit of time doing assembly-language programming or writing a compiler. You also need to think about the very high-level interactions between parts of your program, right up to the user interface.
In The King’s English, the Fowler brothers write that you should always prefer a simpler word to a longer one. This is especially true of programming. Writing a program is different from writing prose because it has to be understandable by both other humans and by computers. It's easy to spot well-written code, because it's as simple as possible, but no simpler. A bad programmer will either make the code more complex than it needs to be, or make it so simple that it doesn't (or can't easily be extended to) solve the problem.
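A contrived Python illustration of the same point (not from the interview): both functions below compute the same thing, but one buries it under machinery nobody asked for.

```python
# Over-engineered: indirection and configurability obscure a trivial task.
class SummationStrategyFactory:
    def create(self, initial=0):
        def strategy(values):
            total = initial
            for v in values:
                total = total + v
            return total
        return strategy

def total_overcomplicated(values):
    return SummationStrategyFactory().create()(values)

# As simple as possible, but no simpler.
def total(values):
    return sum(values)
```

Both return the same result; only one can be read, and extended, at a glance.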
The real test of good code is how flexible it is. It would be very nice if someone could accurately write a specification of everything that a program will ever need to do, but outside of avionics this almost never happens. The OpenStep (Cocoa) frameworks that Apple inherited from NeXT are an example of this. The original frameworks were written over two decades ago, but have gradually evolved to their current state. A lot of the things they do would have been completely impractical on the kind of machines NeXT was shipping (for about 10 times the price of a new Mac) in the '80s, but the frameworks were able to adapt incrementally to gain these new features. Compare this with the Windows API, which originated around the same time and has undergone several radical rewrites.
LL: Now you've finished your PhD thesis, what's next on the horizon?
DC: Shortening my to-do list to under one lifetime's worth of things...
Getting Étoilé to the point where I can use it as my primary working environment is a big priority for the moment. I'm also writing and consulting on a freelance basis. It's nice to be able to decide to work in the park on days when the weather is nice.
I'm quite heavily involved with the Swansea History of Computing Collection. This aims to build a collection, not just of machines, but of oral accounts and supporting documentation from the history of computing. In particular, we have managed to collect some fascinating stories from people who were involved in the transition from mechanical to electrical calculating engines.
I've always believed that if you want to understand a piece of technology, you have to understand the people who built it. Working with the HoCC has been an amazing opportunity for me because of this. It's also been practically useful — I recently came across a company that was producing a CPU based on an entirely new architecture. Its instruction set had a lot of similarities with a machine that was used to design the Port Talbot Steelworks, and so I was able to give them some useful hints for building their compiler.
There are an astonishing number of ideas that have been discarded in the last 60 years. Many of these deserved to be discarded, but a lot were simply thrown away because they were incompatible with constraints that no longer exist. Some ideas come too late to change the world, but a surprising number arrive too early. Revisiting them a few decades later can be quite enlightening.
LL: Final question: What's your salsa style?
DC: I first learned Cuban style, and since then have done some LA style, some New York, and some Latin American. I still prefer Cuban, but it's nice to mix some elements from the other styles in. New York has some nice steps, but I don't really like the in-a-line style, so I often dance New York steps in a Cuban style.
More recently I've taken up Argentine Tango, which is even more fun. It's an amazingly expressive dance. I'm very lucky here because a few people from my tango class have recently formed a band and play a lot of tango, milonga, and tango-waltz pieces, so I get to dance to live music quite often.