DragonFly BSD: UNIX for Clusters?
In 2000, the FreeBSD team branched the 5.x series in CVS and began developing it as an unstable branch. This new 5.x branch had an impressive list of proposed features: mandatory access control from the TrustedBSD project, UFS2, a new volume manager, background fsck, a dynamic devfs, and fine-grained kernel locking.
The June 2003 release, 5.1, was still nowhere near the stability expected of FreeBSD. Users had a choice of either staying with 4.8 and keeping a stable OS or moving to 5.1 for new features. Most chose to stay with 4.8. Matt Dillon, one of the FreeBSD kernel developers, decided that several of the approaches being used in the 5.x series were dead-ends, and in July 2003 forked the stable 4.x codebase to form DragonFly BSD.
Its FreeBSD 4.x foundation meant that DragonFly was a solid platform from the start. DragonFly, like the other BSDs, imports code from other members of the family when it makes sense, such as the malloc() security features from OpenBSD, parts of the WiFi subsystem from FreeBSD, and USB code from NetBSD. In spite of this shared code, development has been pushed in some unique directions.
Not a Microkernel
Matt Dillon originally became known as an Amiga guru and author of the DICE C compiler for that platform. It is not surprising, then, that DragonFly would gain some inspiration from the Amiga.
The Amiga implemented inter-process communication via message passing. Because the Amiga did not have protected memory, this could be implemented by passing a pointer rather than requiring an expensive copy operation. As such, many would not class the Amiga kernel as a “pure” microkernel. In spite of this, the clear abstraction between components made the code easier to maintain and reason about.
DragonFly uses a similar model. Message passing primitives are the main interface between kernel components and between the kernel and userspace programs. Matt explained some of the rationale behind this decision:
The message passing model is used in a lot of places. The networking stack is now multithreaded, and uses the messaging interface for all communication between components. Socket functions called by userspace programs invoke a message-sending system call to pass the data into the top of the network stack, and the same mechanism is used to hand it down through the layers until it hits the network interface.
Once the packet arrives at the driver layer, things begin to look a bit more like a traditional UNIX system. Matt described the reasons for this, saying: