The previous article in this series provided background information on how data storage is organized and allocated on Linux and Unix systems, highlighting some of the more modern approaches used to improve performance, deal with larger files, and so on. One constant among all classic Linux and Unix filesystems is the general approach to updating the disk during writes. Writing to a disk drive or other long-term storage is one of the slowest operations a computer performs, simply because it requires physical rather than purely electronic motion. For this reason, writing to a filesystem is usually done asynchronously, so that other processes on the system can continue to execute while data is being written to disk. Many filesystems cache data in memory until sufficient processor time is available, or until a specific amount of data is waiting to be written to disk.
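The write-back caching just described can be sketched in miniature. Everything here (the class, the flush threshold, the dictionary standing in for a block device) is invented for illustration; no real kernel interface looks like this:

```python
# Minimal sketch of write-back caching: writes land in an in-memory
# cache and only hit the "disk" when a flush threshold is reached.
# All names and structures here are illustrative, not a kernel API.

class WriteBackCache:
    def __init__(self, disk, flush_threshold=4):
        self.disk = disk               # dict: block number -> data
        self.cache = {}                # dirty blocks not yet on disk
        self.flush_threshold = flush_threshold

    def write(self, block, data):
        self.cache[block] = data       # fast: memory only
        if len(self.cache) >= self.flush_threshold:
            self.flush()               # slow: actually touches the disk

    def read(self, block):
        # Dirty cached data takes precedence over what is on disk.
        return self.cache.get(block, self.disk.get(block))

    def flush(self):
        self.disk.update(self.cache)
        self.cache.clear()

disk = {}
cache = WriteBackCache(disk, flush_threshold=2)
cache.write(0, "hello")
assert 0 not in disk                   # still only in memory
cache.write(1, "world")                # threshold reached: flush
assert disk == {0: "hello", 1: "world"}
```

The window between `write()` and `flush()` is exactly where a crash leaves the on-disk state inconsistent, which is the problem the rest of this article is about.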
The problem with standard caching and asynchronous disk updates is that if the system goes down in the middle of an update, the filesystem is usually left in an inconsistent state. File information may not have been updated to reflect blocks that have been allocated to or deallocated from those files, and directories may not have been correctly updated to reflect files that have been created or deleted. Similarly, the free list or filesystem bitmap may not have been correctly updated to reflect blocks that have been allocated or deallocated from files and directories.
To verify the consistency of a filesystem before attempting to mount and use it, Linux systems run a program called fsck, which stands for "file system check." If a filesystem isn't marked as being clean (by a bit in the filesystem superblock), the filesystem must be exhaustively checked for consistency before it can be mounted. Among other things, the fsck program for the ext2 filesystem verifies the consistency of all of the inodes, files, and directories in the filesystem, checks that all blocks marked as allocated are actually owned by some file or directory, and verifies that all blocks owned by files and directories are marked as allocated in the filesystem bitmap. As you can imagine, this can take quite a while to do on huge filesystems, and could therefore substantially delay making your system available to users.
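Two of the invariants mentioned above can be illustrated in a toy version of this cross-check: every block marked allocated in the bitmap must be owned by some file, and every block owned by a file must be marked allocated. The data structures below are simplified stand-ins, not ext2's actual on-disk layout:

```python
# Toy consistency check in the spirit of fsck: compare the set of
# blocks the bitmap claims are in use against the set of blocks
# actually referenced by files. Structures are illustrative only.

def check_consistency(bitmap_allocated, files):
    """bitmap_allocated: set of block numbers marked in-use.
    files: dict mapping filename -> list of block numbers it owns."""
    owned = set()
    for name, blocks in files.items():
        for b in blocks:
            if b in owned:
                return f"block {b} owned by more than one file"
            owned.add(b)
    leaked = bitmap_allocated - owned   # marked in-use but unowned
    lost = owned - bitmap_allocated     # owned but marked free
    if leaked:
        return f"leaked blocks: {sorted(leaked)}"
    if lost:
        return f"unmarked blocks: {sorted(lost)}"
    return "clean"

files = {"a.txt": [2, 3], "b.txt": [5]}
print(check_consistency({2, 3, 5}, files))      # prints "clean"
print(check_consistency({2, 3, 5, 7}, files))   # prints "leaked blocks: [7]"
```

Even this toy version has to visit every block reference, which hints at why a real fsck over a huge filesystem takes so long: the work grows with the size of the filesystem, not with the amount of recent change.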
Journaling filesystems keep a journal (or log) of the changes that are to be made to the filesystem, and then asynchronously apply those changes to the filesystem. Sets of related changes in the log are marked as being completed when they have been successfully written to the filesystem, and are then deleted from the log. If a computer crashes in the middle of these updates, the operating system need only replay the pending transactions in the log to restore the filesystem to a consistent state, rather than having to check the entire filesystem. Journaling filesystems therefore minimize system downtime due to filesystem corruption. By replacing the need to check the consistency of an entire filesystem with the requirement of replaying a fairly small log of changes, systems that use journaling filesystems can be made available to users much more quickly after a system crash or any other type of downtime.
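The replay step can be sketched as follows. The transaction format and the commit flag are simplified inventions for the sake of the example, far removed from any real journal's on-disk layout:

```python
# Sketch of journal replay after a crash: a transaction is applied to
# the filesystem only if it was fully committed to the log before the
# crash. The log format here is invented for illustration.

def replay(journal, fs_state):
    """journal: list of transactions; each has a list of (block, data)
    'writes' and a 'committed' flag set once the whole transaction is
    safely in the log. fs_state: dict mapping block number -> data."""
    for txn in journal:
        if not txn["committed"]:
            continue  # partial transaction: discard it, fs stays consistent
        for block, data in txn["writes"]:
            fs_state[block] = data  # idempotent, so safe to replay twice
    return fs_state

# Simulated crash: the second transaction never finished committing.
journal = [
    {"writes": [(1, "inode update"), (9, "bitmap update")], "committed": True},
    {"writes": [(4, "half-written dir entry")], "committed": False},
]
state = replay(journal, {})
assert state == {1: "inode update", 9: "bitmap update"}
```

Note that the half-finished transaction is simply discarded: either all of a transaction's related changes reach the filesystem or none of them do, which is exactly the all-or-nothing property that keeps inodes, directories, and the bitmap in agreement. The cost of recovery is proportional to the size of the log, not the size of the filesystem.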
Minimizing system restart time is the primary advantage of using a journaling filesystem, but there are many others. As "newer" filesystems, journaling filesystems can take advantage of newer techniques for enhancing filesystem performance. Many journaling filesystems create and allocate inodes as they are needed, rather than preallocating a specific number of inodes when the filesystem is created. This removes limitations on the number of files and directories that can be created on that partition, increases performance, and reduces the overhead involved if you subsequently want to change the size of a journaling filesystem. Journaling filesystems also typically incorporate enhanced algorithms for storing and locating file and directory data, such as B-Trees, B+Trees, or B*Trees.
Nowadays, the terms "logging" and "journaling" are usually used interchangeably when referring to filesystems that record changes to filesystem structures and data to minimize restart time and maximize consistency. Classically, log-based filesystems are actually a distinct type of filesystem that uses a log-oriented representation for the filesystem itself, and also usually require a garbage collection process to reclaim space internally. Journaling filesystems use a log, which is simply a distinct portion of a filesystem or disk. Where and how logs are stored and used differs with each type of journaling filesystem. I tend to use the term "journaling filesystem" so as not to anger any of my old Computer Science professors who may still be living.
More Filesystems than You Can Shake a Memory Stick at
One of the biggest features of Linux as an Open Source endeavor is that the availability of the source code for the operating system makes it easy to understand and extend the operating system itself. All operating systems provide APIs for integrating low-level services, but having the source code is like the difference between reading the blueprints for a house and being allowed inside it with a toolbelt. Having the source code also eliminates the chance of undocumented APIs, which you might only be familiar with if your mailing address is in Redmond.
The availability of kernel source code and decent APIs for integrating low-level operating system services has resulted in some excellent extensions to the core capabilities of Linux, especially including support for new and existing filesystems. The best-known journaling filesystem for Linux, the Reiser File System, is an excellent example of this. The ReiserFS was born on Linux, and was the first journaling filesystem whose source code was integrated into the standard Linux kernel development tree. More recently (in later versions of the 2.4 kernel family), the source code for the ext3 and JFS journaling filesystems has been integrated into the core Linux kernel source tree. As you'll see later in this article, the ext3 filesystem is a truly impressive effort: a logical follow-on to the ext2 filesystem that is completely compatible with existing ext2 filesystems and data structures. However, Linux has also benefited from some excellent journaling filesystems (such as JFS) with surprising roots: proprietary Unix vendors.
To a large extent, Linux is ringing the death knell for proprietary versions of Unix. Why spend a zillion dollars for hardware and a proprietary version of Unix when Linux is freely available and will run on everything from a sexy SMP machine to the PDA in your pocket? Most of the standard Unix vendors have seen the light to some extent, and understand the importance of embracing (or at least playing nicely with) Linux. To this end, existing Unix vendors, such as IBM and Silicon Graphics, have contributed the source code for some of their most exciting research efforts, the journaling filesystems that these proprietary vendors use on some or all of their hardware. IBM released the source code for its Journaled File System, JFS, as Open Source in 2000. Similarly, Silicon Graphics released the source code for its XFS (eXtended File System) as Open Source at the same time. Regardless of the PR value inherent in releasing projects on which they've spent millions of research dollars, the bottom line of these contributions is the tremendous benefit that the capability to understand and use these filesystems brings to Linux systems.
The next few sections highlight the most popular journaling filesystems that are available for Linux and discuss some of the things that make each of them unique. As you'd expect, there are plenty of other journaling filesystems that are under development for Linux, as both research and open source projects. This article focuses on the ones that are actively used on Linux systems today, and which you may therefore actually encounter in the near future.