OpenAFS: Complex but Amazingly Powerful
OpenAFS is the latest generation of a distributed filesystem that has been in constant use for more than 15 years. AFS was originally developed at Carnegie Mellon University in the early-to-mid 1980s as both a research project and a solution for the infrastructure needs of the university. Today, AFS has grown into a commercial product with a well-established user base, storing terabytes of data. Originally marketed by Transarc Corporation in Pittsburgh, Pennsylvania, AFS became the property of IBM when it acquired Transarc in 1994.
In 2000, IBM released the then-current base source code for AFS to the Open Source community, in which it is known as OpenAFS. IBM still sells and supports AFS as a commercial product, but OpenAFS has been taken to heart by the Open Source community and is flourishing in that environment. One of the most impressive aspects of OpenAFS is that it is truly cross-platform on as large a scale as any filesystem has ever been. CMU, Transarc, and IBM have always supported AFS on almost every conceivable platform, and OpenAFS continues this goal of universal support, covering every modern flavor of Windows as well as up-and-coming platforms such as Mac OS X.
Like NFS, OpenAFS is based on the client/server model. The OpenAFS environment uses specialized machines called file servers to deliver files and directories to OpenAFS clients in response to requests for those files by the people using client machines. Computer systems within the OpenAFS environment are not exclusively client or server machines, but are traditionally organized that way for simplicity and ease of system administration.
A fundamental way in which OpenAFS and AFS differ from NFS is in how they identify the location of the servers on which distributed filesystems are stored. NFS filesystems are mounted by directly identifying the name or IP address of the server on which a specific filesystem is stored. That information is hard-coded into the /etc/fstab files of the clients that mount those filesystems.
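To make the contrast concrete, the following is a sketch of the kind of hard-coded NFS entry described above, as it might appear in a client's /etc/fstab (the server name and export path here are hypothetical):

```
# /etc/fstab fragment on an NFS client: the server is named explicitly,
# so moving the data to another server means editing every client.
fileserver.example.com:/export/home   /home   nfs   defaults   0 0
```

If /export/home is ever migrated to a different server, every client's /etc/fstab must be updated by hand, which is exactly the limitation OpenAFS's volume location mechanism avoids.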
OpenAFS uses a location-independent mechanism for finding OpenAFS filesystems, which are known as volumes. OpenAFS maintains a volume location database, available over the network, that is consulted whenever a user requests access to a file or directory. OpenAFS identifies the volume in which that file or directory is located, based on the last volume mounted along that file or directory path, and then consults the volume location database to find the file server on which that volume is stored. The volume location database also provides administrative advantages by making it possible to move volumes from one file server to another, even while clients are using files in those volumes. When a volume is moved, the volume location database is updated, and subsequent requests for the volume go to the file server on which it currently resides.
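The lookup-then-move workflow described above maps onto the standard OpenAFS `vos` administration command. The following is an illustrative sketch; the volume name, server names, and partitions are hypothetical:

```
# Ask the volume location database where a volume lives:
vos examine home.jdoe

# Move the volume to another server/partition; the VLDB is updated
# automatically, and clients follow it without being reconfigured:
vos move home.jdoe fs1.example.com /vicepa fs2.example.com /vicepb
```

No client-side change is needed after the move, because clients always resolve the volume through the VLDB rather than through a hard-coded server name.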
Liberating OpenAFS clients from having to hard-code the servers on which OpenAFS files and directories are located makes it possible for OpenAFS to support a global namespace for the OpenAFS filesystem. NFS clients mount separate filesystems at separate points in the local directory hierarchy on a client, and they do this identically on all the clients that need access to those filesystems. OpenAFS takes a fundamentally different approach by internally mounting its filesystems in a single namespace (the directory /afs) to which all systems have access, enabling each client to access that filesystem through a single identical mountpoint. This is known as providing a uniform namespace because all clients have exactly the same view of the distributed OpenAFS filesystem. Providing a uniform namespace simplifies information sharing across wide-area networks such as the Internet because the focus and organization of the filesystem begins at a higher level than the root directory of a single machine.
Because OpenAFS is a distributed filesystem designed for wide-area operation, minimizing the amount of network resources it consumes is important. OpenAFS supports extensive caching of server data on client systems to minimize redundant network communications. A filesystem cache is a portion of a system's disk or memory that temporarily houses files being used on that computer system. When a user on an OpenAFS client system requests access to a file stored on an OpenAFS file server, the client typically retrieves the entire file, and stores it in a specially organized directory on the client machine. By using this sort of caching, no additional network communication is required until it's necessary to save the file back to the server.
Unlike the NFS cache, the OpenAFS cache is persistent, meaning that its contents are preserved even if the system is rebooted. When a client system is rebooted, the majority of the files that it has retrieved from OpenAFS file servers are still on its local disk, so users can continue working on those files without any additional network traffic until the files are saved back to the file server.
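The whole-file, on-disk caching behavior described above can be sketched in a few lines of shell. This is a toy model, not OpenAFS code: two temporary directories stand in for the file server and the client's cache partition, and a function copies a whole file only on a cache miss.

```shell
# Toy model of OpenAFS-style whole-file caching (hypothetical names).
server=$(mktemp -d)               # stands in for the file server
cache=$(mktemp -d)                # stands in for the client's disk cache
echo "hello from the file server" > "$server/notes.txt"

read_file() {                     # fetch the whole file on a cache miss only
    if [ ! -f "$cache/$1" ]; then
        cp "$server/$1" "$cache/$1"   # one network round-trip, whole file
    fi
    cat "$cache/$1"               # all later reads are purely local
}

read_file notes.txt               # first read: copied from the "server"
read_file notes.txt               # second read: served from the cache
```

Because the cache directory lives on disk rather than in memory, its contents would survive a restart of the "client," which is the persistence property the text describes.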
Data on OpenAFS file servers is organized into volumes, which are conceptually similar to filesystems. Volumes contain discrete portions of the filesystem hierarchy of OpenAFS, can be managed as discrete units, and are mounted in OpenAFS systems just like filesystems are in standard Linux filesystems. The disk space in which volumes are created consists of physical or logical partitions (if you are using logical volume management) that are mounted on the OpenAFS file server.
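Creating a volume and mounting it into the /afs tree uses the standard `vos` and `fs` commands. The following is a sketch; the server name, partition, volume name, and cell path are hypothetical:

```
# Create a volume on a server partition, then graft it into the
# uniform /afs namespace, where every client sees it at the same path:
vos create fs1.example.com /vicepa proj.src
fs mkmount /afs/example.com/proj/src proj.src
```

Once mounted, /afs/example.com/proj/src resolves to the proj.src volume on every OpenAFS client, regardless of which file server actually holds it.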
Organizing portions of the OpenAFS filesystem into volumes provides system administrators with a number of advantages over simply using standard partitions and even logical volumes:
- Volumes are independently managed entities that can be moved from one OpenAFS partition to another, and even from one OpenAFS file server to another. Volumes can easily be moved while they are mounted and actively in use. Being able to move volumes from one partition to another makes it easy to free up space to dynamically increase the size of other volumes. This also enables you to move volumes to provide load balancing or to move heavily used volumes to more powerful file servers.
- Volumes can be replicated to provide multiple, read-only copies of heavily used volumes, which can be distributed to multiple file servers. OpenAFS clients can be instructed to prioritize file servers differently, providing another opportunity for load balancing.
- Volumes support transparent file access because they are independent of any specific file server or partition. As mentioned earlier, OpenAFS clients locate volumes by querying the Volume Location Database rather than by looking for them on a specific file server.
- Volumes provide an efficient unit for storage management because they are independent of the physical size of any given partition. Just as with standard Linux logical volume management, OpenAFS volumes can easily be resized independently of the organization of the physical disk space on which they are located. Creating OpenAFS volumes on partitions that themselves use the logical volume management capabilities of an operating system such as Linux provides the ultimate in flexible storage management.
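In practice, resizing an OpenAFS volume means adjusting its quota, which is independent of the partition layout underneath. A sketch, with a hypothetical path and size:

```
# Raise a volume's quota (in kilobyte units) and confirm the change;
# no repartitioning of the underlying disk is required:
fs setquota /afs/example.com/proj/src -max 2000000
fs listquota /afs/example.com/proj/src
```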
Another impressive benefit of OpenAFS is the capability to create and maintain read-only versions of existing volumes, which are known as replicas or clones of the source volumes. Replicas do the following:
- Increase the availability of the data in the volumes they replicate by making multiple copies of that data available.
- Help load-balance file servers by distributing copies of heavily used volumes across multiple file servers.
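Replication is driven by the `vos` command as well: read-only sites are defined for a volume, and then a snapshot of the read/write volume is pushed out to them. Server names, partitions, and the volume name below are hypothetical:

```
# Define read-only sites on two servers, then release a snapshot
# of the read/write volume to all of its replication sites:
vos addsite fs1.example.com /vicepa proj.src
vos addsite fs2.example.com /vicepb proj.src
vos release proj.src
```

Clients preferentially read from the replicas, so the read/write volume can be updated and re-released without interrupting readers.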
AFS was originally designed to unify the computing environment at Carnegie Mellon University by providing a uniform namespace to all AFS client machines to simplify file sharing and access. At the same time, it's easy to see that different departments within the university would have different needs, different sets of users, and potentially different access policies for those users. OpenAFS uses the Kerberos authentication mechanism to provide secure authentication as well as data security, and also supports hierarchical authentication within administrative OpenAFS entities known as cells. OpenAFS uses access control lists (ACLs) to provide an impressively rich and robust set of privileges on OpenAFS directories. Any OpenAFS user can create an OpenAFS group with specific privileges on one or more directories, assign users to that group, and so on.
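The user-managed groups and directory ACLs described above are administered with the `pts` and `fs` commands. The group name, usernames, and path in this sketch are hypothetical:

```
# Any user can create a group under their own name prefix,
# add members, and grant that group rights on a directory:
pts creategroup jdoe:reviewers
pts adduser asmith jdoe:reviewers
fs setacl /afs/example.com/proj/src jdoe:reviewers rl   # read + lookup
fs listacl /afs/example.com/proj/src
```

The rights string combines per-directory permissions (for example, r for read and l for lookup), giving finer control than standard Unix mode bits.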
OpenAFS is a powerful, dependable, high-performance distributed filesystem that is extremely scalable and works well in huge computing environments and over any type of network, including the Internet itself. OpenAFS is a natural choice for managing development projects that are spread out across multiple networked sites within a company, multiple companies, and even multiple countries. With all of the power and additional features that OpenAFS provides, its administration and configuration can be quite complex, but the administrative and operational gains typically far outweigh the pain involved.