
Directory Replication

As described previously, the DIB can be (and commonly is) distributed to multiple directory servers to improve network responsiveness and provide the redundancy necessary for a robust directory service. These copies of the directory database are called replicas, and the process of creating and updating these copies is referred to as directory replication. Replication is performed on a per-partition basis—that is, each replica is a copy of a single partition.

Replicas provide two substantial benefits:

  • Fault tolerance—Replication increases fault tolerance by providing secondary (backup) copies of the directory information. This provides the mechanism to prevent the directory from becoming inaccessible if a single directory server goes down. With a directory that is replicated to multiple servers, when one directory server becomes inaccessible, other directory servers holding replicas for that partition can still fulfill any requests.

  • Performance—For a directory to provide fast service, a sufficient number of directory servers must be deployed to handle the task load and still remain responsive. Replication reduces response latency in directory lookups and authentication services, and it increases directory availability. Replication also supports distribution of the task load for directory data retrieval to multiple directory servers distributed throughout the network.

Managing Replication

You must consider many things when determining replication approaches for your directory, such as the selection of the data to replicate, periodicity of data synchronization, and server roles in the synchronization process. Replication factors such as these can have implications for the performance and manageability of your network. Keep in mind that different directory service implementations support various ranges of replication capabilities.

You should consider a number of replication factors in conjunction with the specific operating contingencies of the directory service software and network environment. These replication factors include the following:

  • Types of replicas supported

  • The replication strategy

  • The specific dataset replicated and amount of data sent with each update

  • The synchronization latency period (that is, the length of time it takes for changes to reach all directory servers)

  • The data synchronization method employed

  • The bandwidth of the network connections used for replication

One Replica of a Partition Per Server

Although many directories allow a server to hold more than one replica and, therefore, more than one partition, a single server can have only one replica of any specific partition.

When managing a directory service, you may have limited options when it comes to fine-tuning replication traffic because much of the replication methodology is predetermined by the design of the directory service. Even if you cannot change a specific replication factor, however, you should be aware of the implications of the directory's replication design on your network operations, as well as your application of the directory service. For example, the amount of data transmitted in each replication matters more when replicating an e-commerce directory across WAN links than it does when replicating a network directory across local high-speed LAN connections. By having a thorough understanding of a particular vendor's approach, you can discover which factors can be manipulated to optimize replication in your particular environment.

The next section describes the different types of replicas you can use while working with a directory services implementation.

Replica Types

Not all replicas are created equal; replica types and the operations that they support vary widely. At the most simplistic level, replicas can be divided into two types: those that can be written to, and those that cannot. In the X.500 model for directory services, these two types of replicas are referred to as masters and shadows, respectively.

Although the differentiation between writeable and non-writeable is stark, in practice there are variations on each of the fundamental replica types. An examination of the different replica types will help you to better understand what functionality each provides.

What Is The "Right Number" of Replicas?

Each directory deployment is different, with individual replica requirements. The directory processes a substantially higher number of queries than changes or updates to the directory database. This means that, in almost all cases, the network traffic generated by queries will be of a much greater magnitude than replication traffic. You need to keep this ratio in mind as you plan your directory and be sure that you are placing adequate numbers of replicas to service requests promptly.

This doesn't mean that you should put a replica of every partition in your directory on every directory server. However, especially when no WAN links are involved, you should not hesitate to put extra replicas in places where they seem useful. You can always remove them if replication traffic proves to be more of a detriment to network performance than expected.

Replicas are used to provide fault tolerance as well as to improve availability. A minimum number of replicas of each partition needs to be maintained to provide fault tolerance in case of server failure—for example, Novell recommends having at least three replicas of each eDirectory partition.

Writeable Replicas

A writeable replica supports complete (or almost complete) directory functionality. A server holding a writeable replica can accept directory modifications and is responsible for replicating those changes to other servers holding a replica of that partition. Although the specific capabilities of the replicas supported by any given directory service implementation will vary somewhat, writeable replicas come in two basic flavors.

  • Master replicas are fully functional, allowing all directory operations. Everything in the directory—objects, tree design, the schema, and so on—is updateable via a master replica. At least one master replica must exist per partition. Whether there are more writeable replicas than that depends on the directory service implementation.

  • Read/write replicas allow most operations, but they may restrict a few high-level operations such as schema modifications or tree-level changes. Read/write replicas provide additional points of administration for day-to-day directory operations. There are no requirements for read/write replicas—they are totally optional.

Non-Writeable Replicas

A non-writeable replica is a read-only copy of the master replica. Although there is generally no requirement for read-only replicas of any kind, their use can greatly enhance directory performance by providing additional replica servers for load balancing purposes.

This type of replica has limitations, some of which may not be as apparent as the inability to update objects directly. If a replica is not writeable, for example, any operation that requires updating the directory cannot be performed on that server. Therefore, a directory server maintaining a non-writeable replica may perform the lookups needed for locating network resources, but not provide logon authentication because properties of the user object may be updated at logon (such as logon time or workstation address).

A few basic types of non-writeable replicas exist:

  • Read-only replicas contain a complete partition and are generally distributed as needed to support directory lookups. Because, from the user's perspective, a read-only replica is indistinguishable from a writeable one, a server holding a read-only replica commonly has a way of transparently redirecting write requests to an appropriate replica.

  • A catalog is a read-only copy of a subset of directory objects and attributes; generally those properties commonly used in queries. Catalog servers usually hold a partial set of data from every directory partition, so that they can provide high-speed lookups for the entire directory. Catalogs are not considered authoritative—they are commonly a high-speed index used to locate the directory server with the partition that contains the object sought.

  • Cache replicas are less well-defined, implementation-specific, read-only collections of directory information. Typically, information obtained during client lookups is stored for a specified period of time and used to satisfy repeated requests for the same data, increasing availability of directory information.
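The capability differences among these replica types can be sketched in a few lines of Python. This is an illustrative model only; the replica type names follow the text above, but the operation names ("read", "write", "schema_change") are assumptions made for the sketch, not any vendor's API:

```python
from enum import Enum, auto

class ReplicaType(Enum):
    MASTER = auto()
    READ_WRITE = auto()
    READ_ONLY = auto()
    CATALOG = auto()
    CACHE = auto()

# Illustrative capability sets: which replica types accept which operations.
WRITABLE = {ReplicaType.MASTER, ReplicaType.READ_WRITE}
SCHEMA_CAPABLE = {ReplicaType.MASTER}

def can_handle(replica: ReplicaType, operation: str) -> bool:
    """Return True if this replica type can service the operation locally."""
    if operation == "read":
        return True                       # every replica type answers lookups
    if operation == "write":
        return replica in WRITABLE        # only writeable replicas accept changes
    if operation == "schema_change":
        return replica in SCHEMA_CAPABLE  # high-level changes need a master
    raise ValueError(f"unknown operation: {operation}")
```

A server holding only a catalog or read-only replica would answer the first kind of request locally and redirect the others to a writeable replica.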

Figure 3.8 shows the various replica types. The replica types are displayed in a continuum of replica capability, ranging from less to more functionality and completeness of the dataset. As shown, the functionality of a replica grows as its dataset grows.

Figure 3.8 The various types of replicas have different read/write status and contain different sets of directory data.

Replicas interact in different ways depending on the replication strategy used. The following section examines the different strategies used for replication.

Of Servers and Replicas

Although it's discussed that way for simplicity, the relationship between directory servers and replicas is not necessarily one to one. In fact, it is somewhat more complicated than that. For instance:

  • Copies of partition data (called replicas) are almost always held by more than one server for performance and fault-tolerance purposes.

  • In many directory implementations, servers can hold more than one replica. This is particularly useful: if you can store replicas of multiple partitions on a single server, you can distribute enough replicas to service requests without the expense of a separate server for each replica.

  • A directory server may well have different roles in relationship to each partition it manages. A single directory server may hold the master replica of one partition, and a read-only copy of one or more others.

Replication Strategies

The number, capabilities, and status of the master replicas for a directory determine how replication operates. In general, a directory supports one of these three variations:

  • Single master

  • Multimaster

  • Floating single master

To explain the sort of functionality each of these replication approaches provides, the following sections focus on the basic styles of replication.

Single Master

Many directory vendors use a single master model when designing their replication strategy. This is primarily because, from a programmatic perspective, it is by far the easiest method. When a directory has a single master, data can be modified on only one directory server—although copies of the information usually reside on other servers. The directory server holding the master replica is also responsible for updating all other replicas whenever there is a change to the directory.

The directory replication process in a single master model, as shown in Figure 3.9, is relatively straightforward. Directory updates are transmitted unidirectionally from the server holding the master replica to the servers holding read-only and catalog replicas. Because those replicas do not accept changes from users (only from the master replica), they do not need to transmit any directory updates.

Single master replication is the easiest style to implement because no real data integrity issues arise—all updates come from a single supplier, so no possibility of conflict exists. This method has its drawbacks, however, primarily that this scheme requires that the master replica be available for all modifications to directory information. If the master replica becomes unavailable, directory operations will be limited until the master is brought back online or another replica is designated as the new master.
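The unidirectional flow of a single master design can be modeled in a short Python sketch, with the partition held as a simple dictionary (a stand-in for the real datastore, not an actual directory implementation):

```python
class SingleMasterDirectory:
    """Minimal sketch of single master replication: all writes go to the
    master replica, which pushes each change to every read-only replica."""

    def __init__(self):
        self.master = {}        # authoritative copy of the partition
        self.replicas = []      # read-only copies (plain dicts)

    def add_replica(self):
        replica = dict(self.master)   # a new replica starts as a full copy
        self.replicas.append(replica)
        return replica

    def write(self, dn, attrs):
        # All modifications must land on the master replica...
        self.master[dn] = attrs
        # ...which then replicates the change unidirectionally.
        for replica in self.replicas:
            replica[dn] = attrs
```

Because `write` is the only entry point for modifications, the replicas can never disagree with the master or with each other; the cost, as noted above, is that a master outage blocks all updates.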

A directory service may be able to compensate for loss of the server holding the master replica by allowing selection of a new master from among the existing replicas. This selection may be done automatically by providing for election of a new master upon failure of the current one, or it may require the directory administrator to manually promote a replica to serve as the new master.

Figure 3.9 Single master replication operates in one direction, from the master to all subordinate replicas.

Single master replication offers a simple model for managing replication; indeed, the X.500 standards still define only a single master replication model. Individual vendors have overcome the obstacles raised by the single master design, however, and have independently developed a number of (highly similar) multimaster replication models, as discussed next.

Multimaster Replication

Replication of directory information can also be implemented in a multimaster style, where more than one replica can accept changes. Use of multimaster replication ensures that nonavailability of a given replica will not impede the use or administration of the directory.

In multimaster replication, all writeable replicas might be considered equivalent and perform exactly the same functions, or there may be a mix of master and read/write replicas. In this model, most (or all) replicas can perform all directory functionality for that partition. Changes usually can be written to the directory on any available directory server, instead of needing access to the server holding the single master. Any directory server holding a writeable replica is responsible for accepting changes and propagating those changes to the other master replicas as well as any servers containing down-level replicas (such as catalog servers). Figure 3.10 shows a directory using a multimaster replication scheme with three master replicas and a catalog. As you can tell, the paths taken by directory updates have increased.

Figure 3.10 With multimaster replication, directory update traffic takes many different paths.

Of course, multimaster replication is much more complicated than single master replication. Because multiple writeable copies of the same information exist, some form of data synchronization must be employed to reconcile multiple updates to the same object. Methods of directory data synchronization are discussed later in this chapter.

Floating Master

You might come across less-common variations on the single master model. The designation of the master replica may not be static, for example, but may be assigned to different directory servers as needed. This is considered a floating master (FM) or floating single master (FSM). A floating master operates exactly like a single master—just a temporary one. A directory may use a floating single master all the time, or only to provide support for a specific operation. Active Directory, for example, employs a number of floating single masters to manage various aspects of directory operations.

Another use of FSMs is by directories that normally use a multimaster model. When more than one replica is writeable, some operations require temporary designation of a single master to guarantee directory integrity. Operations such as schema modifications and partitioning (which change basic directory parameters) must ensure that the replica being operated on is the only one currently accepting any changes. Accordingly, a floating single master is often designated for the duration of the operation.

Figure 3.11 demonstrates how this works. On the left side of the figure, a multimaster directory with four master replicas is shown. If an operation that requires selection of an FSM occurs, the directory will effectively switch to the operational mode shown on the right side of the figure. After the FSM determines that the operation is complete, the directory will return to the mode on the left with multiple masters.

Figure 3.11 Some directory operations require election of a temporary floating single master.

An operation that requires the election of a single master is called a floating single master operation (FSMO). The FSM functions just like a single master for that period, and it is responsible for replicating changes to all appropriate servers prior to relinquishing its floating master status.
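The FSMO life cycle (elect a temporary master, perform the operation, relinquish the role) can be sketched as follows. The election rule used here, lowest server name, is purely illustrative; real directories use their own implementation-specific election mechanisms:

```python
class MultimasterPartition:
    """Sketch of a floating single master (FSM): a multimaster replica set
    temporarily designates one writeable replica to own a sensitive operation."""

    def __init__(self, servers):
        self.servers = list(servers)  # servers all hold writeable replicas
        self.fsm = None               # no floating master outside an FSMO

    def run_fsmo(self, operation):
        # Illustrative election rule: pick the lowest server name.
        self.fsm = min(self.servers)
        try:
            # While self.fsm is set, only that server accepts changes.
            return f"{operation} performed on {self.fsm}"
        finally:
            self.fsm = None           # relinquish FSM status afterward
```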

The details of how replicas exchange information are defined in the replication agreement established between the servers holding the two replicas, which is examined in the next section.

Initial Population of the Directory

Because the directory database may contain anywhere from tens of thousands to tens of millions of directory objects, the initial entry of objects into the directory is a significant factor that must be planned for.

The process of initially populating the directory is as significant as the volume of objects to be added to the directory datastore. For example, if the directory entries are loaded from the local hard disk of a directory server, the process can be relatively fast; if the same entries must be loaded across slow network connections (such as WAN links), however, the process can be excruciatingly slow.

Many directory service products provide some sort of bulk loading utility to enable the initial loading of directory objects. Most of these bulk loading utilities use the LDAP Data Interchange Format (LDIF) to support the loading of new directory entries. LDIF is a text-based method of storing directory entries in a structured format which facilitates populating a directory via LDAP.
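An LDIF file is simply structured text: each entry begins with a `dn:` line, continues with `attribute: value` lines, and is separated from the next entry by a blank line. The following minimal Python sketch parses this basic form, ignoring LDIF features such as base64-encoded values, change records, and line continuations; the sample entries and DNs are invented for illustration:

```python
SAMPLE_LDIF = """\
dn: cn=Steve,ou=People,dc=example,dc=com
objectClass: inetOrgPerson
cn: Steve
mail: steve@example.com

dn: cn=Ana,ou=People,dc=example,dc=com
objectClass: inetOrgPerson
cn: Ana
"""

def parse_ldif(text):
    """Parse simple LDIF into a list of {attribute: [values]} entries."""
    entries, entry = [], {}
    for line in text.splitlines():
        if not line.strip():            # a blank line ends the current entry
            if entry:
                entries.append(entry)
                entry = {}
            continue
        attr, _, value = line.partition(": ")
        entry.setdefault(attr, []).append(value)   # attributes are multivalued
    if entry:
        entries.append(entry)
    return entries
```

A bulk loader would feed each parsed entry to the directory server as an LDAP add operation.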

For a geographically distributed enterprise, the initial population of the directory datastore presents technical challenges, especially if the directory holds large amounts of data. Attempting to initially populate a directory with many millions of objects across WAN links to multiple locations is problematic at best. It is clearly more effective to distribute the directory contents as compressed LDIF files to the remote locations and have administrative staff load the directory entries locally.

However you approach the initial population of the datastores on directory servers, it is a factor that deserves serious consideration.

Replication Operations

For replicas to exchange information to keep the directory updated, they must first enter into some sort of replication agreement. This agreement specifies the parameters that will govern the replication process, including factors such as:

  • Roles—The roles that each server will take during the replication processes

  • Replication dataset—The directory information that is to be replicated

  • Replication schedule—The periodicity of replication transmissions

The following sections explore these aspects of replication agreements, starting with roles.

Replication Roles

As mentioned during the discussion of replica types, a directory server can take one of two roles during a replication process:

  • Supplier—The directory server sending the update

  • Consumer—The directory server receiving the update

A single replica can be a supplier, a consumer, or both (although it takes only one role at a time). Although a directory server is designated as either a supplier or consumer for a particular replication agreement, servers can have reciprocal agreements.

In a multimaster environment, every directory server holding a writeable replica of a particular partition is likely to have a pair of replication agreements with every other server holding a writeable replica of that partition—one replication agreement for each role (supplier and consumer). The servers may transfer data in both directions during a single replication session, or require a separate process for each update.

The following section explains how the dataset that will be sent during a replica update is determined.

Replication Dataset

One of the things that directory servers must agree on prior to replication is the set of information that will be transferred during replication. This is sometimes described as the unit of replication—that is, the dataset sent with each directory update (also referred to as the replication dataset).

The primary delineation of the replication dataset is the contents of a partition. Many agreements between directory servers will specify this as the unit of replication because most replicas need to be fully updated with all the changes to the objects in the partition. Not all replicas need the entire partition contents, however.

Consequently, although the initial scope of replicated data is tied to the partition boundary, some directory services provide methods of fine-tuning the dataset to be sent during replication. Filters can be established to selectively transmit updated directory information, usually based on object or attribute type. This enables an additional level of control over replication—you can exclude directory information from being replicated, even if it has changed.

Filtering replication information can be useful in several ways:

  • Catalogs of directory information can be maintained for high-speed lookups.

  • Replication traffic is reduced because directory information is not unnecessarily updated.

  • Filtering the information available to replicas in less-secure environments can strengthen security (such as replicating only username and e-mail address, and not salary or other sensitive data).

The use of replication filters means that the amount of data a directory server sends can vary from one replica of a partition to another. This variance is part of what is specified in the replication agreements. For example, if an update is being sent to another master replica, every piece of updated information is likely to be sent. If the same server were sending the "same" update to a catalog server, however, the dataset sent would be greatly reduced via the use of filters.
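A replication filter of this kind amounts to projecting an outgoing update onto an allowed attribute set before it is sent. A minimal Python sketch follows, with an invented catalog attribute list standing in for a real directory's defaults:

```python
# Hypothetical attribute set replicated to catalog servers.
CATALOG_ATTRS = {"cn", "mail", "telephoneNumber"}

def filter_update(update, allowed=None):
    """Apply a replication filter to an outgoing update.

    `update` maps attribute names to new values. `allowed=None` means an
    unfiltered (full) replica; otherwise only the listed attributes are sent.
    """
    if allowed is None:
        return dict(update)
    return {attr: val for attr, val in update.items() if attr in allowed}
```

Sending the filtered dictionary to a catalog server, and the unfiltered one to another master replica, captures the variance in dataset size described above.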

Modifying Filters

Many directories allow modification of default filters, like the one used for replication to catalog servers. However, filtering is a capability that should be used carefully. Be sure that you thoroughly understand the ways that any specific directory data is used before you exclude it from being updated during replication.

You may also want to add properties to the set that is replicated to the catalog. For example, an application might need frequent access to objects not normally included in catalog replicas. Without catalog entries, every lookup of one of these objects would require referrals to another directory server just for name resolution (and, thus, increase server and bandwidth consumption). By adding these objects to the catalog, queries may be fulfilled more quickly and without unnecessary referrals to other directory servers.

Catalogs, which are supported by many directories, are probably the primary use of this filtering capability. By providing an index of objects, along with a few commonly searched-for attributes, catalog servers can service the name resolution needs of many lookup requests. However, this limited focus (as an index to directory objects) means that catalog replicas typically need to contain only a small subset of the possible attributes of any object type.

Of course, the catalog server could just accept a normal replica update and ignore, or discard, the extra information, yet this would occupy network bandwidth unnecessarily. By filtering data prior to replication, however, network traffic and directory update time are reduced, and the target datastore remains as small as possible, speeding responses to directory searches.

After the replication dataset has been defined, some kind of replication schedule must be arranged between the directory servers. The following sections examine the scheduling of replication.

Scheduling Replication

Most directory updates are scheduled for replication in relationship to when the change took place—five minutes after the modification occurs, for example. The vendor predetermines the default schedule for most replication processes; however, you may be able to customize replication scheduling somewhat.

Replication Granularity

Replication granularity is a continuum—a directory server can send anything from a complete copy of the DIB to only a single attribute-value pair (that is, an attribute and the data paired with it, such as Username=Steve) when transmitting updates to another directory server. The amount of data sent with each directory update can significantly impact the performance of both the directory and the network.

Directory services that support differential scheduling priorities for discrete types of directory changes can provide greater flexibility in directory management, improve security, and help minimize the traffic caused by directory updates. A specific directory service may schedule replication based on the following:

  • The property being updated—Different properties can be replicated according to different schedules, based on the importance of the information related to that property. A directory may replicate a password change immediately, for example, to ensure integrity of critical security information. In the same directory, and for the same user, a change to a home phone number may not be propagated to other replicas quite as quickly as the password change. This type of prioritization allows rapid updates for the truly important information without incurring too much unnecessary traffic in the DIB update process. For example, attributes in eDirectory can be designated for either fast synchronization (10 seconds) or slow synchronization (5 minutes) to control the delay in synchronization.

  • Destination of the update—Replication across WAN links may be scheduled on a slower schedule than updates to other directory servers on the local network. This may be done to allow more time for aggregation of directory changes, minimizing the number of individual updates. Replication across WAN links may also leverage store-and-forward protocols usually used for e-mail, such as the Simple Mail Transfer Protocol (SMTP), to handle replication traffic between physical sites.
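Per-attribute scheduling can be modeled as a priority queue keyed by each attribute's replication delay. The delay values below mirror the eDirectory fast/slow classes mentioned above, but the attribute-to-class mapping is an assumption made for this sketch:

```python
import heapq

# Hypothetical per-attribute replication delays, in seconds, modeled on
# eDirectory's fast (10 s) and slow (5 min) synchronization classes.
SYNC_DELAY = {"password": 10, "default": 300}

def schedule_updates(changed_attrs, now=0):
    """Return (due_time, attribute) pairs ordered by replication priority."""
    queue = []
    for attr in changed_attrs:
        delay = SYNC_DELAY.get(attr, SYNC_DELAY["default"])
        heapq.heappush(queue, (now + delay, attr))
    return [heapq.heappop(queue) for _ in range(len(queue))]
```

A password change is thus sent out well before a home phone number change made at the same moment.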

After scheduling of the replication is established, another aspect to consider is the variation in the set of information transmitted during replication. The actual set of data sent will depend on whether the directory service uses complete or incremental replication, as discussed next.

Complete Versus Incremental Replication

Another factor in replication is whether the directory server sends the entire contents of the unit of replication even if it has not been changed. Two basic methods are used:

  • Complete replication sends a copy of the entire datastore, sometimes including schema information and other overhead as well as all objects, to each server with every directory update. To cut down on network traffic, updates may be scheduled infrequently, perhaps only once a day, preferably at a time when network traffic is light. Because of this, the data contained in the various replicas will generally be somewhat more "out of sync" than in a directory using more frequent replication.

  • Incremental replication sends only a subset of the DIB, containing the data that has been changed. The update information sent may be only the data that has actually changed, or a superset of the changed objects and attributes. Obviously, incremental replication is a much more efficient method of synchronization than complete replication.

As you can imagine, complete replication is a bandwidth-intensive method of transmitting directory changes. If you just add a user, for example, sending the entire DIB to every directory server will consume unnecessary network bandwidth and take longer to synchronize the DIB contents across the enterprise.

Most current directory products use incremental replication for normal operational updates and reserve the use of complete replication for special cases such as initial population of a new replica or repair of a damaged replica.
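The difference between the two methods comes down to what the supplier sends. With complete replication it sends its entire copy; with incremental replication it sends only a delta, which might be computed as in this simplified Python sketch (real directories track changes with timestamps or sequence numbers rather than comparing full copies):

```python
def incremental_update(supplier, consumer):
    """Compute the delta a supplier sends under incremental replication:
    only the entries that are new or changed relative to the consumer."""
    return {dn: attrs for dn, attrs in supplier.items()
            if consumer.get(dn) != attrs}

def complete_update(supplier):
    """Complete replication simply sends the entire datastore."""
    return dict(supplier)

def apply_update(consumer, delta):
    consumer.update(delta)
```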

Replication Processes

As mentioned earlier, directory servers perform replication using supplier/consumer roles. It's largely the same process for most directory services with a few variations in specific behavior. In general, the replication process follows these steps:

  1. The directory servers connect and authenticate. Either the supplier or the consumer may trigger this.

  2. The supplier assesses what directory information needs to be updated (by an implementation-specific method).

  3. The supplier then transmits data to the consumer, which updates its replica.

  4. The information used in Step 2 is updated with the values that will be used in the next replication process.

The information used in Step 2 is used to ensure data consistency, and is based on the data synchronization method used. Data consistency is discussed in the following section.
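The four steps above can be sketched using an update sequence number (USN) high-water mark as the implementation-specific bookkeeping of Steps 2 and 4. This is one common approach, assumed here for illustration:

```python
def replicate(supplier_log, consumer_state):
    """Sketch of one replication session between a supplier and a consumer.

    supplier_log: list of (usn, dn, attrs) changes in commit order.
    consumer_state: {"data": {...}, "usn": highest USN already applied}.
    Returns the number of changes transmitted.
    """
    # Step 2: assess which changes the consumer has not yet seen.
    pending = [c for c in supplier_log if c[0] > consumer_state["usn"]]
    # Step 3: transmit the data; the consumer updates its replica.
    for usn, dn, attrs in pending:
        consumer_state["data"][dn] = attrs
        # Step 4: advance the high-water mark for the next session.
        consumer_state["usn"] = usn
    return len(pending)
```

Running a second session immediately after the first transmits nothing, because the stored high-water mark shows the consumer is already current.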

Data Consistency

When more than one copy of a piece of information exists (as happens with a replicated partition), it is critical to ensure that the information contained in those multiple copies is the same and that they contain the correct dataset. Therefore, when multiple replicas of a directory partition exist, changes made in one replica must also be made in all other replicas.

When all replicas of a partition contain the same data, the replicas are considered synchronized. Synchronization of the replicas ensures data consistency throughout the directory.

When a directory uses a single master design, ensuring consistency when multiple users are making changes to one directory object is comparatively easy. Simply put, the server holding the master copy of the information accepts all the writes and is the sole supplier for all the update operations.

When using multiple writeable copies of a directory partition, however, data consistency among the different replicas on the network becomes a key issue. Clearly a method of providing updated information to all directory servers has to be addressed, while also guaranteeing the integrity of the data contained in the DIB.

Approaches to Data Consistency

Data consistency can be thought of as the degree of correlation between the directory data contained within the replicas. Convergence is the state achieved when the contents of the replicas are identical. Approaches to replication can be characterized by the required degree of consistency between directory servers and how quickly changed data is replicated. For a fuller understanding, consider replication from the "consistency-of-the-data" perspective, ranging from tightly consistent to loosely consistent.

Approaches that require all replicas to always have the same data are defined as tightly consistent—that is, all replicas of the partition are quickly updated when changes are made to any master replica. Tightly consistent replication may work in a transactional fashion, requiring that all replicas of a partition be updated before a directory modification is successfully completed. This guarantees that all the data in every replica of a partition is always exactly the same. If a single replica becomes unavailable, however, no directory updates can take place until it is once again available, or removed from the replication group.

With loosely consistent replication, the data on all directory servers does not have to be exactly the same at any given time. Changes to the DIB are replicated more slowly and network servers gradually "catch up" to the changes made on other directory servers. Loosely consistent replication does not immediately replicate changes to other servers. Instead of sending each individual directory update, numerous small changes to the directory can be aggregated and replicated as a group. Directory services that use a loosely consistent process for most DIB updates may nonetheless use an immediate update method for certain types of information (such as data used in authentication and access control).

Most current directory service implementations, including Active Directory and NDS eDirectory, are considered to be loosely consistent.

Synchronization Methods

In a distributed directory that allows writes to multiple replicas of a partition, the potential for conflicting updates of the same directory information always exists. The directory server must have some way of resolving these update conflicts. Where multiple changes to the same directory objects occur, the directory server must then evaluate the submitted changes and select the correct change to commit to the DIB.

Failed Replication Processes

Some replication processes are designed in such a way as to require the completion of replication to every directory server holding a replica of that partition before replication is considered a success. If a particular replica is not available, it could stall the replication process (for directory servers sharing that partition), possibly requiring administrative intervention to remedy the situation.

A directory synchronization method is commonly based on one (or more) of these three approaches:

  • Time of change

  • Sequence number of change

  • Changelog file

Defining Propagation and Synchronization

Although propagation and synchronization are often used as synonymous terms when describing directory operations, they actually refer to two distinctly different processes used to update directory replicas:

  • Propagation is an unconfirmed process in which a server unilaterally sends directory updates to other servers containing partition replicas. The receiving servers may or may not apply the data to their copy of the DIB; in either case, propagation provides no means of knowing whether the directory update completed successfully.

  • In contrast, synchronization is a transaction-like process in which a master replica updates a subordinate replica; the receiving replica must confirm a successful update before the replication is considered complete.
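The distinction between the two processes can be sketched directly. The `Peer` class and both functions are hypothetical names for illustration, assuming a peer that acknowledges updates it applies.

```python
class Peer:
    """A server holding a partition replica."""
    def __init__(self, available=True):
        self.available = available
        self.entries = {}

    def receive(self, update):
        if not self.available:
            return False                 # update silently lost under propagation
        self.entries.update(update)
        return True                      # acknowledgment used by synchronization

def propagate(update, peers):
    """Propagation: unilateral, unconfirmed send; results are discarded."""
    for p in peers:
        p.receive(update)                # sender never inspects the outcome

def synchronize(update, peer):
    """Synchronization: the update counts only when the peer confirms it."""
    return peer.receive(update) is True

up, down = Peer(), Peer(available=False)
propagate({"cn=jan": "x"}, [up, down])       # sender learns nothing either way
assert synchronize({"cn=jan": "x"}, up) is True
assert synchronize({"cn=jan": "x"}, down) is False   # failure is detected
```

The same `receive` call backs both styles; the difference is purely whether the sender checks the acknowledgment.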

Interdirectory Data Synchronization

Data synchronization factors must be given serious consideration in directory implementations that support operations on multiple directory services at the same time. For example, the question of synchronization between a directory service that uses sequence numbers for synchronization and a directory service that uses time stamps can become complex and should be considered carefully before deploying an enterprisewide solution. Meta-directory technologies are commonly employed for synchronizing disparate directory service implementations.

Time Stamps

One common method of resolving multiple updates to the same object is comparison of the time of the change to ensure that the latest change is the one written to the DIB. When a change is made to the directory, the information is marked with the time of the modification. When directory replication occurs, the time stamp of the change is checked against the time stamps of other changes to the same object or property to ensure that the latest change is written to the DIB.
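This "last writer wins" comparison can be sketched in a few lines. The `Change` class and `resolve` function are hypothetical, and the example assumes the clocks producing the time stamps are already synchronized (the requirement discussed next).

```python
from dataclasses import dataclass

@dataclass
class Change:
    value: str
    timestamp: float   # seconds; assumes synchronized clocks across servers

def resolve(existing, incoming):
    """Last-writer-wins: keep whichever change bears the later time stamp."""
    return incoming if incoming.timestamp > existing.timestamp else existing

a = Change("title=Engineer", timestamp=1000.0)
b = Change("title=Manager", timestamp=1005.0)
assert resolve(a, b).value == "title=Manager"   # the later change wins
assert resolve(b, a).value == "title=Manager"   # arrival order is irrelevant
```

Because the outcome depends only on the time stamps and not on the order in which changes arrive, every replica resolves the same conflict to the same value.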

In a directory service using time-based synchronization, all directory servers must share the same time, requiring that time services be provided. Time servers may provide time to directory servers, client systems, or both. The time may be based on either an arbitrary network time or external clocks representing the "actual" time.

For example, Novell's NDS eDirectory uses a time-based synchronization method, which is discussed in more depth in Chapter 9, "eDirectory."

Sequence Numbers

A directory implementation may use another kind of event stamp, such as an Update Sequence Number (USN), to indicate the order of the directory changes. Each directory server maintains internal counters and generates the next number in sequence when a change occurs. During the replication process, the consumer replica is provided with all changes bearing a USN later than the last USN it received from that supplier.
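A minimal sketch of this high-water-mark exchange, with a hypothetical `Supplier` class (not a real directory API):

```python
import itertools

class Supplier:
    """Directory server that stamps every change with a rising USN."""
    def __init__(self):
        self._usn = itertools.count(1)
        self.log = []                        # list of (usn, key, value)

    def write(self, key, value):
        self.log.append((next(self._usn), key, value))

    def changes_since(self, last_usn):
        """All changes with a USN later than the consumer's high-water mark."""
        return [c for c in self.log if c[0] > last_usn]

s = Supplier()
s.write("cn=jan", "a")
s.write("cn=pat", "b")
s.write("cn=jan", "c")
# A consumer that has everything up to USN 1 pulls only USNs 2 and 3.
pulled = s.changes_since(1)
assert [usn for (usn, _, _) in pulled] == [2, 3]
```

The consumer needs to remember one number per supplier (its last USN received), which is the per-server bookkeeping that makes this approach more complex than a single shared timeline of time stamps.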

Using sequence numbers to mark updates and synchronize directory data can be substantially more complex than using time stamps because each change must be tracked, and reconciled, on a per-server basis. Microsoft's Active Directory uses a USN-based synchronization method, which is discussed in more detail in Chapter 10.

Changelog File

Another method of updating directory information uses a changelog file. A changelog is essentially a file containing a log (listing) of all the changes that have been made to the directory. When a replication process is initiated, the supplier simply "replays" the changelog to the consumer, effectively processing every directory update exactly as it was received by the supplier server.

Although this is a straightforward approach, and doesn't require the sort of extensive behind-the-scenes work of the preceding two methods, it has the disadvantage of sending a raw list of updates. This means that if a single object has been updated multiple times, the same property may be written several times during a single replication cycle.
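The redundancy of raw replay is easy to demonstrate. The `replay` function below is a hypothetical sketch; a changelog is modeled as a simple list of (property, value) pairs.

```python
def replay(changelog, dib):
    """Apply every logged change in order, exactly as the supplier saw it."""
    writes = 0
    for key, value in changelog:
        dib[key] = value
        writes += 1
    return writes

changelog = [
    ("cn=jan/title", "Engineer"),
    ("cn=jan/title", "Sr Engineer"),   # same property touched again
    ("cn=jan/title", "Manager"),
]
dib = {}
# Three writes for one final value: the raw-list disadvantage noted above.
assert replay(changelog, dib) == 3
assert dib["cn=jan/title"] == "Manager"
```

Contrast this with the aggregation possible under loose consistency, where the three changes to the same property would collapse into a single replicated update.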

DIB Management—Flexibility Matters!

<rant>

If serious thought has been given to the overall robustness of a directory service, flexible partitioning and replication will be seen as essential. Directories intended to meet the requirements of modern businesses must allow customization of partitioning and replication schemes. By doing so, vendors will provide the means to optimize DIB management for a particular organization.

Some directory services provide a default replication and partitioning configuration that works surprisingly well for many businesses. This is a great start; we can always hope for intelligent defaults. However, it is seldom that simple—networks change, business reorganizations happen, and the directory service needs to change right along with everything else. This may mean rearranging partitioning because of changing security concerns, changing where replica servers are placed, or rearranging where specific replicas reside.

This does not mean that the actual management of the DIB must be complicated; in fact, it needs to be easy to configure the desired replication and partitioning scheme because errors can be costly (in terms of recovery time if nothing else). What is most critical is that a directory service design enables you to use a customized partitioning and replication scheme and that you be able to do so without an inordinate amount of hassle.

</rant>
