Application-Specific Design Guidelines
In this section we introduce some guidelines for installing applications on your cluster. For several applications, such as GroupWise and iFolder, we have added detailed guidelines in Chapter 8. But whichever application you have to install, these guidelines will make it more robust and easier to manage.
Install Applications on the Shared Disk
In a clustered environment, there is a central storage facility to which all the cluster nodes have access. A server can work with the data only while the clustered resource is assigned to it. An example is the GroupWise domain and post office directories: they are stored on the external shared storage, and the MTA and POA on the cluster node need access to them.
The shared data is obviously not on a local disk of one of the servers. Why, then, should the application itself be stored on the local disk of every cluster node? Yet this is what you will find in much of the clustering documentation: install the application on every cluster node to make sure that every server can load it and work with the shared data. This approach creates extra work, because you will have to install the application once for every cluster node, and for every update you have to update all the servers in the cluster.
An easier and more efficient approach is to install the application on the external shared storage. That way, only one installation is required, and if the application needs updating, a single update covers the entire cluster. Another advantage is that if a server's hardware crashes, you only have to replace it with a new NetWare or Linux server and add that to the cluster; everything else comes from the shared disk.
As an example, we'll discuss what this looks like for a GroupWise environment. In your cluster, there will be a storage pool with a volume where the GroupWise domain and post office are located. Let us call that volume GROUPWISE. On the GROUPWISE volume, you create a GRPWISE folder where all of GroupWise's files are stored. You will then create a SYSTEM directory in the root of the GROUPWISE volume, where you install the necessary agents and configuration files. This system directory will also contain the NetWare Control File (NCF) script files to start and stop GroupWise. The directory structure would look like the following list (we left out a few default directories to keep it simple):
GROUPWISE:\SYSTEM
GROUPWISE:\GRPWISE\DOM
GROUPWISE:\GRPWISE\PO
One important item to add to your NCF script files is a search mapping to the new SYSTEM directory on the GROUPWISE volume. Otherwise, the application will not find the modules it needs to auto-load.
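On NetWare, that search mapping is added with a SEARCH ADD command in the start NCF file. On Linux, the equivalent concern is the shell's search path. The following is a minimal sketch of the Linux side; the function name and the mount point /media/nss/GROUPWISE are assumptions for the example, not part of the product:

```shell
#!/bin/sh
# add_shared_search_path: Linux counterpart of the NetWare search mapping.
# Puts the system directory of the shared volume at the front of PATH so
# that the agents, and any modules they auto-load, are found on the shared
# volume rather than on a local disk. Default path is an assumption.
add_shared_search_path() {
    gw_system="${1:-/media/nss/GROUPWISE/system}"
    case ":$PATH:" in
        *":$gw_system:"*) ;;                  # already on the path; nothing to do
        *) PATH="$gw_system:$PATH"; export PATH ;;
    esac
}
```

A start script on the shared volume can call this function once before launching the agents; calling it twice is harmless because the directory is only added when it is missing.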
Execute NCF and Script Files from the Load and Unload Scripts
The configuration of what happens when a failover is performed or when a resource is migrated is defined in the load and unload scripts of the cluster resource. In these scripts, you enter the commands to start and stop the correct modules, add IP addresses, and so on.
Our tip here is not to add all those commands to the load and unload scripts, but to put everything into NCF files or Linux script files and call those files from the scripts. This has the following advantages:
- You can set up delegated administration for your application and the cluster. For example, if you start GroupWise from an NCF file or a Linux script, the GroupWise administrator does not need rights on the cluster object and can stop and start GroupWise with his own scripts, without having to work with the cluster management software.
- You do not have to take the cluster resource offline and bring it back online when changing the load/unload commands, which is required when those commands are in the load and unload scripts themselves.
- You avoid running out of room in the text box available for script commands: the maximum number of characters for the load and unload scripts is 600 total.
The best place to store these load and unload script files is on the shared storage. That way you do not have to manage files on all individual cluster nodes. You can manage one centrally stored file, making it easier to update the file when required.
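On Linux, this pattern reduces the application-specific part of the load script to a single call to one centrally stored file. A minimal sketch, in which the function name and the default path are assumptions:

```shell
#!/bin/sh
# call_shared_script: the cluster load script calls this one centrally
# stored file on the shared volume; every application-specific command
# lives in that file, not in the load script itself.
# The default path is an assumption for an NSS volume mounted on Linux.
call_shared_script() {
    script="${1:-/media/nss/GROUPWISE/system/grpwise-start.sh}"
    if [ -x "$script" ]; then
        "$script"
    else
        echo "start script not found on shared volume: $script" >&2
        return 1
    fi
}
```

Because the script lives on the shared volume, the check for its presence also catches the case where the resource's volume failed to mount before the application was started.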
The default commands that are added to your cluster volume resource are best left where they are. Do not move them into the NCF file or Linux script: they are specific to the cluster, not to the application, and the cluster will manage them for you. For example, when you change the IP address of a cluster resource in ConsoleOne or iManager, these management applications also automatically modify the Add Secondary IP Address statement in the load and unload scripts.
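To make the split concrete, a Linux load script built along these lines might look like the sketch below. The command names follow the conventions of Novell Cluster Services load scripts on Linux, and the pool name, volume ID, IP address, and script path are invented for the example; treat the fragment as illustrative, not as a ready-to-use script.

```
#!/bin/bash
. /opt/novell/ncs/lib/ncsfuncs                     # cluster-managed default
exit_on_error nss /poolact=GROUPWISE               # cluster-managed default
exit_on_error ncpcon mount GROUPWISE=254           # cluster-managed default
exit_on_error add_secondary_ipaddress 10.0.0.50    # maintained by iManager
# The only application-specific line: call the centrally stored script.
exit_on_error /media/nss/GROUPWISE/system/grpwise-start.sh
exit 0
```

Everything above the comment stays under the cluster's control; only the final call is yours, and the file it points to can be changed without touching the resource.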