Tips for Data Service Setup
Before you develop your scripts, you need to determine which configuration changes are required and how they are performed. You need to know the nodes on which specific changes must be made and the sequence in which they must be performed. Setting up data services with Sun Cluster 3.0 software requires both local and global changes to the cluster environment. Local changes are performed on each cluster node and may have to be completed on every node before some global changes can be made. Global changes are made to the cluster configuration database, are performed only once, and can be executed from any active cluster node.
Local, or per-node, changes are usually performed through the Cluster Console when you set up data services interactively. However, when you automate the process with scripts, the Cluster Console is not an option. To design a script that performs the same work you would perform during an interactive setup, you need to determine what those local changes are. The following are examples of common local changes.
Creating NAFO groups.
At least one NAFO group must exist on each node before a logical hostname resource can be created. This ensures that the data service assigned to the logical hostname does not have to fail over to another cluster node because of a single network adapter failure. Each node is configured separately because NAFO groups can be built from different sets of adapters on different cluster nodes.
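As a sketch, a NAFO group can be created on each node with the pnmset command; the group name and adapter names below are examples only and must be replaced with the adapters actually present on that node:

```shell
# Create NAFO group nafo0 from two local adapters (qfe0 and qfe1 are
# example adapter names; substitute the adapters on this node).
pnmset -c nafo0 -o create qfe0 qfe1

# Verify NAFO group status on this node.
pnmstat -l
```

Because the adapter sets can differ per node, a setup script typically reads the adapter list for each node from a configuration file rather than hard-coding it.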
Installing the data service package.
Data service agents are not installed automatically when the cluster software is installed. A set of agents is provided on the Sun Cluster 3.0 software Data Services CD-ROM, and some are supplied with application software such as the iPlanet suite of products. One reason agents are not installed automatically is that it is best to load the current version when you are ready to begin using the data service, rather than leave a potentially outdated agent installed but unused.
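A script can add the agent package on each node with pkgadd; the CD-ROM path and the package name (SUNWscnfs, the HA for NFS agent) are examples and vary with the agent and media layout:

```shell
# Add a data service agent package on this node from the mounted
# Data Services CD-ROM. Path and package name are examples; adjust
# them for the agent you are installing.
cd /cdrom/cdrom0/components
pkgadd -d . SUNWscnfs
```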
Modifying the filesystem table.
Global filesystems are mounted using information contained in /etc/vfstab, just like ordinary filesystem mounts. This file must be modified on each cluster node so that any node can perform the mount during system boot.
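For illustration, a global filesystem entry differs from a local UFS mount mainly in its mount options; the device names and mount point below are placeholders, and the same line must appear in /etc/vfstab on every node:

```
#device to mount         device to fsck            mount point  FS   fsck  mount    mount
#                                                               type pass  at boot  options
/dev/md/nfsset/dsk/d100  /dev/md/nfsset/rdsk/d100  /global/nfs  ufs  2     yes      global,logging
```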
Creating global filesystem mount points.
The directories where global filesystems are mounted are not created automatically. A directory with the same pathname must be created on each cluster node.
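In a script this is a single command, run on every node; /global/nfs is an example pathname chosen to match an example vfstab entry:

```shell
# Create the global filesystem mount point. Run on every cluster node
# so the identical pathname exists cluster-wide (/global/nfs is an example).
mkdir -p /global/nfs
```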
Adding the logical host to your name service.
Before a logical hostname resource can be created, the hostname associated with it must be resolvable through either a name service or /etc/hosts. If you choose not to use a name service, which eliminates an external point of failure, an /etc/hosts entry must be created on each cluster node.
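The entry is an ordinary hosts-file line; the address and hostname below are placeholders and must be identical on every node:

```
# Example /etc/hosts entry for the logical hostname (values are
# placeholders; add the same line on every cluster node).
192.168.1.50    nfs-lhost
```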
Global changes are made to the cluster configuration database, which is automatically propagated to all cluster nodes. Therefore, these modifications have to be performed from only one cluster node. Examples of common global changes are:
Creating a mirrored volume on shared storage.
Either Solstice DiskSuite (SDS) or Veritas Volume Manager (VxVM) software needs to be installed on both nodes prior to performing this step.
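As a sketch using SDS, a mirror is built from two submirrors on separate shared disks; the diskset name and DID device names are examples and depend on your configuration:

```shell
# Create two one-way concatenations as submirrors, then mirror them.
# "nfsset" and the DID device names are examples; run from the node
# that currently owns the diskset.
metainit -s nfsset d101 1 1 /dev/did/rdsk/d4s0
metainit -s nfsset d102 1 1 /dev/did/rdsk/d7s0
metainit -s nfsset d100 -m d101
metattach -s nfsset d100 d102
```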
Creating a UFS filesystem on the shared volume.
The newfs or mkfs command needs to be run to create the filesystem.
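For example, assuming the SDS metadevice named above, the filesystem is created once from the node that owns the diskset; the device path is a placeholder:

```shell
# Create a UFS filesystem on the shared mirrored volume (run once;
# the metadevice path is an example).
newfs /dev/md/nfsset/rdsk/d100
```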
Mounting the cluster filesystem.
Once mounted on one cluster node, it automatically appears mounted on all nodes.
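With the vfstab entry in place on every node, the mount itself is a single command from any one node; /global/nfs is the example mount point used above:

```shell
# Mount the global filesystem once; the cluster framework makes it
# appear mounted at the same pathname on all nodes.
mount /global/nfs
```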
Creating a resource group for the data service and activating it.
This is a multi-step process and the final step in the setup.
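The steps can be sketched with the Sun Cluster 3.0 scrgadm and scswitch commands; the resource type (SUNW.nfs, an HA-NFS example), group name, node names, and hostname are all placeholders:

```shell
# Register the resource type (SUNW.nfs is an example agent).
scrgadm -a -t SUNW.nfs

# Create the resource group on the candidate nodes; names and the
# Pathprefix property value are examples.
scrgadm -a -g nfs-rg -h node1,node2 -y Pathprefix=/global/nfs

# Add the logical hostname resource (hostname is an example and must
# already be resolvable on every node).
scrgadm -a -L -g nfs-rg -l nfs-lhost

# Add the data service resource itself.
scrgadm -a -j nfs-res -g nfs-rg -t SUNW.nfs

# Bring the resource group online and enable monitoring.
scswitch -Z -g nfs-rg
```

Because the configuration database is global, these commands are run once, from any active cluster node.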