Managing Your Oracle Solaris Cluster Environment
- Installing the Oracle Solaris OS on a Cluster Node
- Securing Your Solaris Operating System
- Solaris Cluster Software Installation
- Time Synchronization
- Cluster Management
- Cluster Monitoring
- Service-Level Management and Telemetry
- Patching and Upgrading Your Cluster
- Backing Up Your Cluster
- Creating New Resource Types
- Tuning and Troubleshooting
Installing the Oracle Solaris OS on a Cluster Node
You can install the Oracle Solaris OS in these ways:
- Interactively using the installer program
- With Solaris JumpStart to automate the process
- With public domain or third-party automation tools such as the JumpStart Enterprise Toolkit (JET) [JetWiki]
If you choose an automated option, then you can perform the installation using collections of Solaris packages or a Solaris Flash archive file that is created using the flarcreate command from a previous template installation (see the flarcreate(1M) man page) [FlarInstall].
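As a sketch of the Flash archive approach, `flarcreate` can capture an archive of a previously built template node. The archive name and the NFS output path below are examples only, not required values:

```shell
# Capture a Flash archive of the running template installation.
# -n gives the archive an internal name; -c compresses it.
# The output path is a hypothetical install-server location.
flarcreate -n sc-node-template -c \
    /net/installserver/export/flar/sc-node-template.flar
```

The resulting .flar file can then be referenced from a JumpStart profile so that every new cluster node is built from the same image.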
If your root disk is not already protected by some form of hardware RAID, you should mirror it to protect your node against disk failure. You can protect your root disk using the bundled Solaris Volume Manager software or with the Oracle Solaris ZFS file system. ZFS is available only when you use the Solaris 10 OS. If you have a software license for Veritas Volume Manager, you can also use that to protect your root disk.
Solaris Live Upgrade eases the process of upgrading your systems. Solaris Live Upgrade is available in all Solaris releases beginning with the Solaris 8 OS. However, the combination of the Solaris 10 OS, ZFS, and Solaris Live Upgrade provides the simplest and most flexible upgrade option for your systems.
When choosing hostnames for your cluster nodes, you must ensure that they comply with RFC 1123 [RFC1123]. If you intend to install Solaris Cluster Geographic Edition, you must not use an "_" (underscore) in the hostnames. Other applications might place additional constraints on the hostnames. Changing a hostname after a cluster has been installed is a complex process and should be avoided, if possible.
Although these sections on installing the Solaris Cluster software cover many important points, they do not provide all the steps you need to follow. Therefore, you must consult the latest versions of the Oracle Solaris OS [S10InstallGuide] and the Solaris Cluster software [SCInstallGuide] installation guides.
Root Disk Partition Requirement for the Solaris Cluster Software
You can use either UFS or ZFS as the root (/) file system for a Solaris Cluster node. The way you partition your root disks depends on which file system you choose and whether you also use Solaris Volume Manager.
After installation of the Solaris Cluster software, each node has a 512-megabyte /global/.devices/node@X file system that is mounted globally (the X represents the node number). To achieve this configuration, you can allow the scinstall program to create a lofi-based file system for you that works for both UFS and ZFS root (/) file systems. Alternatively, if you use UFS for the root (/) file system, you can create and mount a 512-megabyte file system on /globaldevices and allow the scinstall program to reuse it for the /global/.devices/node@X file system.
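After installation you can confirm the per-node global-devices mount with `df`; the node number shown here is an example (substitute your own node's number for the `@1`):

```shell
# Each node mounts its own 512-megabyte global-devices file system.
# Node 1 is shown as an example; the X in node@X is the node number.
df -h /global/.devices/node@1
```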
If you intend to use Solaris Volume Manager, then you must create the main state replica databases. These databases require their own disk partitions, each of which should be 32 megabytes in size. This partition is usually placed on slice 7 of the root disk. Doing so can pose a problem if you are using ZFS for the root disk, because the standard Solaris installation uses the entire disk for the root zpool unless you pre-partition the disk before installation begins. You can achieve a root disk layout where slice 7 is 32 megabytes in size, and still use a ZFS root (/) file system, if you install your system using the JumpStart Enterprise Toolkit (JET). However, you must not use ZFS volumes (zvols) to store the Solaris Volume Manager state replica databases, because this configuration is not supported.
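Before installation, you can check whether the root disk already has a suitably sized slice 7 by printing its VTOC. The disk name below is an example; substitute your actual root disk:

```shell
# Print the partition table (VTOC) of the root disk.
# Slice 7 should be about 32 Mbytes (roughly 65536 512-byte sectors)
# if it is to hold the Solaris Volume Manager state replicas.
# c1t0d0 is an example device name.
prtvtoc /dev/rdsk/c1t0d0s2
```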
Because Solaris Volume Manager relies on a state replica majority (see the section "Solaris Volume Manager's State Replica Majority" in Chapter 1, "Oracle Solaris Cluster: Overview") to maintain data integrity if a disk failure occurs, you should assign slices from three separate, nonshared disks. If you do not have three separate disks available, you can place all of the replicas on a single disk, as shown in the following example.
Example 4.1. Creating the Solaris Volume Manager Root State Replica Databases
Use the metadb command to create the Solaris Volume Manager root state replica databases.
# metadb -afc 3 c1t0d0s7
# metadb
        flags           first blk       block count
     a m  pc luo        16              8192            /dev/dsk/c1t0d0s7
     a    pc luo        8208            8192            /dev/dsk/c1t0d0s7
     a    pc luo        16400           8192            /dev/dsk/c1t0d0s7
If you use Veritas Volume Manager, ideally use it only for shared disk management, and mirror your root disks with either Solaris Volume Manager or the Oracle Solaris ZFS file system.
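A minimal Solaris Volume Manager root-mirroring sketch is shown below, assuming a UFS root on c1t0d0s0 mirrored to c1t1d0s0 (both device names are examples):

```shell
# Create one-way concat/stripe metadevices over each root slice.
# -f forces use of the slice that currently holds the mounted root.
metainit -f d10 1 1 c1t0d0s0
metainit d20 1 1 c1t1d0s0

# Build a one-way mirror on the existing root, then let metaroot
# update /etc/vfstab and /etc/system to boot from the mirror.
metainit d0 -m d10
metaroot d0

# Reboot so the system comes up on the mirror, then attach the
# second submirror; the resync runs in the background.
# (Run after the reboot:)
metattach d0 d20
```

The reboot between `metaroot` and `metattach` matters: attaching the second submirror before the system is running on the metadevice would resync against a root that is still being written through the raw slice.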
Planning for Upgrades
After you have installed your cluster, you must maintain it through its lifecycle. Doing so inevitably requires that you install both Solaris and cluster patches. You might also perform more major changes such as upgrading to a new Solaris Cluster release. To minimize any disruption, you must plan ahead.
"Patching and Upgrading Your Cluster" describes the options for ongoing cluster maintenance in more detail. If you choose ZFS for your root disk, then the upgrade procedure is fairly straightforward because Solaris Live Upgrade can create an alternate boot environment in the existing root zpool. If you choose UFS instead, the alternate boot environment must have the same number of slices as your existing root disk, which usually means using a separate disk. If you have only a single root partition, you can plan for future upgrades by setting aside a spare partition of the same size on the root disk.
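On a ZFS root, the Live Upgrade flow can be sketched as follows; the boot environment name and install media path are examples:

```shell
# Create an alternate boot environment as a ZFS clone in the
# existing root zpool (no spare slices required).
lucreate -n newBE

# Upgrade the inactive boot environment from install media.
# The mount point of the media is an example path.
luupgrade -u -n newBE -s /mnt/solaris-install-image

# Activate the upgraded environment; it takes effect at next boot.
# Use init 6 (not reboot) so the boot archive is updated correctly.
luactivate newBE
init 6
```

Because the current boot environment stays untouched, you can fall back by activating the old environment if the upgraded node misbehaves.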