- Enterprise Installation Services: Standard Installation Practices
- Hardware Configuration
- Solaris Configuration (clustadm)
- Install SUNWccon package on SC 3.0 Admin Workstation
- Patch Installation - Administration Workstation
- Configure Management Server for Administering Cluster Nodes
- Configure the Terminal Concentrator
- Configure Cluster Control Panel
- Configure Solaris OE (Each Cluster Node)
- Solaris OE —Post Installation and Configuration
Section 1.6: Configure Cluster Control Panel
In this section, you start the Cluster Control Panel by entering the ccp command for the cluster named "nhl". Read this entire step before entering any commands. After starting the Cluster Control Panel, you will double-click the cconsole icon.
At this time, verify that each cluster node is accessible to the clustadm workstation by starting the Cluster Console Panel and accessing the cluster consoles for each cluster node.
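Before bringing up the console windows, it can help to confirm basic network reachability from the clustadm workstation. The following is a minimal sketch only; the node names clustnode1 and clustnode2 are the examples used in this exercise, and ping option syntax differs between Solaris and other systems (the `-c` count flag shown is the common GNU/BSD form; on Solaris, `ping clustnode1 2` performs a similar one-shot check).

```shell
#!/bin/sh
# Sketch: ping each cluster node once from the admin workstation and
# report the result. Node names are examples from this exercise.
for node in clustnode1 clustnode2; do
  if ping -c 1 "$node" >/dev/null 2>&1; then
    echo "$node: reachable"
  else
    echo "$node: NOT reachable"
  fi
done
```

This only verifies IP reachability; console access itself still goes through the terminal concentrator, which the cconsole windows will exercise below.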
If you are accessing the clustadm workstation from a remote system, execute the xhost + command to enable remote display from the clustadm workstation to your local system.
When accessing the clustadm workstation remotely, you must also set the DISPLAY environment variable on the clustadm workstation to point to your local system. For example, for csh users: setenv DISPLAY yoursystem:0.0.
This step can be performed when accessing the SunPlex platform from a remote workstation. Accessing the Cluster Control Panel (ccp) remotely is often useful when configuring or administering the Sun Cluster.
At this time, you must set the DISPLAY variable before invoking the CCP. First, on your local workstation (example only):
yoursystem# /usr/openwin/bin/xhost +clustadm
Next, on clustadm (note: replace yoursystem with local system name):
root@clustadm# setenv DISPLAY yoursystem:0.0
Enter the following commands, on the clustadm workstation:
root@clustadm# which ccp
/opt/SUNWcluster/bin/ccp
root@clustadm# ccp nhl &
When the ccp command is executed, the Cluster Control Panel window appears. Verify that the menu bar and icon panel display all of the available tools, as listed:
Cluster Console, console mode
Cluster Console, rlogin mode
Cluster Console, telnet mode.
Example: Cluster Control Panel Window
Figure 2 Cluster Control Panel Window
Refer to the preceding figure. Double-click the "Cluster Console (console mode)" icon (circled) to display the cluster console. An example cluster console is shown in the following figure.
In this example, three windows are displayed: one small Cluster Console window, and two larger cconsole: host [name] windows. Note that each of the larger windows is associated with a specific host, or cluster node.
Example: Cluster Console (console mode) and cconsole Windows
Figure 3 Cluster Console and cconsole Windows
The Cluster Console utility provides a method of entering commands into multiple cluster nodes simultaneously (or individually, as required). Always be aware of which window is active before entering commands. If a cconsole window does NOT appear for a cluster node, verify the host entries: from the Cluster Console window (console mode), select Hosts, then Select Hosts, and verify (or insert) an entry for each cluster node (for example, clustnode1, clustnode2).
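In addition to the Hosts menu check above, the console tools on the admin workstation conventionally read their cluster and node entries from /etc/clusters (cluster name to node list) and /etc/serialports (node to terminal concentrator port). The following sketch checks for those entries; the cluster name nhl and the node names are the examples used in this exercise.

```shell
#!/bin/sh
# Sketch: look for the entries the cconsole tools read on the admin
# workstation. /etc/clusters maps a cluster name to its node list;
# /etc/serialports maps each node to a terminal concentrator port.
# Cluster name "nhl" and node names are examples from this exercise.
CLUSTER=nhl
grep "^${CLUSTER}" /etc/clusters 2>/dev/null \
  || echo "missing ${CLUSTER} entry in /etc/clusters"
for node in clustnode1 clustnode2; do
  grep "^${node}" /etc/serialports 2>/dev/null \
    || echo "missing ${node} entry in /etc/serialports"
done
```

If an entry is missing, add it to the appropriate file and restart the Cluster Control Panel before retrying the cconsole window.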
At this time, arrange the windows to suit your viewing preferences. To ungroup the Cluster Console window from the cconsole: host [name] windows, select Options from the Cluster Console window menu and uncheck Group Term Windows.
For example, move the Cluster Console window away from the cluster node windows so that each window is clearly visible at the same time. This helps ensure that commands are entered correctly into one or both nodes, as required during these exercises, and prevents entering commands into the wrong window.
It is NOT necessary to do so at this time, but when you wish to close the Cluster Console window, select Exit from the Cluster Console window "Hosts" menu.
It is NOT necessary to do so at this time, but if you need to issue a "Stop-A" command to each cluster node simultaneously, placing them in OBP mode, use the following procedure for the Annex Terminal Server. First, activate the Cluster Console window, then press the ^] ( Ctrl + ] ) keys. This displays a telnet> prompt for each cluster node. At the telnet> prompt, enter the send brk command, which issues a Stop-A to each cluster node (placing each at the OBP ok prompt).
Verify operations using the Cluster Console Panel (CCP) utility by logging in to each cluster node. Then begin configuring each system that will operate as a cluster node.
Log in as root from the cconsole:host [hostname] window, on each cluster node:
clustnode1 console login: root
Password: abc

clustnode2 console login: root
Password: abc
During many of the following steps, you will be required to enter commands simultaneously into the console window for each cluster node. Use the Cluster Control Panel windows for this purpose. Double-click the Cluster Console (console mode) icon, as described in the previous section, "Configuring the Cluster Control Panel."