
Hardware Configuration

Figure 1 and Table 1 through Table 5 show the Sun Cluster hardware configuration used for this module: two or more Sun servers connected by a private network. Each server can access the same application data through multi-ported (shared) disk storage and shared network resources, so either cluster node can inherit an application when its primary server can no longer provide service.

Refer to Figure 1, which depicts the SC 3.0 lab hardware implementation, and to Table 1 through Table 5, which define each connection.

The cluster hardware configuration can be verified only after the required software has been installed and configured and failover operations have been tested successfully.
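The failover behavior described above can be summarized in a minimal sketch. The following Python fragment is purely illustrative; the node names, the service name, and the fail_over function are hypothetical and are not part of the Sun Cluster API. It shows only the idea that both nodes attach to the same shared storage, so a surviving node can take over a service without copying data.

    # Conceptual two-node failover sketch (illustration only, not Sun Cluster code).
    # Node names, the service name, and fail_over() are hypothetical.

    class ClusterNode:
        def __init__(self, name):
            self.name = name
            self.healthy = True
            self.services = []      # services this node currently masters

    def fail_over(failed, survivor):
        """Move every service from a failed node to the surviving node.

        Because both nodes attach to the same multi-ported (shared) storage,
        the survivor inherits the data along with the service.
        """
        if failed.healthy:
            return                  # nothing to do while the node is healthy
        survivor.services.extend(failed.services)
        failed.services = []

    node1 = ClusterNode("e220r-1")
    node2 = ClusterNode("e220r-2")
    node1.services.append("ha-dbms")    # node1 is the primary for this service

    node1.healthy = False               # failure detected over the private interconnect
    fail_over(node1, node2)
    print(node2.services)               # ['ha-dbms'] -- node2 has inherited the service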

Cable Configuration

Figure 1 Cable Configuration

NOTE

In the previous illustration, c1 = PCI3 and c2 = PCI4, and the D1000s include targets t0, t1, t2, t8, t9, and t10. Use the spare Ethernet ports to configure additional private interconnects (that is, connect crossover cables between qfe3 and qfe7, as indicated).

Cable Connections

Table 1 through Table 5 list the required cable connections.

Table 1 Server-to-Storage Connections

From Device   From Location   To Device   To Location   Cable Label
E220R #1      SCSI A (PCI3)   D1000 #1    SCSI A        C3/1 - C3/3A
E220R #2      SCSI A (PCI3)   D1000 #1    SCSI B        C3/1 - C3/3B
E220R #1      SCSI A (PCI4)   D1000 #2    SCSI A        C3/2 - C3/3A
E220R #2      SCSI A (PCI4)   D1000 #2    SCSI B        C3/2 - C3/3B


Table 2 Private Network Connections

From Device   From Location   To Device   To Location   Cable Label
E220R #1      qfe0            E220R #2    qfe0          C3/1 - C3/2A
E220R #1      qfe4            E220R #2    qfe4          C3/1 - C3/2B


Table 3 Public Network Connections

From Device   From Location   To Device   To Location   Cable Label
E220R #1      hme0            Hub #00     Port #2       C3/1 - C3/5A
E220R #2      qfe1            Hub #01     Port #3       C3/1 - C3/6A
E220R #1      hme0            Hub #01     Port #2       C3/1 - C3/6A
E220R #2      qfe1            Hub #00     Port #3       C3/2 - C3/6A


Table 4 Terminal Concentrator Connections

From Device             From Location   To Device               To Location   Cable Label
E220R #1                Serial Port A   Terminal Concentrator   Port #2       C3/1 - C3/4A
E220R #2                Serial Port A   Terminal Concentrator   Port #3       C3/2 - C3/4A
Terminal Concentrator   Ethernet Port   Hub #00                 Port #1       C3/4 - C3/5A


Table 5 Administrative Workstation Connections

From Device                  From Location   To Device               To Location   Cable Label
Administrative Workstation   hme0            Hub #00                 Port #4       F2/1 - C3/5A
Administrative Workstation   Serial Port A   Terminal Concentrator   Port #1 **    F2/1 - C3/5B


NOTE

The Cable Label column in Table 1 through Table 5 assumes that the equipment is located in a specific grid location, for example C3. The number following the grid location identifies the stacking level for that piece of equipment, with 1 being the lowest level. The letter at the end of the label indicates how many cables terminate at that level: A indicates one cable, B indicates two cables, and so on. The label tag F2 is the grid location of the administrative workstation. The cable marked "**" in the To Location column is connected only when configuring the terminal concentrator.
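To make the labeling scheme concrete, the following fragment (a hypothetical Python sketch, not part of the original cabling documentation) decodes a label such as C3/1 - C3/3A into its grid locations, stacking levels, and cable count:

    import re

    # Decode a cable label of the form "<grid>/<level> - <grid>/<level><letter>",
    # for example "C3/1 - C3/3A": grid location, stacking level (1 = lowest), and
    # a trailing letter giving the number of cables terminating at that level
    # (A = 1, B = 2, ...). Illustrative only.
    LABEL_RE = re.compile(
        r"^(?P<fg>[A-Z]\d+)/(?P<fl>\d+)\s*-\s*(?P<tg>[A-Z]\d+)/(?P<tl>\d+)(?P<n>[A-Z])$"
    )

    def parse_label(label):
        m = LABEL_RE.match(label.strip())
        if m is None:
            raise ValueError("unrecognized cable label: " + label)
        return {
            "from": (m.group("fg"), int(m.group("fl"))),
            "to": (m.group("tg"), int(m.group("tl"))),
            "cables_at_level": ord(m.group("n")) - ord("A") + 1,
        }

    # Example from Table 1: one cable (letter A) between grid C3 level 1 and level 3.
    print(parse_label("C3/1 - C3/3A"))
    # {'from': ('C3', 1), 'to': ('C3', 3), 'cables_at_level': 1}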

Architectural Limitations

The Sun Cluster 3.0 architecture can provide the highest levels of availability for hardware, the operating system, and applications without compromising data integrity. The Sun Cluster environment (that is, the hardware, the operating environment, the Sun Cluster framework, and applications that use the Sun Cluster API) can be customized to create highly available applications.

No Single Points of Failure

Multiple faults occurring within the same cluster platform (environment) can result in unplanned downtime. A single point of failure (SPOF) can exist, for example, within the software application architecture. For the E220R, a SPOF for a single cluster node might be the embedded boot controller or even a memory module.

  • The basic Sun Cluster configuration, based on the Sun Enterprise 220R server, can be deployed as an entry-level platform with no SPOFs for the cluster pair; a simple way of checking for SPOFs is sketched below.
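One simple way to reason about SPOFs is to list each component class together with how many independent instances of it the cluster contains, and flag anything that exists only once. The sketch below is hypothetical; the inventory and counts are illustrative, not a statement about a specific E220R configuration.

    # Hypothetical redundancy inventory for a two-node cluster pair (illustrative).
    inventory = {
        "cluster node (E220R)": 2,
        "private interconnect": 2,        # qfe0 and qfe4 crossover cables
        "public network hub": 2,          # Hub #00 and Hub #01
        "shared storage array (D1000)": 2,
        "terminal concentrator": 1,       # console access only, not in the data path
    }

    def single_points_of_failure(components):
        """Return every component class with fewer than two instances."""
        return [name for name, count in components.items() if count < 2]

    print(single_points_of_failure(inventory))
    # ['terminal concentrator'] -- tolerable because it carries no
    # application data or client traffic.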

Configuring Clusters for HA: Planning Considerations

The primary configuration and planning considerations for highly available applications and databases include identifying requirements for software versions and features, the boot environment, shared storage, and data services (and their agents).

Designing a production cluster environment for mission-critical applications is a complex task that involves carefully selecting optimum components from numerous options. We recommend that you work closely with a qualified consulting practice, such as Sun Professional Services, when making these selections.

Examples of these choices include determining the optimum number and mix of database instances (services) per node, ensuring that no potential agent conflicts exist, and ensuring that any service-level conflicts are resolved.

Different cluster topologies require carefully prescribed setup procedures in relation to the following cluster components:

  • Data center requirements (hardened to environmental and power-loss conditions)

  • Number of logical hosts per node (including their agents, agent interoperability, and service level requirements)

  • Type of volume manager

  • Disk striping and layout

  • File systems versus raw device database storage

  • Performance (local storage versus GFS considerations)

  • Network infrastructure requirements and redundancy

  • Client failover strategy

  • Logical host failover method (manual vs. automatic)

  • Naming conventions such as host IDs, disk labels, disk groups, meta sets, and mount points

  • Normal (sustaining) operations policies and procedures

  • Backup and recovery procedures for the SunPlex platform
