System Controller Enhancements
When the system encounters a fatal hardware error that causes a domain to be error paused, the hardware fault is automatically diagnosed. The result of auto diagnosis (AD) is persistently stored in the component health status (CHS). During the auto restoration phase, POST consults the CHS and restores the domain with the fault isolated.
In addition, if POST encounters a test failure, it diagnoses the faulty component and updates its CHS accordingly.
The SC has been enhanced to detect domain hangs and recover from such situations by resetting and rebooting the domain. Another SC enhancement is to run POST at increasing diagnostic levels when the domain panics repeatedly so that the system can identify and isolate any persistent hardware faults.
The SC monitors the domains for hardware faults. AD is automatically invoked on hardware faults that cause a domain pause, and on data parity errors. On Sun Fire 6800/4810/4800/3800 systems, the data path is protected by parity and ECC, so domain operation is not impacted when data parity errors occur. Domain pauses are fatal errors and stop domain operation. AD analyzes the following errors:
Interconnect port errors
Data parity errors
Internal ASIC errors
FIGURE 1 shows the AD phase, Steps 1 through 5. Depending on the fault, three types of diagnosis results are possible:
Fault diagnosed to a single component
Fault diagnosed to a set of components
Unresolved fault diagnosis
Note that when a fault is diagnosed to a set of components, it does not mean that all the components are faulty, just that the fault is located in a subset of these components (usually one).
FIGURE 1 Auto Diagnosis Process
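The three diagnosis outcomes above can be modeled with a small sketch (Python is used purely for illustration; the type and field names are invented and are not part of the SC firmware):

```python
from dataclasses import dataclass, field


@dataclass
class DiagnosisResult:
    """Illustrative model of an AD result; all names are invented.

    An empty suspect list models an unresolved diagnosis; one entry
    models a single-component diagnosis; several entries model a fault
    located somewhere within a set of components, usually only one of
    which is actually faulty.
    """
    suspect_frus: list = field(default_factory=list)  # e.g. ["/N0/SB0"]

    @property
    def kind(self):
        if not self.suspect_frus:
            return "unresolved"
        return "single" if len(self.suspect_frus) == 1 else "component-set"
```

For example, a diagnosis naming only /N0/SB0 is a single-component result, while one naming two boards indicates the fault lies somewhere in that set.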
Auto Diagnosis Recording and Reporting
After the fault has been diagnosed, AD records its diagnosis persistently in the CHS and reports it to the console and loghost as shown in FIGURE 2.
FIGURE 2 Auto Diagnosis Recording and Reporting
TABLE 1, "Example 1," shows the AD result that is output to the domain console for a single-FRU diagnosis.
TABLE 1 Example 1
[AD] Event: SF3800.ASIC.SDC.PAR_SGL_ERR.60111010
CSN: 124H58EE DomainID: A ADInfo: 1.SCAPP.15.0
Time: Thu Jan 23 20:47:11 PST 2003
FRU-List-Count: 1;FRU-PN:5014362;FRU-SN: 011600; FRU-LOC:/N0/SB0
Recommended-Action: Service action required
AD reports a unique event code for the failure type and the time of diagnosis. A full description of the AD output format is in the Sun Fire 6800/4810/4800/3800 Systems Platform Administration Manual. In this example, AD determined that the error is within the CPU/Memory board at FRU-LOC /N0/SB0.
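As a rough illustration, the colon- and semicolon-delimited record in Example 1 can be split into fields with a short parser. This is a sketch only: the field names come from the example output, and the parsing rules are guessed from its layout, not from any published AD grammar.

```python
import re


def parse_ad_event(lines):
    """Parse an auto-diagnosis (AD) console record into a dict.

    Each line holds one or more "Key: value" pairs; keys are words
    (possibly hyphenated) followed by a colon, and a value runs until
    the next key or the end of the line.  Illustrative only.
    """
    record = {}
    key_re = re.compile(r"([A-Za-z][A-Za-z-]*):")
    for line in lines:
        line = line.removeprefix("[AD] ")
        matches = list(key_re.finditer(line))
        for i, m in enumerate(matches):
            end = matches[i + 1].start() if i + 1 < len(matches) else len(line)
            record[m.group(1)] = line[m.end():end].strip(" ;")
    return record
```

Applied to the record above, this yields fields such as `DomainID` = `A` and `FRU-LOC` = `/N0/SB0`, which a monitoring script could log or forward.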
The reported information enables your service provider to make a quick determination of the problem and initiate corrective service action.
CHS on a Sun Fire 6800/4810/4800/3800 is implemented for the following FRUs and components:
Because the CHS and diagnostic information are persistently stored on the component, they move with the component, which prevents the recurrence of a fault even if the component is moved to a different location. Preventing the recurrence of a fault improves the availability characteristics of Sun Fire 6800/4810/4800/3800 systems. Because the diagnosis information is contained inside the component, service and repair of these systems are also easier.
POST performs the domain auto restoration function. POST runs automatically after auto diagnosis, or can be started manually by issuing the setkeyswitch command on the SC. POST consults the CHS of the domain hardware and tries to reconfigure the domain to isolate the fault (FIGURE 3).
FIGURE 3 Auto Restoration
After the domain has been restored, you can run the showcomponent command to check which components have been disabled because of their CHS.
If a FRU or component is disabled because of its CHS, immediate replacement is not necessary because the domain is restored with the fault isolated. Using dynamic reconfiguration (DR), the FRU can be replaced at a convenient time with minimal impact on the Solaris OE and user applications. For more information about DR, see the Sun BluePrints OnLine article Sun Fire 3800-6800 Servers Dynamic Reconfiguration.
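POST's restoration step can be pictured as filtering the domain's component list against the persistently stored health status. This is a sketch under assumed names: the real logic lives inside POST firmware, and the status strings here are invented.

```python
def restorable_configuration(components, chs):
    """Return the components POST would keep in the restored domain.

    `chs` maps a component location (e.g. "/N0/SB0") to its recorded
    health status; anything marked "faulty" is left out of the
    restored domain.  Purely illustrative.
    """
    return [c for c in components if chs.get(c, "ok") != "faulty"]
```

For example, with the CHS marking /N0/SB0 faulty, the domain would be restored on the remaining boards only, isolating the fault.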
Domain Hang Recovery
This section describes the domain hang detection and automatic domain hang recovery. Available parameters and hang messages are discussed.
A domain that is not updating its heartbeats, or that is unreachable through its console, is categorized as hung. In this situation, the domain is typically providing no service to its users. A domain can hang for various reasons.
On a Sun Fire 6800/4810/4800/3800 system, the SC acts as an external monitor for each domain. The SC automatically checks for a domain hang condition (FIGURE 4).
FIGURE 4 Domain Hang Restoration
The SC initiates an externally initiated reset (XIR) of the domain if the domain heartbeat register is not updated within a maximum timeout limit. The default timeout value is three minutes. This default can be overridden by the watchdog_timeout_seconds parameter in the /etc/system file of each domain; for additional details, refer to the system(4) man page. If watchdog_timeout_seconds is set to a value below three minutes, the SC defaults to three minutes. TABLE 2, "Example 2," shows the console output of a domain that was declared hung and reset by the SC.
TABLE 2 Example 2
Jan 22 17:02:06 sc0 Domain-A.SC: Domain watchdog timer expired.
Jan 22 17:02:06 sc0 Domain-A.SC: Using default hang-policy (RESET).
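The timeout rule described above (a three-minute default that can be raised but never lowered) amounts to a simple clamp, sketched here with an invented function name:

```python
DEFAULT_WATCHDOG_SECONDS = 180  # SC default: three minutes


def effective_watchdog_timeout(configured=None):
    """Return the hang-detection timeout the SC actually uses.

    `configured` is the domain's watchdog_timeout_seconds value from
    /etc/system, or None if the parameter is unset.  Values below the
    three-minute default are ignored, per the behavior described above.
    Illustrative sketch, not SC firmware code.
    """
    if configured is None or configured < DEFAULT_WATCHDOG_SECONDS:
        return DEFAULT_WATCHDOG_SECONDS
    return configured
```

So setting watchdog_timeout_seconds to 600 lengthens the window to ten minutes, while setting it to 60 has no effect.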
In addition to heartbeat monitoring, the SC also checks whether the domain is picking up the interrupts that the SC sends to it. The SC sends interrupts to the domain when, for example, characters are entered on the domain console. If a second interrupt is sent while the previous one has not yet been picked up by the domain, the SC waits one minute before declaring the domain hung. TABLE 3, "Example 3," shows the console output of a domain that is hung because it has not been picking up its interrupts.
TABLE 3 Example 3
Jan 22 18:09:02 sc0 Domain-A.SC: Domain is not responding to interrupts.
Jan 22 18:09:02 sc0 Domain-A.SC: hang-policy is NOTIFY. Not resetting domain.
The hang-policy is set to notify or reset by the setupdomain command. If set to notify, the SC reports the hang condition on the domain console and does not reset the domain (TABLE 3, "Example 3"). If set to reset, the SC reports the hang condition on the domain console and initiates a domain reset (TABLE 2, "Example 2"). By default, hang-policy is set to reset. For more information about domain setup, refer to the Sun Fire 6800/4810/4800/3800 Systems Platform Administration Manual.
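The two hang checks and the hang-policy response can be sketched together as follows. All names, thresholds as parameters, and return strings are invented for illustration; the real checks run inside SC firmware.

```python
def classify_hang(heartbeat_age, pending_interrupt_age,
                  heartbeat_timeout=180, interrupt_grace=60):
    """Decide whether a domain should be declared hung.

    heartbeat_age: seconds since the domain last updated its heartbeat
    register.  pending_interrupt_age: seconds a first SC interrupt has
    gone unacknowledged after a second one was sent, or None if no
    interrupt is pending.  Illustrative sketch only.
    """
    if heartbeat_age >= heartbeat_timeout:
        return "hung:watchdog"
    if pending_interrupt_age is not None and pending_interrupt_age >= interrupt_grace:
        return "hung:interrupts"
    return "ok"


def apply_hang_policy(policy):
    # hang-policy from setupdomain: "reset" (the default) triggers an
    # XIR of the domain; "notify" only reports on the domain console.
    return "xir-reset" if policy == "reset" else "console-notify"
```

A domain whose heartbeat is stale past the timeout is reset (or merely reported, under notify), matching the console messages in Examples 2 and 3.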
By default, the domain is set up to dump core when it is reset. To identify the cause of the domain hang, consult your service provider while referring to the core file.
Because hang conditions are automatically detected and the necessary recovery steps are initiated, the availability of Sun Fire 6800/4810/4800/3800 systems is enhanced.
Recovery From Repeated Domain Panics
Domain panics can be caused by software and by hardware. To prevent hardware faults from causing panic-reboot loops, the SC firmware has been enhanced to run POST diagnostics at increasing diagnostic levels on recurring panics.
On the first panic, the domain reboots and writes a core file, which can be used to analyze the problem. However, if further panics occur within a short time period, it is desirable to run POST automatically at a higher level as part of domain restoration. POST diagnostics verify the status of the hardware and can identify and isolate faulty components, if any. After identifying faulty components, POST updates the appropriate CHS entries. With firmware release 5.14.0 and higher, the SC keeps track of the number of domain panics over time. A panic reboot of a domain has a unique register signature that differs from a normal reboot, which is how the SC distinguishes the two. If the domain is manually rebooted in the meantime, the panic-reboot counter is reset.
On recurring panics, the POST diagnostic level of the domain is increased one step at a time, starting from diag-level quick. In increasing order, the POST levels are init, quick, default, mem1, and mem2. If the domain continues to panic without user intervention even after POST has run at the highest level, the domain is put into standby (FIGURE 5). For further analysis, consult your service provider while referring to the core file.
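The escalation ladder described above can be sketched as a simple lookup over the ordered POST levels (the function name is invented; the level names are those listed in the text):

```python
# POST diagnostic levels in increasing order, as described above.
POST_LEVELS = ["init", "quick", "default", "mem1", "mem2"]


def next_post_level(current):
    """Return the diag-level to use after another panic.

    Escalates one step per recurring panic; returns None once the
    highest level has already run, modeling the SC placing the domain
    in standby.  Illustrative sketch, not SC firmware code.
    """
    i = POST_LEVELS.index(current)
    if i + 1 < len(POST_LEVELS):
        return POST_LEVELS[i + 1]
    return None  # highest level exhausted: domain goes to standby
```

Starting from quick, repeated panics would thus step through default, mem1, and mem2 before the domain is left in standby.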
FIGURE 5 Domain Panic Restoration
This feature prevents a domain panic-reboot loop. If the recurring panics are caused by a software bug, the increased POST level minimizes hardware as a possible cause. Downtime for running further POST diagnostics is not required because the system automatically takes the necessary measures. Hence, the availability and serviceability of Sun Fire 6800/4810/4800/3800 systems are enhanced.