
23.3 Relevance of z/OS and Subsystems

To provide reliability and other qualities of service (QoS), z/OS has many subsystems, and the WebSphere workload often requires modifications to the configurations of these subsystems. An excellent source of information on this tuning can be found in chapters 9 and 10 of the WebSphere Application Server for z/OS V5 Operations and Administration Manual at the following URL: ftp://ftp.software.ibm.com/software/webserver/appserv/zos_os390/v5/bos5b1001.pdf. The following sections briefly describe several of the key subsystems and their roles in keeping the WebSphere container running well. All of them should be kept current on maintenance, and component (and system) tracing should be turned off or minimized.

23.3.1 Transmission Control Protocol (TCP)

Transmission Control Protocol (TCP) and the network are critical to the response time perceived by the customer when doing work on z/OS. It is best to use a fast network adapter (e.g., a Gigabit OSA) and to monitor its performance periodically. When tuned well, TCP should account for only a small share of the total response time (less than 2 percent) experienced by the customer. Firewalls are required in any production environment, but tuning them is equally critical.


The resource utilization of the TCP address spaces can be determined with WLM and RMF. If the controller address space is consuming more than 1 percent of a CPU, a TCP issue may be the cause. To understand the low-level details of TCP flows, your TCP systems programmers can run a filtered packet trace.
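As a rough illustration of measuring the TCP share of response time, the following self-contained sketch times round trips against a trivial in-process echo server. The server, the sample count, and the single-byte payload are all illustrative assumptions; in practice you would probe the real server address and average many samples.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class TcpRttProbe {
    public static void main(String[] args) throws Exception {
        // Trivial single-byte echo server, started in-process so the
        // example is self-contained (illustrative only).
        ServerSocket server =
                new ServerSocket(0, 1, InetAddress.getLoopbackAddress());
        Thread echo = new Thread(() -> {
            try (Socket s = server.accept()) {
                InputStream in = s.getInputStream();
                OutputStream out = s.getOutputStream();
                int b;
                while ((b = in.read()) != -1) {
                    out.write(b);
                    out.flush();
                }
            } catch (IOException ignored) {
            }
        });
        echo.start();

        int samples = 50;
        long totalNanos = 0;
        try (Socket client = new Socket(InetAddress.getLoopbackAddress(),
                                        server.getLocalPort())) {
            client.setTcpNoDelay(true); // keep Nagle batching out of the measurement
            OutputStream out = client.getOutputStream();
            InputStream in = client.getInputStream();
            for (int i = 0; i < samples; i++) {
                long start = System.nanoTime();
                out.write(1);
                out.flush();
                in.read(); // wait for the echoed byte
                totalNanos += System.nanoTime() - start;
            }
        }
        server.close();
        System.out.println("samples=" + samples);
        System.out.println("mean RTT (us): " + (totalNanos / samples / 1000));
    }
}
```

Comparing such a measured round-trip time against total transaction response time gives a crude check on the "less than 2 percent" guideline above.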

23.3.2 UNIX System Services (USS)

USS is the environment in which WebSphere runs. System traces can reveal a great deal about the behavior of the application and the J2EE server; if misused, however, they can more than double the path length of some requests. A good reference for USS tuning can be found at


23.3.3 Resource Recovery Services (RRS)

Resource Recovery Services (RRS) is used extensively by WebSphere on z/OS to handle transactional behavior and access to all recoverable resources. There are numerous configuration options here that can impact the operation of RRS. One key option is using a Coupling Facility (CF) logstream to avoid disk I/O. Optimizations in this area have made database two-phase commit processing on z/OS notably faster than on any other platform. A few key items to watch in RRS include

  • Size the RRS logs so that they are not offloading too often or overflowing the CF.

  • Keep the Main and Delayed logs in the CF.

  • Use the archive log only if absolutely needed (to do problem determination on an unstable system).
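To make the two-phase commit processing mentioned above concrete, here is a minimal toy coordinator in plain Java. The `Participant` interface and in-memory `Resource` class are inventions for illustration only; real resource managers (DB2, MQ, and so on) register with RRS, which drives the prepare and commit phases and hardens its state in the RRS logstreams.

```java
import java.util.Arrays;
import java.util.List;

public class TwoPhaseCommitSketch {
    // Illustrative participant contract; not a real RRS interface.
    interface Participant {
        boolean prepare(); // phase 1: vote yes (true) or no (false)
        void commit();     // phase 2a: make changes permanent
        void rollback();   // phase 2b: undo changes
    }

    static class Resource implements Participant {
        final boolean votesYes;
        String state = "active";
        Resource(boolean votesYes) { this.votesYes = votesYes; }
        public boolean prepare() { state = votesYes ? "prepared" : "aborting"; return votesYes; }
        public void commit()   { state = "committed"; }
        public void rollback() { state = "rolled back"; }
    }

    /** Runs both phases; returns true if the transaction committed. */
    static boolean run(List<Participant> participants) {
        for (Participant p : participants) {
            if (!p.prepare()) {               // any "no" vote aborts everyone
                for (Participant q : participants) q.rollback();
                return false;
            }
        }
        for (Participant p : participants) p.commit();
        return true;
    }

    public static void main(String[] args) {
        Resource db = new Resource(true);
        Resource mq = new Resource(true);
        System.out.println("all yes -> committed: " + run(Arrays.asList(db, mq)));

        Resource db2 = new Resource(true);
        Resource bad = new Resource(false);
        System.out.println("one no -> committed: " + run(Arrays.asList(db2, bad)));
        System.out.println("state after abort: " + db2.state);
    }
}
```

The coordinator's log writes at each phase transition are what the RRS logstreams record; keeping those logs in the CF, as recommended above, is what removes disk I/O from this path.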


The most common RRS-related issue is log size. WebSphere can increase the amount of data logged and create a need for larger logs. Issues here can generally be seen in the z/OS System log as an increase in log switching and archiving. If this is occurring, the systems programmers responsible for RRS may need to resize the logs. System Logger accounting data can be found in SMF Type 88 records. Sample program IXGRPT1 in SYS1.SAMPLIB shows how to produce a report from SMF Type 88 records.

23.3.4 Cross-System Coupling Facility (XCF)

The Cross-System Coupling Facility is less important to WebSphere V5 than to WebSphere V4. In V4, servers with instances on different images within a SysPlex had to use a shared HFS, which could cause intensive XCF activity. V5 no longer has this requirement. If shared HFSs are being used, however, it is important that the owning image be the one that does the most I/O (especially output) to the HFS.


While shared HFSs can be a great means of communication and centralization of control, improper use can carry a severe performance penalty. Using them for logging can be a particularly expensive practice.


While the first z/OS image (LPAR) to mount a shared HFS becomes its owner, there are dynamic means of changing ownership. This is discussed in the UNIX System Services Command Reference (SA22-7802).


A view of the XCF service class in RMF can help isolate excessive CPU time being used by XCF. If the XCF address space is taking more than 4 percent of a CPU engine, a USS systems programmer should look more closely at XCF (the cause may or may not be WebSphere-related).

23.3.5 Workload Manager (WLM)

The z/OS Workload Manager differentiates WebSphere z/OS from the distributed platforms. On WebSphere for z/OS, a server consists of a Controller address space and one or more Servant address spaces (discussed later in the Topology section). WLM can be used to control the number of Servants to meet goals defined for the various workloads. WLM is discussed in chapter 19, Workload Management Overview: z/OS.

WLM can also balance workloads across clustered servers in certain situations, although HTTP workload distribution is generally driven by front-end mechanisms (such as the IHS plug-in or Edge Servers), and EJB calls, in an optimal scenario, never leave the server.

In the section on RMF and WLM, we discuss classifying work and creating reporting classes so that the total cost of handling transactions can easily be determined.


It is advisable to limit the number of different classes of work routed through an individual server. A Servant address space can service only one class of work at a time, so routing many different classes of work through one server either results in many Servant address spaces or causes work to queue while existing Servants are reclassified.
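The queuing effect described above can be sketched with a small toy dispatcher. A "servant" here is just a slot bound to a single service class at a time; with more distinct classes in flight than servants, the surplus requests must queue. The class names and the scheduling policy are illustrative assumptions, not WLM's real algorithm.

```java
import java.util.ArrayList;
import java.util.List;

public class ServantBindingSketch {
    static class Servant {
        String boundClass = null; // serves one service class at a time
        boolean busy = false;
    }

    public static void main(String[] args) {
        // Two servants, four concurrent requests in three classes.
        List<Servant> servants = new ArrayList<>();
        servants.add(new Servant());
        servants.add(new Servant());
        List<String> requests = List.of("FAST", "SLOW", "BATCH", "FAST");

        int dispatched = 0, queued = 0;
        for (String cls : requests) {
            Servant target = null;
            for (Servant s : servants) {
                // A free servant can serve the request if it is unbound
                // or already bound to this request's class.
                if (!s.busy && (s.boundClass == null || s.boundClass.equals(cls))) {
                    target = s;
                    break;
                }
            }
            if (target != null) {
                target.boundClass = cls;
                target.busy = true;
                dispatched++;
            } else {
                queued++; // waits for a servant to free up or be reclassified
            }
        }
        System.out.println("dispatched=" + dispatched + " queued=" + queued);
    }
}
```

With two servants and three classes, one class inevitably waits, which is why limiting the classes routed through a server (or letting WLM start more Servants) matters.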

23.3.6 Miscellaneous Considerations

There are miscellaneous items to keep in mind, including:

  • In Resource Access Control Facility (RACF), enable BPX.SAFFASTPATH. This makes security checks in the HFS faster.

  • In WebSphere V5, we recommend that you configure the Link Pack Area (LPA) as recommended during install. This allows all WebSphere address spaces to share the 40 MB of load modules instead of each address space loading them into private memory. This is a bit more difficult if you are coexisting with WebSphere V4: only one of the versions can use the LPA, and the other must use STEPLIB DD statements.
