23.5 The Java Virtual Machine (JVM)

Each server has a JVM in the Controller and one in each Servant. The focus here is on the JVM in the Servant where the application code executes. This JVM is used by the EJB container and the Web container in the Servant. Understanding the behavior of the JVM as your application executes is critical. If all goes well:

  1. Garbage collection (GC) should take 1 to 5 percent of processing time.

  2. There should be no long waits (more than 6 seconds) to stop the threads.

  3. An Allocation Failure (AF) should never cause multiple GCs.

  4. Compaction should occur on fewer than half of the GCs.

  5. The percent of free memory should not decline over time.

  6. Appropriate methods should be Just In Time compiled (JITted).

23.5.1 Garbage Collection

Garbage collection is the process the JVM uses to clean up objects that are no longer referenced. Garbage collections are started when the JVM receives Allocation Failures or System.gc() calls. An Allocation Failure is a normal occurrence: there is insufficient free space in the JVM heap to allocate a new object, so the JVM performs a garbage collection to reclaim the memory held by obsolete objects (objects with no references to them, or garbage).


As a programming best practice, explicit System.gc() calls should be avoided. The JVM has sophisticated algorithms to optimize garbage collection, and placing System.gc() calls in the code causes extra garbage collections that interfere with the JVM's ability to maintain itself. In addition, such code may execute more often than expected, causing requests for garbage collection to queue up; as this occurs, it can prevent the JVM from getting any real work done.

Understanding verboseGC Output

The simplest way to get an understanding of the health of the JVM is to turn on verbose garbage collection (verboseGC). This can be set via a checkbox in the Administrative Console by navigating to Servers > Application Servers > ServerName > Process Definition > Java Virtual Machine. Leaving verboseGC running does not impose much overhead (less than 2 percent) on the system. We recommend having it on for all test systems, and even in production when new code is running. The verboseGC output varies from one release of the JVM to the next, but resembles what is seen in Listing 23.1.

Listing 23.1 Normal Garbage Collection output.

<AF[24]: Allocation Failure. need 68376 bytes, 88964 ms since last AF>
<AF[24]: managing allocation failure, action=2 (559688/268368384)>
<GC(30): GC cycle started Wed Jul 30 14:40:55 2003
<GC(30): freed 175851936 bytes, 65% free (176411624/268368384), in 1642 ms>
  <GC(30): mark: 1447 ms, sweep: 195 ms, compact: 0 ms>
  <GC(30): refs: soft 0 (age >= 32), weak 0, final 89, phantom 0>
 <AF[24]: completed in 1652 ms>

Table 23.1 breaks down the most useful fields from this listing.

Table 23.1 Breakdown of Garbage Collection Output

AF[24]
  Twenty-fourth Allocation Failure since the JVM was started.

need 68376 bytes
  Size of the memory request that caused the failure.

88964 ms
  Number of milliseconds since the last Allocation Failure.

(559688/268368384)
  Free heap vs. total heap before the GC. Nearly 600K is available, but no contiguous chunk is large enough for the 68K allocation.

GC(30)
  Thirtieth GC since the JVM started. Since the twenty-fourth Allocation Failure is causing the thirtieth garbage collection, either System.gc() is being called or large allocations are causing extra collections (the case here). Either condition should raise a red flag.

freed 175851936 bytes
  Number of bytes freed.

65% free
  Percent of free heap after the GC. A memory leak causes this number to get progressively smaller, and compactions have less and less positive impact.

(176411624/268368384)
  Free heap vs. total heap after the GC.

in 1642 ms
  Milliseconds the GC process took (in a healthy JVM, these are predominantly sub-second).

compact: 0 ms
  Compaction time (no heap compaction was needed on this GC). Compaction occurs when the free space in the heap is fragmented.

refs: soft 0
  No soft references were cleaned up here.

age >= 32
  Soft references not referenced in 32 GCs will be collected.

completed in 1652 ms
  Total time taken to resolve the Allocation Failure (usually only slightly larger than the GC time).

So, in this case it had been 89 seconds since the last Allocation Failure, and it took 1.65 seconds to resolve it. The percent of time in GC is 1652 * 100 / (88964 + 1652), or 1.8 percent. Any time this number climbs above 3 percent, it should be reviewed, although some applications legitimately require intense GC processing. Application characteristics that cause GC to burn more CPU cycles include:

  1. System.gc() calls

  2. Intensive object use and discarding. This can be reduced by:

    1. Use of StringBuffer instead of String

    2. Object reuse and pooling

    3. Sizing arrays and vectors more accurately when allocating them

  3. Allocation of extremely large objects

  4. Memory Leaks
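The first two items above can be illustrated with a short sketch. Building a large String by repeated concatenation allocates a new String (plus a temporary buffer) on every iteration, all of which becomes garbage; a StringBuffer sized accurately up front reuses one internal buffer. This example is our own illustration, not code from the book.

```java
// Sketch: reducing object churn with StringBuffer instead of String.
public class ConcatDemo {

    // Each += creates a new String object; the previous one becomes
    // garbage that a future GC must collect.
    static String concat(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s += "x";
        }
        return s;
    }

    // One buffer, pre-sized accurately, is reused for the whole loop,
    // so almost no intermediate garbage is created.
    static String buffered(int n) {
        StringBuffer sb = new StringBuffer(n);
        for (int i = 0; i < n; i++) {
            sb.append('x');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Both produce the same result; only the allocation behavior differs.
        System.out.println(concat(1000).equals(buffered(1000)));
    }
}
```

The same idea applies to sizing arrays and vectors accurately: growing a Vector repeatedly discards its old backing arrays as garbage.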

A less healthy GC might look like Listing 23.2.

Listing 23.2 Problem Garbage Collection.

<AF[52]: Allocation Failure. need 11825976 bytes, 9216 ms since last AF>
<AF[52]: managing allocation failure, action=2 (74646856/268368384)>
<GC(64): GC cycle started Wed Jul 30 14:50:26 2003
<GC(64): freed 68031744 bytes, 53% free (142678600/268368384), in 4060 ms>
  <GC(64): mark: 1631 ms, sweep: 188 ms, compact: 2241 ms>
  <GC(64): refs: soft 0 (age >= 32), weak 0, final 14, phantom 0>
  <GC(64): moved 129329 objects, 30644680 bytes, reason=1, used 24 more bytes>
 <GC(64): stop threads time: 2306, start threads time: 2>
<AF[52]: managing allocation failure, action=3 (142678600/268368384)>
<AF[52]: managing allocation failure, action=4 (142678600/268368384)>
<AF[52]: clearing all remaining soft refs>
<GC(65): GC cycle started Wed Jul 30 14:50:28 2003
<GC(65): freed 12000 bytes, 53% free (142690600/268368384), in 1717 ms>
  <GC(65): mark: 1581 ms, sweep: 136 ms, compact: 0 ms>
  <GC(65): refs: soft 8 (age >= 32), weak 0, final 0, phantom 0>
<GC(66): GC cycle started Wed Jul 30 14:50:32 2003
<GC(66): freed 1504 bytes, 53% free (142692104/268368384), in 3724 ms>
  <GC(66): mark: 1563 ms, sweep: 606 ms, compact: 1555 ms>
  <GC(66): refs: soft 0 (age >= 32), weak 0, final 0, phantom 0>
  <GC(66): moved 6613 objects, 1296464 bytes, reason=1>
<AF[52]: managing allocation failure, action=6 (142692104/268368384)>
<AF[52]: totally out of heap space>
<AF[52]: completed in 11989 ms>

Note that it took three GC cycles to find space for this 12M allocation, including roughly 9.5 seconds in GC itself and over 2 seconds stopping threads.
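The percent-of-time-in-GC arithmetic used in these analyses can be sketched as a small helper; the inputs come straight from the verboseGC fields "N ms since last AF" and "completed in M ms" (the class and method names are our own):

```java
// Sketch: percent of elapsed time spent resolving an Allocation Failure,
// computed from two fields of the verboseGC output.
public class GcPercent {

    static double percentInGc(long msSinceLastAf, long msToResolveAf) {
        // Elapsed time is the quiet interval plus the time spent in GC.
        return msToResolveAf * 100.0 / (msSinceLastAf + msToResolveAf);
    }

    public static void main(String[] args) {
        // Listing 23.1 (healthy): about 1.8 percent.
        System.out.println(percentInGc(88964, 1652));
        // Listing 23.2 (unhealthy): almost 57 percent.
        System.out.println(percentInGc(9216, 11989));
    }
}
```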


Any allocation greater than 1M should be reviewed in the application, as it can be difficult to find that much contiguous free space in the heap. In this case, almost 57 percent of the last 21 seconds was spent in GC.

Soft References

Some applications need to cache large amounts of data, which affects both the size of the heap and the behavior of garbage collection. In some scenarios it is difficult to determine exactly which data should be cached, and the application errs on the side of caching too much. The best way to handle this is to reference cached data through the java.lang.ref.SoftReference class, which allows data that has not been used over a certain number of garbage collection cycles to be freed automatically.
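A minimal cache along these lines might look like the following sketch (the class and method names are our own illustration; raw types are used to match the pre-generics JDKs of this era):

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Sketch: a cache whose values the garbage collector is free to reclaim
// when heap space runs low. Class and method names are illustrative.
public class SoftCache {

    private final Map cache = new HashMap();  // key -> SoftReference

    public void put(Object key, Object value) {
        // The cache holds only a soft reference, so the value stays
        // eligible for collection once nothing else references it.
        cache.put(key, new SoftReference(value));
    }

    // Returns null if the entry was never cached or the GC has cleared it;
    // callers must be prepared to recompute or reload the value.
    public Object get(Object key) {
        SoftReference ref = (SoftReference) cache.get(key);
        return (ref == null) ? null : ref.get();
    }
}
```

The soft-reference ages in the verboseGC output (age >= 32) show how long such entries survive before the JVM clears them.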

Memory Leaks

The verboseGC output can be used as a first step in determining whether a memory leak exists. As the garbage collection cycles proceed, watch for a pattern in the percent-free value at the end of each cycle. It will vary from cycle to cycle, but if it is consistently getting lower, a memory leak in the application is the most likely cause.


If a memory leak is occurring in an application, there may be little choice but to recycle the server. It may be worthwhile, however, to wait for a java.lang.OutOfMemoryError to be thrown. The JVM may avoid compacting the heap during garbage collection (compaction is an expensive part of GC) until it is absolutely necessary. Thus, the free heap may go down for 10 or 20 consecutive garbage collections, then go back up as the JVM performs a more complete collection with compaction and possible clearing of soft references.


If a java.lang.OutOfMemoryError is thrown, take a z/OS SVCDUMP. The IBM support team has tools that can use the dump to analyze the contents of the heap. In addition, both the WSAM and Introscope products contain powerful leak-detection capabilities.

Heap Settings

Ideal heap settings vary by application and environment. A smaller heap may require more frequent garbage collection, while a larger heap may produce longer individual GC cycles.


The rule of thumb is to keep your heap on the small side so that the GC cycles are short; this also allows more servers to be brought up on the same system. It is, of course, a balancing act, discussed further in Chapter 22, WebSphere Performance Tuning. We recommend starting with a 256M heap and modifying it as needed based on the behavior of your workload under stress.
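On z/OS these values are normally set through the Administrative Console (the Initial Heap Size and Maximum Heap Size fields under Process Definition > Java Virtual Machine), but the equivalent standard JVM command-line options are worth knowing; a 256M starting point would look like the following (MyServer is a placeholder class name):

```shell
# Illustrative only: command-line form of a fixed 256M heap.
# -Xms sets the initial heap size, -Xmx the maximum heap size.
java -Xms256m -Xmx256m MyServer
```

Setting -Xms equal to -Xmx avoids heap expansion and contraction, at the cost of committing the full heap up front.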

23.5.2 Just In Time (JIT) Compiler

The Just In Time Compiler (JIT) is essential to high-performance Java applications. Its primary function is to take frequently executed methods and compile their bytecode into machine code, so that they run much faster. In nearly all environments, the settings for the JIT should not be modified. The JVM does a good job of determining which methods to compile, and when, based on how many times a method has been executed since the JVM was started. In the current JDK 1.3.1 on z/OS, a method gets JITted after 2,000 executions.


One big problem with modifying the JIT threshold is that rarely executed methods can get JITted (e.g., methods run just a few times at initialization of the J2EE server). Each JITted method takes space in memory that could be used for other purposes.

23.5.3 Java Tracing

WebSphere V5 for z/OS is the first version to allow dynamic Java tracing. This is fully configurable from the Administrative Console just as on the distributed platforms.


Now that tracing is dynamic, Java traces can be collected on production systems. Remember, though, that tracing does add overhead, so it is best to turn it on for short periods and try to re-create the problem while it is active.


Some of the Java trace commands we find most helpful on z/OS:

		F <WasProc>,DISPLAY,TRACE,JAVA (Display current settings)
		F <WasProc>,TRACEJAVA='*=all=enabled' (Turn on all java tracing)
		F <WasProc>,TRACEJAVA='*=all=disabled' (Turn off all java tracing)

For more specific traces (e.g., JDBC, ORB, transaction handling), simply use the Administrative Console to set the traces and note the resulting Java trace specification. That specification can then be set dynamically with the TRACEJAVA='<javatracespec>' command.

23.5.4 Java Stack Traces

The JVM in the Servant operates with a configurable number of "worker threads," along with some number of threads used by the JVM and the container for housekeeping. When applications run slowly, it is often helpful to get a snapshot of the current call stack on each of the worker threads. You can extract call-stack information from a console dump; IBM support has several tools for this purpose, such as TBack and SvcDump, but it can be done manually as well. Listing 23.3 provides a set of instructions for viewing thread call-stack information from the Servant region with ASIDx 002C.

Listing 23.3 Instructions for taking a console dump and getting thread stack traces.

/dump comm=(WebSphereStackTrace) (dump console command)
  (this is a WTOR where xx is the message#, may need to be broken up)
After the dump completes, bring it into IPCS

This will result in a display of the call stack of every thread; all worker-thread stacks contain the entry SR_ExecutionRoutine. Look for patterns at the top of the worker-thread stacks. At the time of this writing, there are still some issues with creating a Java dump (kill -3 <pid>) with WebSphere and the current JVM on z/OS. When this is resolved, it should become possible to use Thread Analyzer (a freely available IBM tool for analyzing Java dumps from WebSphere J2EE servers, as discussed in Appendix B, WebSphere Tooling Reference) against z/OS Java dumps, and the dumps will be simpler and quicker to take.


When looking at the worker threads, focus on frequently occurring call stacks, and look down from the top to find the first recognizable method. This may be difficult if the server has not been up long, as non-JITted method names do not show up in a dump.
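On later JVMs (J2SE 5.0 and up), a comparable in-process snapshot can be taken directly from Java via Thread.getAllStackTraces(). This API did not exist in the JDK 1.3.1 era described in this chapter, but it illustrates the same diagnostic idea:

```java
import java.util.Map;

// Sketch: an in-process snapshot of every thread's call stack.
// Requires J2SE 5.0 or later (not available on JDK 1.3.1).
public class StackSnapshot {
    public static void main(String[] args) {
        Map<Thread, StackTraceElement[]> stacks = Thread.getAllStackTraces();
        for (Map.Entry<Thread, StackTraceElement[]> e : stacks.entrySet()) {
            System.out.println(e.getKey().getName());
            // Look down from the top frame for the first recognizable method.
            for (StackTraceElement frame : e.getValue()) {
                System.out.println("    at " + frame);
            }
        }
    }
}
```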
