
Performance Analysis for Java™ Websites


  • Your Price: $43.99
  • List Price: $54.99
  • Usually ships in 24 hours.


  • Copyright 2003
  • Dimensions: 7-3/8 x 9-1/4
  • Pages: 464
  • Edition: 1st
  • Book
  • ISBN-10: 0-201-84454-0
  • ISBN-13: 978-0-201-84454-2

How well a Web site performs while receiving heavy user traffic is an essential factor in an organization's overall success. How can you be sure your site will hold up under pressure?

Performance Analysis for Java™ Web Sites is an information-packed guide to maximizing the performance of Java-based Web sites. It approaches these sites as systems and considers how the various components involved, such as networks, Java™ Virtual Machines, and back-end systems, can affect overall performance. The book provides detailed best practices for designing and developing high-performance Java Web applications, along with instructions for building and executing performance tests that gauge your site's ability to handle customer traffic. It also explains how to use the results of performance testing to generate accurate capacity plans.

Readers will find easy-to-understand explanations of fundamental performance principles and terminology. The book walks through performance profiles for common types of Web sites, including e-commerce, B2B, financial, and information exchange, and numerous case studies illustrate important ideas and techniques. Practical throughout, the book also offers guidance on selecting the right test tools and on troubleshooting common bottlenecks frequently revealed by testing.

Other specific topics include:

  • Performance best practices for servlets, JavaServer Pages™, and Enterprise JavaBeans™
  • The impact of servlets, threads, and queuing on performance
  • The frozen Web site danger
  • Java™ Virtual Machine garbage collection and multithreading issues
  • The performance impact of routers, firewalls, proxy servers, and NICs
  • Test scenario and script building
  • Test execution and monitoring, including potential pitfalls
  • Tuning the Web site environment
  • Component monitoring (servers, Java™ Virtual Machines, and networks)
  • Symptoms and solutions of common bottleneck issues
  • Analysis and review of performance test results

Performance Analysis for Java™ Web Sites not only provides clear explanations and expert practical guidance but also serves as a reference, with extensive appendixes that include worksheets for capacity planning, checklists to help you prepare for different stages of performance testing, and a list of performance-test tool vendors.


    Sample Content

    Online Sample Chapters

    Basic Java Performance Lingo

    Java Test Environment Construction and Tuning


    Table of Contents




    1. Basic Performance Lingo.

    Measurement Terminology.

    Load: Customers Using Your Web Site.

    Throughput: Customers Served over Time.

    Response Time: Time to Serve the Customer.

    Optimization Terminology.

    Path Length: The Steps to Service a Request.

    Bottleneck: Resource Contention under Load.

    Scaling: Adding Resources to Improve Performance.


    2. Java Application Server Performance.

    Web Content Types.

    Web Application Basics.

    The Model-View-Controller Design Pattern.


    JavaServer Pages (JSPs).

    Assorted Application Server Tuning Tips.

    Beyond the Basics.

    HTTP Sessions.

    Enterprise JavaBeans (EJBs).

    Database Connection Pool Management.

    Web Services.

    Other Features.

    Built-in HTTP Servers.

    General Object Pool Management.

    Multiple Instances: Clones.


    3. The Performance Roles of Key Web Site Components.

    Network Components.



    Proxy Servers.

    Network Interface Cards (NICs).

    Load Balancers.

    Affinity Routing.

    HTTP Servers.

    Threads or Processes (Listeners).




    Operating System Settings.



    Application Servers.


    Databases and Other Back-End Resources.


    Web Site Topologies.

    Vertical Scaling.

    Horizontal Scaling.

    Choosing between a Few Big Machines or Many Smaller Machines.

    Best Practices.


    4. Java Specifics.

    The Java Virtual Machine.

    Heap Management.

    Garbage Collection.

    Java Coding Techniques.

    Minimizing Object Creation.

    Multi-Threading Issues.


    5. Performance Profiles of Common Web Sites.

    Financial Sites.

    Caching Potential.

    Special Considerations.

    Performance Testing Considerations.

    B2B (Business-to-Business) Sites.

    Caching Potential.

    Special Considerations.

    Performance Testing a B2B Site.

    e-Commerce Sites.

    Caching Potential.

    Special Considerations.

    Performance Testing an e-Commerce Site.

    Portal Sites.

    Caching Potential.

    Special Considerations: Traffic Patterns.

    Performance Testing a Portal Site.

    Information Sites.

    Caching Potential.

    Special Considerations: Traffic Patterns.

    Performance Testing an Information Site.

    Pervasive Client Device Support.

    Caching Potential.

    Special Considerations.

    Performance Testing Sites That Support Pervasive Devices.

    Web Services.


    6. Developing a Performance Test Plan.

    Test Goals.

    Peak Load.

    Throughput Estimates.

    Response Time Measurements.

    Defining the Test Scope.

    Building the Test.

    Scalability Testing.

    Building the Performance Team.


    7. Test Scripts.

    Getting Started.

    Pet Store Overview.

    Determining User Behavior.

    A Typical Test Script.

    Test Scripts Basics.

    Model the Real Users.

    Develop Multiple, Short Scripts.

    Write Atomic Scripts.

    Develop Primitive Scripts.

    Making Test Scripts Dynamic.

    Support Dynamic Decisions.

    Dynamically Created Web Pages.

    Dynamic Data Entry.

    Provide Sufficient Data.

    Building Test Scenarios.

    Putting Scripts Together.

    Use Weighted Testing.

    Exercise the Whole Web Site.

    Common Pitfalls.


    Hard-Coded Cookies.

    Unsuitable Think Times.

    No Parameterization.

    Idealized Users.

    Oversimplified Scripts.

    Myopic Scripts.


    8. Selecting the Right Test Tools.

    Production Simulation Requirements.



    Automation and Centralized Control.

    Pricing and Licensing.

    Tool Requirements for Reproducible Results.


    Verification of Results.

    Real-Time Server Machine Test Monitoring.

    Buy versus Build.


    9. Test Environment Construction and Tuning.

    The Network.

    Network Isolation.

    Network Capacity.

    e-Commerce Network Capacity Planning Example.

    Network Components.

    Network Protocol Analyzers and Network Monitoring.

    The Servers.

    Application Server Machines.

    Database Servers.

    Legacy Servers.

    The Load Generators.

    Master/Slave Configurations.

    After the Performance Test.

    Hardware and Test Planning.


    10. Case Study: Preparing to Test.

    Case Study Assumptions.

    Fictional Customer: TriMont Mountain Outfitters.

    An Introduction to the TriMont Web Site.

    Site Requirements.

    Initial Assessment.

    Next Steps.

    Detailed TriMont Web Site Planning Estimates.

    Calculating Throughput (Page Rate and Request Rate).

    Network Analysis.

    HTTP Session Pressure.

    Test Scenarios.

    Moving Ahead.


    11. Executing a Successful Test.

    Testing Overview.

    Test Analysis and Tuning Process.

    Test and Measure.




    Test Phases.

    Phase 1: Simple, Single-User Paths.

    Phase 2: User Ramp-Up.

    Test Environment Configurations.

    Start Simple.

    Add Complexity.


    12. Collecting Useful Data.

    CPU Utilization.

    Monitoring CPU on UNIX Systems.

    Monitoring CPU on Windows Systems.

    Monitoring CPU with a Test Tool.

    Java Monitoring.

    Verbose Garbage Collection.

    Thread Trace.

    Other Performance Monitors.

    Network Monitoring.

    Software Logs.

    Java Application Server Monitors.


    13. Common Bottleneck Symptoms.


    Insufficient Network Capacity.

    Application Serialization.

    Insufficient Resources.

    Insufficient Test Client Resource.

    Scalability Problem.

    Bursty Utilization.

    Application Synchronization.

    Client Synchronization.

    Back-End Systems.

    Garbage Collection.

    Timeout Issues.

    Network Issues.

    High CPU Utilization.

    High User CPU.

    High System CPU.

    High Wait CPU.

    Uneven Cluster Loading.

    Network Issues.

    Routing Issues.


    14. Case Study: During the Test.


    Test Environment.

    Hardware Configuration.

    Input Data.

    Calculating Hardware Requirement Estimate (Pre-Test).

    HTTP Session Pressure.

    Testing Underway.



    Next Steps.


    15. Capacity Planning and Site Growth.

    Review Plan Requirements.

    Review Load, Throughput, and Response Time Objectives.

    Incorporate Headroom.

    Review Performance Test Results.

    Single-Server User Ramp-Up.

    Scalability Data.

    Processor Utilization.

    Projecting Performance.

    Projecting Application Server Requirements.

    Projecting Hardware Capacity.

    Scaling Assumptions.

    Case Study: Capacity Planning.

    Review Plan Requirements.

    Review Performance Test Results.

    Project Capacity.

    Ongoing Capacity Planning.

    Collecting Production Data.

    Analyzing Production Data.


    Appendix A. Planning Worksheets.

    Capacity Sizing Worksheet.

    Input Data.

    Calculating Peak Load (Concurrent Users).

    Calculating Throughput (Page Rate and Request Rate).

    Network Capacity Sizing Worksheet.

    Input Data.

    Calculating Network Requirements.

    Network Sizing.

    JVM Memory HTTP Session Sizing Worksheet.

    Input Data.

    Calculating HTTP Session Memory Requirement.

    Hardware Sizing Worksheet.

    Input Data.

    Calculating Hardware Requirement Estimate (Pre-Test).

    Capacity Planning Worksheet.

    Part 1: Requirements Summary.

    Part 2: Performance Results Summary.

    Part 3: Capacity Planning Estimates.

    Appendix B. Pre-Test Checklists.

    Web Application Checklist.


    JavaServer Pages.



    Static Content.


    HTTP Session.

    Enterprise JavaBeans.

    Web Services.

    Database Connection.

    Object Pools.

    Garbage Collection.

    Component Checklist.



    Proxy Servers.

    Network Interface Cards.

    Operating System.

    HTTP Servers.

    Web Container.

    Thread Pools.

    Enterprise JavaBean Container.

    JVM Heap.

    Application Server Clones.

    Database Server.

    Legacy Systems.

    Test Team Checklist.

    Test Team.

    Support Team.

    Web Application Developers.

    Leadership and Management Team.

    Test Environment Checklist.

    Controlled Environment.



    Prerequisite Software.

    Application Code.


    Test Simulation and Tooling Checklist.

    Performance Test Tool Resources.

    Test Scripts and Scenarios.


    Appendix C. Test Tools.

    Performance Analysis and Test Tool Sources.

    Java Profilers.

    Performance Test Tools.

    Java Application Performance Monitoring.

    Database Performance Analysis.

    Network Protocol Analyzers.

    Product Capabilities.

    Production Monitoring Solutions.

    Load Driver Checklist.

    Sample LoadRunner Script.

    LoadRunner Initialization Section.

    LoadRunner Action1 Section.

    LoadRunner End Section.

    Sample SilkPerformer Script.

    Sign-in, Browse, and Purchase Script.

    Search Script.

    New Account Script.

    Appendix D. Performance Test Checklists and Worksheets.

    Performance Test Results Worksheet.

    Results Verification Checklist.

    Tuning Settings Worksheet.


    Operating System.

    HTTP Server.

    Application Server.


    Application Parameters.

    Database Server.

    Bottleneck Removal Checklist.


    Bursty Utilization.

    High CPU Utilization.

    Uneven Cluster Loading.

    Summary Test Results Graph.


    Index.


    Does your website have enough capacity to handle its busiest days? Will you lose potential customers because your web application is too slow? Are you concerned about your e-business becoming the next cautionary tale highlighted on the evening news?

    The authors of this book combine their experiences with hundreds of public and private websites worldwide to help you conduct an effective performance analysis of your website. Learn from the experts how to design performance tests tailored to your website's content and customer usage patterns.

    In addition to designing better tests, the book provides helpful advice for monitoring tests and analyzing the data collected. Are you adding load, but not seeing increased throughput? Do some machines in your environment work much harder than the others? Use the common symptom reference to isolate bottlenecks and improve performance.

    Since many sites use a Java application server to power their web applications, the authors discuss the special considerations (garbage collection, threading, and heap management, to name a few) unique to the Java environment. The book also covers the special performance requirements of sites supporting handheld devices, as well as sites using Enterprise JavaBeans (EJBs).

    Designed to benefit those with a little or a lot of performance testing background, this book helps you get the most from your performance analysis investment. Learn how to determine the best your site will do under the worst of conditions.



    Foreword

    About a year ago I was sent out to a large Fortune 500 WebSphere customer to solve a critical "WebSphere performance" problem. The customer was close to putting a WebSphere application into production, and believed they had discovered, with less than a week to go before deployment, that WebSphere "did not perform well."

    No one seemed to have many details about the problem, but we were assured by the highest levels of management at both the customer's company and IBM that this was indeed a critical situation. So I dropped everything and headed out the next morning on a 6:30am flight. At the company I met with the customer representative, who showed me an impressive graph (the output of a popular load-testing tool) that demonstrated that their application reached a performance plateau at five simultaneous users, and that response times increased dramatically as more load was placed on the system.

    I asked if they could run the test while I watched so that I could see the numbers myself. I was told no; the hardware they were using for performance testing was also being used for user-acceptance testing and would not be available until after 4pm that day. So I asked if I could see the test scripts themselves, to see how they were testing the application. Again the answer was no. The fellow who wrote the scripts wouldn't return until 5pm, and no one else knew where he kept them.

    Not wanting to seem like I was wasting time, I next asked for the source code for the application. They were able to provide it, and I spent the next eight hours reading through it and making notes about possible bottlenecks. When the script author returned at 5pm, we reconfigured the test machine and ran the script. Sure enough, the performance curve looked just like the one the test had produced the previous night. I asked him to walk me through the code of the test script. He showed me what each test did and how the results were captured. I then asked him about one particular line of code in the middle of the script: "So, here you seem to be hard-coding a particular user ID and password into the test. You never vary it, regardless of the number of simultaneous users the load-testing tool simulates?"

    He said that this was true and asked if that could be a problem. I explained to him that their test setup used a third-party security library, and that one of the "features" of this library was that it prevented users with the same user ID and password from logging in twice. In fact, it "held" requests for the second login until the first user using that login had logged out. I had picked up on this fact by reading the code that morning. I then asked if he could rewrite the script to use more than one login ID. In fact, if they wanted to test up to a hundred simultaneous logins, could he rewrite the script so that it used a hundred different login IDs? He ended up doing just that, and the next night, after a few more such adventures, we reran the modified test.
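
    (A minimal Java sketch of the fix described above: give each simulated user its own credentials instead of one shared, hard-coded login. The URL, form parameter names, and the user001/password1 credential scheme are assumptions made for illustration; they are not taken from the customer's actual script or load-testing tool.)

    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class ParameterizedLoginTest {

        public static void main(String[] args) throws Exception {
            int simulatedUsers = 100;
            Thread[] workers = new Thread[simulatedUsers];
            for (int i = 0; i < simulatedUsers; i++) {
                // One distinct login per virtual user: user001 ... user100.
                final String userId = String.format("user%03d", i + 1);
                final String password = "password" + (i + 1);  // test-only credentials
                workers[i] = new Thread(() -> login(userId, password));
                workers[i].start();
            }
            for (Thread worker : workers) {
                worker.join();
            }
        }

        // Posts one login request and reports the HTTP status it receives.
        private static void login(String userId, String password) {
            try {
                URL url = new URL("http://test-server.example.com/store/login"); // assumed endpoint
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                byte[] body = ("userid=" + userId + "&password=" + password)
                        .getBytes(StandardCharsets.UTF_8);
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(body);
                }
                System.out.println(userId + " -> HTTP " + conn.getResponseCode());
                conn.disconnect();
            } catch (IOException e) {
                System.err.println(userId + " failed: " + e.getMessage());
            }
        }
    }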

    This time WebSphere performed like a champ. There was no performance bottleneck, and the performance curve that we now saw looked more like what I had expected in the first place. There were still some minor delays, but the response times were much more in line with other, untuned customer applications I had seen.

    So what was wrong here? Why did this company have to spend an enormous amount of money on an expensive IBM consultant just to point out that their tests weren't measuring what they thought they measured? And why were we working under such stressful, difficult circumstances, at the last possible moment, with a vendor relationship on the line?

    What it came down to was a matter of process. Our customer did not have a proper process in place for performance testing. They did not know how to go about discovering performance problems so that they could be eliminated. The value that this company placed on performance testing was demonstrated by the fact that the performance tests were scheduled for after hours, and were done on borrowed hardware. Also, the fact that this problem was not discovered until less than a week before the planned deployment date of the application showed the priority that performance testing had among other development activities; it was an "afterthought," not a critical, ongoing part of development.

    I have repeatedly seen large, expensive systems fail, and thousands or millions of dollars lost, because of this attitude. As a wise man once said, "failing to plan is planning to fail." The book you hold in your hand can help you avoid such failures. It offers concise, easy-to-follow explanations of the different kinds of performance problems that large-scale web applications face. More important, it provides you with a process and methodology for testing your systems in order to detect and fix such problems before they become project-killers.

    The authors of this book are all respected IBM consultants and developers, with years of collective experience in helping solve customer problems. They've dealt with the foibles of application servers, customer application code, network configuration issues, and a myriad of other performance-stealing problems. They convey their experiences and recommendations in a laid-back, easy-to-understand way that doesn't require you to have a Ph.D. in stochastic modeling. I believe their greatest contribution to the world is a process for injecting performance testing into all stages of the development process, making it, appropriately, a key part of web site development.

    If you are building a large web site using J2EE technologies, or even just a small departmental application, buy this book. Performance problems can creep into applications of all sizes, and the time you will save by following the advice given here will easily repay the purchase price of this book many times over. I've come to rely on the authors for advice in this area, and I'm sure you will too.

    -Kyle Brown
    Senior Technical Staff Member
    IBM Software Services for WebSphere

