Performance and Load-Testing of Axis with Various Web Services Styles
- Performance Summary for RPC-Style Web Service Exchange
- Performance Summary for Document-literal Web Service with Low Payload
- Performance Summary for Document-literal Web Service with Medium Payload
- Performance Summary for Document-Literal Web Service with High Payload
- Implementation Issues with Axis Document-Literal Messaging
Before continuing with our discussion of web services architecture and frameworks in this series, it's important to evaluate the performance and load-handling characteristics of web services. A service-oriented architecture introduces dependencies among software modules, so each module's performance and ability to handle transactions under load affects the whole system. This article presents the results of our tests of RPC-style, document-literal, and attachment-style web services using the Apache Axis web services engine.
We ran a complete load-test exercise to measure the Axis web services engine's performance under heavy payloads and simultaneous transactions for RPC-style services, document-literal services, and services with attachments. Here are the overall results:
RPC-style web services performed very poorly, with many failed transactions and response times of several seconds for even a moderate-sized (50KB) payload, even with very few simultaneous users.
Document-literal web services and web services with attachments performed much better and could actually handle a production-level load, with an impressively small roundtrip overhead (500 milliseconds) at the web services infrastructure level.
Our results indicated no transaction failures with document-literal web services, even with a high payload (200KB per transaction) and many concurrent users (500 users).
Clearly, RPC-style web services are not fit for production use, even with moderate-sized payload exchanges. Document-literal and attachment-style web services are the clear winners, given their performance and reliability.
Dual-processor Linux machines running Tomcat servers were used to test the web services infrastructure overhead. The measured time is the total roundtrip time from the moment the client makes the request through the web services client stub until it receives a response. (The web services business module was stubbed out to avoid including business-module processing time.) The client application and the web service were hosted on the same machine to reduce network delay.
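The timing approach described above can be sketched as a small harness. This is a hypothetical illustration, not the authors' actual test code: `EchoStub` stands in for a generated Axis client stub (the real class name depends on the WSDL), and its body echoes immediately, just as the stubbed-out business module would.

```java
// Hypothetical sketch: timing the full roundtrip of a web service call.
public class RoundtripTimer {
    // Stand-in for an Axis client stub whose business module is stubbed out;
    // in a real run this call would serialize SOAP and hit the server, so
    // the measured time would be pure infrastructure overhead.
    static class EchoStub {
        String echo(String payload) {
            return payload;
        }
    }

    // Measure one roundtrip in milliseconds.
    static long timeCall(EchoStub stub, String payload) {
        long start = System.nanoTime();
        stub.echo(payload);
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        EchoStub stub = new EchoStub();
        // 2KB string approximates the "low payload" case.
        long ms = timeCall(stub, "x".repeat(2 * 1024));
        System.out.println("roundtrip ms: " + ms);
    }
}
```

Running client and stub in the same JVM, as here, mirrors the same-machine setup used to keep network delay out of the measurement.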
We varied two parameters to determine the performance impact and stability of the system:
Payload size. This measurement is the total number of bytes exchanged in each transaction. We categorized payloads into three sizes:
Low payload (2KB per transaction)
Medium payload (50KB per transaction)
High payload (200KB per transaction)
Number of simultaneous users accessing the system. This measurement provides insight into the scalability of the system.
Exit Strategy To Stop a Test
The tests were set up to keep the payload constant for all users. We ramped up the number of simultaneous users until we started seeing transaction errors or CPU usage hit 25-30%.
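The ramp-up loop above can be sketched as follows. This is a hypothetical illustration: the exit condition (transaction errors or the CPU threshold) is modeled as a pluggable predicate, since reading CPU usage is platform-specific and not part of the described setup.

```java
import java.util.function.IntPredicate;

// Hypothetical sketch of the ramp-up exit strategy: increase the user
// count step by step and stop once the exit condition fires.
public class RampUp {
    // Returns the largest user count that passed without triggering
    // the exit condition (errors seen or CPU threshold exceeded).
    static int rampUntil(IntPredicate exitConditionAt, int step, int max) {
        int users = 0;
        while (users + step <= max) {
            if (exitConditionAt.test(users + step)) break;
            users += step;
        }
        return users;
    }

    public static void main(String[] args) {
        // Pretend errors start appearing at 300 concurrent users.
        int safe = rampUntil(u -> u >= 300, 50, 500);
        System.out.println("max error-free users: " + safe); // prints 250
    }
}
```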
These are the graphs and metrics relevant to our discussion (we'll get to these shortly):
RPC-style web service under moderate load
Document-literal web service under light load
Document-literal web service under medium load
Document-literal web service under heavy load
Due to a setup error, the initial startup time didn't reflect the true web services overhead, so we discarded it. Ignore the initial spike on the left side of each graph.