9.2 Real-World Low-Level Technology Stack Test Input and Mixes

Although we have looked at a number of input data and test mixes at the highest application levels, the insight made possible only by real-world low-level examples should lay to rest any lingering questions about the value of your input data in executing a stress test. The next few sections detail specific stress tests from an input data perspective, identifying goals and showing how specific stress-test tools help achieve those goals along the way.

9.2.1 Input Required for Testing Server Hardware

Many testing tools designed to validate the performance of your server platforms require little if any formal input. CheckIt requires nothing other than installation on the local machine, for example. But if your goal is to compare one Windows-based SAP server platform to another in terms of CPU performance, CheckIt allows you to "save" the testing results yielded from one platform's test as comparison or baseline data against which other platforms may be measured and compared. Thus, the output of one test acts as input, in a manner of speaking, for subsequent tests.
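The save-and-compare workflow described above can be sketched in a few lines: persist one platform's results as a baseline, then express a second platform's results as ratios against it. The metric names and file layout here are hypothetical illustrations, not CheckIt's actual format.

```python
# Sketch of a baseline-versus-candidate comparison in the spirit of
# CheckIt's saved-results workflow. Metric names and the JSON layout
# are hypothetical, not CheckIt output.
import json
import os
import tempfile

def save_baseline(path, results):
    """Persist one platform's test results as baseline data."""
    with open(path, "w") as f:
        json.dump(results, f)

def compare(path, candidate):
    """Return each metric as a ratio of candidate to baseline."""
    with open(path) as f:
        baseline = json.load(f)
    return {k: candidate[k] / baseline[k] for k in baseline}

path = os.path.join(tempfile.gettempdir(), "baseline.json")
save_baseline(path, {"cpu_score": 1000, "mem_mb_s": 800})
ratios = compare(path, {"cpu_score": 1200, "mem_mb_s": 880})
print(ratios)  # candidate platform relative to the baseline platform
```

The point is simply that one run's output becomes the next run's input, exactly as the text describes.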

Other tools offer more granular control, however, and not surprisingly support additional input metrics. If your goal is to measure how well your memory subsystem handles operations of different sizes, or mixes of random and sequential accesses, Nbench is an easy solution. It allows you to customize some of your input, specifically the following:

  • The size of various operations. Thus, you can test operations that reflect what your SAP server does (or will do) in production, rather than rely on arbitrary hard-coded values assumed by other tools. You might test your DB server for large values, for example, while you focus on smaller values for applications and Web servers.

  • Execution thread count. Again, this gives you flexibility to emulate the expected level of multiprocessing that occurs in (or is expected of) your unique SAP environment, specifically regarding the disk subsystem.

  • Values associated with integer, floating point, and memory operations.

Other input values unlikely to require customization include the access width of the memory bus (tested at a number of different levels), timing/control information, and the performance of both random and sequential operations. Other mainstay general hardware test tools, like IOzone, even go so far as to set processor cache size and line size to particular default values, though both of these settings and much more may be changed through command-line or switch settings.
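The two inputs emphasized above, operation size and execution thread count, can be illustrated with a toy memory exerciser. This is an illustrative stand-in, not Nbench itself, and the sizes chosen are arbitrary examples of the DB-server-versus-app-server distinction in the first bullet.

```python
# Toy memory exerciser illustrating the two customizable inputs
# discussed above: operation size and execution thread count.
# A stand-in for illustration only, not Nbench.
import threading
import time

def memory_ops(op_size_bytes, iterations):
    """Repeatedly copy a buffer of the configured operation size."""
    src = bytearray(op_size_bytes)
    for _ in range(iterations):
        _ = bytes(src)  # one 'operation' of op_size_bytes

def run_test(op_size_bytes, threads, iterations=100):
    start = time.perf_counter()
    workers = [threading.Thread(target=memory_ops,
                                args=(op_size_bytes, iterations))
               for _ in range(threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    elapsed = time.perf_counter() - start
    mb_moved = op_size_bytes * iterations * threads / 1e6
    return mb_moved / elapsed  # rough MB/s throughput

# Large operations for a DB-server profile, small ones for an app server:
db_profile = run_test(op_size_bytes=1_000_000, threads=2)
app_profile = run_test(op_size_bytes=4_096, threads=8)
print(f"DB-style: {db_profile:.0f} MB/s, app-style: {app_profile:.0f} MB/s")
```

Varying these two knobs while holding everything else constant is what lets you emulate your own SAP workload rather than a tool's hard-coded defaults.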

9.2.2 Disk Subsystem Test Mixes

Disk subsystem test mixes, like those associated with SQLIO, Iometer, IOzone, and others, have quite a few things in common. First, the length of time a test runs is controllable through switch settings or manually through the standard "control-c abort" sequence. Second, the size and location of one or more data files (representing one or more "database" files) is configurable as well, as is the mix of reads to writes and the ratio of sequential operations to random/direct operations. These tools also allow you to control the number of processes or threads utilized by the OS to execute the test, in effect allowing you to control the disk queue lengths that must eventually be processed by the OS and its underlying disk controllers and drives. Finally, these and many other switch settings can be saved in a single "input" configuration file, useful for ensuring consistency between iterative tests executed against different hardware or software configurations.
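The core mix parameters, the read/write ratio and the sequential-versus-random ratio, can be sketched as a small generator that builds a repeatable operation list from the configured percentages. This illustrates what such tools do internally; it is not any one tool's actual input format.

```python
# Build a repeatable operation mix from the two ratios discussed
# above: reads-to-writes and sequential-to-random. A sketch of what
# disk test tools do internally, not any one tool's file format.
import random

def build_mix(total_ops, read_pct, seq_pct, seed=42):
    rng = random.Random(seed)  # fixed seed => repeatable input data
    ops = []
    for _ in range(total_ops):
        kind = "read" if rng.random() < read_pct else "write"
        access = "sequential" if rng.random() < seq_pct else "random"
        ops.append((kind, access))
    return ops

# A 70% read, 30% sequential workload, repeatable across test runs:
mix = build_mix(total_ops=10_000, read_pct=0.7, seq_pct=0.3)
reads = sum(1 for kind, _ in mix if kind == "read")
print(f"{reads / len(mix):.1%} reads")  # close to the configured 70%
```

Fixing the random seed is what makes the mix a consistent "input" across iterative tests on different hardware or software configurations.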

Of course, if we step back and analyze the need for input at all levels, the fact that a particular disk stress-test utility may only support a specific OS version, patch level, or similar operating environment factor reflects core input data as well. And if the output is only provided in a particular format, the installation of special readers or a specific version of Microsoft Word or Excel may be warranted, too. Along the same lines, if a particular subset of output data is also desired—like the CPU and system utilization performance metrics that can be captured and shared via SQLIO—the appropriate switch needs to be manually set (in this case, the "-Up" switch).
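Assembling the full switch set from a saved configuration, so that every iteration runs with identical input, can be sketched as follows. The -Up switch is the one named in the text; the remaining switches follow commonly documented SQLIO usage, so verify them against your tool version before relying on them.

```python
# Assemble a SQLIO invocation from a saved configuration so that
# every test iteration uses identical input. The -Up switch (CPU and
# system utilization metrics) is from the text; the other switches
# follow commonly documented SQLIO usage -- verify against your
# version of the tool.
import shlex

config = {
    "kind": "R",         # -kR: read test (vs. -kW for writes)
    "seconds": 30,       # -s: test duration
    "threads": 2,        # -t: worker threads
    "outstanding": 8,    # -o: outstanding I/Os (drives queue depth)
    "block_kb": 64,      # -b: I/O size in KB
    "pattern": "random", # -f: random vs. sequential access
}

def sqlio_command(cfg, datafile="testfile.dat"):
    return ["sqlio", f"-k{cfg['kind']}", f"-s{cfg['seconds']}",
            f"-t{cfg['threads']}", f"-o{cfg['outstanding']}",
            f"-b{cfg['block_kb']}", f"-f{cfg['pattern']}",
            "-Up",  # capture CPU and system utilization, per the text
            datafile]

cmd = sqlio_command(config)
print(shlex.join(cmd))  # the command line a test harness would run
```

Serializing `config` to disk gives you exactly the single "input" configuration file the text describes, reusable across hardware comparisons.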

9.2.3 OS Testing and Tuning

Performance testing the configuration options available to a particular OS often boils down to making specific utility– or command-line–driven changes and executing before-and-after test scenarios that quantify any change in performance. For example, I've done extensive testing in the past on pagefile sizing for SAP R/3 Windows-based database and application servers, ranging from release 3.0F through 6.20. SAP's recommended guidelines changed quite a bit in the days just before Basis release 4.0 was made available, and have continued to change somewhat over the last two years as well. My goal was to determine which configurations made the most sense from both financial and performance perspectives. To this end, I leveraged different hardware configurations (e.g., disk controllers, RAID 1 versus RAID 5, multiple disk spindles versus a pair of drives) as well as different pagefile sizes, distribution models, and so forth. At the lowest levels, I used the Compaq System Stress-Test tool (once known as the Thrasher Test Utility) to force paging operations and thereby establish a baseline relating I/Os per second for a particular memory range to the level of Windows paging that resulted. In a similar way, I also tested the impact that the Windows "maximize throughput for network applications" setting had on the memory subsystem and the OS in general. In both cases, the tools I used for application-layer testing were nothing more complicated than custom-developed AT1 scripts. To create and drive a repeatable and consistent application-layer load, I simulated 100 users with minimal think times executing a suite of simple though typical R/3 transactions (e.g., MM03, VA03, FD32, and others that required only a single SAPGUI input screen). I could then compare the low-level thrasher results with the high-level SAP application-layer results, and extrapolate how different workloads would impact memory management in general and paging in particular.
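A load driver of the kind just described, N concurrent users looping over a fixed transaction suite with minimal think times, can be sketched as follows. The transaction execution itself is stubbed out here; the tests in the text drove real SAPGUI sessions via AT1 scripts.

```python
# Sketch of a repeatable application-layer load driver: simulated
# users looping over a fixed transaction suite with minimal think
# time. Transaction execution is a stub; the text's actual tests
# drove SAPGUI via custom AT1 scripts.
import collections
import threading
import time

TRANSACTIONS = ["MM03", "VA03", "FD32"]  # simple single-screen suite
counts = collections.Counter()
lock = threading.Lock()

def user(loops, think_time_s=0.001):
    for i in range(loops):
        tcode = TRANSACTIONS[i % len(TRANSACTIONS)]
        # ... execute the transaction here (stubbed out) ...
        with lock:
            counts[tcode] += 1
        time.sleep(think_time_s)  # minimal think time between steps

users = [threading.Thread(target=user, args=(6,)) for _ in range(100)]
for u in users:
    u.start()
for u in users:
    u.join()
print(dict(counts))  # 100 users x 6 loops = 600 transactions total
```

Because every simulated user runs the same suite in the same order, the load is consistent from run to run, which is what makes the before-and-after comparison meaningful.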

9.2.4 Test Mixes and Database Tuning

At the simplest levels, a database stress test begins with read-only queries that preferably execute against a copy of your actual production database (though a copy of a development or test client, or even a small sample database, may suffice, depending on your goals). Unless you want to bring in the entire application layer of your SAP system, you should consider a number of testing alternatives. For instance, tools like Microsoft's SQL Profiler allow SQL 7 and SQL 2000 database transactions to be "captured" at a low level and then replayed against a point-in-time database snapshot at a later date, all without the need for SAP application servers, an SAP Central Instance, or anything else SAP-specific. This type of approach is perfect for stress tests where an organization assumes (rightly so, most of the time) that potential application-layer performance issues can generally be solved either by adding more application servers or by beefing up the existing servers. That is, performance at the application layer is moot and can therefore be pushed out of scope simply because SAP supports a robust horizontal scalability model in this regard.

So the test mix for a pure DB-based stress-test scenario simply involves the use of record and playback tools—database-specific tools capable of capturing the SQL statements executed during a certain timeframe by a representative group of end users. You might choose to capture the busiest day of the season, for example, or the 4 hours during which the heaviest batch job load is processed. The queries, table scans, joins, and so on captured during this period become your repeatable set of input data, to be played back against the DB server—you need not concern yourself with your SAP instances.

These types of tools are necessarily database-specific, of course. That is, Informix administration tools simply cannot support Microsoft SQL Server, nor do SQL Server–based tools support Oracle. But the value of these record/playback tools is unquestionable—they're generally easy to learn and easy to use. And because they tend to support playing back the captured transactions in the exact timeframe in which they were recorded, or compressing or stretching that timeframe as you see fit, they are enormously useful for capacity planning and what-if analysis.
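Recording timestamped statements and replaying them with a time-scale factor, the compress-or-stretch capability noted above, can be sketched generically like this. It is an illustration of the concept only, not SQL Profiler's actual trace format or replay mechanism.

```python
# Generic capture/replay sketch illustrating time-scaled playback:
# speed > 1 compresses the recorded timeframe (e.g., 2x to model a
# doubled user load), speed < 1 stretches it. Not SQL Profiler's
# actual trace format or API.
import time

# (offset_seconds, sql_text) pairs as captured during a busy window;
# the statements here are placeholders
capture = [(0.00, "SELECT ..."), (0.05, "UPDATE ..."), (0.15, "SELECT ...")]

def replay(trace, speed=1.0, execute=print):
    start = time.perf_counter()
    for offset, stmt in trace:
        due = offset / speed  # scale the recorded time offset
        delay = due - (time.perf_counter() - start)
        if delay > 0:
            time.sleep(delay)
        execute(stmt)  # fire the statement at the DB server

issued = []
replay(capture, speed=2.0, execute=issued.append)  # 2x compression
print(len(issued), "statements replayed in roughly half the recorded time")
```

The same statements in the same order, only faster or slower: that is what makes the technique so useful for what-if capacity questions like the doubled-user scenario described next.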

Case in point: as I mentioned before, I used SQL Profiler to record the real-world transactions of one of my large R/3 customers, which was preparing for an acquisition that would double the number of its online and concurrent users. The company was comfortable with the scalability of the application server tier of its current solution, but needed to better understand the impact on the DB server. To be sure that this SQL Profiler approach to testing would be suitable, I first went through a sizing exercise, analyzing the current mix of users (via transactions ST07 and AL08) in terms of the total number of users as well as the functional areas in which they were heaviest. I then analyzed similar data provided by the organization to be merged into the first, and found that the mix was close enough to warrant no further analysis—within plus or minus 10%, each organization supported about the same number of SD, MM, PM, FI, and CO users. I used this real customer data to tweak my SAP sizing tools, to better reflect the weight of their as-configured functional areas (rather than relying on default weights provided by the tool), using data gleaned from Basis transactions ST02, ST03, ST04, and ST06. This allowed me to size a new DB server, which I then procured from our seed unit pool (a pool of gear used for the express purpose of customer demonstrations and proof-of-concept engagements). After loading the OS and database, configuring the server, and working through a restore of the customer's 400GB SAP R/3 database, I was finally in a position to do some testing.

It was at this point that the value of SQL Profiler really sank in—because the number of users would double in the new environment but the mix of users would stay pretty much the same, I simply sped up the playback of my previously recorded transactions 2× and then sat back and observed how well my newly architected and deployed demonstration DB server handled the load. In this way, I was not only able to nicely simulate the customer's actual expected load, but I was able to validate our user-based sizing as well. In the end, after making some calculated processor and RAM upgrades to the currently deployed application servers (to account for the additional logged-in users), and installing a new DB server identical to the one I tested back in the lab, the customer's acquisition went off without a hitch. This database-centric approach to testing and tuning should appeal to any SAP test team focused on saving time and money when the other tiers of a technology stack can be ruled out and it's determined that full-blown end-to-end stress testing is simply not required.

Beyond these basic database-level test mixes, where only the raw SQL code is executed on a DB server, lies the ability to execute application-driven SAP transactions. Such transactions must be commenced on an SAP front-end and executed by an SAP Application Server, of course. But this incremental complexity gives you the advantage of testing the impact that your test mix places on your end-to-end solution. And, by varying this test mix, which might range from light-impact financial and material-based transactions to multicomponent and truly solution-intensive monster batch transactions, you can exercise various solution stack layers with near pinpoint accuracy. Detailed test mixes and the challenges a team faces in identifying and using them are covered in detail near the end of this chapter.
