a/d trends: New Java Server Benchmark - jBOB

There's a new benchmark in town, one we'll be hearing more about in the months ahead. This benchmark is of particular interest to the Java developer - especially one building server-side Java (tm) applications.

Performance of such server-based Java applications currently has no properly defined transaction measurements or benchmark rates. We do see benchmarks that have been institutionalized by the Java development community, but they typically qualify the relative performance of client-side workstations running Java, e.g. CaffeineMark(tm). The performance of enterprise/server Java applications has also been the object of criticism in recent times, so this benchmark provides an opportunity to set the record straight and compare Java server environments apples to apples.

This benchmark has been developed by IBM and is named "Business Object Benchmark for Java," or "jBOB." It is designed to quantify the performance of simple transactional server workloads written in Java, and it models a typical electronic order entry business scenario. To give the benchmark a high level of credibility, IBM has patterned it directly after the TPC-C workload, similar also to CPW workloads. jBOB is based on the business model used by TPC Benchmark C (tm), Standard Specification, Revision 3.3, April 8, 1997. It is a multi-threaded, multi-user, database-intensive application which stresses Java's threading, synchronization, JDBC and garbage collection support. jBOB benchmark results are audited and certified by Client/Server Labs of Atlanta, GA. The Client/Server Labs web site is: http://www.cslinc.com

In accordance with the Transaction Processing Performance Council's (TPC) fair use policy, the workload used by IBM's Business Object Benchmark for Java deviates from the TPC-C specification and is not comparable to any official TPC result. First and most importantly, jBOB is a specific executable code set, not a benchmark specification of work to be completed. It was written to emulate common coding practices, not necessarily the practices that give the best possible performance. Furthermore, the code is written to maximize platform independence and portability - thus 100 percent pure Java. There are other notable exceptions: menu response times are not measured; think time is implemented as a constant; and none of the screen I/O required by the TPC specification is done on the clients.
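To make the flavor of the workload concrete, here is a minimal sketch of the kind of code the description above implies: a plain worker thread that sleeps for a constant think time and then runs a simple order-entry transaction over JDBC. The class name, SQL statements, table layout and think-time value are my own illustrative assumptions, not jBOB source code.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

/**
 * Illustrative worker thread in the style the article describes:
 * constant think time followed by a plain JDBC transaction.
 * All names and SQL here are assumptions, not jBOB code.
 */
public class OrderEntryWorker implements Runnable {

    private static final long THINK_TIME_MS = 1000; // think time is a constant
    private final String jdbcUrl;
    private final int iterations;

    public OrderEntryWorker(String jdbcUrl, int iterations) {
        this.jdbcUrl = jdbcUrl;
        this.iterations = iterations;
    }

    @Override
    public void run() {
        try (Connection con = DriverManager.getConnection(jdbcUrl)) {
            con.setAutoCommit(false);
            for (int i = 0; i < iterations; i++) {
                Thread.sleep(THINK_TIME_MS);              // constant think time
                long start = System.currentTimeMillis();
                newOrder(con);                            // the measured transaction
                long elapsed = System.currentTimeMillis() - start;
                // A real harness would log the response time and transaction
                // type to a database file here; printing stands in for that.
                System.out.println("NEW_ORDER " + elapsed + " ms");
            }
        } catch (SQLException | InterruptedException e) {
            e.printStackTrace();
        }
    }

    /** One simplified order-entry transaction: read stock, insert an order line. */
    private void newOrder(Connection con) throws SQLException {
        try (PreparedStatement sel = con.prepareStatement(
                 "SELECT QTY FROM STOCK WHERE ITEM_ID = ?");
             PreparedStatement ins = con.prepareStatement(
                 "INSERT INTO ORDER_LINE (ITEM_ID, QTY) VALUES (?, ?)")) {
            sel.setInt(1, 1);
            try (ResultSet rs = sel.executeQuery()) {
                rs.next();
            }
            ins.setInt(1, 1);
            ins.setInt(2, 5);
            ins.executeUpdate();
            con.commit();
        } catch (SQLException e) {
            con.rollback();
            throw e;
        }
    }
}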

Hardware configurations for jBOB are at the discretion of the tester; IBM has run it on "typical" AS/400 configurations. To achieve "apples to apples" comparisons, customers should look at the number of SMP processors, the amount of memory, and the number and size of disks. As is typical for the AS/400, MHz will not be a particularly strong indicator of performance. At this point, jBOB does not appear to require configuration costs to be reported, so there is no $/transaction figure. I theorize that this is because of the difficulty in obtaining reliable and current configuration costs for the multitude of Intel-based servers people use.

jBOB results are reported as the number of jBOB transactions per second (jtps) executed during a valid measurement interval. During the measurement, each completed transaction is logged into a database file, which stores the response time and type of the transaction. After each run is complete, a Java application processes the transaction log and automatically calculates the jtps rating. A valid interval is one which meets the 90th percentile response time requirements described in the TPC-C specification. Each measurement is forty minutes in length.
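As a rough illustration of that post-run calculation, the following sketch takes a set of logged response times and the measurement interval, computes the transactions-per-second figure, and checks the 90th percentile response time against a ceiling. The nearest-rank percentile method and the five-second limit used here are assumptions for illustration, not values taken from the jBOB or TPC-C documents.

import java.util.Arrays;

/**
 * Minimal sketch of a jtps-style calculation: throughput over the
 * measurement interval plus a 90th-percentile response time check.
 * Sample data, percentile method and the 5000 ms limit are assumed.
 */
public class JtpsCalculator {

    public static void main(String[] args) {
        long[] responseTimesMs = {120, 340, 95, 410, 230, 180, 510, 275, 160, 305};
        int measurementSeconds = 40 * 60; // forty-minute measurement interval

        double jtps = (double) responseTimesMs.length / measurementSeconds;

        long[] sorted = responseTimesMs.clone();
        Arrays.sort(sorted);
        // index of the 90th percentile (simple nearest-rank method)
        int idx = (int) Math.ceil(0.9 * sorted.length) - 1;
        long p90 = sorted[idx];

        boolean valid = p90 <= 5000; // assumed response-time ceiling
        System.out.printf("jtps = %.4f, 90th percentile = %d ms, valid = %b%n",
                jtps, p90, valid);
    }
}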

In addition to finding the peak throughput rating of the system, measurements were made to compare users vs. throughput results on the platform. During these measurements, only the total number of users was changed, by increasing or decreasing the threads-per-client-JVM parameter. Points measured below the peak are always valid results, but points above the peak are invalid. The user count can be summarized as the number of threads running simultaneously.
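Assuming a worker class like the one sketched earlier, that users-vs.-throughput sweep could be driven from a single client JVM along these lines, with the thread count the only knob changed between runs. The class names, JDBC URL and iteration count are again placeholders of my own, not jBOB's.

import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of a client JVM whose only parameter is the number of worker
 * threads (simulated users). OrderEntryWorker is the illustrative class
 * from the earlier sketch.
 */
public class ClientHarness {

    public static void main(String[] args) throws InterruptedException {
        int threadsPerJvm = Integer.parseInt(args.length > 0 ? args[0] : "10");
        String jdbcUrl = "jdbc:db2://host/benchdb"; // placeholder URL

        List<Thread> workers = new ArrayList<>();
        for (int i = 0; i < threadsPerJvm; i++) {
            Thread t = new Thread(new OrderEntryWorker(jdbcUrl, 100));
            workers.add(t);
            t.start();
        }
        for (Thread t : workers) {
            t.join(); // wait for all simulated users to finish
        }
        // Re-run with a different thread count to trace out the
        // users-vs-throughput curve; the highest valid point is the peak.
    }
}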

Thus, one number (jtps) gives the benchmark observer a relative ranking of the muscularity of the Java server environment (how many transactions it is capable of at peak), and the other number (threads) gives us an idea of how well optimized the environment is, based on how many threads it sustains at its processor peak. This is a relative indicator of the efficiency of the threading, garbage collection and other JVM activities and overheads.

Not many results are in yet, but the early comparisons of the AS/400 to Compaq make the AS/400 look strong on both counts. On relatively similar configurations, the AS/400 both handles more transactions and supports more threads than the others. A good start.

What is different about this benchmark is that it does not require the tester to implement the code according to a specification. This "load and go" benchmark could therefore become pervasive, because it is inexpensive and simple to run. We expect to hear ordering details from IBM soon.

To become pervasive in the market, vendors will want to familiarize themselves with the source code and see for themselves that it is written with portability and fairness in mind. The whole benchmark would come to a halt if its code base were biased toward the AS/400 platform. IBM expresses willingness to work with any critique of this benchmark, which is only fair.

As an open, 100 percent pure Java Server benchmark… this one looks good!

Mark Buchner is president and founder of Astech Solutions Inc. (Aurora, Ontario), which applies technology to the practical needs of the AS/400 market. mbuchner@astech.com.
