Microsoft Boasts Its SQL Server 2000 Benchmarks

During its glitzy Windows 2000 launch, Microsoft Corp. publicized impressive benchmarks for the pending SQL Server 2000 running on Windows 2000.

The TPC-C results were the fastest to date: An eight-node configuration ran the Transaction Processing Performance Council's (www.tpc.org) TPC-C benchmark at 152,207 tpmC. Even more impressive were the last-minute results from a 12-node configuration that blazed through at 227,079 tpmC.

The scaling from an eight-node cluster to a 12-node cluster ostensibly demonstrates almost 100 percent linear scalability: the difference between the 12-node and eight-node TPC-C results -- 74,872 tpmC -- is nearly the throughput that four nodes would be projected to deliver on their own.
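A back-of-the-envelope check of that claim, using only the published figures:

\[
\frac{152{,}207}{8} \approx 19{,}026 \;\text{tpmC per node},
\qquad
\frac{227{,}079 - 152{,}207}{4} = \frac{74{,}872}{4} = 18{,}718 \;\text{tpmC per node}.
\]

Each of the four added nodes thus delivered about 98 percent (18,718 / 19,026 ≈ 0.984) of the per-node throughput of the eight-node run, remarkably close to linear.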

On the surface, such numbers appear to refute the claim that Microsoft is years away from demonstrating horizontal scalability through clustering.

As a point of contrast, Oracle Parallel Server (OPS) is regarded as one of the most scalable, high-performance RDBMS platforms available for clustered transaction processing.

The highest TPC-C result to date for OPS -- 135,461 tpmC -- was recorded by an Enterprise 6500 cluster from Sun Microsystems Inc. (www.sun.com), which consisted of four clustered 24-CPU nodes for a total of 96 CPUs. By that measure, SQL Server 2000 not only outperforms OPS with an equivalent number of total processors, but also bests OPS in clustered configurations with only two-thirds of that processor complement.

But Microsoft (www.microsoft.com) and Compaq Computer Corp.'s (www.compaq.com) spectacular TPC-C results weren't achieved through a cluster in the traditional sense. The configuration relied on distributed partitioned views (DPVs), a new feature native to SQL Server 2000 that database administrators can use to distribute, or partition, a database workload across multiple servers for a performance benefit. A member table holding a portion of the data is placed on each of the nodes in the database cluster, and a view spanning those tables allows applications written to take advantage of DPVs to transparently access data without having to know on which server it is located.
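As a rough sketch of how that works (the server names Node1 through Node3, the SalesDB database, and the Orders tables here are hypothetical illustrations, not details of the benchmark configuration), a DPV is built from member tables, each constrained to hold one range of the partitioning key, plus a UNION ALL view that spans them:

    -- On node 1: a member table holding one range of the partitioning
    -- key. The CHECK constraint declares which rows live here, so the
    -- optimizer can route queries to the right server. Orders_2 and
    -- Orders_3 are created the same way on the other nodes, with
    -- adjacent key ranges.
    CREATE TABLE Orders_1 (
        OrderID    int NOT NULL PRIMARY KEY
                   CHECK (OrderID BETWEEN 1 AND 1000000),
        CustomerID int NOT NULL
    )
    GO

    -- On each node: the distributed partitioned view stitches the
    -- member tables back together. Node1 through Node3 are assumed to
    -- be registered as linked servers.
    CREATE VIEW Orders AS
        SELECT * FROM Node1.SalesDB.dbo.Orders_1
        UNION ALL
        SELECT * FROM Node2.SalesDB.dbo.Orders_2
        UNION ALL
        SELECT * FROM Node3.SalesDB.dbo.Orders_3
    GO

An application then queries Orders as if it were a single local table; because each member table's CHECK constraint declares which key range it holds, SQL Server 2000 sends each request only to the nodes holding the relevant rows, and in this release the view is updatable as well.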

"The definition of clustering is that you’re sharing disks across multiple systems," says Wayne Kernochan, senior vice president at Aberdeen Group Inc. (www.aberdeen.com). "Microsoft is using clustering in a sense it wasn’t used in the past."

Barry Goffe, Microsoft's lead product manager for SQL Server, counters that the use of DPVs to distribute the database workload in the Microsoft-Compaq SQL Server 2000 test configuration still qualifies it as a cluster.

"The simple definition of a cluster is when a workload is shared across machines, and this applies not only to databases, but also to the Web tier and to the middle tier of applications," Goffe explains. "What we are delivering in SQL Server 2000 is clustering in the general sense -- that we are able to distribute workload across multiple servers. At the same time, it is more of a federation of independent servers since the cluster is not managed as a single system image."

In this context the two RDBMS platforms can't be meaningfully compared. OPS is a solution engineered from the ground up to provide high availability by way of a shared-disk clustering model, with scalability benefits as well. For its part, SQL Server 2000 provides limited support for shared-disk clustering -- two nodes on Windows 2000 Advanced Server and four nodes on Windows 2000 Datacenter Server -- and offers no additional scalability benefit when deployed in Microsoft Cluster Server implementations.

Because it relies on a partitioned database scheme, the SQL Server 2000 configuration that Microsoft and Compaq used in their record-setting TPC-C benchmark makes no inherent provision for high availability. If a node goes down, the subset of the database partitioned onto its storage subsystem is unavailable to the rest of the machines in the cluster.

"Microsoft’s focus was not on achieving high-availability, or high scalability in terms of the database size," Aberdeen’s Kernochan says. "Instead, they focused on achieving TPC-C price/performance numbers, with a concentration on performance."

But whether the configuration is a true cluster by definition, or even whether the disks are shared, makes little difference to performance, Kernochan says.

"If Microsoft ran the benchmark with a shared-disk approach, which can be done with third-party tools, the performance penalty for sharing disks would be evident, but [the Windows 2000 and SQL Server 2000 configuration] would still break the record," he says.

Analysts, however, express concern about the way Microsoft built its database. Some database administrators believe a virtual view of a partitioned database is not the optimal approach: it is easier, they say, to keep the database in one piece that can be backed up as a single entity, and distributing the database also makes administration more complex.


[Infobox] A New Seat on the Bench

Company  System                   OS       Database    Performance (tpmC)  Price/tpmC  Submitted
Compaq   ProLiant                 W2K AS   SQL Server  227,079             $19.12      2/17/00
Compaq   ProLiant                 W2K AS   SQL Server  152,207             $18.93      2/17/00
IBM      RS/6000                  AIX      Oracle      135,815             $52.70      10/29/99
Bull     Escala                   AIX      Oracle      135,815             $54.94      11/5/99
Sun      Enterprise 6500 Cluster  Solaris  Oracle      135,461             $97.10      9/24/99

Source: Transaction Processing Performance Council (www.tpc.org)
