
Tech Talk: DW Databases Scale Out and Up

MPP databases need to scale out and scale up. According to analytic database specialist Calpont, most don't do both well: they focus on MPP to the detriment of SMP.

One ironic upshot of the glut of high-performance analytic database platforms is that vendors don't spend a lot of time talking about performance.

If they do, it's probably because they're new, they're trying to break into the market, and performance is all they have to talk about. This is in part what makes Calpont's pitch with its InfiniDB analytic database so intriguing.

Calpont isn't a new company, and InfiniDB isn't a new analytic database technology. Calpont, in fact, is about a decade old. It first started shipping InfiniDB in late 2009.

Like its open source software (OSS) rival Infobright, InfiniDB marries a MySQL front-end to a columnar back-end, and -- like Infobright and most other OSS business intelligence (BI) or data warehousing (DW) players -- Calpont offers a free "Community Edition" of the product. Unlike Infobright, however, InfiniDB has boasted massively parallel processing (MPP) support from the get-go.
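Because InfiniDB's front-end is MySQL, any standard MySQL client or driver can talk to it. The sketch below is a minimal illustration of that idea, assuming a locally running InfiniDB (or other MySQL-protocol) instance; the host, credentials, database, table definition, and the exact ENGINE keyword are illustrative assumptions rather than documented specifics.

    # Minimal sketch: querying a MySQL-front-ended columnar engine such as InfiniDB.
    # Host, credentials, database, and table definitions are illustrative assumptions.
    import pymysql  # any MySQL-protocol driver should work against a MySQL front-end

    conn = pymysql.connect(host="127.0.0.1", user="dw_user",
                           password="dw_pass", database="analytics")
    try:
        with conn.cursor() as cur:
            # The ENGINE clause is how MySQL routes a table to a storage back-end;
            # the engine name here is assumed for illustration.
            cur.execute("""
                CREATE TABLE IF NOT EXISTS fact_sales (
                    sale_date DATE,
                    region_id INT,
                    amount    DECIMAL(12, 2)
                ) ENGINE=InfiniDB
            """)
            # A typical analytic query: a scan-heavy aggregation over a few columns,
            # which is where a columnar back-end earns its keep.
            cur.execute("""
                SELECT region_id, SUM(amount)
                FROM fact_sales
                GROUP BY region_id
            """)
            for region_id, total in cur.fetchall():
                print(region_id, total)
    finally:
        conn.close()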

Calpont unveiled its fourth release of InfiniDB, version 3.0, in late February. In addition to trumpeting the usual bells and whistles -- e.g., a new parallel loading facility, improved provisioning capabilities for Amazon Inc.'s EC2 cloud service -- officials spent a lot of time talking about performance -- with a twist.

In addition to touting InfiniDB's ability to scale out across multiple nodes in an MPP configuration, officials emphasized its ability to scale up -- i.e., to scale near-linearly in symmetric multiprocessing (SMP) configurations spanning dozens of processor cores.

Like most analytic database entrants, InfiniDB is a scale-out, MPP database. Although SMP isn't exactly a dirty word in DW circles, it's usually positioned as a kind of binary opposite of MPP. It's easy to think of it as an either/or proposition: warehouses can either scale out or scale up.

These days, almost all MPP warehouses do both: they run on SMP hardware -- systems with two or more processor cores -- and scale out over multiple nodes. According to Calpont CTO Jim Tommaney, however, most don't do both well: they focus on MPP to the detriment of SMP.

"[Competitors] aren't able to get the same level of parallelism that we are. For us, it's routine to parallelize the workload across all the cores on a single box. ... We can scale [effectively] over 96 cores on a single box, or 16 and 24 cores [per node] distributed across 64 nodes. That's something most [competitors] don't talk about. They talk scale-out but they don't talk scale-up," says Tommaney. "Three and four years ago, that was the mental picture: scale out, scale out, scale out. We built it that way at the time, but we also optimized for scaling up as well."

A Scale-up Future?

As Intel Corp. and Advanced Micro Devices (AMD) continue to pack more processor cores onto their chip dies, Tommaney claims, an ability to scale up will become an important differentiator. He cites the availability of commodity 32-core SMP Intel Xeon systems from Cisco Systems Inc., Hewlett-Packard Co. (HP), IBM Corp., and other OEMs. (HP and IBM both ship commodity 48- and 64-core AMD-based systems, too.)

"What's the most cost-effective way to get 48 cores [in a system image]?" Tommaney asks.

For high-end data warehousing workloads, this has traditionally been accomplished using MPP and clustering, he explains. There were practical reasons for this: running multiple jobs in parallel on the same system used to be a problem, thanks both to the limited scalability of commodity SMP hardware and the prohibitive cost of high-end SMP rigs. For example, when IBM Corp. introduced a 64-way Pentium III Xeon server in 2001, that system was populated with 64 discrete Pentium III Xeon chips in 64 discrete chip sockets. (It also wasn't technically a single system: IBM spread it across several cabinets or chassis connected via a high-speed backplane.) Back in 2001, two- and four-way SMP rigs were considered commodity, with eight-way comprising the high end.

Nowadays, server rigs with two chip sockets are still the commodity standard; the difference is that each socket can now accommodate a processor module that integrates 8, 12, or 16 cores, giving us instant 16-, 24-, or 32-way systems.

"It used to be you'd have to buy these million-dollar systems," Tommaney observes. "You'd have this Sun [UltraEnterprise] system with 96 cores that would cost you millions or tens of millions of dollars. Nowadays, you can get 32 cores and half a terabyte of memory and it will cost you $32,000 for the complete system. We're seeing a trend to significant levels of parallelism within even a single server."

Trouble is, few shops deploy their servers as large 16- or 32-processor single system images. Today, shops can configure their rack-mounted or ultra-dense blade servers to run multiple operating system or application images, allocating processor, memory, and storage resources as needed. In the case of data warehousing, they can use virtualization to configure several MPP nodes on the same physical system.

A 16- or 32-core Intel-based system, for example, could be broken down into four, eight, or some other combination of MPP nodes. It might not necessarily be a good idea to do this, but it's one possible strategy as processor core densities increase. (The combinations and the possibilities get even more dizzying once DW is transposed into the cloud.)
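As a back-of-the-envelope illustration of that "four, eight, or some other combination" point -- not a sizing recommendation -- the short sketch below enumerates the ways a box's cores could be carved into equal-sized virtual MPP nodes. The core counts echo the examples above; the minimum node size is an arbitrary assumption.

    # Back-of-the-envelope sketch: ways to carve one SMP box into equal-sized
    # virtual MPP nodes. Core counts mirror the examples above; all else is illustrative.
    def node_layouts(total_cores, min_cores_per_node=2):
        layouts = []
        for nodes in range(2, total_cores // min_cores_per_node + 1):
            if total_cores % nodes == 0:
                layouts.append((nodes, total_cores // nodes))
        return layouts

    for total in (16, 32):
        print(total, "cores:", node_layouts(total))
        # e.g., 32 cores -> [(2, 16), (4, 8), (8, 4), (16, 2)]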

Does Tommaney believe there's an advantage to running analytic workloads in a large SMP image? In some cases, yes. More to the point, he contends, a workload "doesn't have to be [running] on all of them [i.e., processors]. You could take that same 48-core [system] and run [a workload across] 16 of them."

Calpont designed InfiniDB so that it scales more efficiently across those 16 processor cores, Tommaney claims. "Our secret sauce is really how we distribute our workloads. We focus on taking advantage of today's multicore environments and getting the most out of today's servers. How do we scale out and scale up and solve even larger data sets?" he points out. "We're going to scale better [than our competitors] on large [SMP] systems."

Au Contraire?

Mark Madsen, a principal with BI and DW consultancy Third Nature Inc., isn't quite convinced. In the 1990s, Madsen was part of an effort to port Oracle to run on NUMA-Q, an implementation of non-uniform memory access (NUMA) architecture marketed by the former Sequent Computer Systems. NUMA was one strategy for scaling applications or workloads across large multiprocessor configurations.

The 64-way IBM system referenced above, for example, was a NUMA configuration.

Madsen has first-hand expertise with the problem of scaling database or data warehousing workloads in non-MPP environments. Then as now, he claims, it isn't so much a hardware issue as a software one. "Sequent was getting 32[-way] linear [scalability] with bus saturation as the problem circa 1995," he says. "The problem is the operating system and then the database. Oracle was the limiter."

Madsen agrees with Calpont's Tommaney that SMP scalability is fast becoming a salient issue but stresses that no vendor can bake in vertical scalability -- or "scale-up-ability" -- overnight. After all, it took Madsen and his colleagues years to optimize Oracle for NUMA-Q. In this respect, he suggests, it's going to take time for the newer analytic database engines to rival the scale-up-ability of established DBMSes. SQL Server still doesn't scale as well as Oracle in large SMP configurations, he argues.

On the other hand, Madsen agrees, increasing core densities do pose challenges for BI and DW practitioners, and Calpont isn't the only vendor that's noticed this.

SAP, for example, has made scale-up SMP -- and a revival of NUMA -- a centerpiece of its HANA in-memory database effort. SAP claims to achieve both scale-out parallelization -- i.e., the characteristic attribute of MPP -- and linear scale-up-ability across dozens of processor cores on the same hardware or (via NUMA) in a system cluster.

Prakash Darji, vice president and general manager for SAP's HANA platform, says the challenge for software vendors is to effectively exploit the sheer density of compute resources. To do this, he claims, fundamentally different approaches -- such as a marriage of NUMA and MPP-like parallelization -- are needed.

"If you look at Intel's new Westmere processes, that's eight sockets [in a HANA-certified server], each with 10 cores -- for around $150,000," he points out. "Having access to that kind of compute power in a single server fundamentally changes the game."

HANA is a special case, inasmuch as it's an SAP-only play. (SAP likewise positions HANA against Oracle's beefy -- and costly -- Exadata platform.) For smaller DW implementations, Madsen predicts, the market will settle on a "happy medium" -- i.e., a cost-effective and practicable trade-off between performance and power consumption -- that's a little less dense.

"If we have four eight- or 16-core single nodes, the performance of that node will matter," he says. "Boxes are reaching limits though: it becomes more costly to manage [a 32-way system] than to use two [smaller] boxes," he points out. "Nodes are increasing in size ... [but at some point] we'll settle on some core package size [that's] driven by market demand."
