Aster Data Unveils New DW Appliances

Aster this week introduced two new DW appliance systems running on top of commodity hardware from Dell

Life goes on in the data warehousing (DW) segment, even as DW appliance pure-play Dataupia Inc. struggles to stay afloat in a brutal economic climate.

Aster this week introduced two new DW appliance systems -- i.e., pre-configured versions of nCluster, its version of the (increasingly ubiquitous) massively parallel processing (MPP) DW platform, running on top of commodity hardware from Dell Computer Corp. Aster's new appliances are sized to move: the company plans to pitch its small 1 TB-and-under Express systems to SME customers while offering a larger Enterprise configuration (which can scale to 1 PB) to large shops.

DW players such as Kognitio, ParAccel, and Vertica have spent a lot of time downplaying the classic -- or Netezza-esque -- appliance approach, which they like to deride as an inescapably proprietary proposition: i.e., proprietary blades in proprietary cabinets at proprietary prices. These vendors, along with Greenplum (which -- by signing an agreement with Sun Microsystems Inc. -- arguably beat them to the punch), helped popularize a hybrid DW model in which a customer buys the software and installs it on existing or third-party hardware and storage; buys it pre-loaded on hardware and storage from a server OEM; or -- increasingly -- buys it and runs it in a data-warehousing-as-a-service or cloud-computing-like configuration.

Prior to the development of its new Dell-based appliances, Aster didn't have a ready-to-go commodity DW offering. How will the company differentiate its commodity DW appliance offerings from those of its competitors? Aster's differentiator is, of course, MapReduce, its in-database implementation of the distributed data-processing framework that Google Inc. helped make famous. Aster, along with rival Greenplum, introduced in-database support for MapReduce last August. Both cite MapReduce's support for an array of programming languages -- including Java, C, C++, Perl, Python, and even Microsoft's .NET languages. The beauty of MapReduce, according to Mohit Aron, software architect with Aster Data, is that developers can craft their code in the language with which they're most comfortable and still exploit the MapReduce API. "There are a couple of market plans that we are seeing which led us to start thinking about this appliance approach. For example, we have a Cloud Edition [of the nCluster platform], and we have a software-only edition, but the idea behind this [i.e., the appliance offering] was to cater to specific needs, one on the MapReduce side. We have large enterprises that are interested in MapReduce, but they want to have a low-cost way of getting their hands on it, if you will, and getting an end-to-end package, which is certainly low-cost, so that they can get this capability," observes Steve Wooledge, Aster's senior director of marketing.
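The in-database pitch is easier to see with the programming model in front of you. The sketch below is a deliberately simplified, single-machine rendition of the map/shuffle/reduce pattern in Python -- not Aster's nCluster API, whose function names and signatures aren't documented here -- using word counting as the canonical example.

```python
from collections import defaultdict

# A minimal, self-contained sketch of the map/shuffle/reduce pattern --
# illustrative only, not Aster's actual in-database implementation.

def map_phase(records, map_fn):
    """Apply the user's map function to every input record."""
    for record in records:
        yield from map_fn(record)

def shuffle(pairs):
    """Group intermediate (key, value) pairs by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reduce_fn):
    """Apply the user's reduce function to each key's values."""
    return {key: reduce_fn(key, values) for key, values in groups.items()}

# User-supplied functions -- the part a Java, Perl, or Python developer
# would write instead of SQL.
def mapper(row):
    for word in row.split():
        yield (word.lower(), 1)

def reducer(key, values):
    return sum(values)

rows = ["SQL and Java", "Java and Python"]
counts = reduce_phase(shuffle(map_phase(rows, mapper)), reducer)
# counts == {"sql": 1, "and": 2, "java": 2, "python": 1}
```

In a real MPP deployment, the map and reduce phases run in parallel across the cluster's nodes; the appeal Aron describes is that the developer writes only the two small functions above, in whatever language they already know.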

The other need, according to Wooledge, concerns a commodity appliance offering from a branded server OEM. He contrasts this with the "proprietary" appliance model popularized by both Teradata and Netezza (and embraced by struggling DW appliance player Dataupia, whose founder -- former Netezza principal Foster Hinshaw -- famously championed a "Tivo Test" to defend Dataupia's Satori Server approach). True, these systems are built on commodity parts -- Netezza's snippet processing units (SPU) use commodity PowerPC chips from IBM Corp. -- Wooledge concedes, but they're sold as branded Teradata, Netezza, or Dataupia systems.

"In the whole data warehouse appliance segment, we've seen a lot of demand from people who want to have an appliance but who don't want to go the proprietary way. Customers who are looking at Netezza and Teradata; they like the ease of use, but they want to have an appliance that is commodity-based," he argues. "Think about it: with [a commodity-based] appliance, they won't have to throw away their existing investment. If they want, they can expand an appliance on their own without having to rely on the vendor [i.e., Netezza or Teradata] to provide proprietary hardware."

Wooledge also talks up the petite sizing of Aster's new Express appliances. DW players like to emphasize petabyte levels of scale -- DW giant Teradata announced several "Petabyte Club" customers at its annual Partners conference last year, and several other DW specialists (including Greenplum) tout implementations they expect will soon exceed 1 PB. The sweet spot, according to Wooledge, is the 1 TB-and-under segment. "We actually got some market research data from Gartner … which says that up to 60 to 70 percent of data warehouses are still under one terabyte," he says.

Aster Data, like other DW players, tends to demur when pressed on pricing. Wooledge cites Aster's stated policy of not disclosing pricing discounts. Most players open up, at least slightly, when competitive numbers are cited, and Aster Data is no exception.

"The way I would phrase it is the entry-level price for any MPP appliance is in the hundreds of thousands [of dollars] on up to a million dollars. That's for an entry-level system from Oracle or Teradata. What we've done is we've brought the entry-level price for any MPP appliance down to $50,000 [per-TB]: that's for up to 1 TB of user data, and then you can scale that from there," he indicates.

Wooledge's characterization of Teradata's MPP pricing is at odds with that of Teradata itself (which has a published price of $16,500-per-TB for its Extreme Data 1550 appliance); at $50,000-per-TB, Aster Data's cost is about on par with the (unpublished) per-TB pricing of competitors such as ParAccel and Vertica.

Questioning MapReduce

MapReduce isn't Aster's only differentiator. Aron, for example, claims that nCluster is simply faster than other DW platforms running on the same hardware. He concedes, however, that -- putting aside the always-tendentious issue of performance -- MapReduce is an indisputable differentiator for Aster Data.

"There are about 1,000 Java developers for every SQL developer out there. Data warehouses have [traditionally] only been available to people who are SQL developers. Really, this [MapReduce] opens up the data warehouse to anyone who develops in these standard programming languages," he argues.

There's a sense, however, in which MapReduce's value proposition -- as a common API that lets non-SQL developers write programs for the data warehouse -- could amount to a double-edged sword of sorts. Microsoft Corp., for example, touted much the same thing with the Common Language Runtime (CLR) facility that it introduced with SQL Server 2005. Prior to SQL Server 2005's release, skeptics -- including both SQL Server programmers and data management (DM) purists -- were incensed about Microsoft's strategy, which they said could wreak havoc in DBMS-dom.

The core of the issue, according to skeptics, was that .NET languages like C# and Visual Basic .NET are procedural beasts, while SQL Server's lingua franca -- Transact-SQL (T-SQL) -- is a set-based programming language. What works for one doesn't necessarily work for the other, and -- while it's possible to use set-based languages to accomplish tasks in the non-relational world -- it often isn't advisable. The same is true for using procedural .NET languages in relational space.

"In procedural coding, you break open your piggy bank and count the pennies one at a time," said SQL consultant Joe Celko, author of Joe Celko's SQL Programming Style, at the time. "In set-oriented coding, you break open your piggy bank and weigh the pennies as a whole."
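Celko's analogy maps directly onto code. The hypothetical Python fragment below computes the same total both ways: an explicit row-at-a-time loop, the way a database cursor walks a table, versus a single operation over the whole collection, the way SQL's SUM() treats a column.

```python
# Celko's piggy-bank analogy, sketched in Python (illustrative only).
pennies = [1, 1, 1, 1, 1]

# Procedural style: "count the pennies one at a time" -- an explicit
# loop that touches each row in turn, like a cursor.
total_procedural = 0
for penny in pennies:
    total_procedural += penny

# Set-oriented style: "weigh the pennies as a whole" -- one operation
# over the entire collection, like SUM() over a column.
total_set_based = sum(pennies)

assert total_procedural == total_set_based == 5
```

Both yield the same answer on five rows; the skeptics' worry was what happens when a procedural row-at-a-time habit meets a billion-row table that the database engine could have processed as a set.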

That havoc obviously never materialized. SQL Server DM practices are thriving in the post-SQL Server 2005 application development market. Aron concedes that the same criticisms could be leveled at MapReduce, but argues that Aster's implementation can prevent a Java- or .NET-based application from taking control of (or in some cases, taking down) the entire warehouse.

"When Microsoft announced this capability, the SQL developers and the DBAs said you're opening [SQL Server] up too much, none of the other developers will know how to treat the data with the sanctity [with which] it needs to be treated. That is exactly the understanding that we had when we introduced MapReduce way back. We wanted to make sure that with this capability, we provide the isolation for running MapReduce inside the database, so that it does not violate the sanctity of the database," he comments.

"If you look at our implementation, … in terms of execution, in terms of resource isolation, these are big concerns, but our MapReduce [implementation] is such that an administrator can set resources for a MapReduce job, so he can give 30 percent to MapReduce jobs … [and an errant program] isn't going to run away and hog all of the system resources or slow down the traditional reporting you might be doing on your data warehouse."
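Stripped of product specifics, the isolation Wooledge describes is an admission-control problem: MapReduce jobs draw from a capped share of worker capacity so an errant job can't crowd out reporting queries. The Python sketch below illustrates that general pattern only; the slot counts, the 30 percent share, and the function names are all hypothetical, not Aster's implementation.

```python
import threading

# Hypothetical admission-control sketch: MapReduce tasks may use at most
# a fixed share of the system's worker slots. Not Aster's code -- just
# the general pattern behind the capability Wooledge describes.

TOTAL_SLOTS = 10
MAPREDUCE_SHARE = 0.3  # administrator-configured cap, e.g. 30 percent
mapreduce_slots = threading.BoundedSemaphore(int(TOTAL_SLOTS * MAPREDUCE_SHARE))

def run_mapreduce_task(task):
    """Run a MapReduce task only if a capped slot is free."""
    if not mapreduce_slots.acquire(blocking=False):
        return "queued"  # cap reached; reporting queries keep their slots
    try:
        return task()
    finally:
        mapreduce_slots.release()

# With a 30 percent cap on 10 slots, only 3 MapReduce tasks run at once;
# a fourth is queued rather than stealing a reporting slot.
held = [mapreduce_slots.acquire(blocking=False) for _ in range(3)]
assert all(held)
assert run_mapreduce_task(lambda: "done") == "queued"
for _ in held:
    mapreduce_slots.release()
```

A real warehouse scheduler would throttle CPU, memory, and I/O rather than simple slot counts, but the principle is the same: the cap is enforced by the engine, not trusted to the user's Java or .NET code.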
