Greenplum, Aster Data Tout Google-Like Analytic Capabilities
In-database MapReduce helps workers tackle terabytes of information. What's ahead for BI professionals?
The Google-fication of analysis. It reads well, rolls right off the tongue -- but how well does it translate into practice? We're about to find out.
Recently, a pair of Big Data warehousing players -- Greenplum Inc. and Aster Data Inc. -- announced in-database implementations of the MapReduce API for their data warehouse engines. Google pioneered the use of MapReduce as a means to efficiently query petabytes of data.
MapReduce helps facilitate the distribution of parallel computations across large (let's call them massively parallel) clustered configurations. What's more, proponents argue, it constitutes a meaningful alternative to (or extension of) SQL for a variety of tasks -- such as sorting, logic, or Boolean operations.
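To make the model concrete, here is a minimal word-count sketch of the MapReduce pattern the vendors are implementing. The function names and the sequential shuffle step are illustrative only; in a real MapReduce engine, the map calls run in parallel across the cluster and the framework handles the grouping between phases.

```python
from collections import defaultdict

def map_phase(record):
    # Map: emit a (key, value) pair for each word in one input record.
    # In a cluster, many map_phase calls run in parallel on separate nodes.
    return [(word.lower(), 1) for word in record.split()]

def shuffle(pairs):
    # Shuffle: group all emitted values by key. A MapReduce framework
    # performs this step between the map and reduce phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Reduce: combine the grouped values for one key into a single result.
    return key, sum(values)

def word_count(records):
    mapped = [pair for record in records for pair in map_phase(record)]
    grouped = shuffle(mapped)
    return dict(reduce_phase(k, v) for k, v in grouped.items())
```

The appeal for "trivially parallelizable" analysis is that the developer writes only the map and reduce functions; the framework distributes them over however many nodes the data spans.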
The beauty of massively parallel processing (MPP) systems from Teradata Inc., Greenplum, Aster Data, Dataupia Inc., and others is that they abstract the underlying complexity of the parallel model. That makes it possible for programmers or power users to hammer away at these systems with SQL.
In-database MapReduce, as touted by both Greenplum and Aster Data, achieves much the same thing -- with support for a bevy of different programming languages. "[MapReduce] provide[s] a trivially parallelizable framework so that even novice developers [for example, interns] could write programs in a variety of languages [such as Java, C, C++, Perl, or Python] to analyze data independent of scale," writes Aster Data CEO and co-founder Mayank Bawa, on his blog.
MapReduce is an ideal answer to SQL's practical limitations, according to Bawa, who specifically cites SQL's limited expressive power and its reliance on a cost-based optimizer that -- in the real world -- isn't always predictable. "These problems become worse at scale, where even minor weaknesses result in longer run-times," he continues. "Most developers … are much more familiar with programming in Java/C/C++/Perl/Python than in SQL."
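A hypothetical example of the kind of task Bawa has in mind: sessionizing clickstream events, where each row must be compared against the previous one. That running comparison is cumbersome to express in classic set-oriented SQL but natural in a short procedural function. The 30-minute timeout below is an illustrative assumption, not vendor code.

```python
from datetime import datetime, timedelta

def sessionize(timestamps, gap=timedelta(minutes=30)):
    # Assign a user's clicks to sessions, starting a new session
    # whenever the gap between consecutive clicks exceeds the timeout.
    # Returns a list of (timestamp, session_id) pairs.
    sessions = []
    session_id = 0
    prev = None
    for ts in sorted(timestamps):
        if prev is not None and ts - prev > gap:
            session_id += 1
        sessions.append((ts, session_id))
        prev = ts
    return sessions
```

Pushed into the database as a map-style function, logic like this runs next to the data on every node instead of after a query has shipped hundreds of terabytes back to a client.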
There's a sense, too, in which MapReduce seems tailor-made for very large data warehousing (VLDW) configurations, in which workers are grappling with hundreds of terabytes or even petabytes of structured, semi-structured, and -- increasingly -- unstructured data. For these reasons and others, Greenplum and Aster Data both champion MapReduce.
Can the rest of the industry be long in following? Greenplum co-founder and CEO Scott Yara doesn't think so. He positions in-database MapReduce as a no-brainer -- citing, for example, Aster Data's near-simultaneous announcement of its nCluster in-database capability.
"The question for us was how do we continue to allow people to take advantage of the parallel capabilities of the database to tackle other types of analytical jobs beyond just the things that you can do with SQL," he comments. "Once we started looking at it, the answer [i.e., MapReduce] seemed obvious."
The question itself is particularly topical, Yara argues. "You're seeing this explosion of all kinds of data. We have one customer [that is] generating on the order of over 10 terabytes of event data every day, so you start to want to offer lots of different ways to program against that data," he explains.
Veteran data warehouse architect Mark Madsen, author of Clickstream Data Warehousing, thinks it likely that a few other players (if by no means the entire industry) will follow suit.
"[Teradata] will probably do something with code execution, and I think HP may be working on this. Databases sort of do this already with user-defined functions and the like, but they are not quite the same thing," Madsen explains.
"The idea is to enable functions at the level of data retrieval rather than after the query is done and the data is assembled -- assuming the function can be pushed to the lowest level, so you can shove a function out and get the results back, sans SQL. I would not expect anything like this from Dataupia."
Madsen points to Teradata's own efforts in this regard -- chiefly in tandem with SAS (see http://www.tdwi.org/News/display.aspx?id=8813). "Given that Teradata is doing a bunch of work to marry SAS code into the kernel data access … they could potentially do something similar," he points out.
Greenplum's Yara, for his part, thinks in-database MapReduce is just the beginning. He, too, cites SAS' work with Teradata, which he says is proof positive of a rising trend: the importance of embedded (or in-database) analytics.
"This whole notion of analytics inside the engine is a big thing for us," Yara says. "We're very keen in partnering with companies -- like SAS, for example -- so that we can push some [analytic] optimizations down into the database [engine]."
Greenplum isn't currently partnering with SAS, Yara stresses, but the notion -- similar to what SAS is doing with Teradata (see http://www.tdwi.org/News/display.aspx?ID=8813) -- of embedding analytic functionality inside the database engine is an idea that's caught fire, he maintains. In-database MapReduce is the most compelling case in point, he concludes.
"There hasn't ever been a standard language for analytical processing. MapReduce has a real opportunity to be that standard, and we expect other commercial vendors to follow suit."