Q&A: "Big Data" Challenges Call for Big Solutions
Dissecting Big Data and Hadoop.
- By Linda L. Briggs
- 07/13/2011
With the industry deep in discussions of the challenges of handling really big data sets, BI This Week asked thought leader and evangelist David Loshin to explain what sorts of issues arise around big data -- and how frameworks such as Hadoop purport to address them.
Loshin is president of Knowledge Integrity, Inc. and the author of numerous articles and books on data management, including the best-selling Master Data Management. His most recent book is Practitioner's Guide to Data Quality Improvement. He is a frequent speaker at conferences and other events and can be reached at [email protected].
BI This Week: We're seeing lots of discussion in the data warehousing and BI arenas lately about so-called "Big Data." What sorts of issues arise that are unique to really large data sets?
David Loshin: Big data is a concept that seems to have many facets, some trending toward performance and others toward flexibility. Everything centers, however, on addressing the information explosion. Not only is the amount of data growing at a tremendous rate, but that growth rate continues to accelerate, encompassing both structured and unstructured data. To get a handle on extracting actionable knowledge from those huge mounds of information, we need a high-performance framework that enables analysis of unstructured data yet links it to our established analytic and reporting platforms.
The types of issues that arise, then, are associated with text analysis (so that information can be distilled from free-form text); social media analysis; massive parallelism to deal with simultaneous processing of the data; programming models for implementing analytical algorithms; integration with data warehouse appliance platforms to connect application results with existing data warehouse models; high-bandwidth networking; linear scalability; and scaling out existing ideas such as virtualized access and parallel ETL.
Are there available and adequate technology solutions to address huge data sets, or are vendors (and companies) scrambling, without any real solutions in sight?
The frameworks for solutions have been around for many years. I worked on data-parallel computing 20 years ago, and it wasn't really new even then. Back then, though, the focus was on scientific programming, with only the beginnings of intuition about high-performance business applications.
Today, the barriers between performance computing and data management are really starting to crumble; the grid computing hoopla from a few years back and the popularization of programming models such as MapReduce and Hadoop at least demonstrate some movement in the right direction.
Hadoop is often mentioned when big data is discussed. Can you explain what Hadoop is?
At a high level, Hadoop is an open-source framework of software components that have been brought together to support alternative "big data" programming and data management. For the most part, it incorporates a programming model that is based on Google's MapReduce for analysis, along with a file or storage framework for managing access to large data sets.
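To make that description concrete, the following is a minimal sketch of the canonical word-count job written against Hadoop's Java MapReduce API: the map phase emits a count of one for every word it sees, and the reduce phase sums the counts for each word. The class names and command-line paths here are purely illustrative, not drawn from any particular deployment.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every token in the mapper's input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the counts gathered for each distinct word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    // Input and output locations are HDFS paths supplied on the command line.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

In a typical setup, a job like this is packaged as a JAR and submitted with the hadoop command, with the input and output arguments pointing at directories in the Hadoop file system.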
What is Hadoop's relationship to MapReduce? Are there different circumstances when one or the other is called for?
My understanding is that one of Hadoop's components is an implementation of the MapReduce programming framework. MapReduce "evolved" at Google, although it is certainly not the first parallel programming model. Similar data-parallel approaches have been used for probably 30 years -- in the APL language developed at IBM, for example, which had a number of parallel implementations, if I recall correctly.
Does Hadoop replace an existing database or other structure?
From what I understand, Hadoop is not really a database but incorporates data management and data access mechanisms. Those mechanisms not only distribute data sets and provide access to them for MapReduce programs, but also move data as part of a general approach to maintaining some level of data locality for performance. Although many data accesses and movements are inherent in the MapReduce framework, I can see how it could be easy for a naïve programmer to create a lot of data or network bottlenecks. I expect that some of this is handled under the hood by Hadoop. It is not a database management system, although it does interface with other open source projects (such as Cassandra) that purport to be database systems.
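As a rough illustration of that file-level (rather than database-level) view of data, the sketch below uses Hadoop's Java FileSystem API to copy a local extract into the distributed file system and read it back. The file names and paths are hypothetical.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCopyExample {
  public static void main(String[] args) throws Exception {
    // Configuration picks up the cluster settings from the client's
    // core-site.xml, so this talks to whatever HDFS it is configured for.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Hypothetical paths: a local extract file and a target HDFS location.
    Path local = new Path("/tmp/transactions.csv");
    Path remote = new Path("/user/analytics/input/transactions.csv");

    // Copy the local file into the distributed file system.
    fs.copyFromLocalFile(local, remote);

    // Read it back line by line to confirm the round trip.
    BufferedReader reader =
        new BufferedReader(new InputStreamReader(fs.open(remote)));
    String line;
    while ((line = reader.readLine()) != null) {
      System.out.println(line);
    }
    reader.close();
    fs.close();
  }
}
```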
Can a company's existing databases connect to Hadoop? How does it extract data?
Many vendors see the value in connecting their systems to a framework such as Hadoop, especially when Hadoop is used for analyses that cannot be done natively within the vendor system.
For example, a Hadoop application might be used to analyze huge amounts of customer transaction data. Then, however, the question becomes, "How are the results incorporated into our other applications?" The results of a customer transaction analysis might need to be linked to customer profile types for in-process profiling, for generating offers, or for product placement on web pages.
To enable that, other analytics systems (largely appliance data warehouse vendors and business intelligence applications) are establishing connectivity to Hadoop -- moving data to the Hadoop file system, invoking MapReduce programs, and then pulling results back into a data warehouse or an OLAP cube.
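One vendor-neutral version of that last step is simply to read the MapReduce result files out of HDFS and batch-insert them into a warehouse staging table over JDBC. The sketch below assumes tab-separated (segment, count) output, a hypothetical PostgreSQL-style connection string and table, and a JDBC driver on the classpath; a real deployment would more likely use a vendor's bulk loader or connector.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LoadResultsIntoWarehouse {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Hypothetical MapReduce output: tab-separated (segment, count) pairs.
    Path results = new Path("/user/analytics/output/part-r-00000");

    // Hypothetical JDBC URL, credentials, and staging table in the warehouse.
    Connection conn = DriverManager.getConnection(
        "jdbc:postgresql://warehouse.example.com:5432/dw", "etl_user", "secret");
    PreparedStatement insert = conn.prepareStatement(
        "INSERT INTO stg_segment_counts (segment, tx_count) VALUES (?, ?)");

    BufferedReader reader =
        new BufferedReader(new InputStreamReader(fs.open(results)));
    String line;
    while ((line = reader.readLine()) != null) {
      String[] fields = line.split("\t");
      insert.setString(1, fields[0]);
      insert.setLong(2, Long.parseLong(fields[1]));
      insert.addBatch();
    }
    insert.executeBatch();

    reader.close();
    insert.close();
    conn.close();
    fs.close();
  }
}
```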
Hadoop itself is free software, but what about the costs of staff for running it? Is there Hadoop expertise out there yet?
As with any open source software, there is a trade-off: you need to be willing to trade ease of use and readily available expertise for rapid, flexible development. I think there is still an opportunity for training in developing Hadoop applications. I have been told about "secret" projects that do all sorts of curious things, but most of the examples I have seen are used for aggregations over huge data sets. However, since the ideas have been around for a long time, I am confident that there is some expertise out there.
Are there other workable solutions available for handling big data in addition to Hadoop?
Although Hadoop provides a programming model for parallelism, it is a bit constrained in terms of what can be aptly programmed. I can also see that although a good programmer can write a good Hadoop application, a bad programmer is capable of writing a really low-performance Hadoop application -- there is an inherent need to understand memory access patterns and data access latency, especially with data distribution.
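One concrete example of the kind of detail that separates a good job from a slow one: in the word-count sketch above, registering the reducer as a combiner lets each mapper pre-aggregate its output locally, so the shuffle moves one partial sum per word per mapper instead of one record per input token. The driver below is a minimal illustration of that change, reusing the earlier illustrative WordCount classes.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountWithCombiner {
  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "word count with combiner");
    job.setJarByClass(WordCountWithCombiner.class);

    // Same mapper and reducer as the earlier word-count sketch.
    job.setMapperClass(WordCount.TokenizerMapper.class);

    // The combiner runs on each mapper's node and aggregates counts locally,
    // which sharply reduces the intermediate data that must cross the network
    // during the shuffle -- the kind of bottleneck a naive job can create.
    job.setCombinerClass(WordCount.IntSumReducer.class);
    job.setReducerClass(WordCount.IntSumReducer.class);

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```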
That being said, there are many parallel programming language paradigms that might be the logical "next step" for programmers really interested in building parallel applications that would map to a massively parallel platform. That includes languages such as Unified Parallel C and the Titanium dialect of Java. We can also review the details of grid computing for other approaches. The key, in my opinion, is understanding the confluence of two ideas: data parallelism and data latency.
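As a rough illustration of those two ideas in plain Java (rather than UPC or Titanium), the sketch below partitions an array across a pool of worker threads so that each worker streams through its own contiguous chunk of data, and only the small per-chunk results cross between workers at the end.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class DataParallelSum {
  public static void main(String[] args) throws Exception {
    // A large array standing in for a big data set already in memory.
    final long[] data = new long[10000000];
    for (int i = 0; i < data.length; i++) data[i] = i % 97;

    int workers = Runtime.getRuntime().availableProcessors();
    ExecutorService pool = Executors.newFixedThreadPool(workers);

    // Data parallelism: each task owns a contiguous chunk, so it works on
    // data that is local to it instead of contending for shared or remote
    // data, which is where the latency cost would otherwise appear.
    int chunk = (data.length + workers - 1) / workers;
    List<Future<Long>> partials = new ArrayList<Future<Long>>();
    for (int w = 0; w < workers; w++) {
      final int start = w * chunk;
      final int end = Math.min(start + chunk, data.length);
      partials.add(pool.submit(new Callable<Long>() {
        public Long call() {
          long sum = 0;
          for (int i = start; i < end; i++) sum += data[i];
          return sum;
        }
      }));
    }

    // Only the small per-chunk results are combined across workers.
    long total = 0;
    for (Future<Long> f : partials) total += f.get();
    pool.shutdown();
    System.out.println("total = " + total);
  }
}
```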