Syncsort Seeks To Simplify Big Data on Mainframes
Syncsort, a specialist in mainframe software, said it has upgraded its DMX-h data integration software to let enterprises, for the first time, process mainframe data in its native format in Big Data frameworks such as Apache Hadoop and Apache Spark.
The company said DMX-h is designed to ease the Big Data onramp for enterprises and help them move heavy workloads from costly data warehouses to less expensive Hadoop implementations.
Deployed as part of a Hadoop cluster, DMX-h helps companies prepare, blend, transform and distribute data with Hadoop, using a "design once, deploy anywhere" approach for adopting and developing Hadoop jobs. The product is also certified for Spark. "This means you can use the Spark API to access mainframe data and the associated COBOL COPYBOOKS, to understand, translate and securely load it directly into Spark," the company said.
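To illustrate the kind of translation involved, the sketch below shows how a COBOL copybook describes a fixed-width mainframe record, and how EBCDIC-encoded bytes might be decoded field by field before loading into a framework like Spark. This is not Syncsort's implementation or API; the copybook, field names, and layout are hypothetical, and DMX-h handles this natively without hand-written code.

```python
# Hypothetical COBOL copybook describing a fixed-width customer record:
#   01 CUSTOMER-REC.
#      05 CUST-ID    PIC X(6).
#      05 CUST-NAME  PIC X(10).
#      05 BALANCE    PIC 9(5)V99.
# (Packed-decimal COMP-3 fields, common on real mainframes, are omitted
# for simplicity.)
LAYOUT = [("cust_id", 6), ("cust_name", 10), ("balance", 7)]

def parse_record(raw: bytes) -> dict:
    """Decode one EBCDIC (code page 037) record into a dict per LAYOUT."""
    text = raw.decode("cp037")  # EBCDIC -> str
    row, pos = {}, 0
    for name, width in LAYOUT:
        row[name] = text[pos:pos + width].strip()
        pos += width
    # PIC 9(5)V99 has an implied decimal point two digits from the right
    row["balance"] = int(row["balance"]) / 100
    return row

# Example: an EBCDIC-encoded record built here for demonstration
raw = "000042JANE DOE  0012345".encode("cp037")
print(parse_record(raw))
# → {'cust_id': '000042', 'cust_name': 'JANE DOE', 'balance': 123.45}
```

Rows produced this way could then be turned into a Spark DataFrame; the point of a product like DMX-h is to drive this decoding directly from the copybook instead of hand-coding each layout.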
"The largest organizations want to leverage the scalability and cost benefits of Big Data platforms like Apache Hadoop and Apache Spark to drive real-time insights from previously unattainable mainframe data, but they have faced significant challenges around accessing that data and adhering to compliance requirements," said Tendü Yoğurtçu, general manager of the company's Big Data business.
"Our customers tell us we have delivered a solution that will allow them to do things that were previously impossible," Yoğurtçu continued. "Not only do we simplify and secure the process of accessing and integrating mainframe data with Big Data platforms, but we also help organizations who need to maintain data lineage when loading mainframe data into Hadoop."
The company also introduced DMX Data Funnel, designed to populate enterprise data hubs more quickly by instantly ingesting hundreds of DB2 database tables, for example.
Another new feature is support for Fujitsu NetCOBOL on Fujitsu mainframes as well as on IBM z Systems, answering demand from customers in the Asia-Pacific region and other overseas markets.
Syncsort executive Arnie Farrelly discussed the company's big push into mainframe Big Data in an interview.
"Customers want more of their data accessible so they can do more analytics and Hadoop is the most cost-effective platform to do this," Farrelly said. "Historically, mainframe data storage has been very expensive. Tape storage was good for storing large amounts of data, but it wasn't accessible. Data was moved to tape just because it was too expensive to store it all in normal DASD storage. What we're seeing now is people wanting to move even the tape stored data into Hadoop. The economics of storing that data on Hadoop is excellent, and it's also far more reliable. Over time, tape deteriorates. And most of all, this means that data is accessible in Hadoop where they can do analytics. Of course, getting that data into Hadoop to unlock the value in the data is difficult."
Wikibon analyst George Gilbert lauded the product for addressing the much-publicized Big Data skills shortage. "Syncsort's new features don't require hard-to-find skills that companies don't want to spend money and time to acquire," Gilbert said.
About the Author
David Ramel is an editor and writer for Converge360.