Texas Memory’s SSD: Ultimate Database Speed at a High Price (For Now)

More speed, lower failure rates, better encryption—sounds too good to be true

One part of the spectrum of storage alternatives that seems to get short shrift in industry publications is solid state disk (SSD). Even though Bill Gates has commented about a future computer platform in which memory chips, rather than electro-mechanical storage devices, will provide the storage component, SSD has always been relegated to the backwaters of storage integration.

Simple explanation: memory-based storage is much more expensive than conventional magnetic media—costing up to 50 times as much as platters of spinning rust for the same capacity. SSD remains a tough sell.

Still, Woody Hutsell, long-time Marketing Director for Houston, TX-based Texas Memory Systems, seems to take it in stride. He was laughing the other day as he told the story of a review of Samsung’s 32 GB flash memory drive that criticized the drive as very expensive at $2,000 to $3,000 per unit. His own products sit well above that: the latest RamSan-300, with 32 GB of capacity, lists at $39,000 per unit, while the 128 GB RamSan-400 carries an MSRP of $159,000, more than most high-end SUVs.

At those prices, Hutsell doesn’t expect to replace all of the arrays in a customer’s shop. What he proposes instead, to justify the sticker price of his gear, is to solve discrete problems in a manner that makes business sense under certain well-defined circumstances. One niche that Texas Memory has carved out for itself is database acceleration.

Supporting both direct attachment to servers and connection via Fibre Channel, Texas Memory Systems’ RamSan SSDs can expedite database reads and writes by hosting transaction (redo) logs in write-intensive environments, serving as temporary space, or hosting frequently accessed tables in read-intensive environments. In some cases, the SSD hosts an entire database. At the end of the day, he says, application performance provides the payoff for the solution.

Of course, the product won’t magically rectify every problem in a database, he warns. Some upfront analysis needs to be done to determine whether a bottleneck is genuinely a function of I/O or of poor database design. In Oracle environments, he says, Texas Memory helps the customer review his STATSPACK reports to identify the wait events that are causing latency. If the customer is running Microsoft SQL Server on Windows, PERFMON logs are consulted instead, especially those covering performance under peak loads.

“We look for average disk queues, disk bytes per second, and processor utilization for tell-tales that might suggest that our product would help,” he says. Typically, when processor cycles flatten and disk I/O increases at the same time as disk queuing accelerates, Hutsell observes, the likelihood is good that I/O acceleration using his product will help.
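To make that heuristic concrete, here is a minimal sketch in Python of how one might scan a PERFMON counter log, exported to CSV, for the pattern Hutsell describes. The column names and thresholds are hypothetical assumptions for illustration, not anything Texas Memory prescribes; match them to your own logs.

```python
# Sketch: scan a PERFMON counter log (exported to CSV) for the I/O-bound
# signature described above: flat CPU, rising disk throughput, rising queues.
# The column names and thresholds here are hypothetical; adjust to your logs.
import csv

CPU_COL   = r"\Processor(_Total)\% Processor Time"
QUEUE_COL = r"\PhysicalDisk(_Total)\Avg. Disk Queue Length"
BYTES_COL = r"\PhysicalDisk(_Total)\Disk Bytes/sec"

def trend(values):
    """Crude trend: mean of the second half minus mean of the first half."""
    if len(values) < 2:
        return 0.0
    mid = len(values) // 2
    first, second = values[:mid], values[mid:]
    return sum(second) / len(second) - sum(first) / len(first)

def looks_io_bound(path):
    """True if CPU is roughly flat while disk traffic and queuing climb."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    cpu   = [float(r[CPU_COL]) for r in rows]
    queue = [float(r[QUEUE_COL]) for r in rows]
    thru  = [float(r[BYTES_COL]) for r in rows]
    return trend(cpu) <= 5.0 and trend(thru) > 0 and trend(queue) > 0

if __name__ == "__main__":
    print("SSD acceleration candidate:", looks_io_bound("perfmon_peak.csv"))
```

On the Oracle side, the analogous telltale in a STATSPACK report would be heavy waits on events such as db file sequential read or log file sync.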

“Still, it’s a try-before-you-buy scenario in over 50 percent of purchases,” he reports. Even in the face of documented evidence of I/O chokepoints, most of his customers want to try the RamSan product to see whether it will deliver a measurable performance improvement before they cut a purchase order for the gear. The company’s success over its 28-year history suggests how most of those evaluations turn out.

Texas Memory has identified a niche market that SSD is uniquely qualified to serve. It is a far cry from replacing magnetics altogether, but it is a profitable niche nonetheless. That said, there is another, lesser-known story of SSD, one that just might help propel it to the forefront of architectural thinking in storage for the future.

Adding SSD actually establishes the tiered-storage environment in open systems that today exists only in the mainframe world, where solid state memory typically comprises Tier 1 of a three (or more) tier system. Without the high-speed SSD storage tier, contemporary discussions of tiered storage in open systems environments are more marketecture than architecture.

In mainframe shops, Tier-1 silicon-based storage was capable of capturing data just as fast as the most nimble applications produced it. System memory in the mainframe was golden and needed to be freed up quickly. So magnetic disk direct access storage devices (DASD) provided a second, not-quite-as-expensive tier to which data was migrated, usually very quickly, once captured in memory. Tape, a very inexpensive medium then as now, provided long-term storage of near-line, archival, and backup data. In this classic mainframe model, true tiered storage existed, manifesting dramatically different speeds, feeds, and costs at each tier.

Despite all the talk about “storage tiering” in open systems, the performance variations and price-point differentials between Fibre Channel disk, SATA disk, and tape or optical simply don’t justify the verbiage. So, from an architectural standpoint, introducing a solid state tier might enhance the storage tiering story, provided you have applications that require it.

Hutsell also observes that solutions such as storage clustering, which purport to increase I/O speeds by increasing the number of disk spindles to which data is written, might be unnecessary in many cases if SSD technology were applied to more conventional server-storage configurations. I suspect he is correct. In many clustered deployments, an individual storage array outperforms the same array once it is placed in a cluster: more spindles or not, clustering introduces overhead that can diminish array performance by as much as 30 to 60 percent. Why throw more hardware at a performance problem if you can surmount it with a judicious application of SSD technology at a lower overall price?
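The arithmetic behind that observation is easy to sketch. With illustrative numbers only (nothing quoted by Hutsell), assume a standalone array delivers 20,000 IOPS and clustering imposes a 40 percent overhead, mid-range of the figures above:

```python
# Back-of-envelope with illustrative numbers (not vendor figures): what a
# second clustered array buys vs. what the hardware could do standalone.
ARRAY_IOPS = 20_000        # hypothetical standalone array performance
CLUSTER_OVERHEAD = 0.40    # mid-range of the 30-60 percent penalty cited

def clustered_iops(n_arrays: int) -> float:
    """Aggregate IOPS after clustering overhead is applied to each array."""
    return n_arrays * ARRAY_IOPS * (1 - CLUSTER_OVERHEAD)

# Two clustered arrays yield 24,000 IOPS, barely better than one standalone
# array at 20,000, despite doubling the hardware to buy, power, and manage.
print(clustered_iops(2))   # 24000.0
```

Under those assumptions, doubling the hardware buys a 20 percent gain; redirecting the hot I/O to an SSD attached to a single array could plausibly do better for less.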

Moreover, storage clusters typically entail significant management and downtime costs. There are more things to break, and annual failure rates climb with every piece of hardware you add. On this point, SSD shines. Not only have the developers of SSD technology improved the resiliency of their components to electromagnetic and RF interference, they have also added technologies like IBM’s Chipkill, which protects against individual memory chip failures, and soft-error scrubbing, which corrects bit errors caused by transients in data transmission. Texas Memory adds further layers of resiliency to its products, including disk drives that actively back up the contents of memory chips and additional power-protection features.
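For readers who haven’t met scrubbing before, a toy example shows the idea: an error-correcting code stores extra parity bits so that a single flipped bit can be located and repaired on the fly. This Hamming(7,4) sketch illustrates the principle only; it is not IBM’s Chipkill or Texas Memory’s actual design.

```python
# Toy single-error correction, the idea behind ECC memory and scrubbing.
# Hamming(7,4) sketch for illustration; not Chipkill or any vendor's design.

def encode(nibble):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def scrub(word):
    """Recompute parity; if a single bit has flipped, locate and repair it."""
    s1 = word[0] ^ word[2] ^ word[4] ^ word[6]
    s2 = word[1] ^ word[2] ^ word[5] ^ word[6]
    s3 = word[3] ^ word[4] ^ word[5] ^ word[6]
    syndrome = s3 * 4 + s2 * 2 + s1        # 1-based position of the bad bit
    if syndrome:
        word[syndrome - 1] ^= 1            # flip it back
    return word

word = encode([1, 0, 1, 1])
word[4] ^= 1                               # simulate a transient bit flip
assert scrub(word) == encode([1, 0, 1, 1]) # scrubbing restores the codeword
```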

Building SSD into existing Fibre Channel plants is actually very easy. Few products in the market are more versatile or standards-compliant than SSD. The products tout universal interoperability with every FC switch on the market and with virtually all array controllers, and 4 Gb-per-second Fibre Channel is fully supported on Texas Memory gear.

Finally, SSD offers what most magnetic media do not: a full encryption and data-deletion story. If your data requires strong encryption, SSD provides a platform for encrypting data on write. And if complete deletion, without possibility of forensic recovery, is a requirement, SSD answers there as well.
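To show what encrypting on write means in practice, here is a minimal sketch using the third-party Python cryptography package and AES-GCM. It demonstrates the general technique, not Texas Memory’s implementation; the file layout (nonce prepended to each block) is an assumption for the example.

```python
# Minimal encrypt-on-write sketch using AES-GCM via the third-party
# "cryptography" package (pip install cryptography). Illustrates the
# general technique only; this is not any vendor's implementation.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in practice, from a key manager
aead = AESGCM(key)

def write_block(f, data: bytes) -> None:
    """Encrypt each block on its way to storage; keep the nonce alongside."""
    nonce = os.urandom(12)                 # unique per write, never reused
    f.write(nonce + aead.encrypt(nonce, data, None))

with open("block.enc", "wb") as f:
    write_block(f, b"sensitive row data")

# Complete deletion then reduces to destroying the key: without it, the
# ciphertext left on the device is unrecoverable.
```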

Bottom line: I enjoyed revisiting SSD with Hutsell this week. I am still waiting for the announcement of SSD for the rest of us, possibly based on developments in the world of plastic chip technologies, in which wafers of lithographically etched conductive polymer (plastic that conducts electricity) are layered between non-conductive plastic wafers. Once perfected, this technology could see memory chips produced for pennies instead of tens of dollars per copy, causing a fundamental shift in the cost of solid state devices.

Until that happens, contemporary SSD is still worth a look. Your comments are welcome: jtoigo@toigopartners.com.

About the Author

Jon William Toigo is chairman of The Data Management Institute, the CEO of data management consulting and research firm Toigo Partners International, as well as a contributing editor to Enterprise Systems and its Storage Strategies columnist. Mr. Toigo is the author of 14 books, including Disaster Recovery Planning, 3rd Edition, and The Holy Grail of Network Storage Management, both from Prentice Hall.