
Sharing Files: Going With What Works

Efficient data sharing that doesn't require disruption to your infrastructure

There are a lot of ways to solve the problem of efficient data sharing across a geographically dispersed end-user community. After all, it’s a problem that has been around since the dawn of distributed computing.

Some solutions involve the wholesale replacement of proprietary file systems with more universal constructs. Others seek to manage metadata (if not the data itself) from a centralized server repository. Still others simply copy data over and over again to different repositories, each located in closer geographic proximity to end users, then depend on some sort of ingenious synchronization scheme to update all repositories whenever one copy changes.

It reminds you of the old complaint about engineers: ask an engineer how to fix any problem and the answer will be to rip out and replace the entire infrastructure with something new. The difficulty with such solutions is that they tend to be disruptive: users can’t find their files, or they end up overwriting each other’s changes, or the whole thing ends up requiring far more storage than was originally planned because of all the data replication.

Every once in a while, however, a story emerges about a data sharing solution that actually works and does not require wholesale retraining of the user community. Tacit Networks recently offered me such a story, involving its client, Brenntag Group, one of Europe’s largest chemical conglomerates.

To account for the time difference, I awakened as early as my wife (for a change), and while she got the kids off to school, I enjoyed a hot cup of fresh-ground coffee (which tasted a lot like what they serve in the better coffee shops on the Continent) and a telephone chat with Michael Langborg, a managing director with Brenntag. Langborg spoke to me from his office in the United Kingdom.

He explained that his part of the chemicals giant was responsible for supporting the work of sales operatives in 12 locations spread over four countries in Northern Europe. About a third of his 350 end users were mobile sales folk who were having trouble accessing the files they needed to do their work.

The origin of the problem traced back to a consolidation effort the company had undertaken a year or two earlier. To hear Langborg tell it, the company wanted to reduce its IT administration costs by consolidating applications and servers into a single data center leveraging IBM’s new port of Linux to its iSeries platform. On paper, such a consolidation made sense: the platform provided native support for Web services and was a natural fit for Notes and Domino, the messaging and groupware preferred by the company.

Langborg is quick to admit that what seems good on paper often fails to translate into value in real business life. No sooner had the consolidation been initiated than users in Denmark began complaining that they could not open their documents. The new system made file access intolerably slow.

Langborg’s first thought was to add bandwidth. A bigger communications pipe, he reasoned, would allow more data traversal and open the access throttle. His IBM service team, which had helped install the iSeries pilot, took a contrarian view. They said the problem wasn’t the pipe; it was the protocol.

When a Windows user opens a meg-and-a-half file at a remote location using a network file system protocol such as Redmond’s SMB (since re-monikered the Common Internet File System, or CIFS), the protocol breaks the file into pieces and waits for confirmation of the successful delivery of each piece. Chatty protocols are the bane of any network, regardless of the size of the pipe, IBM correctly noted.
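IBM’s point lends itself to a quick back-of-the-envelope check. The Python sketch below is purely illustrative; the 4 KB block size, round-trip times, and link speeds are my assumptions, not measurements from Brenntag’s network. It models a protocol that sends one block per round trip and shows why a fatter pipe barely moves the needle while lower latency does:

# A back-of-the-envelope sketch, not anyone's actual code: it models a
# protocol that sends one block and waits for an acknowledgment before
# sending the next. Block size, round-trip times, and link speeds are
# illustrative assumptions.

def transfer_time(file_bytes, block_bytes, rtt_seconds, bandwidth_bps):
    """Seconds to move a file when every block costs one round trip,
    plus the raw serialization time of the bits on the wire."""
    blocks = -(-file_bytes // block_bytes)              # ceiling division
    return blocks * rtt_seconds + (file_bytes * 8) / bandwidth_bps

FILE_SIZE = int(1.5 * 1024 * 1024)    # the article's meg-and-a-half file
BLOCK = 4 * 1024                      # assumed 4 KB per request/response
WAN_RTT, LAN_RTT = 0.060, 0.001       # assumed 60 ms WAN vs. 1 ms LAN latency
LINK = 10_000_000                     # assumed 10 Mbit/s WAN link

print(f"WAN link:           {transfer_time(FILE_SIZE, BLOCK, WAN_RTT, LINK):5.1f} s")
print(f"WAN link, 10x pipe: {transfer_time(FILE_SIZE, BLOCK, WAN_RTT, 10 * LINK):5.1f} s")
print(f"LAN latency:        {transfer_time(FILE_SIZE, BLOCK, LAN_RTT, 100 * LINK):5.1f} s")

With these assumptions, the file takes roughly 24 seconds over the WAN (in the neighborhood of the 27 seconds Langborg observed), a tenfold fatter pipe shaves barely a second off that, and the same transfer at LAN latency completes in about half a second. Latency, not bandwidth, is the lever.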

They went on to describe a new product from Tacit Networks that would improve data sharing speeds and feeds through intelligent, network-based file caching. Langborg says that he didn’t seize on the alternative immediately. The proprietary nature of Tacit’s own network protocols for file caching and synchronization was initially a turn-off, especially given the problems the company had already experienced with Microsoft’s SMB protocol.

When the only alternative proposed to him, by consultants he had engaged to review the situation, consisted of deploying more Windows servers at his 12 offices and replicating data at each site, he balked. It felt like a step backward from his consolidation effort. He decided to give the Tacit solution a try.

He traveled to IBM’s test facility in Copenhagen, Denmark, to have a look at the technology. There he learned about Wellspring Architecture, Tacit’s Wide Area File Services strategy, and got a firsthand look at the vendor’s I-SHARED Server and Remote Appliance. The decision was made to set up two locations with the product and to test whether the Tacit approach really sped up file access.

In his words, “A typical 1.5 MB file took about 27 seconds to open using our existing setup and SMB. With Tacit, files took about 3 seconds to open in a cold state (meaning that they had not been previously accessed and stored in the cache of the local Tacit appliance installed in the requestor’s office LAN). The response time was sub-second for files that were already cached, or in a warm state, on the Tacit box.”
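To make the cold/warm distinction concrete, here is a minimal Python sketch of the general WAFS caching pattern. It is not Tacit’s implementation, and the three-second WAN fetch is just a stand-in for the cold-state cost Langborg quotes: the first open of a file pays the full WAN trip; subsequent opens are served from the appliance’s local cache.

import time

class FileCacheAppliance:
    """Toy model of a branch-office caching appliance (illustrative only)."""
    def __init__(self, wan_fetch_seconds=3.0):
        self._cache = {}                              # path -> file bytes
        self._wan_fetch_seconds = wan_fetch_seconds   # assumed cold-fetch cost

    def _fetch_over_wan(self, path):
        time.sleep(self._wan_fetch_seconds)           # stand-in for the WAN pull
        return b"contents of " + path.encode()

    def open(self, path):
        if path in self._cache:                       # warm state: served locally
            return self._cache[path]
        data = self._fetch_over_wan(path)             # cold state: full WAN cost
        self._cache[path] = data                      # cached for the next request
        return data

appliance = FileCacheAppliance()
for label in ("cold", "warm"):
    start = time.time()
    appliance.open("/sales/forecast.xls")             # hypothetical file path
    print(f"{label} open: {time.time() - start:.3f} s")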

File changes are handled through a distributed locking mechanism that ensures files opened concurrently and edited by individual end users are properly closed without losing any user’s changes.
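To illustrate the general idea (Tacit’s actual locking protocol is proprietary and may differ considerably), here is a toy Python lock manager that grants one writer per file at a time, so a remote appliance must hold the lock before writing back a user’s edits:

import threading

class LockManager:
    """Toy central lock authority: one writer per file at a time."""
    def __init__(self):
        self._owners = {}                # path -> site currently holding the lock
        self._mutex = threading.Lock()   # protects the owner table itself

    def acquire(self, path, site):
        with self._mutex:
            owner = self._owners.get(path)
            if owner is None or owner == site:
                self._owners[path] = site
                return True              # granted: this site may write back edits
            return False                 # denied: another site holds the lock

    def release(self, path, site):
        with self._mutex:
            if self._owners.get(path) == site:
                del self._owners[path]

manager = LockManager()
print(manager.acquire("/sales/forecast.xls", "copenhagen"))   # True
print(manager.acquire("/sales/forecast.xls", "london"))       # False: must wait
manager.release("/sales/forecast.xls", "copenhagen")
print(manager.acquire("/sales/forecast.xls", "london"))       # True: safe to edit

In a real deployment, the denied requester would typically get read-only access until the lock is released, which is how two salespeople can open the same spreadsheet without clobbering each other’s saves.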

The performance of the solution was so good, in fact, that Langborg cut the proof-of-concept testing short after just two to three weeks and began installing the Tacit boxes in every one of his 12 offices and in his two data centers. He maintains that he would have preferred to solve his problem with an open, standards-based protocol, but he simply couldn’t argue with success.

Time has proven his decision to be a good one. He proudly asserts that his Tacit solution is nearing its two-year anniversary. Brenntag was an early adopter of Tacit technology and continues to upgrade it as the product undergoes changes and refinements.

He continues to watch the product expand its capabilities with new features and functions, including “stackable services” announced on March 1. Stackable services, which currently include print services, Web caching, e-mail consolidation and acceleration, remote management, and security services, are intended to add value to the product by tackling issues beyond file access and synchronization. The word from Tacit is that the company does not want to be regarded as a “Johnny-one-note”: a file access accelerator and nothing more. With the new services, it will be able to help solve such knotty issues as slow mail-system access for road warriors and inefficient patch and desktop management from a central office.

Tacit Networks, as we have said before in this column, is worth a look. Your insights are welcome: jtoigo@toigopartners.com.

About the Author

Jon William Toigo is chairman of The Data Management Institute, the CEO of data management consulting and research firm Toigo Partners International, as well as a contributing editor to Enterprise Systems and its Storage Strategies columnist. Mr. Toigo is the author of 14 books, including Disaster Recovery Planning, 3rd Edition, and The Holy Grail of Network Storage Management, both from Prentice Hall.
