Best Practices: Windows File Server Consolidation (Part 2 of 2)
While Windows file server consolidation is seen as a top priority by most IT directors, such projects can be tricky to undertake. In the second part of our two-part discussion, we explore the best practices that will help you manage the risks and problems of such projects.
Windows file server consolidation is a complex project, but following best practices will help you manage risks and problems, and define a process that is best suited for your environment. Appropriate tools and techniques will help ease the implementation pain by automating labor-intensive tasks and minimizing downtime.
There are many solutions designed for upgrading and consolidating homogeneous environments from one Windows server version to another. Data centers, however, are often looking at high-end NAS file servers as Windows server alternatives; these NAS file servers are designed to scale, provide data-center-class data-protection facilities, and can be used to integrate Unix and Windows storage. An effective solution must be able to move data to and from these NAS file servers. Network-based solutions that support heterogeneous IP-based storage environments offer more versatility.
Solutions that require server agent deployment tend to be problematic. When you have a large number of Windows servers to consolidate, agents create growing problems of administration, management, and control. Legacy NT servers are often underpowered and unreliable, and taxing them with the additional overhead of running agents is undesirable. When the consolidation target is a proprietary NAS file server on which software cannot be installed, a proxy server must be set up to push and pull data remotely. Deploying an appliance on the network eliminates the administrative hassles associated with agents.
Avoid making modifications to your existing environment in order to integrate the solution into the infrastructure (such as changing the IP addresses of file servers or reconfiguring the namespace on clients). These changes are at odds with the implementation goal of minimizing user disruption. Solutions should be non-intrusive, including the ability to throttle bandwidth and to exclude specific file servers and shares from monitoring.
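Bandwidth throttling is normally built into the migration appliance itself, but the idea is simple to sketch. The following is a minimal illustration (function and parameter names are hypothetical, not from any particular product): copy in chunks and sleep whenever the copy gets ahead of the allowed transfer rate.

```python
import time

def throttled_copy(src, dst, limit_bytes_per_sec, chunk_size=64 * 1024):
    """Copy src to dst, sleeping as needed to stay under the bandwidth cap."""
    start = time.monotonic()
    copied = 0
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(chunk_size)
            if not chunk:
                break
            fout.write(chunk)
            copied += len(chunk)
            # If we are ahead of the allowed rate, pause until we are not.
            expected_elapsed = copied / limit_bytes_per_sec
            actual_elapsed = time.monotonic() - start
            if expected_elapsed > actual_elapsed:
                time.sleep(expected_elapsed - actual_elapsed)
```

A real appliance applies this kind of pacing per source server and per time window (for example, tighter limits during business hours), but the rate-limiting loop is the same in principle.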
- Heterogeneous file system and storage support
- Agentless architecture
- No client or server reconfiguration
- Fail-safe design
- Transparency while migrating
- Task-based automation
- Open interface
- Ongoing capacity management
Solutions should be fail-safe, ensuring data integrity and recoverability. A transaction-based design protects against partial or failed operations. A well-designed system should properly handle data integrity, data consistency, and data-access issues, and a clustered configuration should be available to eliminate single points of failure.
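The transactional idea can be sketched in a few lines: write to a temporary file, verify a checksum, and only then commit with an atomic rename, so a crash or error never leaves a half-written destination in place. This is an illustrative sketch, not any vendor's implementation; the names are hypothetical.

```python
import hashlib
import os

def safe_copy(src, dst):
    """Copy src to a temporary file, verify its checksum, then atomically
    rename it into place so readers never see a partial destination file."""
    tmp = dst + ".inflight"
    try:
        src_hash = hashlib.sha256()
        with open(src, "rb") as fin, open(tmp, "wb") as fout:
            for chunk in iter(lambda: fin.read(1 << 16), b""):
                src_hash.update(chunk)
                fout.write(chunk)
        # Re-read the temporary copy and compare digests before committing.
        tmp_hash = hashlib.sha256()
        with open(tmp, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                tmp_hash.update(chunk)
        if tmp_hash.digest() != src_hash.digest():
            raise IOError("checksum mismatch copying %s" % src)
        os.replace(tmp, dst)  # atomic commit on the same filesystem
    except Exception:
        if os.path.exists(tmp):
            os.remove(tmp)  # roll back: never leave partial data behind
        raise
```

Commercial tools track the same commit/rollback state in a journal so an interrupted migration can be resumed or undone, but the copy-verify-commit pattern is the core of it.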
Solutions should provide a transparent user experience. They must support synchronous mirroring of the source shares' content to the destination while files are open and being updated by clients; all share content, including data, security settings, and file and folder attributes, must be mirrored. Furthermore, solutions must support a grace period for users to switch from accessing the source server to accessing the destination server, so that users do not need to be disconnected immediately.
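True synchronous mirroring of open files requires filesystem-level change tracking that only the migration product itself can provide. As a much simpler illustration of what "mirror everything, not just the data" means, the sketch below replicates one pass of a share: the directory tree, file contents, and file/folder timestamps and permission bits. Security descriptors (ACLs) and in-flight changes are deliberately out of scope here.

```python
import os
import shutil

def mirror_share(src_root, dst_root):
    """One batch pass of a share mirror: replicate the directory tree,
    file data, and file/folder metadata (shutil.copy2 and copystat
    preserve timestamps and mode bits). Real migration tools also mirror
    security descriptors and track live updates; that is elided here."""
    for dirpath, _dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        target_dir = dst_root if rel == "." else os.path.join(dst_root, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            shutil.copy2(os.path.join(dirpath, name),
                         os.path.join(target_dir, name))
        shutil.copystat(dirpath, target_dir)  # folder timestamps/attributes
```

A production mirror would run this repeatedly (or be driven by change notifications) until source and destination converge, at which point the cutover grace period begins.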
The solution you choose should offer task-based wizards that simplify multi-step processes. For example, a solution should automate the key steps involved in migrating a share: creating the destination share, copying the share’s content, resetting permissions when security translation is necessary, and updating the DFS link.
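The value of task-based automation is that the steps always run in the right order and a failure stops the sequence before later steps corrupt anything. A minimal sketch of such a task runner follows; the step functions are site-specific placeholders (all names are hypothetical), which a real wizard would supply.

```python
def migrate_share(share, ctx):
    """Run the share-migration steps in order, stopping at the first
    failure. ctx maps step names to callables supplied by the site;
    every name here is illustrative, not a real product API."""
    steps = [
        ("create destination share", ctx["create_destination_share"]),
        ("copy share content",       ctx["copy_content"]),
        ("translate security",       ctx["translate_security"]),
        ("update DFS link",          ctx["update_dfs_link"]),
    ]
    completed = []
    for label, step in steps:
        step(share)            # any exception aborts the remaining steps
        completed.append(label)
    return completed
```

Because the DFS link is updated only after the content and permissions are in place, clients following the namespace are never pointed at an incomplete destination.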
Customizable and extensible solutions are essential for automating tasks that meet site-specific requirements. An open interface lets you codify your own best practices as scripts and integrate the solution with other operational processes.
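What an "open interface" means in practice is an extension point where site-specific logic can hook into the product's workflow. A minimal sketch of such a hook registry, with entirely hypothetical names, might look like this:

```python
class MigrationHooks:
    """Tiny extension point: sites register callbacks that fire on
    workflow events, e.g. to open a change ticket or trigger a backup
    after a share is migrated. All names here are illustrative."""

    def __init__(self):
        self._hooks = {}

    def register(self, event, callback):
        """Attach a site-specific callback to a named workflow event."""
        self._hooks.setdefault(event, []).append(callback)

    def fire(self, event, **info):
        """Invoke every callback registered for this event."""
        for callback in self._hooks.get(event, []):
            callback(**info)
```

Real products expose this kind of integration through scripting interfaces or command-line tools rather than an in-process registry, but the principle of letting your own code run at defined points in the workflow is the same.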
One of the major objectives of consolidation is to improve storage utilization. Changing the architecture to a centralized shared-storage model will drive up utilization. Post-consolidation, the issues and strategies for effective capacity management will be different. Ongoing storage management will require the ability to load-balance capacity, allocate capacity on demand, and move data to secondary storage as it ages to free up space without user disruption. To maximize your investment, consider solutions that remain useful beyond a one-time migration project.
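Age-based tiering, mentioned above, starts with identifying the candidate data. As a simple illustration (the function name and threshold are hypothetical), a scan for files untouched beyond a retention window might look like:

```python
import os
import time

def files_to_tier(root, max_age_days):
    """Return files not modified within max_age_days: candidates for a
    move to secondary storage. A real tiering policy would also consider
    size, owner, file type, and access (not just modification) time."""
    cutoff = time.time() - max_age_days * 86400
    stale = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                stale.append(path)
    return stale
```

In a consolidated environment the key is that the subsequent move is transparent: a namespace layer such as DFS keeps the logical path stable while the data lands on cheaper storage.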