It is becoming more important to protect mission-critical servers and storage resources against any problem that threatens an organization's ability to provide access to its key data. As a result, failover has become essential to continuous business operations. Here are a few ways administrators can configure their storage resources to provide high availability of data.
Whole file replication can provide access to an alternate copy of important data. However, failover to that copy is automatic only if the company is using a technology like Windows' Distributed File System (DFS), which abstracts users from the actual location of their data. The primary problem with file-based replication is that the replica is only as current as the last scheduled replication push.
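The staleness problem is easy to see in a minimal one-way replication sketch (this is an illustration, not DFS itself; paths and the `replicate` function are hypothetical). Only files newer than the replica are pushed, so anything written after the last run is absent from the copy:

```python
import os
import shutil

def replicate(source: str, target: str) -> list[str]:
    """One-way whole-file replication: copy any file whose source
    version is newer than (or missing from) the target replica."""
    copied = []
    for root, _dirs, files in os.walk(source):
        rel = os.path.relpath(root, source)
        dest_dir = os.path.join(target, rel)
        os.makedirs(dest_dir, exist_ok=True)
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(dest_dir, name)
            # The replica is only as fresh as the last push: data
            # written after this run is lost if the source fails.
            if (not os.path.exists(dst)
                    or os.path.getmtime(src) > os.path.getmtime(dst)):
                shutil.copy2(src, dst)  # copy2 preserves timestamps
                copied.append(dst)
    return copied
```

In production this logic would run on a schedule (e.g., an hourly task), which is exactly why the recovery point is bounded by the replication interval.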
Application-centric replication behaves similarly, but the client software would most likely have to be manually redirected to the alternate application server. This is why whole file- and application-based replication solutions are not typically satisfactory high-availability solutions: they make failover visible to clients.
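To illustrate why this failover is "visible," consider a sketch of a client that must itself know every replica and redirect on failure (the server names and `request` callback are hypothetical). The redirection logic lives in the client, not behind a transparent abstraction layer:

```python
from typing import Callable, Sequence

class ReplicatedClient:
    """Client that must know about every replica: when the primary
    fails, redirection happens on the client side, so failover is
    visible to (and must be handled by) the client software."""

    def __init__(self, servers: Sequence[str],
                 request: Callable[[str], str]):
        self.servers = list(servers)
        self.request = request

    def call(self) -> str:
        last_error: Exception | None = None
        for server in self.servers:
            try:
                # Try each server in order; the client itself
                # performs the redirect when one is unreachable.
                return self.request(server)
            except ConnectionError as exc:
                last_error = exc
        raise last_error
```

A transparent solution (such as DFS or a cluster virtual name) would hide this loop entirely; here, every client must carry it.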
Other technologies are better suited to high availability designs.
Although clustering is designed exclusively for high availability, a typical cluster still has a single point of failure: its shared storage subsystem. By definition, a highly redundant system should not have single points of failure; hence the need for hardware mirroring or software replication as a supplement.
Hardware mirroring involves making exact copies of the data; the storage controller already abstracts the servers from the storage. As long as the server is functional (or can be rebuilt or repaired), it can simply access the redundant array transparently, without concern about which replica it is actually using. Of course, if the server cannot quickly be restored, hardware mirroring doesn't help. This is why it is usually deployed in conjunction with clustering.
In some companies, software replication can play a key role. While a company's most critical servers might already be clustered or protected with hardware replication, the remaining key servers, which may not be perceived as needing to be highly available, can be protected by replicating their data. In many corporate environments today, file servers tend to be unprotected, even though software replication provides an easy way to copy many servers' data to a single replication target.
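The many-to-one pattern can be sketched as fanning each server's data into its own subdirectory on a single target (a simplified illustration; the server names, layout, and `replicate_many` function are assumptions, not a specific product's behavior):

```python
import os
import shutil

def replicate_many(sources: dict[str, str], target: str) -> dict[str, int]:
    """Copy each server's tree into its own subdirectory of a single
    replication target, e.g. target/<server_name>/... , and report
    how many files each replica holds."""
    counts = {}
    for server, path in sources.items():
        dest = os.path.join(target, server)
        # dirs_exist_ok lets repeated runs refresh the replica in place
        shutil.copytree(path, dest, dirs_exist_ok=True)
        counts[server] = sum(
            len(files) for _, _, files in os.walk(dest))
    return counts
```

One target box with enough disk can thus back many otherwise unprotected file servers, which is what makes this approach economical for second-tier systems.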
About the author: Rahul Shah currently works at a software firm in India, where he is a systems administrator maintaining Windows servers. He has also worked for various software firms in testing and analytics, and has experience deploying client/server applications in different Windows configurations.