
Dos and don'ts of choosing, managing clusters

Do you expect a cluster to do something it can't? Cluster technology expert David Flawn reveals the misconceptions about the capabilities of clusters and offers tips for effective cluster deployment.

Any IT manager who thinks that clustering technology will abolish system downtime is sadly deluded. There's a big difference between "highly available" and "continuously available" servers, said David Flawn, vice president of Worldwide Business Development for Stratus Technologies Inc., a server vendor in Maynard, MA. Clusters offer the former, and they deliver even that high availability only if they're deployed and managed correctly.

As one of Stratus' leading experts on high availability server technologies, Flawn understands the need for continuous uptime in e-commerce environments. To help IT managers understand when a cluster is appropriate and how to make a cluster work effectively, he offered the following cluster management dos and don'ts.

Don't choose a cluster if you are running lights-out operations or require the utmost in simplicity. Clusters often demand extra management effort in the form of wiring, failover scripts, and ongoing care. If you are running a site without an administrator, consider alternative systems that are "self-managing" or have remote service capabilities.
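Those failover scripts are part of the ongoing care Flawn describes. As a purely illustrative sketch -- not any vendor's actual tooling, and with a hypothetical function name -- the core decision such a script encodes is often "fail over only after several consecutive failed health checks," so a single transient glitch doesn't trigger an unnecessary (and itself downtime-inducing) failover:

```python
# Illustrative failover-decision logic an administrator might script
# and maintain by hand. `should_fail_over` is a hypothetical name.

def should_fail_over(health_history, threshold=3):
    """Return True once the primary node has failed `threshold`
    consecutive health checks (True = check passed, False = failed)."""
    streak = 0
    for ok in health_history:
        streak = 0 if ok else streak + 1
        if streak >= threshold:
            return True
    return False

# A transient failure is tolerated; a sustained outage triggers failover.
print(should_fail_over([True, False, True, False, False]))  # False
print(should_fail_over([True, False, False, False]))        # True
```

The threshold is the knob an administrator must tune: too low and transient network blips cause spurious failovers, too high and real outages go unanswered for longer.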

Don't choose a cluster if you require near-perfect availability and reliability, as in, say, an emergency 911 operation. While clusters can deliver very high levels of uptime, it's important not to confuse highly available with continuously available. Failover between clustered systems often results in several minutes of downtime. If your system absolutely cannot go down -- even for a minute -- a fault-tolerant server platform may be the better choice.

Don't assume that adding more systems to a cluster will enhance system availability. Clusters do a good job of recovering from failure, but they do not prevent failure as some alternative approaches can. So while adding more systems will surely bolster scalability and processing power, it may also introduce more points of failure. If you need the highest level of availability, consider architectural design at the individual system level -- not the size of your cluster.

Do choose a cluster if you require massive scalability for very compute-intensive applications, such as life sciences or electronic design automation (EDA) workloads.

Do choose a cluster if your application requires both high (but not continuous) availability and the ability to handle extremely heavy computing loads -- a Web front end, for example, where very small lapses of downtime are either invisible or mildly tolerable to customers.

Do consider disaster recovery mechanisms, no matter what type of computing infrastructure you have. Certain natural disasters -- earthquakes, floods, and the like -- can bring down even the most robust computing infrastructure by cutting off power supplies. If your applications are truly mission-critical and you can't afford to lose an iota of data, make sure you have the right recovery processes in place -- and as many layers of backup as your business needs dictate.
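One piece of that recovery hygiene is making sure a backup is actually usable before you need it. The following is a minimal, illustrative Python sketch -- not any particular backup product, and with hypothetical file names -- of verifying that a backup copy is byte-identical to its source by comparing checksums:

```python
# Illustrative only: verify a backup against its source by checksum
# before trusting it for recovery. File names here are hypothetical.
import hashlib
import os
import tempfile

def sha256_of(path):
    """Stream a file through SHA-256 so large backups fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate a source file and its backup, then confirm they match.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "source.dat")
bak = os.path.join(tmp, "backup.dat")
for p in (src, bak):
    with open(p, "wb") as f:
        f.write(b"critical transaction log")

print(sha256_of(src) == sha256_of(bak))  # True only if the copy is intact
```

Hashing catches silent corruption in transit or on disk; it is no substitute for periodically rehearsing a full restore, which is the only real test of a recovery process.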


