Previously in this series, I introduced you to Microsoft's Cluster Diagnostics (ClusDiag) tool, a free download that allows you to verify, report on and troubleshoot Windows server clusters. As mentioned in part one, the verification functionality of ClusDiag has been largely replaced by ClusPrep, a new tool released last year.
Despite having part of its functionality superseded, ClusDiag still has fantastic reporting capabilities and handy troubleshooting functionality. This article will focus on the reporting capabilities of ClusDiag, with details on how to capture logs and view reports. We will then conclude the series by taking a look at how ClusDiag can be used to troubleshoot server cluster issues.
Before viewing the reports -- graphical or text-based -- you first need to capture the logs from the cluster using ClusDiag's online mode. After invoking ClusDiag, you will be prompted to select the mode using a radio button. Once you select Online, type in the cluster name you want or use the browse button to locate it. See Figure 1 for an example of the initial ClusDiag dialog box.
After connecting to the cluster, ClusDiag may take a few moments to initialize (you'll see progress boxes in the lower-right status bar). Use the Tools pull-down menu to select Capture Logs. This option is only available in online mode; it is grayed out when working offline. Leave the default Capture Type setting on "Full Capture," as this will include the XML data needed for the disk and network configuration reports. Figure 2 illustrates the Capture Logs dialog box.
Hitting the Start button on the Capture Logs dialog box will initiate the log capture process. A progress window (shown in Figure 3) will appear listing the details and progress of the tasks. A message will appear when the collection is complete. Another way to capture the data is to perform a verification test using the Tools pull-down menu to select Run Test. However, as previously mentioned, the ClusDiag test functionality has been superseded by ClusPrep.
The data is stored by default under a folder called Logs in the location where ClusDiag was installed. Within Logs, a folder is created with the date of the collection as its name. Each cluster then has a separate folder, and under that, each node has its own subfolder. The folder structure looks similar to Figure 4.
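The nested layout described above (date, then cluster, then node) is easy to inventory programmatically once you have accumulated several captures. The following is a minimal sketch, assuming a directory tree shaped the way the article describes; the exact folder names ClusDiag generates may differ.

```python
from pathlib import Path

def list_capture_tree(logs_root):
    """Walk a ClusDiag-style Logs tree (date -> cluster -> node)
    and return a dict of {date: {cluster: [node, ...]}}."""
    tree = {}
    for date_dir in sorted(p for p in Path(logs_root).iterdir() if p.is_dir()):
        clusters = {}
        for cluster_dir in sorted(p for p in date_dir.iterdir() if p.is_dir()):
            # Each node's collected logs live in its own subfolder.
            clusters[cluster_dir.name] = sorted(
                p.name for p in cluster_dir.iterdir() if p.is_dir())
        tree[date_dir.name] = clusters
    return tree
```

A helper like this makes it simple to confirm that every node of every cluster was captured on a given date before you archive the logs.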
The collection includes event logs, cluster logs, IP configuration and cluster registry hives. When the Full Capture type is selected, XML data is generated for disk and network configurations, which is used to create disk and network reports.
Once you capture the logs, ClusDiag will automatically be placed in offline mode for viewing the reports. Use the "View" pull-down menu to select the type of graphical view you want. Your options are Disk View, Network View or D.A.G View (directed acyclic graph of cluster resources and their dependencies).
The Disk View is very handy for documenting the cluster storage configuration. As you can see below in Figure 5, the Disk View displays all the local and shared drives across the cluster. The quorum drive is highlighted in red, and shared SAN-based drives are shown with the blue sharing-hand icon. In this example, there is a three-node cluster (Arcticwolf, Nordikwolf and Timberwolf). Each node has access to the shared SAN disks (Q:, R:, S: and T:). By hovering the mouse pointer over a particular drive, ClusDiag will expand the details for the disk, including type, signature, target ID, path ID and LUN information.
In addition to the graphical view of the cluster's disk configuration, you can also view a text-based HTML copy of the report. Use the "Reports" pull-down menu to select the type of report you want to view. Saving these reports can provide great baseline documentation for your cluster's storage configuration, including all the disk signatures should they ever need to be reestablished.
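Those disk signatures are worth understanding: on an MBR-partitioned disk, the 32-bit signature is stored little-endian at byte offset 0x1B8 of the boot sector. As a minimal illustration of what the reports are recording, here is a sketch that extracts the signature from a raw 512-byte MBR sector; it is a generic MBR reader, not part of ClusDiag.

```python
import struct

MBR_SIGNATURE_OFFSET = 0x1B8  # 4-byte NT disk signature lives here in the MBR

def disk_signature(mbr_sector: bytes) -> str:
    """Return the NT disk signature from a 512-byte MBR sector as hex."""
    if len(mbr_sector) < 512 or mbr_sector[510:512] != b"\x55\xaa":
        raise ValueError("not a valid MBR boot sector")
    (sig,) = struct.unpack_from("<I", mbr_sector, MBR_SIGNATURE_OFFSET)
    return f"{sig:08X}"
```

Because the cluster service identifies shared disks by these signatures, having them captured in a saved report is exactly what you need if a signature ever has to be rewritten after a storage failure.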
The Network View is also a great way to document your cluster's network configuration. Again, use the "View" pull-down menu to select the Network View option. This will display a network topology of the cluster members and their network interfaces. Once again, when you hover the mouse pointer over a particular NIC icon, ClusDiag will expand the information for the NIC, including IP address, subnet mask, DNS servers, WINS server and so on. See Figure 6 for an example of the Network View.
Just like the Disk report, a corresponding text-based HTML report also exists for the network configuration. Use the "Reports" pull-down menu and select Network Statistics to view the text-based network report. Once again, it is a good idea to save these reports to provide baseline documentation for your cluster's network configuration.
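If you want to keep those baselines current between ClusDiag captures, a small script can snapshot any command's output (for example, "ipconfig /all" on each node) to a timestamped file. This is a generic sketch of the idea; the helper name and layout are my own, not a ClusDiag feature.

```python
import subprocess
import time
from pathlib import Path

def snapshot(command, outdir, label):
    """Run a command and save its stdout to a timestamped text file,
    e.g. snapshot(["ipconfig", "/all"], r"C:\\baselines", "nordikwolf")."""
    out = subprocess.run(command, capture_output=True, text=True).stdout
    path = Path(outdir) / f"{label}-{time.strftime('%Y%m%d-%H%M%S')}.txt"
    path.write_text(out)
    return path
```

Comparing the newest snapshot against the saved ClusDiag network report quickly reveals whether anything has drifted since the baseline was taken.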
Finally, perhaps the handiest of all reports is the D.A.G. View, which illustrates cluster resource dependencies. In fact, I often use ClusDiag for this one feature alone, especially to analyze complex resource dependencies such as Exchange Server resources. Like the Disk and Network views, use the "View" pull-down menu to select D.A.G, and then use the submenu to select the cluster group you want to view. Figure 7 illustrates a D.A.G View for the Exchange cluster group.
By hovering the mouse pointer over a particular resource, ClusDiag will expand the details, displaying the resource name, resource type, dependencies and any other private resource settings. A red line illustrates the resource's dependency tree, making it simple to visually inspect complex resource dependencies.
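What the D.A.G View is drawing is exactly a topological ordering problem: a resource can only come online after everything it depends on is online. The sketch below models a simplified, hypothetical Exchange group (the resource names are illustrative, not taken from the article's Figure 7) and derives a valid bring-online order from the dependency map.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical dependency map for an Exchange cluster group:
# each resource maps to the set of resources that must be online first.
deps = {
    "IP Address": set(),
    "Physical Disk": set(),
    "Network Name": {"IP Address"},
    "System Attendant": {"Network Name", "Physical Disk"},
}

# Any topological order of the DAG is a valid bring-online sequence;
# dependencies always appear before the resources that need them.
order = list(TopologicalSorter(deps).static_order())
```

This is also why the view must be acyclic: a dependency cycle would leave the cluster service with no resource it could bring online first, and `TopologicalSorter` would likewise raise a `CycleError`.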
As you can see, ClusDiag has some powerful reporting capabilities. These reports can document your server cluster's baseline configuration in case the configuration ever changes unexpectedly. The next and final part of this series will describe how you can use ClusDiag's offline mode, and its many built-in features, to troubleshoot cluster-wide issues.
ABOUT THE AUTHOR
Bruce Mackenzie-Low, MCSE/MCSA, is a systems software engineer with HP providing third-level worldwide support on Microsoft Windows-based products, including clusters and crash dump analysis. With more than 20 years of computing experience at Digital, Compaq and HP, Bruce is a well-known resource for resolving highly complex problems involving clusters, SANs, networking and internals.