Heat is a notorious destroyer of disks and electronic components, but as component densities go up, it is getting harder to keep heat at bay.
Part of the problem is that the original American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) cooling specifications for data centers predate high-density storage systems, compact rack mounts and high-power (and high-heat) processors.
According to American Power Conversion Corp. (APC), the average power consumption of an enclosure in a data center is about 1.4 kW. But a fully populated enclosure with modern high-density components can consume 10 kW or more while producing a proportional amount of heat. A rack full of small form factor disks doesn't produce as much heat as a rack full of blade servers, but it produces a lot more than 1.4 kW.
The problem is compounded because the traditional method of sizing cooling for a data center is by the room. That is, the designer adds up the power consumption (and hence heat production) of all the devices in the room and sizes the cooling system to match that total load.
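The room-level arithmetic is simple enough to sketch. The rack loads below are hypothetical, but the conversion factors are standard: 1 kW of electrical load produces roughly 3,412 BTU/hr of heat, and one ton of cooling capacity handles 12,000 BTU/hr.

```python
# Room-level cooling estimate: sum the power draw of every enclosure
# and size cooling for the total. The per-rack figures are hypothetical.
rack_loads_kw = [1.4, 1.4, 10.0, 8.5, 1.4]  # kW per enclosure

total_kw = sum(rack_loads_kw)
btu_per_hr = total_kw * 3412        # 1 kW ~ 3,412 BTU/hr of heat
cooling_tons = btu_per_hr / 12000   # 1 ton of cooling = 12,000 BTU/hr

print(f"Total load:   {total_kw:.1f} kW")
print(f"Heat output:  {btu_per_hr:,.0f} BTU/hr")
print(f"Cooling size: {cooling_tons:.1f} tons")
```

Note what this calculation hides: the 10 kW rack and the 1.4 kW rack contribute to the same room total, so a system sized this way can meet the average while individual enclosures overheat.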
With high-density storage and other components, however, the important figure is not the total cooling supplied to the room, it is the inlet temperature at each individual enclosure. Even if the thermometer on the wall is in the specified range, the disk arrays may be much hotter because the air being sucked into the cases is pre-heated by nearby components.
One of the most useful tools for beating the heat is a simple thermometer with a flexible probe: use it to measure the air inlet temperature at each array or other device, not just the ambient temperature of the room.
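If you log those probe readings, a few lines of script can flag the enclosures whose inlets run hot. The 25 °C limit and the readings here are hypothetical; check the specified inlet range for your own equipment.

```python
# Sketch: flag enclosures whose measured inlet temperature exceeds a
# maximum. Limit and readings are hypothetical, for illustration only.
MAX_INLET_C = 25.0

inlet_readings_c = {
    "rack-01": 21.5,
    "rack-02": 24.0,
    "rack-03": 28.5,  # pre-heated by a neighboring high-density enclosure
}

hot_spots = {name: t for name, t in inlet_readings_c.items() if t > MAX_INLET_C}
for name, t in sorted(hot_spots.items()):
    print(f"{name}: inlet {t:.1f} C exceeds the {MAX_INLET_C:.1f} C limit")
```

A room thermometer would average these readings into an acceptable number; the per-enclosure check is what catches rack-03.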
Here are three actions you can take that may help you lower the temperature in your data center:
- Rearrange racks to separate high-heat components and encourage more airflow. Often something as simple as building air deflectors out of cardboard and duct tape can make a big difference. Use your thermometer to test changes as you make them, to be sure each one actually helps.
- Dust is an excellent insulator. Keep the components clean and vacuum out or blow out dust regularly.
- Do not run components without their cases in an effort to keep them cool. In modern storage arrays the case serves a vital function in directing the airflow where it is needed. Taking off the case can actually make critical components run hotter.
Ultimately, as you move to high-density components, you will probably have to increase the cooling capacity of your data center to handle the increasing heat.
ASHRAE has a book on data center cooling: Thermal Guidelines for Data Processing Environments.
Check out the APC White Paper (#46) titled Power and Cooling for Ultra-High Density Racks and Blade Servers, available at its Web site.
In addition, most major manufacturers of storage systems, such as EMC Corp., now have reports available on the cooling requirements of their products.
Rick Cook has been writing about mass storage since the days when the term meant an 80 KB floppy disk. The computers he learned on used ferrite cores and magnetic drums. For the last twenty years, Rick has been a freelance writer specializing in storage and other computer issues.