
Achieving efficient Data Centre cooling

As a result of increased IT performance and density of electronic equipment, data centres are becoming more difficult to cool. Blanca Beato Arribas suggests ways to overcome the challenges while working within the stipulated cooling parameters.

Published: June 18, 2013

Because of the sensitive nature of the equipment and the fact that some servers must run continuously, 24/7, every day of the year, data centres demand a cooling system of 99.9% reliability and effective environmental control.

The ASHRAE thermal guidelines (2011) classify data centres into six different categories (A1 to A4, B and C), depending on the environmental specifications and the degree of environmental control. ASHRAE recommends a dry-bulb temperature of 18°C to 27°C and a humidity range from a 5.5°C dew point (DP) to 60% RH and a 15°C DP, although the allowances relax considerably depending on the data centre class, to allow greater flexibility in the design and operation of data centres. In any case, the relaxation of the standards does not mean that the cooling or power problems of the building will disappear: even if IT equipment can operate across these ranges, there will be an impact on the energy use of the data centre.
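
To make the envelope concrete, here is a minimal Python sketch that checks a set of sensor readings against the recommended limits quoted above (18-27°C dry bulb, 5.5-15°C dew point, 60% RH upper bound). The function name and data layout are illustrative assumptions, not part of any ASHRAE tool.

    # Illustrative check against the ASHRAE recommended envelope quoted above.
    # Thresholds and function name are assumptions for this sketch only.
    def within_recommended_envelope(dry_bulb_c, dew_point_c, rh_percent):
        """Return a list of limit violations (an empty list means compliant)."""
        violations = []
        if not 18.0 <= dry_bulb_c <= 27.0:
            violations.append(f"dry bulb {dry_bulb_c} degC outside 18-27 degC")
        if not 5.5 <= dew_point_c <= 15.0:
            violations.append(f"dew point {dew_point_c} degC outside 5.5-15 degC")
        if rh_percent > 60.0:
            violations.append(f"RH {rh_percent}% above the 60% upper bound")
        return violations

    # Example reading: 24 degC dry bulb, 12 degC dew point, 55% RH -> compliant
    print(within_recommended_envelope(24.0, 12.0, 55.0))  # prints []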

With regard to humidity, a range from 40% to 60% RH is considered acceptable. Although some manufacturers claim that their components can work over a more relaxed range of 30% to 80%, we need to consider the risk of static discharge at very low humidity levels (less than 35% RH).

In addition to keeping the IT equipment within these conditions, temperature and relative humidity should not vary by more than 5°C per hour and 5% per hour, respectively. In the event of a cooling failure, a response must be in place to avoid thermal shock damage to the equipment.
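
A similar sketch can flag excessive rates of change between successive hourly readings. The 5°C per hour and 5% per hour limits come from the paragraph above, while the data layout and function name are assumptions made for illustration.

    # Flag hourly changes exceeding the limits quoted above (5 degC/hour for
    # temperature, 5%/hour for RH). Readings are (temperature degC, RH %)
    # tuples logged at hourly intervals; the layout is purely illustrative.
    def rate_of_change_alarms(hourly_readings, max_dt=5.0, max_drh=5.0):
        alarms = []
        for hour in range(1, len(hourly_readings)):
            t_prev, rh_prev = hourly_readings[hour - 1]
            t_now, rh_now = hourly_readings[hour]
            if abs(t_now - t_prev) > max_dt:
                alarms.append((hour, "temperature swing", t_now - t_prev))
            if abs(rh_now - rh_prev) > max_drh:
                alarms.append((hour, "RH swing", rh_now - rh_prev))
        return alarms

    readings = [(22.0, 50.0), (23.0, 52.0), (29.5, 45.0)]  # last hour jumps 6.5 degC
    print(rate_of_change_alarms(readings))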

Cooling

CRAC (computer room air conditioner) units are the most popular form of cooling in data centres. The most typical ventilation design of a data centre is the supply of cold air through a raised floor, with the racks divided into hot and cold aisles, the CRAC units at the perimeter of the data centre, and the hot air from the hot aisles returning to the CRAC units at high level.

Other data centre designs may locate the CRAC units beneath the floor, use cold- or hot-aisle containment, or cool the space with in-row cooling cabinets alongside the IT equipment, with no false floor or perimeter CRAC units.

To maximise the performance of the CRAC units, they are sometimes located in the hot aisles, thus shortening the paths of both the hot and the cold air. The disadvantage of this kind of configuration is that the cooling capacity of the CRAC units needs to be matched to the heat load installed in the racks.
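
As a rough illustration of that matching exercise, the sketch below compares the installed rack load in an aisle against the combined capacity of the in-row CRAC units, holding one unit in reserve. All figures and names are hypothetical.

    # Rough sizing check: does the in-row CRAC capacity cover the rack heat
    # load with one unit held in reserve (N+1)? All figures are hypothetical.
    def crac_capacity_ok(rack_loads_kw, crac_unit_kw, num_units, reserve_units=1):
        total_load = sum(rack_loads_kw)
        usable_capacity = crac_unit_kw * (num_units - reserve_units)
        return total_load <= usable_capacity, total_load, usable_capacity

    ok, load, capacity = crac_capacity_ok(
        rack_loads_kw=[8, 10, 12, 9, 11],  # five racks in the aisle, kW each
        crac_unit_kw=30, num_units=3)
    print(ok, load, capacity)  # True 50 60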

Temperatures of 20°C to 25°C in the space are considered acceptable, with CRAC/CRAH return air temperatures around 10°C above this. Therefore, operating at higher chilled-water flow and return temperatures could be considered.

Optimised cooling

To reduce carbon emissions and energy costs, the efficiency of external cooling has to increase. Energy savings can be achieved through a combination of IT equipment tolerating higher temperatures and better airflow management.

Due to the increase in heat load in data centres, the industry offers a range of solutions, such as air-to-liquid heat exchangers, self-cooling cabinets and cold-aisle containment.

Before choosing a novel solution, it might be a good idea to consider applying the following lower-cost measures first: installing vapour retarders around the entire envelope (at the design stage); sealing cable and pipe entries and fitting tightly closing doors; following a front-to-rear (F-R) airflow protocol for rack-mounted equipment; and installing blanking plates in all unused rack spaces to avoid recirculation.

It needs to be noted that the typical configuration of a data centre with cabinets placed front to front and back to back, creating cold and hot aisles in the space, maximises the delivery of cooled air and allows for the efficient extraction of the warmed air.

Relaxing the thermal conditions of the data centre immediately provides substantial energy savings: increasing the temperature in the space, for example, allows the vapour-compression cycle to run with a smaller temperature differential and increases the efficiency of the chilled-water plant. At the same time, the humidification load is decreased both by the relaxation of the standard (from 45% to 40% RH) and by the lowering of the lower temperature limit. Adopting higher temperatures can reduce energy costs and carbon emissions. However, it can significantly reduce the period of successful operation following a cooling system or power failure.
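
The effect of a smaller temperature differential on the vapour-compression cycle can be illustrated with the ideal (Carnot) coefficient of performance. The sketch below is a textbook idealisation rather than a prediction for any real chiller, and the temperatures chosen are hypothetical.

    # Idealised (Carnot) COP of a vapour-compression cycle: raising the
    # evaporating (chilled-water) temperature while the condensing temperature
    # stays fixed shrinks the lift and raises the efficiency ceiling.
    # Real chiller COPs are well below this ideal figure.
    def carnot_cop(evaporating_c, condensing_c):
        t_evap_k = evaporating_c + 273.15
        t_cond_k = condensing_c + 273.15
        return t_evap_k / (t_cond_k - t_evap_k)

    for chw_temp in (7.0, 14.0):  # traditional vs. elevated chilled-water temperature
        print(chw_temp, round(carnot_cop(chw_temp, 40.0), 1))
    # 7.0 degC -> 8.5, 14.0 degC -> 11.0: smaller lift, higher ideal COP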

In the case of air-based systems, higher heat densities tend to result in higher exhaust temperatures at the equipment. If this hot air is not allowed to mix with the cool air, it reaches the cooling system at a higher temperature, so the system can run at increased efficiency and greater use of free cooling is possible.
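
One way to see why this matters is to count the hours in a year for which outside air alone could meet a given supply temperature. The sketch below does this over a synthetic, sine-wave outdoor temperature profile, so the counts it prints are illustrative only; the heat-exchanger approach margin is also an assumption.

    # Count hours when outdoor air is cold enough for free cooling, for two
    # different permissible supply temperatures. The hourly outdoor profile
    # is synthetic and used purely for illustration.
    import math

    hourly_outdoor_c = [
        12.0 + 10.0 * math.sin(2 * math.pi * h / 8760)   # crude annual swing
        + 4.0 * math.sin(2 * math.pi * h / 24)            # crude daily swing
        for h in range(8760)
    ]

    def free_cooling_hours(outdoor_temps_c, required_supply_c, approach_c=3.0):
        # Outdoor air must be at least approach_c below the required supply
        # temperature to allow for heat-exchanger losses.
        return sum(1 for t in outdoor_temps_c if t <= required_supply_c - approach_c)

    for supply_c in (14.0, 20.0):  # lower vs. higher permissible supply temperature
        print(supply_c, free_cooling_hours(hourly_outdoor_c, supply_c))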

Cooling systems can be modified either by adopting free cooling or by raising chilled-water or operating temperatures. The most significant increases in plant efficiency have typically been attributed to raising chilled-water temperatures and to implementing variable-speed control of prime movers, such as chiller compressors, fans and pumps.
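
The gain from variable-speed control of fans and pumps follows from the affinity laws, under which shaft power varies roughly with the cube of speed. The sketch below applies that textbook relationship to hypothetical figures and ignores motor and drive losses.

    # Affinity-law estimate of fan/pump power at part speed: flow scales with
    # speed, and shaft power scales roughly with the cube of speed.
    def part_load_power_kw(full_speed_power_kw, speed_fraction):
        return full_speed_power_kw * speed_fraction ** 3

    full_power_kw = 15.0  # hypothetical CRAC fan power at 100% speed
    for fraction in (1.0, 0.8, 0.6):
        print(f"{fraction:.0%} speed -> {part_load_power_kw(full_power_kw, fraction):.1f} kW")
    # 100% -> 15.0 kW, 80% -> 7.7 kW, 60% -> 3.2 kW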

Energy savings can be achieved by indirect free cooling using air-to-air heat exchangers (exchanging heat between outside air and recirculated air from the data centre) or by using evaporative cooling. The latter uses the latent heat of evaporation to achieve lower air or water temperatures: it consists of spraying moisture onto non-saturated air and using the heat of the air to evaporate some of the moisture, thus reducing the temperature of the air. Increasing the RH of the data centre in the process can be avoided by using air-to-air heat exchangers and spraying on the non-data-centre side, or by spraying water onto a cooling coil, which will reduce the temperature of the water or any other refrigerant.
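
The temperature drop available from evaporative cooling can be estimated from the wet-bulb depression: a direct evaporative stage brings the air some fraction of the way from its dry-bulb towards its wet-bulb temperature. The sketch below applies that standard relationship; the effectiveness value and temperatures are hypothetical.

    # Direct evaporative cooling estimate: the leaving dry-bulb temperature
    # approaches the wet-bulb temperature according to the stage effectiveness.
    # Effectiveness and temperatures are hypothetical.
    def evaporative_supply_temp(dry_bulb_c, wet_bulb_c, effectiveness=0.85):
        return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

    # Example: 30 degC dry bulb, 19 degC wet bulb, 85% effective stage
    print(evaporative_supply_temp(30.0, 19.0))  # 20.65 degC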

A holistic double-ended approach

An increasing concern is the overall efficiency of the data centre, expressed as PUE (Power Usage Effectiveness: the ratio of total data centre energy to the energy used by the IT equipment).
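
For reference, PUE is simply the total facility energy divided by the energy delivered to the IT equipment; the short sketch below computes it from hypothetical annual figures.

    # PUE = total facility energy / IT equipment energy.
    # The annual kWh figures below are hypothetical.
    def pue(total_facility_kwh, it_equipment_kwh):
        return total_facility_kwh / it_equipment_kwh

    it_kwh = 4_000_000         # annual IT equipment energy
    overhead_kwh = 2_400_000   # cooling, UPS losses, lighting, etc.
    print(round(pue(it_kwh + overhead_kwh, it_kwh), 2))  # 1.6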

It needs to be recognised that over-zealous targeting of a low PUE can compromise reliability and resilience. Failure to make appropriate design choices, poor operating procedures or an inefficient rack layout can have an impact on PUE and other measures and prevent optimal performance from being achieved.

The writer is Senior Research Engineer at BSRIA. She can be reached at: blanca.beato-arribas@bsria.co.uk
