
Keeping IT cool

As data centres try to keep pace with rapidly changing usage patterns, so do their cooling techniques. Industry experts share insights and updates on best practices and technologies.

By Content Team | Published: October 13, 2015

Pierre Havenga, Managing Director at Emerson Network Power for the Middle East and Africa

To state the importance of data centres in the present day of quick connectivity and information dissemination is to state the obvious. As data centres try to keep pace with rapidly changing usage patterns, so do their cooling techniques. Pierre Havenga, Managing Director at Emerson Network Power for the Middle East and Africa region, gives an interesting analysis: “If we look at the telecom industry, a few years ago, approximately 90% was voice-centric and 10% was data-centric. Now, it’s approximately 70% data-centric and 30% voice-centric. This is mainly driven by applications on smartphones and people are spending more time downloading apps or text messaging, creating demand for storage of data. You don’t record all the voice communications between people on cell phones, but you need to record all the data. And that’s what’s driving the need for data centres.” In light of this, maintaining and cooling data centres has gained primacy.

A whitepaper by Emerson Network Power, a business of Emerson, reveals that cooling systems – comprising cooling and air movement equipment – account for 38% of energy consumption in data centres.¹ As Havenga puts it simply, “You have to reject heat from the data centre; servers generate heat and heat has to be rejected.”
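To put that 38% share in perspective, a back-of-the-envelope calculation is enough; the short Python sketch below uses assumed facility figures (not numbers from the whitepaper) to show how the cooling share translates into annual energy and cost.

```python
# Rough illustration of the whitepaper's 38% figure: estimate how much of a
# facility's annual energy goes to cooling and air movement. All inputs are
# assumed example values, not figures from the Emerson whitepaper.

TOTAL_FACILITY_KW = 900.0   # assumed average total facility draw
COOLING_SHARE = 0.38        # cooling + air movement share of total energy (whitepaper figure)
TARIFF_USD_PER_KWH = 0.10   # assumed electricity tariff
HOURS_PER_YEAR = 8760

total_energy_kwh = TOTAL_FACILITY_KW * HOURS_PER_YEAR
cooling_energy_kwh = total_energy_kwh * COOLING_SHARE

print(f"Annual facility energy: {total_energy_kwh / 1e6:.2f} GWh")
print(f"Annual cooling energy:  {cooling_energy_kwh / 1e6:.2f} GWh")
print(f"Annual cooling cost:    ${cooling_energy_kwh * TARIFF_USD_PER_KWH:,.0f}")
```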

Don’t lose your cool

Bart Holsters, Operations Manager at Cofely Besix Facility Management

Cooling failure is not an option for data centres. In Havenga’s view, the loss of revenue could amount to millions of dollars per day if a data centre is unavailable, with the losses being different for different industries. “For example, for the telecom industry, the losses are quite huge,” he says. However, that could be the least of the problems. As Bart Holsters, Operations Manager at Cofely Besix Facility Management, points out, a cooling failure will result in loss of uptime, with the servers eventually shutting down and the electronic equipment getting damaged.


Combating ‘waste’ heat

“IT, data centres and server equipment consume electricity and emit heat as a ‘waste product’,” says Ziad Youssef, Vice President of IT Business – UAE, Gulf Countries at Schneider Electric. “In such an enclosed environment with sensitive technology, the heat can be damaging.” As data centres experience exponential growth in the region, he believes that new solutions to curtail the simultaneous rise in energy costs will be essential. “One such solution is cooling – which is critical to the smooth functioning of a data centre, and to the maintenance of hardware carrying mission-critical enterprise data,” he says.


Mohammad Abusaa, a Business and Project Development Professional with HH Angus and Associates and a veteran of data centre cooling, paints a clear picture of the stakes involved when cooling fails in various sectors: “The critical nature of cooling for a data centre can be understood from the fact that, in many cases, losing cooling for less than five minutes could cause the IT equipment to fail. In some high-density applications, the time could be less than two minutes. The criticality of IT systems’ failure is gauged by the function of the data centre. In other words, a temporary failure of an airport data centre is certainly much more critical than the temporary failure of Twitter’s data centre, though some may argue that.” The causes of such cooling failure, he says, can be directly related to the cooling system itself, such as the failure of pumps, fans or chillers, or, at times, indirectly related to it, such as power outages.

Mohammad Abusaa, a Business and Project Development Professional with HH Angus and Associates

Abusaa elaborates that when failure occurs in the cooling system, standby equipment or paths are brought online to ensure continuous supply of cooling to the IT space. Therefore, attributes like redundancy and standby should be factored in at the design stage of cooling systems. When failure occurs in the power supply, an Uninterruptible Power Supply (UPS) device connected to the critical parts of a cooling system – usually the distribution components – will maintain the operation of the cooling distribution network, while the backup generators come online, thereby providing sufficient power to bring the cooling generation system back online within minutes of losing power. In Abusaa’s view, this is the usual contingency procedure in case of a cooling failure.
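Abusaa’s warning that a few minutes without cooling can be fatal for IT equipment can be illustrated with a first-order heat balance. The sketch below uses a lumped model of the room air and assumed figures for load, room volume and trip temperature; it is a rough estimate, not a substitute for proper transient analysis.

```python
# First-order estimate of how quickly a data hall heats up when cooling stops.
# Lumped model: all IT heat goes into the room air; ignores the thermal mass of
# equipment and walls, so it is pessimistic. All inputs are assumptions.

IT_HEAT_KW = 300.0        # assumed IT heat load
ROOM_VOLUME_M3 = 1500.0   # assumed data hall volume
AIR_DENSITY = 1.2         # kg/m^3
AIR_CP = 1.005            # kJ/(kg*K)

START_TEMP_C = 24.0       # assumed room temperature at the moment of failure
SHUTDOWN_TEMP_C = 40.0    # assumed temperature at which servers trip

air_mass_kg = ROOM_VOLUME_M3 * AIR_DENSITY
rise_rate_c_per_s = IT_HEAT_KW / (air_mass_kg * AIR_CP)   # K per second
seconds_to_shutdown = (SHUTDOWN_TEMP_C - START_TEMP_C) / rise_rate_c_per_s

print(f"Temperature rise rate: {rise_rate_c_per_s * 60:.1f} C per minute")
print(f"Time to reach {SHUTDOWN_TEMP_C} C: {seconds_to_shutdown / 60:.1f} minutes")
```

With these assumed figures the hall reaches the trip temperature in under two minutes, which is consistent with the time scales Abusaa describes for high-density applications.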

Cooling solutions

Håkan Lenjesson, Market Area Director at Systemair for the Middle East and Turkey

Since cooling is an imperative for data centres, ASHRAE has defined standards for their cooling requirements, which normally dictate the operating conditions.

Håkan Lenjesson, Market Area Director at Systemair for the Middle East and Turkey region, says that ASHRAE has been broadening the operating ranges and also recommending a very low Power Usage Effectiveness (PUE). Havenga adds: “Today, ASHRAE’s recommended conditions range from 18 degrees C to 27 degrees C. However, the allowable range can even go up to 35-40 degrees C, depending on the server technology.”
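Monitoring tools typically encode these envelopes directly. The sketch below checks a measured server-inlet temperature against the recommended band Havenga quotes (18 to 27 degrees C) and an assumed allowable band; the real allowable limits depend on the ASHRAE equipment class declared by the server vendor.

```python
# Classify a server inlet temperature against ASHRAE-style bands.
# The recommended band (18-27 C) is the one quoted in the article; the
# allowable band is an assumed placeholder - the real limit depends on the
# equipment class (A1-A4) of the installed servers.

RECOMMENDED = (18.0, 27.0)   # deg C, per the article
ALLOWABLE = (15.0, 32.0)     # deg C, assumed placeholder (class-dependent)

def classify_inlet_temp(temp_c: float) -> str:
    """Return 'recommended', 'allowable' or 'out of range' for an inlet temperature."""
    if RECOMMENDED[0] <= temp_c <= RECOMMENDED[1]:
        return "recommended"
    if ALLOWABLE[0] <= temp_c <= ALLOWABLE[1]:
        return "allowable"
    return "out of range"

for reading in (21.5, 29.0, 36.0):
    print(f"{reading:.1f} C -> {classify_inlet_temp(reading)}")
```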


Flying north

Håkan Lenjesson says that most organisations continue to plan and design new computing facilities without much change or innovation. For example, first, they design a building and leave some portion for the data hall or whitespace. Then, they fill the whitespace with as many server racks as possible. In Lenjesson’s opinion, designing data centres in the traditional manner can create a wide range of problems. He explains, “For example, an undersized or oversized power and cooling infrastructure can limit operating capacity or increase capital expenses.”

He believes that large corporations are looking for some extra free cooling, while keeping the PUE as low as possible.

“Companies like Google, Facebook, etc., are building their new data centres in the far north of the planet, for example, in northern Sweden, Finland and Canada,” he reveals. He elaborates: “Instead of under- or over-provisioning their new facility’s power and cooling resources, companies are installing the optimal infrastructure for the precise array of hardware and enclosures they’ll be using. Instead of improvising solutions for efficiency-sapping structural defects, they’re preventing those defects from occurring in the first place. The end result is a data centre that’s not only less costly to cool and maintain but also more reliable and better suited to business requirements. They can also allow for significantly increased usage intensity. ROI is a very important factor, so doing it right from the beginning is essential; this is where modular systems come into play.”

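PUE, the metric Lenjesson refers to, is simple arithmetic: total facility energy divided by IT equipment energy. The sketch below shows the calculation with assumed meter readings, illustrating how a lower cooling overhead pulls the ratio towards 1.0.

```python
# PUE = total facility energy / IT equipment energy.
# Meter readings below are assumed, purely to show the arithmetic.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness for a given metering period."""
    return total_facility_kwh / it_kwh

# Legacy facility: heavy cooling overhead.
print(f"Legacy site PUE:       {pue(total_facility_kwh=1_800_000, it_kwh=1_000_000):.2f}")

# Free-cooling site: most of the energy goes to the IT load.
print(f"Free-cooling site PUE: {pue(total_facility_kwh=1_150_000, it_kwh=1_000_000):.2f}")
```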

With the ranges and requirements defined, the next step is to decide on a cooling solution, as there are several options available in the market. Havenga says that most data centres currently adopt the traditional direct expansion technology, which is applicable everywhere in the world. “Then there is free cooling, where you have fresh air coming directly from outside,” he points out, and adds, “Or indirect cooling, where you are cooling a medium, typically water. Even further, there are adiabatic solutions, which are an enhancement of the cooling capacity of the chiller. It increases the free cooling capacity.”
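In practice, the choice between these modes is often automated in the plant controls, which compare outdoor conditions with the required supply temperature and select the cheapest mode that can meet it. The sketch below is a simplified illustration with assumed setpoints, not a vendor control sequence.

```python
# Simplified economiser decision: pick the cheapest cooling mode that can
# meet the target supply temperature, given outdoor conditions.
# The setpoints, margins and mode order are assumptions for illustration.

SUPPLY_SETPOINT_C = 22.0
FREE_COOLING_MARGIN_C = 3.0    # outdoor air must be this much colder than the setpoint
ADIABATIC_MARGIN_C = 3.0       # same margin applied to the wet-bulb temperature

def select_mode(outdoor_dry_bulb_c: float, outdoor_wet_bulb_c: float) -> str:
    if outdoor_dry_bulb_c <= SUPPLY_SETPOINT_C - FREE_COOLING_MARGIN_C:
        return "free cooling (outdoor air / dry cooler)"
    if outdoor_wet_bulb_c <= SUPPLY_SETPOINT_C - ADIABATIC_MARGIN_C:
        return "adiabatic / evaporative assist"
    return "mechanical cooling (DX or chiller)"

for dry_bulb, wet_bulb in ((12.0, 9.0), (28.0, 17.0), (40.0, 28.0)):
    print(f"{dry_bulb:.0f} C dry bulb / {wet_bulb:.0f} C wet bulb -> "
          f"{select_mode(dry_bulb, wet_bulb)}")
```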

Havenga reveals that the latest technology in the market is evaporative free cooling. “This is a way of cooling a data centre without using a compressor; so basically, all-year-round cooling,” he says. He believes that the main driver behind this is energy savings, adding that energy is the single biggest cost incurred by a data centre, which has led to various advancements in technology, InRow cooling being one of them. Havenga explains that InRow cooling is a type of air conditioning system commonly used in data centres, in which the cooling unit is placed between the server cabinets in a row for providing cool air to the server equipment more effectively.
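How close evaporative free cooling comes to “all-year-round” operation depends on the local wet-bulb temperature profile. The sketch below counts compressor-free hours from hourly wet-bulb data using an assumed approach temperature; both the toy weather data and the approach value are placeholders for a proper study based on local weather files.

```python
# Count the hours in a year an evaporative (indirect) free-cooling system can
# hold the supply setpoint without running a compressor. The hourly wet-bulb
# values and the approach temperature are assumed placeholders; real studies
# use local weather files (e.g. TMY data).

import math

SUPPLY_SETPOINT_C = 24.0
APPROACH_C = 4.0   # assumed: supply air ends up about 4 C above the outdoor wet bulb

def compressor_free_hours(hourly_wet_bulb_c):
    return sum(1 for wb in hourly_wet_bulb_c if wb + APPROACH_C <= SUPPLY_SETPOINT_C)

# Toy "year": 8760 hourly wet-bulb values oscillating between 8 C and 26 C.
toy_year = [17.0 + 9.0 * math.sin(2 * math.pi * h / 8760) for h in range(8760)]

hours = compressor_free_hours(toy_year)
print(f"Compressor-free hours: {hours} of 8760 ({100 * hours / 8760:.0f}%)")
```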

Abusaa puts in a nutshell a few of the current trending innovations in the market: “There is modularisation – the main drivers behind this are quality control, cost and delivery schedule. Then there is Direct Liquid Cooling, followed by a continuous development of IT hardware systems that run at higher temperatures and humidity levels and, finally, the on-site Combined Heat and Power (CHP) systems.”

Making the server serve

Amidst these cooling options, there lie several challenges, with availability and uptime being the primary ones. “You have to make a redundant solution, no matter where you are in the world,” Havenga says. “This is required so that you will not lose production, because if you lose production, you will lose millions every day.”
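That redundancy is usually expressed as an “N+x” count of cooling units: N units to carry the design load, plus spares for failure or maintenance. The sketch below sizes such a line-up for an assumed load, unit capacity and redundancy level.

```python
# Size a CRAC/CRAH line-up with N+x redundancy: N units carry the design load,
# x extra units cover a failure or a maintenance outage. The load and unit
# capacity are assumed example figures.

import math

DESIGN_LOAD_KW = 350.0      # assumed peak heat load of the data hall
UNIT_CAPACITY_KW = 60.0     # assumed sensible capacity of one CRAC unit
REDUNDANT_UNITS = 1         # N+1

n = math.ceil(DESIGN_LOAD_KW / UNIT_CAPACITY_KW)
total_units = n + REDUNDANT_UNITS

print(f"N = {n} units to meet {DESIGN_LOAD_KW} kW")
print(f"Install {total_units} units for N+{REDUNDANT_UNITS} redundancy")
```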


“We don’t call it cooling anymore”

A walk through Emerson Network Power’s Customer Experience Centre, Dubai, will make you realise that data centre cooling is serious business. Climate Control Middle East visited the facility, where Pierre Havenga demonstrated that the discipline is more about managing heat than about supplying cool air. He explained: “Five years ago, you allowed a certain amount of cool air into your data centre, irrespective of whether it was required or not. Nowadays, with Electronically Commutated (EC) fans, software and wireless monitoring, we can manage the amount of cool air based on what is required by the data centre. So, you don’t need to provide cool air if there is no heat.

“So, our other solution is that you can even switch off servers. If the server fan is running, then the fan consumes power, and the power generates heat. So, it’s getting to that level of managing your heat levels. That’s why we call it Thermal Management. We don’t call it cooling anymore, because now you manage the thermal side of your data centre. We manage the temperature requirement from the rack back to the chiller.”

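The energy case for the EC fans Havenga mentions rests on two relationships: the airflow needed scales with the heat load (for a fixed supply/return temperature difference), while fan power falls roughly with the cube of airflow. The sketch below illustrates the arithmetic with assumed figures.

```python
# Why modulating EC fans saves so much energy: required airflow scales with
# heat load (for a fixed supply/return delta-T), while fan power scales
# roughly with the cube of airflow (fan affinity laws). Figures are assumed.

AIR_CP = 1.005           # kJ/(kg*K)
DELTA_T_C = 12.0         # assumed supply/return temperature difference
FULL_SPEED_FAN_KW = 8.0  # assumed fan power at 100% airflow

def required_airflow_kg_s(heat_kw: float) -> float:
    return heat_kw / (AIR_CP * DELTA_T_C)

full_load_kw = 240.0
part_load_kw = 120.0     # racks running at half load

flow_ratio = required_airflow_kg_s(part_load_kw) / required_airflow_kg_s(full_load_kw)
fan_power_kw = FULL_SPEED_FAN_KW * flow_ratio ** 3   # affinity-law approximation

print(f"Airflow at part load: {flow_ratio:.0%} of full flow")
print(f"Fan power: {fan_power_kw:.1f} kW vs {FULL_SPEED_FAN_KW:.1f} kW at full speed")
```

Halving the load halves the airflow but cuts fan power to roughly an eighth, which is why demand-based airflow control is central to the “thermal management” approach.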

Abusaa points out that cooling load or capacity management is one of the most critical challenges in designing a data centre cooling system. He stresses that capacity management is tied to the phasing of data centre construction, the expansion or phasing out of IT loads within the data centre, variation in the cooling load profile across the day, month and year, and variation in cooling load requirements within the same data hall or even at the server rack level.

The other challenges are maintenance-related. Abusaa stresses that it is crucial to understand that a facility that runs 24/7 and has strict security access regulations will have its own design and operation challenges when it comes to maintenance. He says, “For example, while having a Computer Room Air Conditioning (CRAC) unit installed in a specific location is the most efficient solution, for security and access reasons, the CRAC unit might need to be relocated to ensure that the maintenance personnel do not have access to the IT equipment, as there might be a possibility of accidentally damaging the IT equipment while maintaining the CRAC unit.”

And then, there is always the issue related to humidity (See Figure 1) and air quality. “Similar to hospitals and other critical facilities, maintaining control of the Indoor Air Quality to avoid contamination is crucial,” Abusaa reveals, and adds, “This is not only achieved through filtration but also through the design of data centres and operation and maintenance guidelines.” He is emphatic that filtration, dehumidification, access control and other practices should address the air-quality issue.

FIGURE 1

Reference

  1. http://www.emersonnetworkpower.com/documentation/en-us/brands/liebert/documents/white%20papers/enterprise-data-center_24622.pdf



How Google does it

At its data centres, Google often uses water instead of chillers as an energy-efficient way to cool its facilities.

“Hot Huts”

Google has designed custom cooling systems for its server racks. The systems are called “Hot Huts”, because they serve as temporary homes for the hot air that leaves the servers – sealing it away from the rest of the data centre floor. Fans on top of each Hot Hut unit pull hot air from behind the servers through water-cooled coils. The chilled air leaving the Hot Hut returns to the ambient air in the data centre, where the servers can draw it in to cool themselves, completing the cycle.
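The Hot Hut arrangement is, in essence, an air-to-water heat exchange, and a quick energy balance shows the water flow such a coil would need. The figures below are assumptions for illustration only, not Google’s actual numbers.

```python
# Quick energy balance for a water-cooled coil in a hot-aisle capture
# arrangement: the heat picked up from the hot air must be carried away by
# the water loop. All figures are assumptions, not Google's actual numbers.

HEAT_TO_REMOVE_KW = 100.0   # assumed heat captured from one row of racks
WATER_CP = 4.186            # kJ/(kg*K)
WATER_DELTA_T_C = 8.0       # assumed water temperature rise across the coil

water_flow_kg_s = HEAT_TO_REMOVE_KW / (WATER_CP * WATER_DELTA_T_C)
print(f"Required water flow: {water_flow_kg_s:.1f} kg/s "
      f"(about {water_flow_kg_s * 3.6:.0f} m3/h)")
```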

Image credits: Shutterstock

Evaporative cooling

As hot water from the data centre flows down the towers through a material that speeds evaporation, some of the water turns to vapour. A fan lifts this vapour, removing the excess heat in the process, and the tower sends the cooled water back into the data centre.

Using seawater

Google’s facility in Hamina, Finland, uses seawater to cool without chillers. The company chose Hamina for its cold climate and its location on the Gulf of Finland. The cooling system pumps cold water from the sea to the facility, transfers heat from the operations to the seawater through a heat exchanger and then cools this water before returning it to the Gulf. Since this approach provides all the needed cooling year round, Google says it has not installed any mechanical chillers.

(Information source: https://www.google.ae/about/datacenters/efficiency/internal/#water-and-cooling)


