Simple is better

…in data centre M&E design, says Rehan Shahid

By Content Team | Published: August 4, 2021

Government agencies, financial institutions, educational bodies, telecommunication companies and retailers, among others, all generate and use data, and therefore all need data centres at some level.

Not all data centres end up meeting the operational and capacity requirements of their initial designs; in fact, it is quite rare that they do. In data centre design, the principal goals are flexibility and scalability. In this article, I will focus primarily on Mechanical and Electrical (M&E) design, without delving too deeply into the technical details.

The following critical M&E design elements must be considered for a data centre:

  • Developing an energy-efficient, climate-controlled environment that has specific ranges of temperature, humidity and cleanliness
  • Appropriate redundancy (for example, N+1, 2N, 2N+1; see the sketch after this list)
  • The tier classification of the data centre
  • High-quality power provision for all of the equipment
  • A comprehensive fire detection and suppression system for protecting life and property, as well as ensuring quick operational recovery, owing to the significant risk of electrical fires in a data centre
  • Designing and installing appropriate backup power systems with UPS, generators and substations
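
As a rough illustration of what these redundancy schemes mean for equipment counts, the sketch below translates a base requirement of N units into installed totals. The unit counts, and the choice of four UPS modules as the example, are assumptions for illustration only.

```python
# A minimal sketch of how common redundancy schemes translate into
# installed equipment counts. Here "n" is the number of units needed
# to carry the full design load on their own (values are illustrative).

def units_required(n: int, scheme: str) -> int:
    """Installed unit count for a given redundancy scheme."""
    schemes = {
        "N": n,             # no redundancy: any failure cuts capacity
        "N+1": n + 1,       # one spare covers a single unit failure
        "2N": 2 * n,        # a fully mirrored second system
        "2N+1": 2 * n + 1,  # mirrored system plus one extra spare
    }
    return schemes[scheme]

n = 4  # e.g. four UPS modules carry the full IT load between them
for scheme in ("N", "N+1", "2N", "2N+1"):
    print(f"{scheme}: {units_required(n, scheme)} units installed")
```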

Generally speaking, no two data centres are the same. To produce a bespoke M&E design, a detailed analysis of current and future requirements is a must, and a one-solution-fits-all approach must be avoided.

It is not economically feasible to have all the electro-mechanical systems in place from day one, particularly when taking into account future expansion. This expansion may happen sooner or later than expected, though the general trend skews sooner rather than later. This has been especially true in the past few years, when the demand for new data centres has soared. The rise in demand is perhaps due to radical changes to business models, combined with work-from-home initiatives and an increase in demand for streaming media.

The objective is to have a flexible and scalable infrastructure; M&E systems should also ideally be able to expand without any downtime, which may make a modular design approach worth considering. If additional racks of blade servers are added, the M&E systems should be able to handle the new requirement without a redesign – much like adding RAM to a laptop when a new operating system demands more memory. In other words, there should be little fuss.

As the data centre is vitally dependent on electrical power, not just for the IT equipment but also to maintain and control the indoor environment, paramount importance should be given to the design of the electrical systems, quality of the power, alternative power source(s) and the system’s ability to operate under fault or maintenance conditions. The design should also have the ability to add UPS capacity to existing modules without an outage.

How effectively power is used can dramatically affect energy consumption and carbon emissions. One measure that has been adopted by the industry is known as power usage effectiveness (PUE).

Power Usage Effectiveness (PUE) is the ratio of the total power delivered to a data centre to the power consumed by the IT equipment alone. A high PUE means that your data centre is consuming too much power on overheads and could be more efficient. New centres should aim for 1.4 or less, according to Federal CIO targets and benchmarks; the goal is to get the PUE ratio down as near as possible to 1.0.
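
The calculation itself is simple, as the minimal sketch below shows; the 1,400 kW and 1,000 kW figures are hypothetical.

```python
# A minimal sketch of the PUE calculation; all figures are hypothetical.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT equipment power (ideal is 1.0)."""
    return total_facility_kw / it_load_kw

# Example: the facility draws 1,400 kW in total, of which 1,000 kW
# reaches the IT equipment; the remaining 400 kW is cooling, losses, etc.
print(f"PUE = {pue(1400.0, 1000.0):.2f}")  # 1.40 -- right on the target
```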

ESTABLISHING AN EFFICIENT COOLING STRATEGY

A Computational Fluid Dynamics (CFD) simulation of the airflow in a data centre should be considered to demonstrate the design's effectiveness and to mitigate risks such as overheating, which results in less-than-intended design capacity. Separating the data centre's hot and cold aisles to prevent hot spots and hot-air recirculation is one of the most effective methods of achieving consistent temperatures; the key is to ensure that exhaust air is not allowed to mix with the supply air. Therefore, modelling and simulating all aspects of equipment arrangement – for example, perforated floor/ceiling tiles, hot/cold aisle containment, in-rack cooling and underfloor/overhead plenum – is crucial in order to arrive at energy-saving solutions.

Data centres are designed around future loads and particular heat-load demands, though they may not reach the projected level for some time. Further analysis must therefore be performed to identify the most flexible and economical way to cool only the racks that are operational, rather than wasting energy cooling the entire data hall.
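
To illustrate the point, here is a minimal sketch of staging cooling to the live load rather than the ultimate design load. The rack counts, per-rack heat loads and CRAC unit capacity are assumptions for illustration only.

```python
import math

# A sketch of sizing cooling for the racks actually running, not the
# full design load of the hall. All figures below are assumed.

KW_PER_RACK = 6.0     # average heat load per operational rack
CRAC_UNIT_KW = 60.0   # sensible cooling capacity per CRAC unit

def crac_units_needed(operational_racks: int, spares: int = 1) -> int:
    """Units to cool the live load, plus spare unit(s) for redundancy."""
    load_kw = operational_racks * KW_PER_RACK
    return math.ceil(load_kw / CRAC_UNIT_KW) + spares

print(crac_units_needed(60))   # day-one population: 7 units (6 + 1 spare)
print(crac_units_needed(200))  # full build-out: 21 units (20 + 1 spare)
```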

If a higher air temperature range is being considered (the ASHRAE recommended range is from 18 degrees C to 27 degrees C, and the allowable range is from 15 degrees C to 32 degrees C [4]), then the risk of failure due to reduced thermal headroom following a cooling system or utility power failure should be studied. It also needs to be ensured that all equipment in the data centre is suitable for the extended temperature and humidity ranges.
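
A minimal sketch of checking rack inlet temperatures against the dry-bulb ranges quoted above (humidity limits, which also apply, are omitted for brevity):

```python
# A sketch of classifying rack inlet temperatures against the ASHRAE
# envelopes quoted above (dry-bulb only; humidity limits also apply).

RECOMMENDED = (18.0, 27.0)  # degrees C
ALLOWABLE = (15.0, 32.0)    # degrees C

def classify_inlet(temp_c: float) -> str:
    if RECOMMENDED[0] <= temp_c <= RECOMMENDED[1]:
        return "within recommended range"
    if ALLOWABLE[0] <= temp_c <= ALLOWABLE[1]:
        return "allowable, but reduced thermal headroom on cooling failure"
    return "out of range -- investigate"

for t in (21.0, 29.5, 33.0):
    print(f"{t} degrees C: {classify_inlet(t)}")
```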

Having a reasonably airtight data hall and introducing an airlock, where possible, will prevent the ingress of dust, whilst keeping the hall under positive pressure with comparatively little fresh air. This, in turn, will reduce the fresh air handling unit (FAHU) energy consumption and prolong the life of the air filtration system.

Finally, keeping track of energy use is a must, so energy and environmental monitoring systems should be employed.
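
At its simplest, such monitoring amounts to comparing sensor readings against limits, as in the sketch below; the sensor names, readings and thresholds are assumptions for illustration only.

```python
# A minimal sketch of an environmental monitoring check. All sensor
# names, readings and limits are assumed for illustration.

readings = {
    "cold_aisle_temp_c": 24.5,
    "relative_humidity_pct": 38.0,
    "ups_room_temp_c": 31.0,
}

limits = {  # (low, high) acceptable bounds per sensor
    "cold_aisle_temp_c": (18.0, 27.0),
    "relative_humidity_pct": (20.0, 80.0),
    "ups_room_temp_c": (0.0, 30.0),
}

for sensor, value in readings.items():
    low, high = limits[sensor]
    status = "OK" if low <= value <= high else "ALARM"
    print(f"{sensor}: {value} ({status})")
```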

Some of the design risks to consider are:

  • Water leaks through the building fabric
  • External plant and equipment and the impact of vibration
  • Facilities management (FM) and appropriate levels of technical training
  • Ease of access, protection of fabric
  • Ability to provide future expansion capability
  • Reliability and redundancy
  • Facilities management and maintenance
  • Risk of system instability in steady state, under dynamic conditions and during low-load operation
  • Electro-magnetic interference (EMI), CCTV and lighting
  • Fire strategy
  • Emergency provisions
  • Health and Safety (H&S) risks during the project and thereafter

FOR WHAT IT’S WORTH

Data centres are critical buildings that demand careful planning and a profound understanding of the requirements on the designer's part.

So, it makes sound sense to keep the design simple and flexible, for the following reasons:

  • Complex design entails more equipment, which translates into more failure points
  • Complex designs are inherently expensive and attract higher O&M costs
  • It’s a misconception that more systems equal fewer failures; increased complexity of design doesn’t guarantee enhanced reliability

A modular and flexible design is the key to a successful data centre.

The best way to mitigate risk and future-proof a data centre is to design it using technology that has proven itself over time. Please do remember that high-density equipment – blades, in particular – cannot function without cooling for more than a few seconds before going into self-protective thermal shutdown.

Amidst all this, try not to lose sight of the four elements that are fundamentally intertwined – performance, external dependencies, CapEx and OpEx.

References:

  1. Source: Public domain
  2. Data centres: An introduction to concepts and design – CIBSE
  3. One of P&T Architects and Engineers’ data centre projects
  4. HVAC Applications – 2019 ASHRAE Handbook

Rehan Shahid is Director, P&T Architects and Engineers Limited. He may be contacted at rehan@ptdubai.ae.
