Common DC Mistakes with Layla

I had a chance to chat with Layla Monajemi, an energy engineer working for EMCOR Energy Services. Layla specializes in mission-critical facilities such as data centers, but also has experience with commercial buildings, labs, and campus environments. Layla was the first woman to receive the Data Center Certified Energy Practitioner (DC-CEP) recognition from the Department of Energy.

Dan: In terms of energy efficiency, what are the three most common mistakes that you see in data centers?

Layla: The most obvious mistake I see is cooling the hot aisle. More often than you would think, operators feel the need to place perforated tiles in the hot aisles because, to them, hot is bad. Hot is bad in the cold aisle; hot is fine in the hot aisle.

Dan: Why do operators think this way?

Layla: Data center operators are still of the old mindset that the exhaust air temperature coming from the back of the IT equipment matters. In reality, the critical temperature is at the inlet to the IT equipment. The hot aisle is supposed to be hot.
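To make that concrete, here is a minimal sketch (mine, not Layla's) that judges cooling by rack inlet temperature and flags racks that drift outside a recommended envelope. The 18-27 °C band roughly tracks the commonly cited ASHRAE recommended range, and the sensor readings are hypothetical.

```python
# Minimal sketch: judge cooling by rack *inlet* temperature, not exhaust.
# The 18-27 C band approximates the ASHRAE recommended envelope; the
# readings below are made-up examples, not real monitoring data.

INLET_LOW_C = 18.0
INLET_HIGH_C = 27.0

rack_inlet_temps_c = {
    "rack-A1": 22.5,   # fine
    "rack-A2": 29.0,   # too warm at the inlet -> a real problem
    "rack-B1": 16.5,   # overcooled -> wasted energy, not a reliability win
}

for rack, t in rack_inlet_temps_c.items():
    if t > INLET_HIGH_C:
        print(f"{rack}: inlet {t:.1f} C is above the envelope - investigate airflow")
    elif t < INLET_LOW_C:
        print(f"{rack}: inlet {t:.1f} C is below the envelope - likely overcooling")
    else:
        print(f"{rack}: inlet {t:.1f} C is within the recommended range")
```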

Dan: What other mistakes do you see?

Layla: I rarely see the empty spaces within server racks covered or blocked off. Usually there are gaping holes in racks that allow a lot of mixing between the hot and cold aisles. You avoid this mixing by installing blanking panels, which are both cheap and easy to install.

Dan:  Why is mixing air between the hot and cold aisles bad?

Layla: Data centers are typically designed in a cold aisle/hot aisle layout, which means that every other row is flipped so that the racks face each other. The advantage of this design is that the cold air delivered to the front of the IT equipment can be kept separate from the hot air being exhausted from the back. Making sure that the cold and hot air stay separate and do not mix is important. When you have lots of mixing, you're essentially wasting cold air that the CRACs have spent energy to cool down. On top of the wasted energy, excessive mixing causes lots of hot spots that can become difficult to manage. It's like opening your car window when the AC is on. You know your mom would always tell you to shut it. You want to keep the hot air outside and the cold air inside.
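As a rough illustration of why mixing hurts, the sketch below (my own simplification, not something from the interview) treats the air reaching a rack inlet as a blend of CRAC supply air and recirculated hot-aisle exhaust; all of the temperatures and fractions are assumed values.

```python
# Rough mixing model: inlet air is a blend of cold supply air and
# recirculated hot-aisle exhaust. All numbers are illustrative assumptions.

def inlet_temp_c(supply_c: float, exhaust_c: float, recirc_fraction: float) -> float:
    """Mass-weighted mix of supply air and recirculated exhaust at the rack inlet."""
    return (1.0 - recirc_fraction) * supply_c + recirc_fraction * exhaust_c

supply_c = 18.0    # CRAC supply air temperature (assumed)
exhaust_c = 35.0   # hot-aisle exhaust temperature (assumed)

for r in (0.0, 0.1, 0.2, 0.3):
    t = inlet_temp_c(supply_c, exhaust_c, r)
    print(f"{r:.0%} recirculation -> rack inlet ~{t:.1f} C")
```

In this toy example, 20 percent recirculation pushes the inlet from 18 °C to about 21.4 °C, so the CRACs end up running colder than they need to just to compensate.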

Dan: Ha ha, yeah, my mom always used to yell at me for that. But IT equipment is being shifted around all the time. Are you saying that every time you move a server you need to move or install the blanking panels?

Layla:  Yes.  I know it sounds like a lot of work, but it is not difficult to do and it is important.  The real issue is a lack of communication between the IT folks and the facilities staff.   Improving the energy efficiency in a data center is going to require cooperation between the IT department and the facilities department.  This is something that companies are not used to, but we need to make this shift.

Dan: So you've given me two mistakes you see on the data center floor. Is there anything you can say about the cooling units?

Layla: Almost everyone controls their CRAC or CRAH units based on return air temperature. This is just wrong. Again, it comes from the mindset that return air temperature matters. The correct strategy is to control the CRAC/CRAH units on supply air temperature. It would be even better if you could control them based on rack inlet temperature, but you would need some kind of monitoring in place to do that.
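Here is a toy sketch of what that control strategy can look like, under my own assumptions about setpoints and gains; a real CRAC/CRAH would be tuned through its manufacturer's controls, and the rack-inlet variant needs the monitoring Layla mentions.

```python
# Toy illustration of controlling a CRAC/CRAH on *supply* air temperature
# instead of return air. The setpoint, gain, and readings are assumptions;
# a real unit would be configured through its own control interface.

def cooling_command(measured_supply_c: float,
                    supply_setpoint_c: float = 20.0,
                    gain: float = 0.2) -> float:
    """Proportional cooling command (0 = idle, 1 = full cooling) driven by
    how far the supply air is above its setpoint."""
    error = measured_supply_c - supply_setpoint_c
    return max(0.0, min(1.0, gain * error))

# Supply air running warm -> ask for more cooling; at or below setpoint -> back off.
for supply_c in (19.0, 21.0, 23.5):
    print(f"supply {supply_c:.1f} C -> command {cooling_command(supply_c):.2f}")

# With rack-level monitoring, the same idea moves one step closer to the load:
# feed the warmest rack *inlet* temperature in place of measured_supply_c.
```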

Dan: So, it sounds like a lot of these issues can be attributed to an inaccurate understanding of how a data center should be operated. Why did these incorrect assumptions develop in the first place?

Layla: During the technology boom, any downtime in IT equipment cost a company thousands of dollars per minute. That figure is even more significant today because most of what we do takes place in the digital world. Businesses were growing so quickly that no one cared about costs; they just cared about uptime. This shaped the mindset of overdesign, overcooling, excess redundancy, and increased safety factors. All of those things contribute to the current mindset. But today, server load is ten times what it used to be and energy costs have doubled. The guy paying the electric bill is starting to say, "I am paying millions of dollars per month for my data center, is there any way we can cut costs a bit?" The answer is a huge yes, because all of these data centers are overdesigned and overcooled, leading to enormous inefficiencies. But when we start to identify these inefficiencies and begin to recommend changes, we clash with operators who are set in their old ways. I think this is the biggest challenge in the industry right now: how do we overcome this mindset and start to make changes?

One thought on "Common DC Mistakes with Layla"

  1. I have a question rather than a comment:
    Is there a tendency towards hot aisle containment? I agree with Layla's comment on cold containment, but I have come across colleagues defending hot containment.
    Second question: are we going towards using AHUs instead of CRACs? AHUs are custom tailored and more difficult to control compared to CRACs.
