More Data on Datacenter Air Side Economization

Two of the highest-leverage datacenter efficiency techniques currently sweeping the industry are: 1) operating at higher ambient temperatures (http://perspectives.mvdirona.com/2011/02/27/ExploringTheLimitsOfDatacenterTemprature.aspx) and 2) air-side economization with evaporative cooling (http://perspectives.mvdirona.com/2010/05/15/ComputerRoomEvaporativeCooling.aspx).

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) currently recommends that servers not be operated at inlet temperatures beyond 81F. It’s super common to hear that every 10C increase in temperature leads to 2x the failure rate – some statements get repeated so frequently they become “true” and no longer get questioned. See Exploring the Limits of Data Center Temperature for my argument that this rule of thumb doesn’t apply over the full range of operating temperatures.
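To make that rule of thumb concrete, here is a minimal sketch of what it implies if applied literally; the 20C baseline and the inlet temperatures are my own illustrative choices, not figures from ASHRAE or from the post:

    # What the "2x failure rate per 10C" rule of thumb implies if taken at face value.
    # The 20C baseline and the inlet temperatures below are illustrative assumptions.
    BASELINE_C = 20.0

    for inlet_c in (20, 25, 30, 35, 40):
        multiplier = 2 ** ((inlet_c - BASELINE_C) / 10)
        print(f"{inlet_c}C inlet -> {multiplier:.2f}x the {BASELINE_C:.0f}C failure rate")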

Another one of those “you can’t do that” statements is around air-side economization, also referred to as Outside Air (OA) cooling. Stated simply, air-side economization is essentially opening the window. Rather than taking 110F exhaust air, cooling it down, and recirculating it back to cool the servers, dump the exhaust and take in outside air to cool the servers. If the outside air is cooler than the server exhaust, and it almost always will be, then air-side economization is a win.
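As a minimal sketch of the control decision involved (the temperature thresholds and mode names are illustrative assumptions, not taken from any particular facility or from the ASHRAE guidance):

    def cooling_mode(outside_air_c, server_exhaust_c, max_inlet_c=27.0):
        # Choose a cooling mode for an air-side economizer. The 27C (~81F) default
        # roughly matches the ASHRAE recommended inlet limit mentioned above.
        if outside_air_c <= max_inlet_c:
            return "economizer"                # outside air is cool enough to use directly
        if outside_air_c < server_exhaust_c:
            return "economizer + evaporative"  # outside air beats the exhaust; evaporative cooling trims the rest
        return "recirculate + chiller"         # rare: outside air hotter than the server exhaust

    print(cooling_mode(outside_air_c=18.0, server_exhaust_c=43.0))  # economizer
    print(cooling_mode(outside_air_c=33.0, server_exhaust_c=43.0))  # economizer + evaporative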

The most frequently referenced document explaining why you shouldn’t do this is Particulate and Gaseous Contamination Guidelines for Data Centers, again published by ASHRAE. Even the document title sounds scary. Do you really want your servers operating in an environment of gaseous contamination? But, upon further reflection, is it really the case that servers need better air quality than the people that use them? Really?

From the ASHRAE document:

The recent increase in the rate of hardware failures in data centers high in sulfur-bearing gases, highlighted by the number of recent publications on the subject, led to the need for this white paper that recommends that in addition to temperature-humidity control, dust and gaseous contamination should also be monitored and controlled. These additional environmental measures are especially important for data centers located near industries and/or other sources that pollute the environment.

Effects of airborne contaminations on data center equipment can be broken into three main categories: chemical effects, mechanical effects, and electrical effects. Two common chemical failure modes are copper creep corrosion on circuit boards and the corrosion of silver metallization in miniature surface-mounted components.

Mechanical effects include heat sink fouling, optical signal interference, increased friction, etc. Electrical effects include changes in circuit impedance, arcing, etc. It should be noted that the reduction of circuit board feature sizes and the miniaturization of components, necessary to improve hardware performance, also make the hardware more prone to attack by contamination in the data center environment, and manufacturers must continually struggle to maintain the reliability of their ever shrinking hardware.

It’s hard to read this document and not be concerned about the use of air-side economization. But, on the other hand, most leading operators are using it and experiencing no measurable deleterious effects. Let’s go get some more data.

Digging deeper, the Data Center Efficiency Summit had a session on exactly this topic titled: Particulate and Corrosive Gas Measurements of Data Center Airside Economization: Data From the Field – Customer Presented Case Studies and Analysis. The title is a bit of a tongue twister but the content is useful. Selecting from the slides:

· From Jeff Stein of Taylor Engineering:

o Anecdotal evidence of failures in non-economizer data centers in extreme environments in India or China or industrial facilities

o Published data on corrosion in industrial environments

o No evidence of failures in US data centers or any connection to economizers

o Recommendations that gaseous contamination should be monitored and that gas phase filtration is necessary for US data centers are not supported

· From Arman Shehabi of the UC Berkeley Department of Civil and Environmental Engineering:

o Particle concerns should not dissuade economizer use

§ More particles during economizer use with MERV 7 filters than during non-economizer periods, but still below many IT guidelines

§ I/O ratios with MERV 14 filters and economizers were near (and often below!) levels using MERV 7 filters w/o economizers

o Energy savings from economizer use greatly outweighed the fan energy increase from improved filtration

§ MERV 14 filters increased fan power by about 10%, but the absolute increase (6 kW) was much smaller than the ~100 kW of chiller power savings during economizer use (in August!)

§ The fan power increase is constant throughout the year, while the chiller savings from economizer use should increase during cooler periods (a rough back-of-envelope on these numbers follows the list)
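Here is that back-of-envelope on the slide numbers above; the 6 kW and ~100 kW figures come from the presentation, while the fraction of hours the economizer runs is my own assumption:

    # Back-of-envelope: 6 kW of added fan power (paid year-round) versus ~100 kW of
    # chiller power avoided while the economizer is running.
    fan_power_increase_kw = 6.0       # added fan power from MERV 14 filtration
    chiller_savings_kw = 100.0        # chiller power avoided during economizer operation (August figure)
    economizer_hours_fraction = 0.5   # assumed fraction of the year spent economizing

    net_savings_kw = chiller_savings_kw * economizer_hours_fraction - fan_power_increase_kw
    print(f"Average net savings: {net_savings_kw:.0f} kW")  # ~44 kW with these assumptions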

If you are interested in much more detail supporting the same conclusion, that air-side economization is a good technique, see the excellent paper Should Data Center Owners be Afraid of Air-Side Economization Use? – A review of ASHRAE TC 9.9 White Paper titled Gaseous and Particulate Contamination Guidelines for Data Centers.

I urge you to read the full LBNL paper, but I’ll excerpt from the conclusions:

The TC 9.9 white paper brings up what may be an important issue for IT equipment in harsh environments but the references do not shed light on IT equipment failures and their relationship to gaseous corrosion. While the equipment manufacturers are reporting an uptick in failures, they are not able to provide information on the types of failures, the rates of failures, or whether the equipment failures are in new equipment or equipment that may be pre-RoHS. Data center hardware failures are not documented in any of the references in the white paper. The only evidence for increased failures of electronic equipment in data centers is anecdotal and appears to be limited to aggressive environments such as in India, China, or severe industrial facilities. Failures that have been anecdotally presented occurred in data centers that did not use air economizers. The white paper recommendation that gaseous contamination should be monitored and that gas phase filtration is necessary for data centers with high contamination levels is not supported.

We are concerned that data center owners will choose to eliminate air economizers (or not operate them if installed) based upon the ASHRAE white paper since there are implications that contamination could be worse if air economizers are used. This does not appear to be the case in practice, or from the information presented by the ASHRAE white paper authors.

I’ve never been particularly good at accepting “you can’t do that” and I’ve been frequently rewarded for challenging widely held beliefs. A good many of these hard and fast rules end up being somewhere between useful guidelines that don’t apply in all conditions and mere opinions. There is a large and expanding body of data supporting the use of air-side economization.

–jrh

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

6 comments on “More Data on Datacenter Air Side Economization”
  1. Yes, you are right, the NEBS Telco standard does specify operation at much higher than standard datacenter operating temps. NEBS requires 40C steady state with 55C short term (96 hours). Unfortunately NEBS-certified gear is usually more expensive — a nice exception is the Rackable Systems Cloudrack C2. But, most standard server gear is warranted to 35C. Over time, customers will convince manufacturers to certify to 40C without extra cost.

    –jrh

  2. Chris says:

    Doesn’t most telecom gear use 45C filtered air? Or are the circuit boards and components in outdoor cabinets somehow "different" from servers?

  3. Sensible advice Andrew. Thanks,

    –jrh

  4. Andrew S says:

    Here’s something to consider. How many of you run data centers with makeup air ventilation? I’m guessing many if not all of you have some means to keep the air fresh in there and replace that human-generated CO2 with a bit more O2. You will probably be replacing the whole air content of your facility every few hours. So what? Well – have a look at the filtration on that system and let me know how super-high tech it is! Do you have chemical scrubbers for NOx and SO2? Or ammonia? Or sea salt? I’d hope you have a filter on it at least, but what’s the MERV rating? Is it full HEPA, or something much less?

    The fact is, without really thinking about running your DC on outside air, that’s exactly what you are doing (minus a bit of soot and pollen).

    The real story here, according to my engineering contacts in major IT manufacturers, is that these gases are usually very benign. They flow through the server and out the other side. In fact, the only time you should be really concerned is when you precipitate the gases into your server in terms of moisture (i.e. completely losing control of humidity). At that point, bad things can happen.

    The bottom line: use your common sense if there is a major pollution event (e.g. fire) outside, but in all normal situations, get yourself a decent MERV filter, keep the humidity below the condensation point, and you will be fine. If you want to be very safe (as we do in banking), then monitor air quality and corrosion rates as well (very commonly available meters).

  5. Great story and we need more of them. Thanks Rocky.

    –jrh

  6. Rocky says:

    Part of my journey to accepting aggressive air-side economization came in a simple but stunning Aha! moment.

    For decades, we operated commodity computer equipment in industrial environments and onboard ships, with no filtration, and in most cases no HVAC. We have not seen significantly higher failure rates, compared to our carefully controlled data centers.

    One location was 200 yards from a giant, busy pile of gypsum dust. The "server room" HVAC was an open window with a fan. One time, we wanted to upgrade the RAM on a Cisco router that had been running there for three years. We had to chip gypsum off the motherboard first! That router ran without problems for another four years until we moved away.
