Datacenter Power Efficiency

Kushagra Vaid presented Datacenter Power Efficiency at HotPower ’10. The URL to the slides is below and my notes follow. Interesting across the board but most notable for all the detail on the four major Microsoft datacenters:

· Service speeds and feeds:

o Windows LiveID: More than 1B authentications/day

o Exchange Hosted Services: 2 to 4B email messages per day

o MSN: 550M unique visitors monthly

o Hotmail: 400M active users

o Messenger: 320M active users

o Bing: >2B queries per month

· Microsoft Datacenters: 141MW total, 2.25M sq ft

o Quincy:

§ 27MW

§ 500k sq ft total

§ 54W/sq ft

o Chicago:

§ 60MW

§ 707k sq ft total

§ 85W/sq ft

§ Over $500M invested

§ $8.30/W (assuming $500M cost)

§ 3,400 tons of steel

§ 190 miles of conduit

§ 2,400 tons of copper

§ 26,000 cubic yards of concrete

§ 7.5 miles of chilled water piping

§ 1.5M man hours of labor

§ Two floors of IT equipment:

· Lower floor: medium reliability container bay (standard ISO containers supplied by Dell)

· Upper floor: high reliability traditional colo facility

§ Note the picture of the upper floor shows colo cages suggesting that Microsoft may not be using this facility for their internal workloads.

§ PUE: 1.2 to 1.5

§ 3,000 construction related jobs

o Dublin:

§ 27MW

§ 570k sq ft total

§ 47W/sq ft

o San Antonio:

§ 27MW

§ 477k sq ft total

§ 57W/sq ft

· The power densities range between 45 and 85W/sq ft, which is incredibly low for relatively new builds. I would have expected something in the 200W/sq ft to 225W/sq ft range and perhaps higher. I suspect the floor space numbers are gross floor space numbers including mechanical, electrical, and office space rather than raised floor.

· Kushagra reports typical build costs range from $10 to $15/W

o Based upon the Chicago data point, the Microsoft build costs are around $8/W. Perhaps a bit more in the other facilities, since Chicago isn’t fully redundant on the lower floor whereas the generator count at the other facilities suggests they are.

· Gen 4 design

o Modular design, but the use of ISO standard containers has been discontinued. However, the modules are prefabricated and delivered by flatbed truck

o No mechanical cooling

§ Very low water consumption

o 30% to 50% lower costs

o PUE of 1.05 to 1.1

· Analysis of AC vs DC power distribution concluding that DC is more efficient at low loads and AC at high loads

o Overall, the best designs are within 1 to 2% of each other

· Recommends higher datacenter temperatures

· Key observation:

o The best price/performance and power/performance is often found in lower-power processors

· Kushagra found substantial power savings using the C6 sleep state and showed server idle power can be dropped to 31% of full-load power

o Good servers typically run in the 45 to 50% range and poor designs can be as bad as 80%
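A quick back-of-envelope check of the numbers above. All inputs come from the slides; the derived figures are my own arithmetic, sketched in Python:

```python
# Sanity-check the per-facility power densities and the Chicago build cost.
# Inputs (MW, gross sq ft) are taken from the slides; outputs are derived.

facilities = {
    # name: (critical power in MW, gross floor space in sq ft)
    "Quincy":      (27, 500_000),
    "Chicago":     (60, 707_000),
    "Dublin":      (27, 570_000),
    "San Antonio": (27, 477_000),
}

for name, (mw, sq_ft) in facilities.items():
    watts_per_sq_ft = mw * 1_000_000 / sq_ft
    print(f"{name:12s} {watts_per_sq_ft:5.1f} W/sq ft")

# Chicago build cost, assuming the ~$500M investment figure:
print(f"Chicago: ${500e6 / 60e6:.2f}/W")  # ~$8.33/W

# For scale: a PUE of 1.2 means cooling and power distribution add 20% on
# top of the IT load, and a server idling at 31% of full-load power still
# burns nearly a third of its peak draw while doing no work.
```

The computed densities (54, 85, 47, and 57 W/sq ft after rounding) match the slide figures, which supports reading the floor space numbers as gross rather than raised floor.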

The slides:


James Hamilton




6 comments on “Datacenter Power Efficiency”
  1. Thanks for all the data in your comment, Anthony. I was much more excited by Kushagra’s talk than you were. What drives my excitement is that it challenges many standard industry beliefs, one of which you quoted: “A rule of thumb is that long-term electronics reliability is reduced by 50 percent for every increase of 18°F above 70°F”. This oft-quoted fact came from a very old military study; it may have been correct at the time, though I suspect it never was, and it absolutely is not true today. What I get out of this is that we need to challenge some of these beliefs, make sure they are still true, and use data to drive the decisions.

    You correctly point out that there is almost nothing new and everything has been investigated in the past. Using server fans as the sole air movers, distributed UPS, warmer data centers, etc. I agree but I still want to read about what is working and what isn’t and learn from others so I appreciate this talk for laying out one set of views.

    I hear you on power density being fairly common and you are right. But I just about guarantee that any datacenter Microsoft is building right now is over 200 W/sq ft. I suspect we’re looking at gross floor space rather than raised floor.

    Returning to your point on NEBS. As you know, NEBS servers are expensive. You might argue that they need to be expensive to survive at 104°F, but I just don’t buy it. I think we can and will get commodity servers operating at or near NEBS temperatures. Just about every commercially available server has operating specs that allow operation up to 95°F. I’ve had server vendors ask me if I was nuts (fair question) and not know their own specs. Most server specs are surprisingly high. The Rackable CloudRack is spec’d to 104°F. I’m expecting improvements on this dimension across the industry.

    Thanks for your comments; that’s one of the reasons I love talks like Kushagra’s. He’s challenging some of the status quo and sparking debate and discussion. Useful.

    James Hamilton

  2. Anthony Drew says:


    I don’t know if I should thank you for posting the link to Kushagra Vaid’s presentation:

    “Datacenter Power Efficiency: Separating Fact from Fiction”

    I must admit that when I read the title of this presentation and saw who it was from, I was expecting something very interesting. Unfortunately, I was very disappointed by the content.

    Power Density

    Your comment about the power density looking very low is nothing unusual. If you look at the rack layout in the Chicago DC you will see that they use a 10-tile rack spacing (2 for a rack and 3 for the aisles). I have started to see more of this type of configuration recently. Also, co-location customers often cannot make their racks high density as they must provision their entire network infrastructure as well, which is always very low density.

    There is also a new risk-averse approach used by major companies because they have the $ to do it. This is where it’s too scary to manage high-density computing, so you spread it out and call it “energy efficient computing”.

    Fan-Less Cooling

    Intel (and Chatsworth) has often pushed the fan-less cooling solution (where the server fans are used to force the hot air through the POD cooling coils). This does work, but it’s only a niche solution where all the devices in the POD must be the same make and model, typically blade chassis. While it does improve the PUE, it’s at the expense of server fan power, or to put it another way, “There’s no such thing as a free lunch”.

    Other ways to remove losses

    I assume that this is just brainstorming and has not been seriously evaluated, but I would like to comment on a number of points:

    Batteries in servers are nothing new; back in the ’80s they were all the rage for critical systems.
    The battery most commonly used was a lead-acid gel cell. As the battery is mounted in the server, it is subject to high discharge currents and high temperatures. These early designs also lacked any cell monitoring. This resulted in a battery life of typically 12 months or less, and the only time you knew there was a dead cell was when it was needed.

    Even with a new design, including automatic cell monitoring and load testing, the battery life would be limited, nowhere near the 10 to 15 year life you would expect from a large central UPS system. If you include the DC temperature limits suggested in this presentation, the battery life would be brief. Also, the maintenance costs and the risk to the IT equipment would be unacceptable.

    Expanded Environmental Range

    While it is true that you can increase the operating temperature range of servers, there is a cost. The fan speed will need to increase (increasing the power consumption of the server) to provide sufficient cooling, or you de-clock the servers to prevent overheating. If you de-clock, you may run out of processing capacity when you actually need it.

    “Higher temps → Lower fan speeds to cool servers → Improved PUE”

    I assume what is being suggested here is that you run the chips hotter in the server. But if I may quote the Uptime Institute: “A rule of thumb is that long-term electronics reliability is reduced by 50 percent for every increase of 18°F above 70°F”. I would like to see a number of peer-reviewed studies before I would go down this path.

    If this is a serious proposal, I wonder why he doesn’t just recommend that everybody buy NEBS-compliant hardware. This would let the DC temperature increase to 104°F; think of the power saving! Then again, the doubling of the IT budget and the reduction in MIPS may be a couple of reasons.

    Power Capping

    This is an interesting idea, but I would like to see some more detailed information in order to understand what the impact and benefit will be. I found the Power Spikes slide very confusing. I downloaded the reference but I couldn’t find the chart of the power spikes pictured. If these spikes are only 0.5 seconds in duration, why are we concerned? Are these spikes a function of the design of the power supply and/or the server?

    Once again, sorry to bore you with comments on a presentation that you didn’t give. I find that there is so much hype and misinformation in this industry that I just have to let my frustrations out to somebody.


    Anthony Drew

    PS: If I have misunderstood anything in the presentation I would be happy to be corrected.

  3. Guy, I don’t mean to be argumentative, but I’ve been inside a large number of Dell and Rackable containers and not a single one was all DC distribution. Rackable has used DC distribution within the rack but, to my knowledge, not through the entire container.

    I’ve seen two approaches used in the containers I’ve been in: 1) standard 480VAC 3-phase to 208VAC to each server, and 2) 480VAC 3-phase to the rack and 48VDC to each server. The latter design I’ve looked at very carefully and can’t make it work economically due to the high cost of the top-of-rack rectifiers. It’s a nice clean design but the low-volume parts are pricey. I’ve not yet seen all-DC distribution inside a container.

    I have come across all DC distribution as done at LBNL (partnered with Intel) and like it. I can only find low single digit gains but I still like it.

    If you can write up and send me your all-DC distribution design showing the losses at each stage, I’ll compare it to a current AC distribution and let you know what I see. I’ll be amazed if we can find the 28% you report above. Happy, but amazed. Even if there were zero losses in electrical distribution, you wouldn’t get to 28% savings; that’s more than current designs lose.
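    To make that arithmetic concrete, here is a sketch multiplying assumed per-stage efficiencies. The specific percentages are illustrative assumptions, not measurements from any real facility:

```python
# Cumulative efficiency through a typical AC distribution chain.
# The per-stage efficiencies below are illustrative assumptions only.
ac_stages = {
    "UPS (double conversion)":  0.94,
    "PDU/transformer":          0.98,
    "Server PSU (AC to 12VDC)": 0.90,
}

efficiency = 1.0
for stage, eff in ac_stages.items():
    efficiency *= eff

print(f"End-to-end: {efficiency:.1%}, lost in distribution: {1 - efficiency:.1%}")
# 0.94 * 0.98 * 0.90 = 0.829, so roughly 17% is lost across the chain.
# Even eliminating every one of these losses saves well under 28% of
# total power, so a 28% distribution saving doesn't pencil out.
```

    Even under pessimistic stage efficiencies, the total distribution loss stays in the teens, which is why I keep landing on low single-digit net gains for DC.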

    Send me what you have and I’ll dig deeper.

  4. Guy AlLee says:

    I’m not talking about what you hook up to the outside of the containerized data center, I’m talking about what they do inside the box. Oracle/Sun, Verari/Cirrascale, Rackable/SGI, etc. convert it to DC and take that all the way to the server. Moreover, if you have solar or a fuel cell as your power source, you save an additional ~5-10% by avoiding the conversion losses from DC to AC to DC again.

    I have a server rack running at our facility in New Mexico and would be happy to demonstrate it running on either AC or DC and show you the 10-12% savings we see just from the PDU down.

  5. I like high voltage DC but I can’t find a fraction of the advantages you describe above. I’m always happy to learn but, each time I look closely, it’s low single digits.

    High voltage DC is definitely not the dominant power distribution system used in containerized data centers. Where did you see that point, Guy?

    James Hamilton

  6. Guy AlLee says:

    So EPRI already responded to the errors in The Green Grid analysis, but the bottom line is there’s real efficiency in DC distribution over AC, and even the Green Grid admits that. That’s why DC is the dominant design in containerized data centers. It’s 28% more efficient than standard 208VAC practices (Lawrence Berkeley National Labs demo, ETSI peer-reviewed paper by Annabelle Pratt, et al.). Intel, HP (EYP/MC), and Emerson did a joint study and found that 380VDC (called Direct 400VDC at the time) is still 7% more efficient than AC will ever be. Moreover, it’s 15% less cost (fewer components), 33% less floor space, and 200% more reliable. It’s a pending ETSI standard. You can read all about it at DC – An idea whose time has come and gone?
