Dileep Bhandarkar on Datacenter Energy Efficiency

Dileep Bhandarkar presented the keynote at the Server Design Summit last December. I can never find the time to attend trade shows, so I often end up reading slides instead. This one had lots of interesting tidbits, so I'm posting a pointer to the talk and my rough notes here.

Dileep’s talk: Watt Matters in Energy Efficiency

My notes:

· Microsoft Datacenter Capacities:

o Quincy, WA: 550k sq ft, 27MW

o San Antonio, TX: 477k sq ft, 27MW

o Chicago, IL: 707k sq ft, 60MW

§ Containers on bottom floor with “medium reliability” (no generators) and standard rooms on top floor with full power redundancy

o Dublin, Ireland: 570k sq ft, 27MW

· Speeds and feeds from Microsoft Consumer Cloud Services:

o Windows Live: 500M IDs

o Live Hotmail: 355M Active Accounts

o Live Messenger: 303M users

o Bing: 4B Queries/month

o Xbox Live: 25M users

o adCenter: 14B Ads served/month

o Exchange Hosted Services: 2 to 4B emails/day

· Datacenter Construction Costs

o Land: <2%

o Shell: 5 to 9%

o Architectural: 4 to 7%

o Mechanical & Electrical: 70 to 85%

· Summarizing the above list, we get 80% of the costs scaling with power consumption and 10 to 20% scaling with floor space. Reflect on that number and you’ll understand why I think the industry is nuts to be focusing on density. See Why Blade Servers Aren’t the Answer to All Questions for more detail on this point – I think it’s a particularly important one.
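
To make that concrete, a back-of-envelope sketch in Python; the $15M/MW build cost and the exact split are assumed midpoints of the ranges above, not figures from the talk:

    # Back-of-envelope split of datacenter capital between power-driven and
    # space-driven cost, using assumed midpoints of the ranges above.
    build_cost_per_mw = 15e6      # assume $15M/MW (mid-range of the next bullet)
    power_share = 0.80            # mechanical & electrical: ~70 to 85%
    space_share = 0.15            # land + shell + architectural: ~10 to 20%

    print(f"Power-driven capital: ${build_cost_per_mw * power_share / 1e6:.1f}M/MW")
    print(f"Space-driven capital: ${build_cost_per_mw * space_share / 1e6:.1f}M/MW")
    # $12.0M/MW scales with watts vs $2.2M/MW with square feet: a 10% power
    # saving is worth roughly 5x a 10% density improvement.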

· Reports that datacenter builds cost $10M to $20M per MW and that server TCO is the biggest single cost category. For a cost estimator, I would use the low end of that range, with inexpensive facilities coming in at $9M to $10M/MW. See Overall Datacenter Costs.
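
Here's a minimal cost-estimator sketch along those lines, assuming a 10-year facility life, a 3-year server life, and a 5% annual cost of money (my usual modeling assumptions rather than numbers from Dileep's slides; the $12M of servers per MW is also assumed):

    # Level-payment amortization of facility and server capital for 1MW of
    # critical load. The $10M facility and $12M of servers are assumptions.
    def monthly_payment(principal, annual_rate, years):
        r = annual_rate / 12
        n = years * 12
        return principal * r / (1 - (1 + r) ** -n)

    facility = monthly_payment(10e6, 0.05, 10)  # low-end facility, 10-year life
    servers = monthly_payment(12e6, 0.05, 3)    # servers filling it, 3-year life

    print(f"facility: ${facility:,.0f}/month")  # ~ $106k/month
    print(f"servers:  ${servers:,.0f}/month")   # ~ $360k/month
    # The servers' short amortization period is why server TCO is the biggest
    # single category even when facility capital is nearly as large.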

· PUE is a good metric for evaluating datacenter infrastructure efficiency, but Microsoft optimizes for best server performance per watt per TCO dollar.

o Optimize the server design and datacenter together
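
That figure of merit reduces to a simple ratio, and it's worth seeing how it ranks machines. A sketch with two invented server configurations (none of these specs are from the talk):

    # Rank servers by performance per watt per TCO dollar.
    def perf_per_watt_per_dollar(perf, watts, tco_dollars):
        return perf / (watts * tco_dollars)

    # Hypothetical configurations: a big fast box vs a cheap low-power box.
    big = perf_per_watt_per_dollar(perf=100, watts=400, tco_dollars=6000)
    small = perf_per_watt_per_dollar(perf=40, watts=120, tco_dollars=2500)

    print(f"big: {big:.2e}  small: {small:.2e}")  # 4.17e-05 vs 1.33e-04
    # The small server delivers ~3x the work per watt per dollar, illustrating
    # why cheap and low power tend to win at scale (see below).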

· Cost-Reduction Strategies:

o Server cost reduction:

§ Right-size the server: low-power processors often offer the best performance/watt (see Very Low-Power Server Progress for more)

§ Eliminate unnecessary components (very small gain)

§ Use higher efficiency parts

§ Optimize for server performance/watt/$ (cheap and low power tend to win at scale)

o Infrastructure cost reduction:

§ Operate at higher temperatures

§ Use free air cooling and eliminate chillers (more detail at Chillerless Datacenter at 95F)

§ Use advanced power management with power capping to support power over-subscription with peak protection (see the 2007 paper http://research.google.com/pubs/pub32980.html for the early work on this topic; a control-loop sketch follows this list)
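
Here is a rough control-loop sketch of that power-capping idea. The telemetry and actuation hooks (read_rack_power, set_cpu_power_limit) are hypothetical placeholders; a real implementation would sit on PDU telemetry and per-socket limits such as Intel's RAPL.

    import time

    RACK_BUDGET_W = 9000   # e.g., a rack provisioned at 9.0kW
    STEP_W = 5             # per-server cap adjustment per control interval

    def capping_loop(servers, read_rack_power, set_cpu_power_limit):
        """Throttle when over budget; relax the caps when comfortably under."""
        while True:
            draw = read_rack_power()  # hypothetical rack-level telemetry
            if draw > RACK_BUDGET_W:
                # Peak protection: a brief performance dip beats tripping a
                # breaker, so every server gives back a few watts.
                for s in servers:
                    set_cpu_power_limit(s, delta_watts=-STEP_W)
            elif draw < 0.95 * RACK_BUDGET_W:
                # Comfortably under budget: hand the watts back.
                for s in servers:
                    set_cpu_power_limit(s, delta_watts=+STEP_W)
            time.sleep(1.0)  # assumed 1-second control interval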

· Custom Server Design:

o 2-socket, half-width server design (6.3”W x 16.7”L)

o 4x SATA HDD connectors

o 4x DIMM slots per CPU socket

o 2x 1GigE NIC

o 1x PCIe x16 slot

· Custom Rack Design:

o 480VAC 3-phase power directly to the rack (higher voltage over a given conductor size reduces losses in distribution; a worked loss example follows these rack notes)

o Very tall 56 RU rack (over 98” overall height)

o 12VDC distribution within the rack from two combined power supplies with distributed UPS

o Power Supplies (PSU)

§ Input: 480VAC 3-phase

§ Output: 12V DC

§ Servers are 12VDC only boards

§ Each PSU is 4.5kW

§ 2 PSUs/rack, so the rack maximum is 9.0kW

o Distributed UPS

§ Each PSU includes a UPS made up of 4 groups of 40 13.2V batteries

§ Overall 160 discrete batteries per UPS

§ Technology not specified but based upon form factor and rough power estimate, I suspect they may be Li-ion 18650. See Commodity Parts for a note on these units.

o By putting 2 PSUs per rack, they avoid transporting low voltage (12VDC) further than 1/3 of a rack (under 1 yard), and only 4.5kW is transported, so moderately sized bus bars can be used.

o Rack Networking:

§ Rack networking is interesting, with 2 to 4 top-of-rack (TOR) switches per rack. We know the servers are 1GigE-connected and there are up to 96 per rack, which yields two possibilities: 1) they are using 24-port GigE switches, or 2) they are using 48-port GigE switches in a fat tree topology (see VL2: A Scalable and Flexible Datacenter Network for more on fat trees). 24-port TORs are not particularly cost effective, so I would assume 48x1GigE TORs in a fat tree design, which is nice work.

o Reports that the CPUs can be low-power parts including Intel Atom, AMD Bobcat, and ARM (more on low-power processors in server applications: http://perspectives.mvdirona.com/2011/01/16/NVIDIAProjectDenverARMPoweredServers.aspx)
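
On the 480VAC point above: for a fixed power draw over a fixed conductor, resistive loss is I²R, and current falls as voltage rises. A worked example with assumed numbers (10kW delivered, 10 milliohms of conduction path; neither figure is from the talk):

    # I^2 * R loss for the same 10kW delivered at different voltages over
    # the same assumed 10 milliohm conductor (illustrative numbers only).
    P = 10_000   # watts delivered
    R = 0.010    # ohms of conductor resistance (assumed)

    for volts in (480, 208, 12):
        amps = P / volts
        print(f"{volts:>3}V: {amps:7.1f}A, loss = {amps**2 * R:7.1f}W")
    # 480V: ~4W lost; 208V: ~23W; 12V: ~6,900W. This is why 12VDC is never
    # moved more than about a yard, and why bus bars are needed even then.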

· Power proportionality: shows that a server at 0% load consumes 32% of peak power (this is an amazingly good server – 45% to 60% is much closer to the norm)
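
A quick linear power model using that 32% idle figure; the 200W peak and the utilization points are assumed for illustration:

    # Linear interpolation between idle (32% of peak) and full load.
    PEAK_W = 200.0
    IDLE_FRACTION = 0.32

    def server_power(utilization):
        return PEAK_W * (IDLE_FRACTION + (1 - IDLE_FRACTION) * utilization)

    for u in (0.0, 0.1, 0.3, 1.0):
        print(f"{u:4.0%} load: {server_power(u):6.1f}W")
    # 0% -> 64W, 10% -> 78W, 30% -> 105W, 100% -> 200W. Even this unusually
    # good server draws roughly half of peak power at the 30% utilization
    # common in real fleets, which is why power proportionality matters.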

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

6 comments on “Dileep Bhandarkar on Datacenter Energy Efficiency”
  1. Pedro, you asked about the relative cost of raised floor, racks, and cabling. Microsoft doesn’t use raised floor in their new facilities. Racks are around $2k each. Cabling is also inexpensive relative to generators, air handlers, switch gear, etc.

  2. >Summarizing the above list, we get 80% of the costs scaling with power consumption and 10 to 20% scaling with floor space.

    Your list shows that 80% of cost is "Mechanical & Electrical", which seems to include everything that isn’t the basic building. I’m sure stuff like a UPS will scale by kW but all the racks, cabling, raised floor, etc, will scale almost directly by area/volume. I wonder what the capital breakdown between those two types of things looks like.

  3. Jagrati, you asked whether $10M to $20M per megawatt is a lot or a little. It really depends. For very capital-intensive industries, like petroleum exploration, I suspect these numbers are fairly small by relative measures. For retailers with very small margins, the numbers might be a bigger fraction of overall costs. But, independent of the industry one operates in, these costs are large enough to be relevant, and all shareholders want good value. Google, for example, has a wonderful revenue producer in advertising. However, even Google shareholders and stock analysts watch the infrastructure spending closely. These are big numbers.

    –jrh

  4. Jagrati says:

    Hi James,
    I would like to remark on how much I love your blog. After a recent master's degree in computers, I was aware of all that's in textbooks and was itching for a place that gives me clues about actual large-scale real-world stuff. When I stumbled onto this blog, I realized at once that this is what I had been looking for. It has interesting stuff like actual case studies of large systems at work, numbers giving a broad overview of things, pie charts that size things up, putting things in perspective, etc. Thanks for it!

    I have read the article "Watt Matters in Energy Efficiency". One thing that still eludes me is large-scale money numbers. When it says "data centers can cost between $10M and $20M per megawatt", I just don't know how to make sense of it, how to react, what to feel, etc. I mean, is it a large number or a small number in perspective of other money spends of big companies? Is it a major money-hogging area for companies like Google, Microsoft, and Amazon, or is it a small fraction?

    Thanks,
    Jagrati

  5. Chad, that's an important enough point that I'll correct the main text. Thanks!

    –jrh

  6. Chad Harrington says:

    Hi James,
    Thanks for the great post. As usual, you do a great job of distilling the key issues. A couple of times in the post above, you used the word power when I believe you meant voltage.
    – "higher power on a given conductor size reduces losses in distribution" should be "higher voltage on a given conductor size reduces losses in distribution"
    – "avoid transporting low power" should be "avoid transporting low voltage"

    Power and voltage are orthogonal concepts; I don’t think you meant to confuse your readers.

    Best wishes and keep up the great work,

    Chad Harrington
