This IEEE Spectrum article was published in February, but I've been busy and haven't had a chance to blog it until now. The author, Randy Katz, is a UC Berkeley researcher and a member of the Reliable Adaptive Distributed Systems (RAD) Lab. Katz was a coauthor of the recently published RAD Lab paper on cloud computing: Berkeley Above the Clouds.
The IEEE Spectrum article, Tech Titans Building Boom, focuses on data center infrastructure. In it, Katz looks at the Google, Microsoft, Amazon, and Yahoo data center building boom. Some highlights from my read:
· Microsoft Quincy is 48MW total load with 48,600 m2 of space, and includes 4.8 km of chiller piping, 965 km of electrical wire, 92,900 m2 of drywall, and 1.5 metric tons of backup batteries. (I run the power density arithmetic in a quick sketch after this list.)
· Yahoo Quincy is somewhat smaller at 13,000 m2. This not-yet-complete facility will include free air cooling.
· Google Dalles is a two-building facility on the Columbia River, each building at 6,500 m2. I've been told that this facility does make use of air-side economization but, after carefully studying all the pictures I've come across, I can't find air intakes or louvers, so I'm skeptical. From the outside the facilities look fairly conventional.
· Google is also building in Pryor, Okla.; Council Bluffs, Iowa; Lenoir, N.C.; and Goose Creek, S.C.
· Aerial picture of Google Dalles: http://www.spectrum.ieee.org/feb09/7327/2
· McKinsey estimates that the world has 44M servers and that they consume 0.5% of all electricity and produce 0.2% of all carbon dioxide (a quick sanity check of these numbers follows this list). However, in a separate article McKinsey also speculates that cloud computing may be more expensive for enterprise customers, a claim that most of the community had trouble understanding or finding data to support.
· Google uses conventional multicore processors. To reduce the machines' energy appetite, Google fitted them with high-efficiency power supplies and voltage regulators, variable-speed fans, and system boards stripped of all unnecessary components like graphics chips. Google has also experimented with a CPU power-management feature called dynamic voltage/frequency scaling, which reduces a processor's voltage or frequency during certain periods (for example, when you don't need the results of a computing task right away). The server executes its work more slowly, reducing power consumption. Google engineers have reported energy savings of around 20 percent on some of their tests (a sketch of the underlying power relationship also follows this list). For more recently released data on Google's servers, see Data Center Efficiency Summit (Posting #4).
· Katz reports that the average data center runs at 14C and that newer centers are pushing to 27C. I'm interested in going to 35C and eliminating process-based cooling: Data Center Efficiency Best Practices.
· Containers: The most radical change taking place in some of today's mega data centers is the adoption of containers to house servers. Instead of building raised-floor rooms, installing air-conditioning systems, and mounting rack after rack, wouldn't it be great if you could expand your facility by simply adding identical building blocks that integrate computing, power, and cooling systems all in one module? That's exactly what vendors like IBM, HP, Sun Microsystems, Rackable Systems, and Verari Systems have come up with. These modules consist of standard shipping containers, which can house some 3,000 servers, or more than 10 times as many as a conventional data center could pack in the same space. Their main advantage is that they're fast to deploy. You just roll these modules into the building, lower them to the floor, and power them up. And they also let you refresh your technology more easily: just truck them back to the vendor and wait for the upgraded version to arrive.
· Microsoft Chicago will house containers on its lower floor (it's a two-floor facility). It's expected to draw well over 45MW and will reach 75MW if built out to the full 200 containers planned (First Containerized Data Center Announcement); I run the per-container arithmetic in the last sketch below. The Chicago, Dublin, and Des Moines facilities have all been delayed by Microsoft, presumably due to economic conditions: Microsoft Delays Chicago, Dublin, and Des Moines Data Centers.
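First, the power density numbers. Microsoft Quincy is the only facility above with both a total load and a floor area figure, so here's a minimal sketch of that one calculation, using only the numbers quoted in the bullet:

```python
# Power density of Microsoft Quincy from the figures quoted above:
# 48 MW total load over 48,600 m2 of total space.

TOTAL_LOAD_MW = 48
FLOOR_AREA_M2 = 48_600

watts_per_m2 = TOTAL_LOAD_MW * 1e6 / FLOOR_AREA_M2
print(f"~{watts_per_m2:.0f} W/m2 averaged over the full footprint")
# Roughly 990 W/m2 across the whole building. Density over just the
# critical (IT) floor would be considerably higher, since the footprint
# includes mechanical, electrical, and office space.
```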
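Second, a quick sanity check of the McKinsey server estimates. The world electricity figure is my assumption (roughly 18,000 TWh/year in that era), not a number from the article:

```python
# Back-of-the-envelope check of the McKinsey estimates above.
# ASSUMPTION (mine, not McKinsey's): world electricity production
# of roughly 18,000 TWh/year circa 2009.

WORLD_TWH_PER_YEAR = 18_000
SERVER_SHARE = 0.005           # McKinsey: servers consume 0.5% of all electricity
SERVER_COUNT = 44_000_000      # McKinsey: 44M servers worldwide
HOURS_PER_YEAR = 8_760

server_twh = WORLD_TWH_PER_YEAR * SERVER_SHARE
watts_per_server = server_twh * 1e12 / SERVER_COUNT / HOURS_PER_YEAR

print(f"Servers: ~{server_twh:.0f} TWh/year, ~{watts_per_server:.0f} W average per server")
# Roughly 90 TWh/year and 230 W average per server: a believable draw
# for a commodity server of that era, so the estimates hang together.
```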
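Third, on dynamic voltage/frequency scaling: the reason DVFS saves power is that CMOS dynamic power scales roughly as P = C * V^2 * f, so lowering voltage along with frequency cuts power faster than it cuts speed. Here's a minimal sketch of that relationship; the operating points are illustrative assumptions, not Google's actual numbers:

```python
# Illustrative sketch of the DVFS power relationship: CMOS dynamic
# power scales roughly as P ~ C * V^2 * f. The voltage/frequency
# operating points below are made-up examples, not Google's data.

def dynamic_power(capacitance: float, voltage: float, freq_ghz: float) -> float:
    """Relative dynamic power at a given core voltage (V) and frequency (GHz)."""
    return capacitance * voltage ** 2 * freq_ghz

C = 1.0  # effective switched capacitance, held constant for the comparison

nominal = dynamic_power(C, voltage=1.20, freq_ghz=2.6)  # full-speed point
scaled  = dynamic_power(C, voltage=1.10, freq_ghz=2.2)  # DVFS-reduced point

print(f"Power savings: {1 - scaled / nominal:.0%} for a "
      f"{1 - 2.2 / 2.6:.0%} frequency reduction")
# The quadratic voltage term is why a modest voltage drop taken together
# with a frequency drop saves disproportionately more power than it costs
# in speed. Net energy savings are smaller because the work takes longer,
# which is consistent with the ~20% Google reports on some tests.
```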
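Finally, running the container numbers from the last two bullets together: 75MW across 200 containers, with the article's figure of some 3,000 servers per container. Note that the 3,000 figure is the article's generic container capacity, not a Microsoft-stated density for Chicago:

```python
# Per-container arithmetic from the Chicago build-out figures above.
TOTAL_MW = 75                  # full build-out
CONTAINERS = 200
SERVERS_PER_CONTAINER = 3_000  # the article's generic figure, not Microsoft's

kw_per_container = TOTAL_MW * 1_000 / CONTAINERS
watts_per_server = kw_per_container * 1_000 / SERVERS_PER_CONTAINER

print(f"~{kw_per_container:.0f} kW per container, ~{watts_per_server:.0f} W per server")
# 375 kW per container and 125 W per server, and that 125 W still has to
# cover cooling and power-distribution overhead. Either Chicago packs
# fewer than 3,000 servers per container or the servers are unusually lean.
```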
Check out Tech Titans Building Boom: http://www.spectrum.ieee.org/feb09/7327.
–jrh
James Hamilton, Amazon Web Services
1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 | james@amazon.com
H:mvdirona.com | W:mvdirona.com/jrh/work | blog:http://perspectives.mvdirona.com