The services world is one built upon economies of scale. For example, networking costs for small- and medium-sized services can run nearly an order of magnitude more than what large bandwidth consumers such as Google, Amazon, Microsoft, and Yahoo pay. These economies of scale make it possible for services such as Amazon S3 to pass some of their savings, on networking for example, on to those writing against their service platform while at the same time profiting (S3 is currently pricing storage under cost, but that's a business decision rather than a business-model problem). These economies of scale enjoyed by large service providers extend beyond networking to server purchases, power costs, networking equipment, and more.
Ironically, even with these large economies of scale, it’s cheaper to compute at home than in the cloud. Let’s look at the details.
Infrastructure costs are incredibly high in the services world, with a new 13.5 megawatt data center costing over $200M before the upwards of 50,000 servers that fill it are purchased. Data centers are about the furthest thing from commodity parts, and I have been arguing for years that we should be moving to modular data centers (there has been progress on that front as well: First Containerized Data Center Announcement). Modular designs move some of the power and mechanical system design from an up-front investment with a 15-year life to a design that ships with each module on a three-year-or-less amortization cycle, which helps increase the speed of innovation.
Modular data centers help, but they still require central power, mechanical, and networking systems, and these remain expensive, non-commodity components. How do we move the entire data center to commodity components? Ken Church (http://research.microsoft.com/users/church/) makes a radical suggestion: rather than design and develop massive data centers with 15-year lives, let's incrementally purchase condominiums (just-in-time) and place a small number of systems in each. Radical to be sure, but condos are a commodity and, if this mechanism really were cheaper, it would be a wake-up call to all of us to start looking much more closely at current industry-wide costs and what's driving them. That's our point here.
Ken and I did a quick back-of-the-envelope comparison of this approach below. Both configurations are designed for 54k servers and roughly 13.5 MW. Condos appear notably cheaper, particularly in terms of capital.
|  |  | Large Tier II+ Data Center | Condo Farm (1125 Condos) |
| --- | --- | --- | --- |
| Specs | Servers | 54k | 54k (= 48 servers/condo * 1125 condos) |
|  | Power (peak) | 13.5 MW (= 250 W/server * 54k servers) | 13.5 MW (= 250 W/server * 54k servers = 12 kW/condo * 1125 condos) |
| Capital | Building | over $200M | $112.5M (= $100k/condo * 1125 condos) |
| Annual Expense | Power | $3.5M/year (= $0.03/kWh * 24 * 365 hours/year * 13.5 MW) | $10.6M/year (= $0.09/kWh * 24 * 365 hours/year * 13.5 MW) |
| Annual Income | Rental Income | None | $8.1M/year (= $1,000/condo per month * 12 months/year * 1125 condos, less $200/condo per month condo fees; we conservatively assume 80% occupancy) |
In the quick calculation above, we have the condos at $100k each, putting all 1,125 of them at $112.5M, whereas the purpose-built data center prices in at over $200M. We have assumed an unusually low power cost for the purpose-built center, roughly a two-thirds reduction from the standard residential rate. Deals this good are getting harder to negotiate, but they still do exist. The condos must pay the full residential power rate without discount, which is far higher at $10.6M/year. Offsetting this increased power cost, however, we rent the condos out at a low $1,000/month and conservatively assume only 80% occupancy.
Looking at the totals, the condos are at 56% of the capital cost, and annually they net $2.5M in operational costs (power less rental income) whereas the data center's power costs are higher at $3.5M. The condos' net operational costs are 71% of the purpose-built design's. Summarizing, the condos come in at roughly half the capital cost and about 70% of the annual operating cost of the purpose-built data center.
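For anyone who wants to poke at the assumptions, the whole back-of-the-envelope fits in a few lines of Python; every number below is taken straight from the table above, so nothing new is being claimed:

```python
# Every number here is a paper-napkin assumption taken straight from the
# table above; change any of them and the comparison shifts accordingly.

CONDOS = 1125
HOURS_PER_YEAR = 24 * 365
PEAK_KW = 250 * 54_000 / 1000          # 250 W/server * 54k servers = 13,500 kW

# Capital
dc_capital = 200e6                     # "over $200M" for the purpose-built center
condo_capital = 100e3 * CONDOS         # $100k/condo -> $112.5M

# Annual power expense: rate * hours * load
dc_power = 0.03 * HOURS_PER_YEAR * PEAK_KW      # negotiated $0.03/kWh
condo_power = 0.09 * HOURS_PER_YEAR * PEAK_KW   # residential $0.09/kWh

# Annual rental income: $1,000/month rent at 80% occupancy,
# less $200/month condo fees paid on every unit.
condo_rent = (1000 * 0.80 - 200) * 12 * CONDOS

print(f"capital:              data center ${dc_capital/1e6:.1f}M, condos ${condo_capital/1e6:.1f}M")
print(f"net annual operating: data center ${dc_power/1e6:.1f}M, "
      f"condos ${(condo_power - condo_rent)/1e6:.1f}M")
```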
Condos offer the option to buy and sell just-in-time, and the power bill depends more on average usage than on a worst-case peak forecast. These options are valuable under a number of not-implausible scenarios (the sketch after the list makes the point concrete):
· Long-term demand is neither flat nor certain; demand will probably increase, but anything could happen over the next 15 years.
· Short-term demand is neither flat nor certain; power usage depends on many factors including time of day, day of week, seasonality, and economic booms and busts. In every data center we've looked at, average power consumption is well below the worst-case peak forecast.
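To make the just-in-time option concrete, here is a toy model; the demand ramp and the "capital-years" metric are purely illustrative assumptions of ours, not data from any real facility:

```python
# Toy model only: the linear demand ramp, the 15-year horizon, and the
# "capital-years" metric (dollars committed, summed over the years they
# stay committed) are illustrative assumptions, not data.

YEARS = 15
CONDO_COST = 100e3        # capital per 48-server condo (from the table above)
DC_COST = 200e6           # purpose-built center, sized for the year-15 peak
TOTAL_CONDOS = 1125

# Hypothetical demand ramp: fraction of full capacity needed each year.
demand = [0.2 + 0.8 * year / (YEARS - 1) for year in range(YEARS)]

# Purpose-built: all capital committed in year 0 for the worst-case forecast.
dc_capital_years = DC_COST * YEARS

# Condos: buy just enough units each year to cover that year's demand
# (never selling, to keep the comparison conservative).
owned = 0
condo_capital_years = 0.0
for fraction in demand:
    owned = max(owned, round(fraction * TOTAL_CONDOS))
    condo_capital_years += owned * CONDO_COST

print(f"capital-years committed: purpose-built ${dc_capital_years / 1e9:.1f}B-yr, "
      f"condos ${condo_capital_years / 1e9:.1f}B-yr")
```

Under any demand curve that reaches the forecast peak only late in the facility's life, the incremental buyer commits far less capital far later, and being able to sell condos when demand falls would widen the gap further.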
How could condos compete with, or even approach, the cost of a purpose-built facility sited where land is cheap and power is cheaper? One factor is that condos are built in large numbers and are effectively "commodity parts". Another is that most data centers are over-engineered: they include redundancy, such as uninterruptible power supplies, that the condo solution doesn't. The condo solution gets its redundancy from many micro data centers and the ability to endure failures across the fabric; when some of the non-redundantly powered micro-centers are down, the others carry the load. (Clearly, achieving this application-level redundancy requires additional application investment.)
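Here is a rough sketch of that argument; the 99% single-condo availability is a guessed figure for illustration, not a measurement:

```python
# Assumed numbers, not measurements: each condo micro-center has no UPS or
# generator, so suppose it is independently up only 99% of the time.  What
# does that mean for the fleet as a whole?

from math import comb

N = 1125           # condo micro-data centers in the fabric
p = 0.99           # assumed availability of a single non-redundant condo
target = 0.98      # we'd like at least 98% of aggregate capacity online

k_needed = int(N * target)
prob_meeting_target = sum(comb(N, k) * p**k * (1 - p)**(N - k)
                          for k in range(k_needed, N + 1))

print(f"expected fraction of capacity online: {p:.1%}")
print(f"P(at least {target:.0%} of capacity online): {prob_meeting_target:.4f}")
```

With more than a thousand independent sites, losing a handful at any given moment barely dents aggregate capacity, which is exactly the property the application-level redundancy needs to exploit.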
One particularly interesting factor is that when you buy large quantities of power for a data center, it is delivered by the utility in high-voltage form. These high-voltage feeds (usually in the 10,000 to 20,000 volt range) need to be stepped down to lower working voltages, which brings efficiency losses; distributed throughout the data center, which brings further losses; and eventually delivered to the critical load at the working voltage (240VAC is common in North America, with some devices using 120VAC). The power distribution system represents approximately 40% of the total cost of the data center. Included in that number are the backup generators, step-down transformers, power distribution units, and uninterruptible power supplies. Ignoring the UPS and generators, since we're comparing non-redundant power, two interesting factors jump out: 1) the cost of the power distribution system, excluding power redundancy, is 10 to 20% of the cost of the data center, and 2) the power losses through distribution run 10 to 12% of the power brought into the center.
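Putting rough numbers on what that means for the true cost of a kilowatt-hour at the critical load (a sketch; the loss and capital fractions are those quoted above, and the 15-year straight-line amortization is our simplification):

```python
# The loss and capital fractions are the ones quoted above; the 15-year
# straight-line amortization (no cost of capital) is our own simplification.

UTILITY_RATE = 0.03            # $/kWh negotiated at the meter
DIST_LOSS = 0.11               # ~10-12% lost in step-down and distribution
DIST_CAPITAL = 0.15 * 200e6    # 10-20% of facility cost, excluding UPS/generators
AMORT_YEARS = 15
FEED_KW = 13.5 * 1000          # power drawn from the utility
HOURS = 24 * 365

energy_bill = UTILITY_RATE * FEED_KW * HOURS          # you pay for what you draw
gear_per_year = DIST_CAPITAL / AMORT_YEARS            # annual share of distribution gear
kwh_delivered = FEED_KW * HOURS * (1 - DIST_LOSS)     # what reaches the critical load

effective_rate = (energy_bill + gear_per_year) / kwh_delivered
print(f"effective cost at the critical load: ${effective_rate:.3f}/kWh "
      f"(headline rate ${UTILITY_RATE:.2f}/kWh)")
```

Even with the redundancy gear excluded, the effective rate lands well above the headline $0.03/kWh negotiated at the meter.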
All of this is somewhat ironic: a single-family dwelling gets split-phase 120/240VAC delivered directly to the home (240VAC between the two legs, or 120VAC between either leg and neutral). All the power lost through step-down transformers (usually in the 92 to 96% efficiency range) and all the power lost through distribution (which depends upon the size and length of the conductors) is paid for by the power company. But if you buy huge quantities of power, as we do in large data centers, the power company delivers high-voltage lines to the property and you pay the substantial capital cost of the step-down transformers and, in addition, pay for the power distribution losses. Ironically, if you don't buy much power, the infrastructure is free; if you buy huge amounts, you need to pay for the infrastructure. In the case of condos, the owners pay for the inside-the-building distribution, so they sit somewhere between single-family dwellings and data centers: they pay for part of the infrastructure, but not as much as a data center does.
Perhaps the power companies have found a way to segment the market into consumer vs. business. Businesses pay more because they are willing to pay more: just as businesses pay more for telephone service and air travel, they also pay more for power. Despite the great deals we've been reading about, data centers are actually paying more for power than consumers once the capital costs are factored in. Thus it is a mistake to move computation from the home to the cloud, because doing so moves the cost structure from consumer rates to business rates.
The condo solution might be pushing the limit a bit, but whenever we see a crazy idea come within a factor of two of what we are doing today, something is wrong. Let's go pick some low-hanging fruit.
Ken Church & James Hamilton
{Church, JamesRH} @microsoft.com
Chicago is a step in the right direction in that half the center is containerized and it's not built with the expensive redundancy normally found in large data centers. Unfortunately, Mike Manos hasn't released the detailed specs behind this center beyond what was reported in his Data Center World presentation a couple of weeks back.
You were asking why build a large data center like Chicago rather than using many redundant smaller centers. Essentially, you ask: why not RAID C? The short answer is: don't let the bigness of the individual facilities confuse you. Microsoft and Google are building a LOT of data centers; they may be big individually, but there are many of them. Although the public details are sparse, some RAID C ideas are being used.
–jrh
James, I haven't had a chance to see if you or someone else answered my question from before your blog problem, so here it is again: given the concept of RAID C (Redundant Array of Inexpensive Data Centers) that we've already talked about, what is your opinion on Microsoft's decision to build super-large data centers (500K sq. ft. each) instead of building 10 times more data centers that would each be 10 times smaller (but still large at 50K sq. ft.)? Can we still talk about economies of scale for a 500K sq. ft. data center when you have to build 3 power substations to provide close to 200 MW of power?
thank you
Michel Plante