Should we Shut Off Servers?

In a comment on the last blog entry, Cost of Power in Large-Scale Data Centers, Doug Hellmann brought up a super interesting point:

It looks like you’ve swapped the “years” values from the Facilities Amortization and Server Amortization lines. The Facilities Amortization line should say 15 years, and Server 3. The month values are correct, just the years are swapped.

I wonder if the origin of “power is the biggest cost” is someone dropping a word from “power is the biggest *manageable* cost”? If there is an estimated peak load, the server cost is fixed at the rate necessary to meet the load. But average load should be less than peak, meaning some of those servers could be turned off or running in a lower power consumption mode much (or most) of the time.


Doug Hellmann

Yes, you’re right: the amortization periods in the formula in Cost of Power in Large-Scale Data Centers were swapped as posted. Thanks to you, Mark Verber, and Ken Church for catching this.

You brought up another important point that is worth digging deeper into. You point out that we need to buy enough servers to handle peak load, and argue that we should shut off those we are not using. This is another one of those points I’ve heard frequently and am not fundamentally against but, as always, it’s more complex than it appears. There are two issues here: 1) you can actually move some workload from the peak to the valley through a technique that I call Resource Consumption Shaping, and 2) turning servers off isn’t necessarily the right mechanism for running more efficiently. Let’s look at each:

Resource Consumption Shaping is a technique that Dave Treadwell and I came up with last year. I’ve not blogged it in detail (I will in the near future), but the key concept is prioritizing workload into at least two classes: 1) customer waiting and 2) customer not waiting. For more detail, see page 22 of the talk Internet-Scale Service Efficiency from Large Scale Distributed Systems & Middleware (LADIS 2008). The “customer not waiting” class includes reports, log processing, re-indexing, and other administrative tasks. Resource consumption shaping argues that you should move “customer not waiting” workload from peak periods to off-peak times, where you can process it effectively for free since you have already paid for the servers and power. Resource consumption shaping builds upon Degraded Operations Mode.
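The core idea can be sketched in a few lines. This is a hypothetical illustration, not the actual implementation; the class names, the utilization threshold, and the two-level priority scheme are my own assumptions for the sketch:

```python
import heapq

# Hypothetical sketch of resource consumption shaping: two priority
# classes, with "customer not waiting" work deferred until off-peak.
CUSTOMER_WAITING, NOT_WAITING = 0, 1

class Shaper:
    def __init__(self, peak_threshold=0.75):
        self.peak_threshold = peak_threshold  # utilization above this counts as peak
        self.deferred = []                    # heap of deferred batch work

    def submit(self, job, priority, utilization):
        """Run interactive work immediately; defer batch work at peak."""
        if priority == CUSTOMER_WAITING or utilization < self.peak_threshold:
            return job()                      # run now
        heapq.heappush(self.deferred, (priority, id(job), job))
        return None                           # shifted to the valley

    def drain(self, utilization):
        """Called off-peak: run deferred batch work on the idle capacity."""
        results = []
        while self.deferred and utilization < self.peak_threshold:
            _, _, job = heapq.heappop(self.deferred)
            results.append(job())
        return results
```

At peak (say 90% utilization) a re-indexing job is queued rather than run; once load drops, `drain` processes it on capacity that was already bought and powered.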

The second issue is somewhat counter-intuitive. The industry is nearly uniform in arguing that you should shut off servers during non-peak periods. I think Luiz Barroso was probably the first to argue NOT to shut off servers, and we can use the data from Cost of Power in Large-Scale Data Centers to show that Luiz is correct. The short form of the argument goes like this: you have already paid for the servers, the cooling, and the power distribution. Shutting a server off saves only the power it would have consumed. So it’s a mistake to shut servers off as long as you have workload to run whose marginal value exceeds the marginal cost of the power they consume, since everything else is already paid for. If you can’t come up with any workload worth more than the marginal cost of power, then I agree you should shut them off.
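A back-of-envelope calculation makes the point concrete. The numbers below are illustrative assumptions of mine, not the figures from the earlier post, but the shape of the result holds: the power you can save by shutting a server off is a minority of its fully-burdened monthly cost.

```python
# Back-of-envelope: why shutting a server off saves less than it seems.
# All dollar figures and the power draw are illustrative assumptions.

server_price = 1500.0                       # $, amortized over 3 years
server_monthly = server_price / (3 * 12)

facility_per_server = 2000.0                # $ share of power/cooling
facility_monthly = facility_per_server / (15 * 12)  # amortized over 15 years

watts = 200.0                               # average draw of one server
price_per_kwh = 0.07                        # $ per kWh
hours_per_month = 730.0
power_monthly = watts / 1000.0 * price_per_kwh * hours_per_month

total_monthly = server_monthly + facility_monthly + power_monthly

# Shutting the server off saves only the power term; the amortized
# server and facility costs are sunk either way.
savings_fraction = power_monthly / total_monthly
print(f"monthly cost ${total_monthly:.2f}, power only ${power_monthly:.2f} "
      f"({savings_fraction:.0%} of total)")
```

Under these assumptions a powered-off server still costs over $50/month in sunk amortization, so any workload worth more than roughly $10/month of power is worth keeping it on for.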

Albert Greenberg, Parveen Patel, Dave Maltz, and I make a longer form of this argument against shutting servers off in an article to appear in the next issue of SIGCOMM Computer Communication Review. That paper also looks more closely at networking issues.

–jrh

James Hamilton, Data Center Futures
Bldg 99/2428, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh | blog:http://perspectives.mvdirona.com

4 comments on “Should we Shut Off Servers?”
  1. I agree you can easily shut off web servers, Doug. Another option I bring up above, nearly as easy to do, is to run different workloads on these servers when they are not needed for web serving: instead of shutting them off, temporarily repurpose them.

    –jrh, jrh@mvdirona.com

  2. In a multi-tier configuration, it’s quite easy to turn off servers in some tiers while leaving others online for data replication. Web servers are an obvious choice here, since they won’t hold much (or any) data locally that doesn’t already exist elsewhere. By using a SAN, servers in the database tier can be taken offline as well, since the storage server stays online.

  3. Steve, you are correct that, without state replication, you can’t shut off servers. It needn’t be replication at the disk block level, though. You can replicate state at the block level, at the file level, at the table level, at the row level, or even up at the request level. I agree you do need replicated state.

    In your comment you said “You can’t power off a server storing data unless you are sure that there are live replicas with at least one block of the same data, which is tricky enough to work out nobody (I know of) does that yet.” Fortunately, there are many systems that implement this level of replication. The services world is full of them, but so is the commercial software world. From the services world, Google GFS and Microsoft Cosmos both implement multi-way redundancy: you can pull the plug on any server at any time without losing state. From the commercial world, Oracle RAC is a shared-disk clustered database system that can continue operating without data loss when a node goes down. SQL Server database mirroring, when run in synchronous mode, supports this. Oracle has a log-shipping model that supports it as well. EMC has a block-based replicator that supports this model.

    In the services world, where we use commodity servers and disks with somewhat elevated failure rates in very large numbers, you will almost always find redundancy models along these lines. They’re quite common.

    -jrh

  4. One thing that is a real barrier here is data. You can’t power off a server storing data unless you are sure that there are live replicas with at least one block of the same data, which is tricky enough to work out nobody (I know of) does that yet. But what you could do is spin down the HDDs: instead of running at 10K rpm, drop them to 7200 rpm or less, and bring them all the way back up to speed when needed. Some of the IBM laptop disks did throttling this way based on their data queue.

    The other issue is who says you can’t throttle back your datacentre aircon?
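The replica precondition debated in the last two comments (only power off a node when every block it holds has a live copy elsewhere) can be sketched simply. This is a hypothetical illustration using an in-memory placement map, not the mechanism of any particular system named above:

```python
# Hypothetical sketch: a node is safe to power off only if every block
# it stores has at least one live replica on another powered-on node.
def safe_to_power_off(node, placement, powered_on):
    """placement: block_id -> set of node ids holding that block."""
    for block, holders in placement.items():
        if node in holders:
            live_replicas = (holders - {node}) & powered_on
            if not live_replicas:
                return False   # this node holds the last live copy of block
    return True
```

For example, with blocks b1 on {n1, n2} and b2 on {n1, n3}, powering off n1 is safe only while both n2 and n3 remain up; once n3 goes down, n1 holds the last live copy of b2 and must stay on.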
