Energy Efficiency of Cloud Computing

Most agree that cloud computing is inherently more efficient than on-premises computing in each of several dimensions. Last November, I went after two of the easiest gains to argue: utilization and the ability to sell excess capacity (Datacenter Renewable Power Done Right):

Cloud computing is a fundamentally more efficient way to operate compute infrastructure. The increases in efficiency driven by the cloud are many, but a strong primary driver is increased utilization. All companies have to provision their compute infrastructure for peak usage. But they only monetize the actual usage, which goes up and down over time. What this leads to is incredibly low average utilization levels, with 30% being extraordinarily high and 10 to 20% the norm. Cloud computing gets an easy win on this dimension. When non-correlated workloads from a diverse set of customers are hosted in the cloud, the peak-to-average ratio flattens dramatically, and effective utilization immediately skyrockets. Where 70 to 80% of resources are usually wasted, utilization climbs rapidly with scale as cloud computing services flatten the peak-to-average ratio.
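The pooling effect above can be illustrated with a small simulation. The workload model here (100 customers, a steady baseline with occasional independent bursts) and all the numbers are my own illustrative assumptions, not figures from the post; the point is only that the peak of a combined load is far smaller than the sum of individual peaks:

```python
import random

random.seed(42)

HOURS = 24 * 30     # one month of hourly samples
CUSTOMERS = 100

# Each customer: baseline of 10 units, with a 5% chance per hour of a
# burst adding up to 90 more units (independent across customers).
loads = [[10 + (90 if random.random() < 0.05 else 0) * random.random()
          for _ in range(HOURS)] for _ in range(CUSTOMERS)]

# On-premises model: every customer provisions for their own peak.
solo_capacity = sum(max(series) for series in loads)
solo_used = sum(sum(series) for series in loads)
solo_util = solo_used / (solo_capacity * HOURS)

# Cloud model: one provider provisions for the peak of the combined load.
combined = [sum(series[h] for series in loads) for h in range(HOURS)]
pooled_capacity = max(combined)
pooled_util = sum(combined) / (pooled_capacity * HOURS)

print(f"per-customer provisioning utilization: {solo_util:.0%}")
print(f"pooled provisioning utilization:       {pooled_util:.0%}")
```

With these assumptions the per-customer figure lands in the 10 to 20% range described above, while the pooled figure is several times higher, because uncorrelated bursts rarely coincide.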

To further increase the efficiency of the cloud, Amazon Web Services added an interesting innovation where they sell the remaining capacity not fully consumed by this natural flattening of the peak to average. These troughs are sold on a spot market, and customers are often able to buy computing at less than the amortized cost of the equipment they are using (Amazon EC2 Spot Instances). Customers get a clear benefit. And, it turns out, it's profitable to sell unused capacity at any price over the marginal cost of power, so the provider gets a clear benefit as well. And, with higher utilization, the environment gets a clear benefit as well.
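The break-even argument is simple arithmetic: the hardware cost is sunk whether the server idles or works, so only power is incremental. A minimal sketch, using made-up numbers (not AWS figures) for amortized hardware cost, power draw, and electricity price:

```python
# Illustrative assumptions, not real AWS or datacenter figures:
amortized_hw_cost = 0.08          # $/hour, sunk cost whether idle or busy
power_draw_kw = 0.300             # kW drawn when running work
power_price = 0.10                # $/kWh

# Marginal cost of actually running work is just the incremental power.
marginal_cost = power_draw_kw * power_price       # $/hour

# A spot price well below full amortized cost is still profitable.
spot_price = 0.05                 # $/hour
incremental_profit = spot_price - marginal_cost   # $/hour

print(f"marginal cost:      ${marginal_cost:.3f}/h")
print(f"incremental profit: ${incremental_profit:.3f}/h at ${spot_price:.2f}/h spot")
```

Under these assumptions the spot price is below the amortized hardware cost, yet every spot-hour sold still yields positive incremental profit, which is why selling troughs at any price above the marginal cost of power is a win for the provider.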

Back in June, Lawrence Berkeley National Labs released a study that went after the same question quantitatively and across a much broader set of dimensions. I first came across the report via coverage in Network Computing: Cloud Data Centers: Power Savings or Power Drain? The paper was funded by Google, which admittedly has an interest in cloud computing and high-scale computing in general. But, even understanding that possible bias or influence, the paper is of interest. From Google's summary of the findings (How Green is the Internet?):

Funded by Google, Lawrence Berkeley National Laboratory investigated the energy impact of cloud computing. Their research indicates that moving all office workers in the United States to the cloud could reduce the energy used by information technology by up to 87%.

These energy savings are mainly driven by increased data center efficiency when using cloud services (email, calendars, and more). The cloud supports many products at a time, so it can more efficiently distribute resources among many users. That means we can do more with less energy.

The paper attempts to quantify the gains achieved by moving workloads to the cloud by looking at all relevant dimensions of savings, some fairly small and some quite substantial. From my perspective, there is room to debate any of the data one way or the other, but the case is sufficiently clear that it's hard to argue there aren't substantial environmental gains.

Now, what I would really love to see is an analysis of an inefficient, poorly utilized private datacenter whose operators want to be green and need to compare the installation of a fossil-fuel-consuming fuel cell power system with dropping the same workload down onto one of the major cloud computing platforms :-).

The full paper is available at: The Energy Efficiency Potential of Cloud-Based Software: A US Case Study.

James Hamilton

3 comments on “Energy Efficiency of Cloud Computing”
  1. There are many beauties (that’s what I like to call them) of cloud computing and consolidation of IT resources, which has resulted in less IT equipment being required and eventually leads to energy conservation.

  2. Sambaran asked: if it’s such a big win, why haven’t all companies done it? Even big wins require change, and there is always a certain amount of inertia around the status quo. Startups have no installed base and no inertia, so they are able to take the win and get the nimbleness and cost savings immediately. For larger enterprises, some jump right away because they see a potential strategic advantage. Some move more slowly. And I suspect there will be some that haven’t done it 10 years from now, just as there are still companies running IBM mainframes.

    Changing your IT deployment model is work, and it’s not on the priority list of some companies even if they would gain in IT nimbleness and reduced costs. It’s just not yet a top-10 problem for them. And, of course, there are many suppliers accustomed to enterprise profit margins offering "private clouds," cloud-like products, and the like. The easy status quo is definitely appealing.

    Remember the move from internal ERP applications to SAP and its competitors. The gains were pretty substantial, but even that transition took time.

    Generally, all transitions take many years to fully play out. There will always be leaders that jump out front and those that hold back. The transitions that deliver the most value tend to happen faster, but none happen overnight.

  3. Cloud-based servers (AWS, et al.) are better than private datacenters in many aspects, as you explained above. What are the reasons for people to stick with private datacenters? I can think of a few:
    1. Security paranoia: my data in my location.
    2. Operational robustness in case of calamities or competition.
    3. Maybe there are applications where there is a performance boost from having one’s own private datacenter.
    Are there more reasons why big corporations (not just startups) may prefer private datacenters over cloud servers?
