I love solar power, but after reflecting carefully on a couple of high-profile datacenter deployments of solar power, I’m developing serious reservations that this is the path to reducing data center environmental impact. I just can’t make the math work, and I find myself wondering if these large solar farms are really somewhere between a bad idea and pure marketing, where the environmental benefit is purely optical.
Facebook Prineville
The first of my two examples is the high-profile installation of a large solar array at the Facebook Prineville, Oregon facility. The installation of 100 kilowatts of solar power was the culmination of the Unfriend Coal campaign run by Greenpeace. Many in the industry believe the campaign worked. In the purest sense, I suppose it did. But let’s look at the data more closely and make sure this really is environmental progress. What was installed in Prineville was a 100 kilowatt solar array at a more-than-25-megawatt facility (Facebook Installs Solar Panels at new Data Center). Even though this is actually a fairly large solar array, it’s providing only 0.4% of the overall facility power.
Unfortunately, the actual numbers are further reduced by weather and high latitude. Solar arrays produce far less than their rated capacity due to night duration, cloud cover, and other weather effects. I really don’t want to hurt my Seattle recruiting pitch too much, but let’s just say that occasionally there are clouds in the Pacific Northwest :-). Clearly there are fewer clouds at 2,868’ elevation in the Oregon desert but, even at that altitude, the sun spends the bulk of the time poorly positioned for power generation.
Using this solar panel output estimator, we can see that panels at this location and altitude yield an effective output of 13.75%. That means that, on average, this array will put out only 13.75 kilowatts. That would have this array contributing 0.055% of the facility power or, worded differently, it might run the lights in the datacenter, but it has almost no measurable impact on the overall energy consumed. Although this is pointed to as an environmentally conscious decision, it really has close to no influence on the overall environmental impact of this facility. As a point of comparison, this entire solar farm produces approximately as much output as one high-density rack of servers consumes. Just one rack of servers is not success: it doesn’t measurably change the coal consumption, and it almost certainly isn’t good price/performance.
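The arithmetic above is simple enough to sanity-check. Here is a quick sketch in Python using the figures from this post (the 13.75% effective output comes from the output estimator linked above):

```python
# Sanity check of the Prineville numbers (inputs taken from this post).
ARRAY_CAPACITY_KW = 100      # rated capacity of the solar array
FACILITY_LOAD_KW = 25_000    # facility load, stated as "more than 25MW"
CAPACITY_FACTOR = 0.1375     # 13.75% effective output from the estimator

avg_output_kw = ARRAY_CAPACITY_KW * CAPACITY_FACTOR
share = avg_output_kw / FACILITY_LOAD_KW

print(f"average output: {avg_output_kw:.2f} kW")   # 13.75 kW
print(f"share of facility power: {share:.3%}")     # 0.055%
```

Even doubling the capacity factor wouldn’t change the conclusion: the array would still supply roughly a tenth of one percent of the facility load.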
Having said that the Facebook solar array is very close to purely marketing expense, I hasten to add that Facebook is one of the most power-efficient and environmentally-focused large datacenter operators. Ironically, they are in fact very good environmental stewards, but the solar array isn’t really a material contributor to what they are achieving.
Apple iDataCenter, Maiden, North Carolina
The second example I wanted to look at is Apple’s facility at Maiden, North Carolina, often referred to as iDataCenter. In the Facebook example discussed above, the solar array was so small as to have nearly no impact on the composition or amount of power consumed by the facility. In this example, however, the solar farm deployed at the Apple Maiden facility is absolutely massive. In fact, this photovoltaic deployment is reported to be the largest commercial deployment in the US at 20 megawatts. Given the scale of this deployment, it has a far better chance to work economically.
The Apple Maiden facility is reported to have cost $1B for the 500,000 sq ft datacenter. Apple wisely chose not to announce its power consumption numbers publicly, but estimates have run as high as 100 megawatts. If you conservatively assume that only 60% of the square footage is raised floor and that it averages a fairly low 200W/sq ft, the critical load would still be 60MW (the same as the 700,000 sq ft Microsoft Chicago datacenter). At a moderate Power Usage Effectiveness (PUE) of 1.3, Apple Maiden would be at 78MW of total power. Even using these fairly conservative numbers for a modern datacenter build, that is huge, and the actual number is likely somewhat higher.
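For anyone who wants to adjust the assumptions, the estimate works out as follows (all inputs are this post’s assumptions, not Apple-published figures):

```python
# Reconstructing the Maiden load estimate (all inputs are assumptions
# from this post, not Apple-published figures).
TOTAL_SQ_FT = 500_000
RAISED_FLOOR_FRACTION = 0.60   # conservative share that is raised floor
WATTS_PER_SQ_FT = 200          # fairly low power density
PUE = 1.3                      # moderate Power Usage Effectiveness

critical_load_mw = TOTAL_SQ_FT * RAISED_FLOOR_FRACTION * WATTS_PER_SQ_FT / 1e6
total_power_mw = critical_load_mw * PUE

print(f"critical load: {critical_load_mw:.0f} MW")   # 60 MW
print(f"total power:   {total_power_mw:.0f} MW")     # 78 MW
```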
Apple elected to put in a 20MW solar array at this facility. Again, using the location and elevation data from Wikipedia and the solar array output model referenced above, we see that the Apple location is more solar friendly than Oregon: the 20MW photovoltaic deployment has an average output of 15.8%, which yields 3.2MW.
The solar array requires 171 acres of land, which is 7.4 million sq ft. What if we were to build a solar array large enough to power the entire facility using these solar output and land consumption numbers? To supply all the power of the facility, the array would need to be 24.4 times larger: a 488 megawatt array requiring 4,172 acres, which is 181 million sq ft. That means a 500,000 sq ft facility would require 181 million sq ft of power generation or, converted to a ratio, each data center sq ft would require 362 sq ft of land.
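Putting the scaling argument in one place (Python, reproducing the post’s figures to within rounding):

```python
# Scaling the Maiden array to carry the full estimated 78MW load,
# using the figures from this post (results match it to within rounding).
ARRAY_CAPACITY_MW = 20
ARRAY_ACRES = 171
AVG_OUTPUT_MW = 3.2            # 20MW at the 15.8% effective output
FACILITY_LOAD_MW = 78
DATACENTER_SQ_FT = 500_000
SQ_FT_PER_ACRE = 43_560

scale = FACILITY_LOAD_MW / AVG_OUTPUT_MW         # ~24.4x bigger
needed_capacity_mw = ARRAY_CAPACITY_MW * scale   # ~488 MW rated
needed_acres = ARRAY_ACRES * scale               # ~4,170 acres
land_ratio = needed_acres * SQ_FT_PER_ACRE / DATACENTER_SQ_FT

print(f"{scale:.1f}x the array, {needed_acres:,.0f} acres, "
      f"{land_ratio:.0f} sq ft of solar per sq ft of datacenter")
```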
Do we really want to give up that much space at each data center? Most data centers are in highly populated areas, where a ratio of 1 sq ft of datacenter floor space to 362 sq ft of power generation space is ridiculous on its own, and made close to impossible by the generation space needing to be unshadowed. There isn’t enough rooftop space across all of NY to take this approach. It is simply not possible in that venue.
Let’s focus instead on large datacenters in rural areas where the space can be found. Apple is reported to have cleared trees off 171 acres of land in order to provide photovoltaic power for 4% of their estimated overall data center consumption. Is that gain worth clearing and consuming 171 acres? In Apple Planning Solar Array Near iDataCenter, Rich Miller of Data Center Knowledge quotes local North Carolina media reporting that “local residents are complaining about smoke in the area from fires to burn off cleared trees and debris on the Apple property.”
I’m personally not crazy about clearing 171 acres in order to supply only 4% of the power at this facility. There are many ways to radically reduce aggregate data center environmental impact without as much land consumption. Personally, I look first to increasing the efficiency of power distribution, cooling, storage, networking, and servers, and to increasing overall utilization, as the best routes to lowering industry environmental impact.
Looking more deeply at the solar array at Apple Maiden, the panels are built by SunPower. SunPower is reportedly carrying $820m in debt and has received a $1.2B federal government loan guarantee. The panels are built on taxpayer guarantees and installed using taxpayer-funded tax incentives. It might possibly be a win for the overall economy but, as I work through the numbers, that seems less clear. And, after the spectacular failure of solar cell producer Solyndra, which went bankrupt holding a $535 million federal loan guarantee, it’s obvious there are large costs being carried by taxpayers in these deployments. Generally, as much as I like data centers, I’m not convinced that taxpayers should be paying to power them.
As I work through the numbers from two of the most widely reported upon datacenter solar array deployments, they just don’t seem to balance out positively without tax incentives. I’m not convinced that having the tax base fund datacenter deployments is a scalable solution. And, even if it could be shown that this will eventually become tax neutral, I’m not convinced we want to see datacenter deployments consuming 100s of acres of land on power generation. And, when trees are taken down to allow the solar deployment, it’s even harder to feel good about it. From what I have seen so far, this is not heading in the right direction. If we had $x dollars to invest in lowering datacenter environmental impact and the marketing department was not involved in the decision, I’m not convinced the right next step will be solar.
James Hamilton
e: jrh@mvdirona.com
w: http://www.mvdirona.com
b: http://blog.mvdirona.com / http://perspectives.mvdirona.com
I got several notes from datacenter designers miffed that I had called the quality of their facilities into question. For those who read the article this way, that’s really not the case. I strongly suspect that the Apple facility is a good modern design and, having personally visited the Facebook center, I know without a doubt that it is an innovative and very environmentally efficient design. My summary of the Facebook facility from the article: “Facebook is one of the most power-efficient and environmentally-focused large datacenter operators. Ironically, they are in fact very good environmental stewards, but the solar array isn’t really a material contributor to what they are achieving.”
These are well engineered facilities. In fact, that’s one of my points. Most datacenters are not well engineered. The vast majority of datacenter square footage is in small datacenters downtown where solar isn’t even a possibility. If you want to positively impact overall datacenter power and environmental footprint, solar doesn’t look like the answer.
My final paragraph from the original note probably best summarizes my thinking: As I work through the numbers from two of the most widely reported upon datacenter solar array deployments, they just don’t seem to balance out positively without tax incentives. I’m not convinced that having the tax base fund datacenter deployments is a scalable solution. And, even if it could be shown that this will eventually become tax neutral, I’m not convinced we want to see datacenter deployments consuming 100s of acres of land on power generation. And, when trees are taken down to allow the solar deployment, it’s even harder to feel good about it. From what I have seen so far, this is not heading in the right direction. If we had $x dollars to invest in lowering datacenter environmental impact and the marketing department was not involved in the decision, I’m not convinced the right next step will be solar.
–jrh
Mike asked if a solar farm could provide grid stabilization. It’s technically possible, but it would take a very large aggregate farm to substantially reduce the peak power level across the entire grid. But, as mentioned above, many utilities offer better rates for customers willing to shed load during peak periods. In areas where the peak periods are typically high-sun periods, this could work, both lowering costs and reducing the environmental impact of building more peak load facilities, most of which are fossil fuel powered.
I may not have understood this point: "Could you effectively use the increased power produced in the solar array to offset the daily increase in power needs for cooling when you are trying to use outside air as the primary cooling method?" I think you mean that, if you are running free-air cooling, you may need power for the mechanical systems just during the hottest portions of the day, which are usually sunny. On this model, the utility would provide the power for the critical load (servers, storage, and networking), air moving, and power distribution losses. The solar farm would deliver the power for process-based cooling on those hot days where needed. Yes, good suggestion. I could imagine this working in some geographies.
Mike’s final point is that the world would be better if datacenter innovations were broadly shared industry-wide. Possibly true. The counter argument is that the R&D is funded by shareholders who need to get value from the innovation they are funding. If there were no economic gain, there might be less R&D. Facebook has been by far the most open of all datacenter operators — impressively so — so clearly it’s possible to do this at least in some technology sectors with some business models.
–jrh
Great post, I agree that most of the time solar doesn’t pencil out as a viable alternative energy source for data centers. The intermittent nature and low efficiency of the current systems make them difficult to deploy outside of a small range of geographies. That being said, there are other considerations. What if the local utility wanted a solar installation for grid stabilization or other system-wide needs, and the data center owner had faster access to cash and the decision-making ability to act quickly? Could they negotiate better rates on the rest of the electricity bill to make the investment worthwhile? Could you effectively use the increased power produced in the solar array to offset the daily increase in power needs for cooling when you are trying to use outside air as the primary cooling method? I think this would be a great industry project to ferret out. The data center community, in my humble opinion, could be doing a lot more about sharing basic technology research in the way that the auto industry has done. The car makers obviously don’t disclose future market plans, or aesthetic designs, but they regularly collaborate on production methods and basic research on materials and tooling. Could the planet be a better place if the best practices in data center design were really shared?
Jim asks why not place datacenters in cold locations to save money on cooling. Climate is a factor in the location selection decision, but there are lots of other factors: 1) proximity to the users being served, 2) low cost power, 3) tax environment, 4) robust and low cost network availability, 5) construction cost, and many others. Many if not most datacenters are small facilities built on the same site as the office space. This really makes no sense, in that office space is expensive and it’s hard to get enough power density, but that’s where most end up. Many datacenters that are in a dedicated location are built to be accessible from the office space. Again, this isn’t necessary and it isn’t the best factor to use when locating facilities, but it’s very common to want to build in the same city as HQ and often on the same site.
High scale facilities like those mentioned in the original article make the decision much more carefully. Weather ends up being a factor, but not a dominant one. Air side economization (the use of outside air) works surprisingly well even in very hot areas: some because they are very cold during a big part of the 24 hour period, and some because they are low humidity, which makes evaporative coolers quite effective.
The next question you asked is why not transport green power from a solar friendly location. The short answer is that power transports poorly, with large transmission losses, and moving it requires very expensive infrastructure. To transport power efficiently, it is carried on very high voltage transmission lines with large towers that require a right of way end to end. The towers aren’t pretty, and some citizens on every route are concerned with the (unproven) medical implications of living near high voltage transmission.
It’s neither easy nor cheap to transport power, but it’s a good question.
–jrh
Interesting post, thanks James.
It seems to me there are a few core things here:
– solar works better in sunny places
– those sunny places are likely to be hot, ie using a fair bit of power to keep cool
– data centers need to be kept cool, so will use a lot of power to cool themselves, perhaps the difference is more than the solar array would generate (ac uses a lot of power, right?)
– why not build the data centers in a cold area, thereby using the local natural temperature to reduce power usage for cooling the servers, and build a solar generating station (or wind turbine or wave-power generator) in the best area for that.
I.e., why does the generating area have to be in the same place? (Or even the same country?)
Like someone suggested, it would be better in the big picture to site green power generators in the optimal place for those generators, which is likely not the optimal place for a data center. I’m assuming if your data center uses x power in one part of the world, and you are greenly generating the same power somewhere else in the world, then your global environmental footprint is going to be tiny, which has to be the ultimate aim.
Assuming server companies don’t really want to get in the power generating business, why not just buy green energy to power the whole thing, from a green power company?
I guess you could argue that Apple is successful in the consumer devices market and therefore is a good designer of datacenters. But, it seems like a stretch to me. Lots of successful companies with huge profit margins make decisions I would prefer not to replicate.
–jrh
This sounds like the guy who says no one would pay 500 dollars for a cell phone with a 2 year contract! Remember Apple gets things done right.
Dmitry commented above that solar works well for datacenters with a highly skewed diurnal load. My response was that it’s important to avoid that situation and ensure that the datacenter and the equipment in it are performing at close to full utilization 24×7. This is important for the environment but also important for costs: since you have to pay for servers, power distribution, shell, mechanical systems, networking, etc., you really need to use them 24×7.
But, an offline note from another reader reminded me to consider different power utility contracts. Most utilities, especially those in hot climates, run much higher loads in the middle of the day. If datacenter operators agree to load shed during these high utility load periods, they can get better power prices. This load shedding can be implemented in two ways: 1) reduce datacenter load during these periods, or 2) rely on alternative power sources.
I’ve argued that the first approach is not a good one from an environmental perspective and, because the cost of all that high-capital equipment dwarfs the cost of power, shutting down the equipment is not a good economic decision. The second approach is the usual one chosen, but the easiest implementation is tough on the environment: run the diesel generators. Solar is an alternative that could avoid running the generators on sunny days. This could be an excellent application for solar or other on-site clean energy sources.
–jrh
jjj brought up two main points: 1) I’m using sq ft to make the space consumption of solar arrays look worse than it really is, and 2) we should look at solar energy even if it doesn’t make sense as a pilot project that could evolve into great things in the future.
Both good points. On the first, you are right that there are more square feet than there are acres or square miles or other bigger measures in any facility. But the ratio is unit-free. Using the Apple datacenter example, we computed that every unit of data center space, whether measured in square miles, light years, hectares, or square feet, requires 362 units of solar. 1:362 is a tough ratio whether using inches, feet, acres, or miles.
In this post, I’m asking whether it is really a good idea to take the space required by every data center and grow it by a factor of 362 in order to support the solar farm, assuming you are in Maiden, NC. The ratios are far worse at higher latitudes. I see lots of ways of improving the environment in the work I do around datacenters, but I’m super uncomfortable with a 362:1 ratio (even if we measure the space consumption in miles).
The second point brought up by jjj is that solar is sufficiently interesting to be worthy of research and perhaps pilot deployments. I agree, but my first rule of research is that the idea has to make sense on paper before going big. This is a good rule for the environment and a great rule for cost control. I love research and investigation and spend most of my time doing that. But I argue we should not go big until there is a clear win, whether your yardstick is cost, reducing environmental impact, or some combination thereof.
Research is vital but, I argue, we need to have a good research result before deploying in production.
–jrh
Dmitry said "A fair set of points, except for one: you can’t average the output of a solar array because load on the datacenter will be cyclical, with a peak generally somewhere at 1-2PM, i.e. when you expect the sun to be in full scorching mode (at least in the summer) and the output of the array to be near the max."
This is a super important point and worth addressing, Dmitry. You are right that if the datacenter is busy during the day and less busy at night, solar could be helpful. But the absolute most important environmental (and cost saving) step is to understand that the overall carbon footprint (and cost) of the facility, all the servers, the networking, the power distribution equipment, and the mechanical systems, is large. The single most important step we can take to reduce environmental impact and cost is to ensure that everything we install is used as close to 100% of the time as possible.
Any operator that is not fully utilizing the equipment at near full load 24×7 is doing a disservice to their customers and the environment. Consequently, the difference in load over a 24 hour day is very close to zero in a well run datacenter. If that isn’t the case, the operator should move to cloud computing or pool their resources with another customer with a non-correlated workload.
The net is that most high scale operators don’t have large peak-to-average differences, so additional power at the center of the day isn’t that helpful. The primary cost and environmental lever in any datacenter is utilization. If utilization is not high, go there first when working on either costs or minimizing environmental impact.
–jrh
Mark made a super interesting suggestion: if a customer in a difficult solar area like the Pacific Northwest wants to invest in solar, they should offer to buy a panel on a facility in a solar friendly area like Phoenix. Interesting idea.
–jrh
Thanks for the comment Ken. As you say, there are applications where solar makes a lot of sense. I just hate to see 362 sq ft of land consumed for every sq ft of datacenter space. It just doesn’t feel like the right overall direction. I have a huge number of technologies and techniques, some implemented and some near, that look like bigger reductions in environmental impact.
–jrh
Amazon Silk is not much of a feature, yet Amazon is pushing it because it’s a start and it will grow into something more useful. Maybe it would be better to look at solar for datacenters as pilot programs, for now. The Gov puts money into solar to help the tech evolve, nothing new there.
Also, you can’t argue that it would be better to do other things to reduce power consumption; it would be better to do all you can do to reduce it.
"almost certainly isn’t good price/performance." – almost certainly is not very precise, actual math would help. As for land usage, the US has very low population density and it might not be that hard to find land that isn’t put to better use.
You also like to use sq ft to make it sound big, but the FB facility is very small: 300k sq ft is just 0.02787 square kilometers, a football field is 57,600 sq ft, and the Apple array, if it is 171 acres, is just 0.692 square kilometers.
A fair set of points, except for one: you can’t average the output of a solar array because load on the datacenter will be cyclical, with a peak generally somewhere at 1-2PM, i.e. when you expect the sun to be in full scorching mode (at least in the summer) and the output of the array to be near the max. At night, depending on the design, some servers could be power-throttled or even turned off, since they’ll be seeing less than half the load. Plus, electricity could be more expensive during the day. If they do this right, the fraction of power needs covered by the array _could_ be higher. Still not quite high enough to justify the expense, though.
Great post and a good discussion. I’ve been saying for years now that I don’t want to see another solar panel in the Pacific NW US (at least west of the Cascades) until every roof in the hotter/sunnier climes is covered with solar panels (electricity or hot water – much better ROI BTW). If you do live where it’s cloudy, you’d be better off paying to install solar on someone else’s roof where it’s sunny. Else it is a huge waste of resources. Once every "sunny" roof is covered, the cost to cover the cloudy roofs should be a lot less as well.
As James took great care to point out, large scale solar next to the DC makes no sense at all given the power densities involved. However, using vast tracts of desert for a hybrid solar thermal plant, where it continues to generate power at night with natural gas, seems to make a lot more sense. It also makes grid balancing a whole lot easier.
Saving on transmission losses (~6%) by generating locally is generally a bad trade-off if the power generation isn’t optimized for your geography. It’s purely symbolism and ultimately an irresponsible waste of resources.
Thanks for spurring the conversation!
I have to agree with you, James, and would posit that solar in industrial applications in general typically doesn’t pay off, unless the kW/sq ft demand is low, such as lights only. Solar pays off in reducing grid demand by homes (and RVs and boats). And doubles as a method to build consumer awareness of how much power we humans do take, and, hopefully, teaches how to conserve. The Apple DC analysis is enlightening to say the least. Thanks.
Anthony, I hear you and have seen some unbelievably bad power supplies in my career as well. Occasionally I’ll still see PSU faults due to low build quality, and occasionally high harmonics, but generally things are getting better.
You correctly point out that 10 years ago a 10 to 20% dynamic range from idle to full load was common. Good servers today can do 55%. You saw the next issue coming: power supply efficiency is terrible at low loads. Three solutions are being employed: 1) tune the efficiency sweet spot below 100% load, since the server is close to never there (put it where the server will likely run); 2) flatten out the curve so that efficiency is largely the same from 30% through 90%; and 3) run multiple supplies and turn them on and off so the active supplies run at peak efficiency. The first technique is really just a patch, and it was the first one employed. Although it is a patch, it actually was reasonably effective. The second is much more recent, and it’s very effective: just make the supply efficiency sweet spot far wider. The final technique is used in shared infrastructure servers, where many servers share power supplies, cooling, and sometimes networking. In these designs, M PSUs power N servers and the number of active supplies is adjusted up and down based upon power draw. This is a very nice technique that has been around for 3 or so years and is starting to get deployed more broadly.
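Technique 3 amounts to a simple controller: enable only as many supplies as the current draw requires, keeping each active PSU in its efficiency sweet spot. A minimal sketch, with entirely hypothetical numbers (2 kW supplies and a 90% upper bound on per-supply load — no specific product is being described):

```python
# Hypothetical sketch of technique 3: a power shelf that enables only as
# many supplies as the current draw requires, keeping each active PSU in
# its efficiency sweet spot. All numbers are invented for illustration.
import math

PSU_RATED_W = 2_000     # rated output per supply (assumed)
BAND_HIGH = 0.90        # top of the assumed efficiency sweet-spot band

def active_psus(total_draw_w: float, installed: int) -> int:
    """Fewest supplies that keep per-PSU load at or below BAND_HIGH,
    capped at the number physically installed."""
    needed = math.ceil(total_draw_w / (PSU_RATED_W * BAND_HIGH))
    return min(max(needed, 1), installed)

# A shelf of 6 supplies feeding servers drawing 5.4kW total:
n = active_psus(5_400, installed=6)
print(n, f"per-PSU load: {5_400 / (n * PSU_RATED_W):.0%}")   # 3, 90%
```

The same loop run the other way (shedding supplies as draw falls) is what keeps the remaining active PSUs out of the terrible low-load region of the efficiency curve.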
Thanks for sharing your experience Anthony.
–jrh
Dave argues that solar makes sense for many applications but datacenters are not the best fit. I agree. The power density in a datacenter is 100 to 1000 times higher than residential, and it’s very hard to rely on solar at these power densities without taking up vast tracts of land.
–jrh
Thanks James
My comments were meant to be taken globally; I was not picking on the US. A 1980s DC in Australia had a PF of 0.67 inductive; there was no incentive to fix the problem, so they didn’t. In ’08, when the utility said they could not have any more power, they just installed PFC to mask the problem.
On the work that I did back in 07/08, the company used Sun M Series, HP AMD, and Cisco as their main IT platforms. The Sun M Series had a power variation from idle to 100% load of only 10% (this figure was confirmed by in-house testing).
With the HP server hardware, the servers came with processor wattages between 60W and 120W. If the lower-wattage processor was selected, the workload-driven power increase was greatly reduced, for a marginal clock speed reduction.
When a new system and network was installed (Sun, HP & Cisco), I had SNMP power rails installed in all of these racks as well. While the system was going through stress and volume testing, I monitored the power at 10 second intervals. The entire variation from no activity to full load was only +15%. It looked like, as one part of the system was stressed, other parts went idle; e.g., as they increased the number of users until the web server response times degraded, the app and DB tiers all did less work. They were unable to find a way to max out utilization of all tiers of the platform.
If they do start to change servers so that power consumption is more responsive to workload, they will face problems with PSU efficiency at low loads. While a lot has been done by 80Plus, there is still a long way to go.
The 100kW array that Facebook installed was very cost-effective … as a marketing expense to "green-up" their datacenter in the eyes of the public. But solar is still pretty interesting.
+1 to the comments about "correlated load" and "it’s not a way to power the DC, it’s a what-to-do-with-the-roof solution". If you look at solar for the wrong use case, it will look very cost inefficient. However, in the right scenario, it can be economically viable.
The first and most obvious distinction is wholesale vs retail power pricing. If the electrical company is building out PV solar, then they still have the overhead costs of distribution and you are stacking solar against wholesale base load power at 2-3c/kWh. However, if solar is installed at the point of consumption, it is not adding load to transmission lines, nor suffering transmission losses, and can be priced against retail power at 8, 10, or 15c per kWh. So solar is cheaper when it is consumed close to the point of origin, but this is in conflict with the traditional utility/consumer relationship as it puts the capital expense on the consumer side.
Second is "load correlation". Matching load to generation is one of the trickiest and most expensive aspects of providing electricity. Coal drives most of the base load but takes 24-48 hours to spin up, so it can’t be used to adjust for daily demand cycles. Natural gas plants can vary their power output much more rapidly and are used to provide the 20-50% duty-cycle peak loads; because of the capital costs and low duty cycle, they are ~4X (or more) as expensive per kWh as coal base load. For occasional spikes, like noon on an unusually hot day, low-capital equipment like plain old diesel generators is used and, measured per kWh, their price is simply astronomical. One of the trickiest bits with renewables like solar and especially wind is that their power output varies, and not necessarily in sync with customer demand. The rule of thumb that I’ve heard repeated is that integrating more than 20% renewables into the mix is simply not feasible without a radical rethink of how we do power.
However, with solar, it’s really interesting that there exist scenarios where solar generation is well correlated with load. Think homes, offices, and datacenters: places where hot summer days = increased cooling power. This means that solar is actually helping to shave peaks off the demand curve. Put another way, it means that for those scenarios to be viable, solar wouldn’t need to beat base-load coal; it would only need to be competitive with the cost of generation from nat-gas peaker plants. Though, note that not many utility companies expose the price differential between peak and base load in a meaningful way.
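A toy blended-rate calculation makes this concrete. The prices are the rough figures from this comment (2-3c/kWh coal base load, ~4x that for gas peakers), purely illustrative:

```python
# Toy blended-rate calculation: what a load pays per kWh when a share of
# its energy must come from expensive peakers. Prices are the rough,
# illustrative figures from this comment, not utility data.
BASE_COST = 0.025    # $/kWh, coal base load (2-3c range)
PEAKER_COST = 0.10   # $/kWh, ~4x base for nat-gas peakers

def blended_cost(base_kwh: float, peak_kwh: float) -> float:
    total_kwh = base_kwh + peak_kwh
    return (base_kwh * BASE_COST + peak_kwh * PEAKER_COST) / total_kwh

# A load taking just 20% of its energy at peak pays 60% more per kWh
# than one served entirely by base load:
print(f"{blended_cost(80, 20):.3f} $/kWh")   # 0.040 vs 0.025 base
```

Solar that displaces exactly those peak kWh is competing against the 10c figure, not the 2.5c one, which is the point above.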
Still, a datacenter is probably a sub-optimal use case. Besides the obvious disparity in power density between solar generation and some of the most power-dense buildings on earth, datacenter power consumption is on such a scale that the "retail rate" is pretty close to the "wholesale rate" due to factors like long-term contract pricing, deliberate siting near generation sources like hydro, and taking delivery at 13kV "med voltage".
On the other end of the spectrum … I've got a buddy doing large-scale residential development in AZ. He's currently pretty excited about using solar for profit making. His company builds out an apartment complex and rents it out. Because the management already has a billing relationship with every tenant, there is a low marginal cost to add a new billing line item for "electricity usage" and take over responsibility for billing power consumption. The company is already planning to do this so that they can pocket the difference between the retail rate to the consumer and the semi-bulk rate from the power company, who sees significant cost savings from not having to send out 200 individual bills and deal with collecting unpaid accounts. The facility can then install rooftop solar, which will generate very well in AZ (~35% of stated capacity or better), and will be generating the most power during the hottest parts of the day when residential usage is highest. Thus, they will be able to sell every generated watt at retail price and, potentially, if solar turns their facility into an "off-peak" energy consumer, they can use that fact to negotiate a lower rate with the utility.
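The arbitrage described above is easy to put numbers on. Everything below is an assumed illustration except the 200-unit count and the ~35% AZ capacity factor, which come from the comment:

```python
# Sketch of the apartment-complex billing arbitrage; rates are assumptions.
RETAIL_RATE = 0.13       # $/kWh billed to tenants (assumed)
SEMI_BULK_RATE = 0.09    # $/kWh paid to the utility (assumed)
UNITS = 200              # individual bills the utility no longer sends
MONTHLY_KWH_PER_UNIT = 900   # assumed average apartment usage

# Margin from reselling grid power at retail:
resale_margin = UNITS * MONTHLY_KWH_PER_UNIT * (RETAIL_RATE - SEMI_BULK_RATE)

# Rooftop solar: each generated kWh displaces a purchase at the semi-bulk
# rate but is still billed to a tenant at retail, so every solar kWh is
# worth the full retail rate to the owner.
SOLAR_KW = 250           # assumed array size
CAPACITY_FACTOR = 0.35   # ~35% in AZ, per the comment
HOURS_PER_MONTH = 730
solar_kwh = SOLAR_KW * CAPACITY_FACTOR * HOURS_PER_MONTH
solar_revenue = solar_kwh * RETAIL_RATE

print(round(resale_margin), round(solar_revenue))  # 7200 8304
```

Under these assumed rates, the solar revenue is comparable to the entire billing-spread margin, which is why the model pencils out for the developer even before any off-peak rate renegotiation.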
Thanks for the comment Anthony. Overall, I understand most of what you have above. I quibble slightly on the "operators would be charged 20% more if the power was charged at kVAh rather than kWh". Most utilities require close to unity power factor these days so I wouldn’t expect a huge difference between these two measures.
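The kVAh-vs-kWh point above is simple arithmetic: kWh measures real power, kVAh measures apparent power, and the two differ by the power factor. A minimal sketch:

```python
# kVAh billing penalizes low power factor: kVAh = kWh / power_factor.
def kvah_surcharge(power_factor):
    """Fractional bill increase if charged per kVAh instead of per kWh."""
    return 1.0 / power_factor - 1.0

# A 20% increase, as the original comment suggests, implies PF ~ 0.83:
print(round(kvah_surcharge(0.83), 3))  # 0.205, i.e. ~20%

# Near-unity power factor, as most modern facilities run, barely moves the bill:
print(round(kvah_surcharge(0.99), 3))  # 0.01, i.e. ~1%
```

So the disagreement reduces to what power factor you believe a modern facility actually runs at.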
You commented that workload power increases had close to zero impact, with most of the change related to weather. There is no question that weather is a substantial factor, and in a highly utilized data center, workload should stay relatively flat. But the difference in power draw between a lightly loaded server and a highly loaded server has been growing over the last 5 to 7 years. Today, a good server at zero load will draw about 45% of the power of that same server at full load; 10 years ago this dynamic range was closer to 80% through 100%. So there actually is now more room for power draw to change as workload changes. Nonetheless, you are correct that in a well-utilized large facility, the power draw doesn't change much with respect to workload.
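The dynamic-range point above can be sketched with a simple linear idle-to-peak model. The 45% idle fraction (modern) and ~80% (a decade ago) come from the comment; the 500 W peak draw is an assumed example value:

```python
# Linear power model: draw scales from idle to peak with utilization.
def server_power(utilization, peak_watts=500, idle_fraction=0.45):
    """Estimated draw at a utilization in [0.0, 1.0]; assumed example values."""
    idle_watts = peak_watts * idle_fraction
    return idle_watts + (peak_watts - idle_watts) * utilization

print(server_power(0.0))  # 225.0 W at idle (45% of peak, modern server)
print(server_power(1.0))  # 500.0 W at full load

# With the older ~80% idle fraction, the same server idles at 400 W,
# leaving far less room for workload-driven variation:
print(server_power(0.0, idle_fraction=0.80))  # 400.0 W
```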
–jrh
Hi James, I must start off by stating that I agree with your article.
For a full evaluation of alternative energies you need to widen your view:
o Fossil fuels currently receive huge government subsidies, at least $400B per annum (1). This fact is currently ignored in energy comparisons.
o Most DCs are built around cheap power deals and tax incentives from governments, again more subsidies. The benefit, if any, of these subsidies to tax payers is never appropriately explained.
o Utility power should be charged per kVAh, not kWh; this would cause many DCs to take a serious look at their plant or face a 20% increase in their utility bill.
o Supply charges in utility bills are used to subsidize large consumers; these should be reduced so that they are negligible compared to usage charges. This changes the equation so that the user pays.
I have yet to see any real comparisons when it comes to alternative energy, both sides are being economical with the truth.
(1) http://www.energyefficiencynews.com/i/4806/
I noticed in one of the comments the subject of “Workload Power Increases”. In a previous life I studied this in DCs and found it didn’t exist.
For an entire DC the only variation was based on outside air temperature.
For UPS load the variations were all in the noise (±10%) and had no correlation to any known activity.
Thank you for the interesting read.
Dave is onto an interesting question: could we use solar to reduce the heat load on the datacenter while getting power at the same time? That approach has some potential but, using the Maiden NC numbers, the roof area would only deliver 0.3% of the power for the facility. Not super interesting on its own, but your angle of getting 0.3% while at the same time avoiding heat load on the building is an interesting one.
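The structure of that rooftop estimate is easy to reproduce. The Maiden NC inputs aren't given here, so every number below is an assumption chosen only to land in the same sub-1% ballpark:

```python
# Rooftop solar as a fraction of facility power; all inputs are assumed.
ROOF_AREA_M2 = 8000       # assumed usable roof area
PANEL_W_PER_M2 = 150      # assumed rated panel output per square meter
CAPACITY_FACTOR = 0.18    # assumed effective output for the location
FACILITY_MW = 60          # assumed facility draw

avg_solar_kw = ROOF_AREA_M2 * PANEL_W_PER_M2 * CAPACITY_FACTOR / 1000
fraction = avg_solar_kw / (FACILITY_MW * 1000)
print(f"{fraction:.2%}")  # 0.36%
```

The takeaway is structural: a building's roof area grows with its footprint while its power draw grows with rack density, so the rooftop fraction stays tiny for any high-density facility.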
In Dave’s approach we don’t get a substantial change in the grid power consumption, but we do reduce the heat load from the sun while getting 1/4% of the building's power requirement. I think it could work, in some tax jurisdictions it will work, and it is worth thinking through more carefully.
Benjamin asked whether the numbers would "work out better with an economy-wide carbon price in place at, say $23 per ton?" Possibly, yes. The biggest problem with coal and oil is that the fuel itself is essentially free: all the costs are in finding it, extracting it, and bringing it to market, while the raw material and its impact on the environment are priced at essentially zero. Free is a difficult number to beat.
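To get a feel for what a $23/ton carbon price does to coal economics: coal generation emits roughly 1 kg of CO2 per kWh (a commonly cited round figure; treat it as an assumption here), so the adder works out as follows:

```python
# Carbon-price adder to coal generation cost, a rough sketch.
CARBON_PRICE_PER_TON = 23.0   # $/metric ton CO2, per Benjamin's question
KG_CO2_PER_KWH = 1.0          # assumed round figure for coal generation

adder = CARBON_PRICE_PER_TON / 1000 * KG_CO2_PER_KWH  # $/kWh
print(adder)  # 0.023, i.e. about 2.3 cents per kWh
```

An extra ~2.3 cents/kWh is a large fraction of typical wholesale coal generation cost, so a price at that level would meaningfully narrow, though not necessarily close, the gap with renewables.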
I think you are on the right track in setting the price of different fuels correctly and then letting the forces of competition play out.
There are locations where solar is not appropriate and shouldn’t win over alternative clean sources. This is the problem with tax incentives: even unreasonable deployments end up working economically.
I like the approach of ensuring the raw materials costs are in line with their impact on society and then let the free market innovate and find alternatives. However, there are a couple of very powerful lobby groups between here and getting this approach implemented :-).
–jrh
Sunny, you were asking about fuel cells. I’m super interested in them but see some downsides. Today, the economics don’t quite work where power is cheap; however, there are lots of places where power is expensive and others where it is very unreliable. Fuel cells have a good chance of making sense where the power is expensive, the grid unreliable, or the tax incentive strong (e.g. California).
Most fuel cell deployments run on natural gas, which is not a disaster but I’m not in love with it. Emissions when consuming natural gas aren’t a problem, but we are consuming a non-renewable resource. Some run on biogas, which is a super interesting configuration. I plan to write something up in the near future — thanks for the reminder.
–jrh
Great hearing from you Frank and, yeah, I hear you on the understatement. Many folks actually do think that the tax base supporting solar deployments is a good way to get to scale economics earlier. I understand the argument but, for it to work, the volume economics have to kick in fairly quickly or we end up having to support very high-scale and expensive deployments on the backs of the tax base.
–jrh
Hi, James – your analysis seems mostly correct, but I have an alternate proposal for thinking about solar. Solar + datacenter is not an answer to "how do I power my DC?"; it’s an answer to "Hey, I just built this huge building, what do I do with the roof?"
_If_ solar power is cost-effective in the area where you’re building your DC (Oregon probably not, but Texas is starting to become so), then it’s a very reasonable way to shade yourself. And your analysis ignored a small bit of the picture: Solar’s more useful when it’s hot out (cooling load higher) and your customers are awake (workload higher). So while it’s never going to be sufficient to power even a meaningful fraction of a datacenter’s servers, it’s possible that it actually shaves off 0.2% of the peak load. Still roughly meaningless, but again, the question to ask is _only_ about the ROI on installing the solar panels, not whether we can power an entire facility.
Would the numbers work out better with an economy-wide carbon price in place at, say $23 per ton?
Really enjoyed reading this. Would love to hear your thoughts on the other "green" power source for data centers: fuel cells. To start with, I know they will not take acres, and they may help replace a bunch of components in the backup-power stack.
"I’m not convinced that having the tax base fund datacenter deployments is a scalable solution."
I love your powers of understatement!