Friday, February 14, 2014

Most agree that cloud computing is inherently more efficient than on-premises computing in each of several dimensions. Last November, I went after two of the easiest gains to argue: utilization and the ability to sell excess capacity (Datacenter Renewable Power Done Right):

 

Cloud computing is a fundamentally more efficient way to operate compute infrastructure. The efficiency gains driven by the cloud are many, but a strong primary driver is increased utilization. All companies have to provision their compute infrastructure for peak usage, but they only monetize the actual usage, which goes up and down over time. This leads to incredibly low average utilization levels, with 30% being extraordinarily high and 10 to 20% the norm. Cloud computing gets an easy win on this dimension. When non-correlated workloads from a diverse set of customers are hosted in the cloud, the peak-to-average ratio flattens dramatically and effective utilization skyrockets. Where 70 to 80% of resources are usually wasted, utilization climbs rapidly with scale as cloud computing services flatten the peak-to-average ratio.

 

To further increase the efficiency of the cloud, Amazon Web Services added an interesting innovation: selling the remaining capacity not consumed by this natural flattening of the peak-to-average ratio. These troughs are sold on a spot market, and customers are often able to buy computing at less than the amortized cost of the equipment they are using (Amazon EC2 Spot Instances). Customers get clear benefit. And, it turns out, it’s profitable to sell unused capacity at any price over the marginal cost of power, so the provider gets clear benefit as well. And, with higher utilization, the environment gets clear benefit too.
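The utilization argument can be sketched with a quick simulation. All numbers here are illustrative assumptions, not measured data: each customer provisions for its own peak, while the cloud provisions for the aggregate peak of many non-correlated workloads.

```python
import random

random.seed(1)

HOURS = 168  # one week of hourly demand samples

def workload():
    """One customer's hourly demand: a steady base load plus occasional peaks."""
    base = random.uniform(1.0, 5.0)
    return [base + (random.uniform(0, 4 * base) if random.random() < 0.2 else 0)
            for _ in range(HOURS)]

def utilization(demand):
    """Average demand divided by provisioned (peak) capacity."""
    return sum(demand) / len(demand) / max(demand)

customers = [workload() for _ in range(200)]

# On-premises: each customer provisions for its own peak.
solo = sum(utilization(c) for c in customers) / len(customers)

# In the cloud, non-correlated peaks rarely coincide, so the aggregate
# peak-to-average ratio flattens and effective utilization climbs.
pooled = utilization([sum(hour) for hour in zip(*customers)])

print(f"average standalone utilization: {solo:.0%}")
print(f"pooled utilization:             {pooled:.0%}")
```

With these assumed demand shapes, the standalone figure lands in the low-utilization range described above, while pooling pushes effective utilization several times higher.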

 

Back in June, Lawrence Berkeley National Labs released a study that went after the same question quantitatively and across a much broader set of dimensions. I first came across the report via coverage in Network Computing: Cloud Data Centers: Power Savings or Power Drain? The paper was funded by Google, which admittedly has an interest in cloud computing and high-scale computing in general. But, even allowing for that possible bias or influence, the paper is of interest. From Google’s summary of the findings (How Green is the Internet?):

 

Funded by Google, Lawrence Berkeley National Laboratory investigated the energy impact of cloud computing. Their research indicates that moving all office workers in the United States to the cloud could reduce the energy used by information technology by up to 87%.

 

These energy savings are mainly driven by increased data center efficiency when using cloud services (email, calendars, and more). The cloud supports many products at a time, so it can more efficiently distribute resources among many users. That means we can do more with less energy.

 

The paper attempts to quantify the gains achieved by moving workloads to the cloud by looking at all relevant dimensions of savings, some fairly small and some quite substantial. From my perspective, there is room to debate any of the data one way or the other, but the case is sufficiently clear that it’s hard to argue there aren’t substantial environmental gains.

 

Now, what I would really love to see is an analysis of an inefficient, poorly utilized private datacenter whose operators want to be green and need to compare the installation of a fossil-fuel-consuming fuel cell power system with dropping the same workload onto one of the major cloud computing platforms :-).

 

The full paper is available at: The Energy Efficiency Potential of Cloud-Based Software: A US Case Study.

 

James Hamilton 
e: jrh@mvdirona.com 
w: 
http://www.mvdirona.com 
b: 
http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Friday, February 14, 2014 6:37:17 PM (Pacific Standard Time, UTC-08:00)
Services
 Saturday, January 25, 2014

It’s an unusual time in our industry where many of the most interesting server, storage, and networking advancements aren’t advertised, don’t have a sales team, don’t have price lists, and often are never even mentioned in public. The largest cloud providers build their own hardware designs and, since the equipment is not for sale, it’s typically not discussed publicly.

 

A notable exception is Facebook. They are big enough that they do some custom gear but they don’t view their hardware investments as differentiating. That may sound a bit strange -- why spend on something if it is not differentiating? Their argument is that if some other social networking site adopts the same infrastructure that Facebook uses, it’s very unlikely to change social networking market share in any measurable way. Customers simply don’t care about infrastructure. I generally agree with that point. It’s pretty clear that if MySpace had adopted the same hardware platform as Facebook years ago, it really wouldn’t have changed the outcome. Facebook also correctly argues that OEM hardware just isn’t cost effective at the scale they operate. The core of this argument is that custom hardware is needed but social networking customers go where their friends go, whether or not the provider does a nice job on the hardware.

 

I love the fact that part of our industry is able to be open about the hardware they are building but I don’t fully agree that hardware is not differentiating in the social networking world. For example, maintaining a deep social graph is actually a fairly complex problem. In fact, I remember when tracking friend-of-a-friend over 10s of millions of users, a required capability of any social networking site today, was still just a dream. Nobody had found the means to do it without massive costs and/or long latencies. Lower cost hardware and software innovation made it possible and the social network user experience and engagement has improved as a consequence.

 

Looking at a more modern version of the same argument, it has not been cost-effective to store full-resolution photos and videos using today’s OEM storage systems. Consequently, most social networks haven’t done this at scale. It’s clear that storing full-resolution images would be a better user experience, and it’s another example where hardware innovation could be differentiating.

 

Of course, the data storage problem is not restricted to social networks, nor just to photos and video. The world is realizing the incredible value of data at the same time the costs of storage are plummeting. Most companies’ storage assets are growing quickly. Companies are hiring data scientists because even the most mundane bits of operational data can end up being hugely valuable. I’ve always believed in the value of data, but more and more companies are starting to realize that data can be game changing to their businesses. The perceived value is going up fast while, at the same time, the industry is realizing that weekly data is good, daily is better, hourly is a lot better, 5 min is awesome, but you really prefer 1 min granularity. This number keeps falling. The perceived value of data is climbing, the resolution of measurement is becoming finer, and, as a consequence, the amount of data being stored is skyrocketing. Most estimates have data volumes doubling on 12 to 18 month centers -- somewhat faster than Moore’s law. Since all operational data backs up to cold storage, cold storage is always going to be larger than any other data storage category.

 

Next week, Facebook will show work they have been doing in cold storage mostly driven by their massive image storage problem. At OCP Summit V an innovative low-cost archival storage hardware platform will be shown. Archival projects always catch my interest because the vast majority of the world’s data is cold, the percentage that is cold is growing quickly, and I find the purity of a nearly single dimensional engineering problem to be super interesting. Almost the only dimension of relevance in cold storage is cost. See Glacier: Engineering for Cold Data Storage in the Cloud for more on this market segment and how Amazon Glacier is addressing it in the cloud.

 

This Facebook hardware project is particularly interesting in that it’s based upon optical media rather than tape. Tape economics come from a combination of very low-cost media and only a small number of fairly expensive drives. The tape is moved back and forth between storage slots and the drives by robots when needed. Facebook is taking the same basic approach of using robotic systems to allow a small number of drives to support a large media pool. But, rather than using tape, they are leveraging the high-volume Blu-ray disk market, with the volume economics driven by consumer media applications. Expect to see over a petabyte of Blu-ray disks supplied by a Japanese media manufacturer, housed in a rack built by a robotic systems supplier.

 

I’m a huge believer in leveraging consumer component volumes to produce innovative, low-cost server-side solutions. Optical is particularly interesting in this application and I’m looking forward to seeing more of the details behind the new storage platform. It looks like very interesting work.

 

James Hamilton 
e: jrh@mvdirona.com 
w: 
http://www.mvdirona.com 
b: 
http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Saturday, January 25, 2014 6:04:25 PM (Pacific Standard Time, UTC-08:00)
Hardware
 Saturday, January 04, 2014

It’s not often I’m enthused about spending time in Las Vegas but this year’s AWS re:Invent conference was a good reason to be there. It’s exciting getting a chance to meet with customers who have committed their business to the cloud or are wrestling with that decision.

 

The pace of growth since last year was startling, but what really caught my attention was the number of companies that had made the transition from testing on the cloud to committing their most valuable workloads to run there. I fully expected this to happen, but I’ve seen these industry-sweeping transitions before. They normally take a long, long time. The pace of the transition to the cloud is, at least in my experience, unprecedented.

 

I did a 15 min video interview with SiliconANGLE on Wednesday that has been posted:

·         Video: http://www.youtube.com/watch?v=dInADzgCI-s&list=PLenh213llmcYOOqcEd2TE1GHjcK9nwY-Z&index=8

 

I did a talk on Thursday that is posted as well:

·         Slides: http://mvdirona.com/jrh/TalksAndPapers/JamesHamilton_Reinvent20131115.pdf

·         Video: http://m.youtube.com/watch?feature=plpp&v=WBrNrI2ZsCo&p=PLhr1KZpdzukelu-AQD8r8pYotiraThcLV

 

The talk focused mainly on the question: is the cloud really different? Can’t I just get a virtualization platform, run a private cloud, and enjoy the same advantages? To investigate that question, I started by looking at AWS scale, since scale is the foundation on which all the other innovations are built. It’s scale that funds the innovations, and it’s scale that makes even small gains worth pursuing since the multiplier is so large. I then walked through some of the innovations happening in AWS and talked about why they probably wouldn’t make sense for an on-premises deployment. Looking at the innovations, I tried to show the breadth of what was possible by sampling from many different domains including custom high-voltage power substations, custom server design, custom networking hardware, a networking protocol development team, purpose-built storage rack designs, the AWS Spot market, and supply chain and procurement optimization.

 

On scale, I laid out several data points that we hadn’t been talking about externally in the past but still used my favorite that is worth repeating here: Every day AWS adds enough new server capacity to support all of Amazon’s global infrastructure when it was a $7B annual enterprise. This point more than any other underlines what I was saying earlier: big companies are moving to the cloud and this transition is happening faster than any industry transition I’ve ever seen.

 

Each day enough servers are deployed to support a $7B ecommerce company. It happens every day and the pace continues to accelerate. Just imagine the complexity of getting all that gear installed each day. Forget innovation. Just the logistical challenge is a pretty interesting problem.

 

The cloud is happening in a big way, the pace is somewhere between exciting and staggering, and any company that isn’t running at least some workloads in the cloud and learning is making a mistake.

 

If you have time to attend re:Invent next year, give it some thought. More than 1/3 of the sessions are from customers talking about what they have deployed and how they did it and it’s an interesting and educational week. Most of the re:Invent talks are posted at: http://www.youtube.com/user/AmazonWebServices

 

I’ve not been doing as good a job of posting talks to http://mvdirona.com/jrh/work as I should, so here’s a cumulative update:

·         2013.11.14: Why Scale Matters and How the Cloud is Different (talk, video), re:Invent 2013, Las Vegas, NV

·         2013.11.13: AWS The Cube at AWS re:Invent 2013 with James Hamilton (video), re:Invent 2013, Las Vegas NV

·         2013.01.22: Infrastructure Innovation Opportunities (talk), Ycombinator, Mountain View, CA

·         2012.11.29: Failures at Scale and how to Ride Through Them (talk, video), re:Invent 2012, Las Vegas, NV

·         2012.11.28: Building Web Scale Applications with AWS (talk, video), re:Invent 2012, Las Vegas, NV

·         2012.10.09: Infrastructure Innovation Opportunities (talk, video), First Round Capital CTO Summit, San Francisco, CA.

·         2012.09.25: Cloud Computing Driving Infrastructure Innovation (talk), Intel Distinguished Speaker Series, Hillsboro, OR

 

James Hamilton 
e: jrh@mvdirona.com 
w: 
http://www.mvdirona.com 
b: 
http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Saturday, January 04, 2014 10:53:08 AM (Pacific Standard Time, UTC-08:00)
Services
 Saturday, November 30, 2013

Facebook Iowa Data Center

In 2007, the EPA released a study on datacenter power consumption at the request of the US Congress (EPA Report to Congress on Server and Data Center Efficiency). The report estimated that the power consumption of datacenters represented about 1.5% of the US energy budget in 2005 and that this number would double by 2010. In a way, this report was believable in that datacenter usage was clearly on the increase. What the report didn’t predict was the pace of innovation in datacenter efficiency during that period. Increased use spurred increased investment, which has led to a near 50% improvement in industry-leading datacenter efficiency.

 

Also difficult to predict at the time of the report was the rapid growth of cloud computing. Cloud computing is a fundamentally more efficient way to operate compute infrastructure. The efficiency gains driven by the cloud are many, but a strong primary driver is increased utilization. All companies have to provision their compute infrastructure for peak usage, but they only monetize the actual usage, which goes up and down over time. This leads to incredibly low average utilization levels, with 30% being extraordinarily high and 10 to 20% the norm. Cloud computing gets an easy win on this dimension. When non-correlated workloads from a diverse set of customers are hosted in the cloud, the peak-to-average ratio flattens dramatically and effective utilization skyrockets. Where 70 to 80% of resources are usually wasted, utilization climbs rapidly with scale as cloud computing services flatten the peak-to-average ratio.

 

To further increase the efficiency of the cloud, Amazon Web Services added an interesting innovation: selling the remaining capacity not consumed by this natural flattening of the peak-to-average ratio. These troughs are sold on a spot market, and customers are often able to buy computing at less than the amortized cost of the equipment they are using (Amazon EC2 Spot Instances). Customers get clear benefit. And, it turns out, it’s profitable to sell unused capacity at any price over the marginal cost of power, so the provider gets clear benefit as well. And, with higher utilization, the environment gets clear benefit too.
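The marginal-cost argument can be made concrete with a back-of-envelope sketch. All figures here are illustrative assumptions, not actual AWS numbers: once the hardware is purchased, any spot price above the marginal cost of power contributes margin, even when that price is below the amortized equipment cost.

```python
# Illustrative assumptions only -- not actual AWS figures.
server_cost = 2000.0               # capital cost per server, USD (already sunk)
lifetime_hours = 3 * 8766          # three-year amortization period
amortized = server_cost / lifetime_hours        # amortized cost per hour

power_kw = 0.25                    # server draw in kW
pue = 1.2                          # facility overhead multiplier
power_price_kwh = 0.07             # USD per kWh
marginal = power_kw * pue * power_price_kwh     # marginal power cost per hour

spot_price = 0.04                  # hypothetical spot price per server-hour

# The spot price is below the amortized equipment cost, yet every hour
# sold still contributes margin: the hardware is a sunk cost, so only
# the marginal cost of power matters to the sell/don't-sell decision.
margin = spot_price - marginal
print(f"amortized: ${amortized:.3f}/h, marginal: ${marginal:.3f}/h, "
      f"margin at ${spot_price}/h spot: ${margin:.3f}/h")
```

Under these assumptions the spot sale loses money against amortized cost but is still profit-positive against marginal cost, which is the point of the paragraph above.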

 

It’s easy to see why the EPA datacenter power predictions were incorrect and, in fact, the most recent data shows just how far off the original estimates actually were. The EPA report estimated that in 2005, the datacenter industry was consuming just 1.5% of the nation’s energy budget but what really caused international concern was the projection that this percentage would surge to 3.0% in 2010. This 3% in 2010 number has been repeated so many times that it’s now considered a fact by many and it has to have been one of the most referenced statistics in the datacenter world.

 

The data on 2010 is now available, and the power consumption of datacenters in 2010 is currently estimated to have been in the 1.1 to 1.5% range. Almost no change since 2005. Usage has skyrocketed. Most photos are ending up in the cloud, social networking is incredibly heavily used, computational science has exploded, machine learning is tackling business problems previously not cost-effective to address, much of the world now has portable devices, and a rapidly increasing percentage of these devices are cloud connected. Server-side computing is exploding and yet power consumption is not.

 

However, the report caused real concern both in the popular press and across the industry, and there have been many efforts not only to increase efficiency but also to increase the usage of clean power. The progress on efficiency has been stellar, but the early efforts on green power have been less exciting.

 

I’ve mentioned some of the efforts to go after cleaner datacenter power in the past. The solar array at the Facebook Prineville datacenter is a good example: it’s a 100kW solar array “powering” a 25MW+ facility. Pure optics with little contribution to the datacenter power mix (I love solar but… ). Higher-scale efforts such as the Apple Maiden North Carolina facility get much closer to being able to power the entire facility and go far beyond mere marketing programs. But, for me, the land consumed is disappointing, with 171 acres of land cleared of trees in order to provide green power. There is no question that solar can augment the power at major datacenters, but onsite generation with solar just doesn’t look practical for most facilities, particularly those in densely packed urban settings. In the article Solar at Scale: How Big is a Solar Array with 9MW Average Output, we see that a solar plant with 9MW average output takes approximately 314 acres (13.6 million sq ft). This power density is not practical for onsite generation at most datacenters and may not be the best approach to clean power for datacenters.

 

The biggest issue with solar applied to datacenters is that the massive space requirements are not practical, with most datacenters located in urban centers, and solar may not even be the right choice for rural centers. Another approach gaining rapid popularity is the use of fuel cells to power datacenters. At least partly because of highly skilled marketing, fuel cells are considered by many jurisdictions to be “green”. The good news is that some actually are green. A fuel cell running on waste biogas is indeed a green solution with considerable upside. This is what Apple plans for its Maiden North Carolina fuel cell farm, and it’s a very nice solution (Apple’s Biogas Fuel Cell Plant Could Go Live by June).

 

The challenge with fuel cells in the datacenter application is that just about none of them are biogas powered, relying instead on non-renewable energy. For example, the widely heralded eBay Salt Lake City fuel cell deployment is natural gas powered (New eBay Data Center Runs Almost Entirely on Bloom Fuel Cells). There have been many criticisms of the efficiency of these fuel cell-based power plants (884 may be Bloom Energy’s Fatal Number) and their cost effectiveness (Delaware Press Wakes Up to Bloom Energy Boondoggle).

 

I’m mostly ignoring these reports and focusing on two simpler questions: how can running a low-scale power plant on a fossil fuel possibly be green, and is on-site generation really going to be the most environmentally conscious means of powering datacenters? Most datacenters lack space for large on-site generation facilities, most datacenter operators don’t have the skill, and generally the most efficient power generation plants are far bigger than a single datacenter could ever consume on its worst day.

 

The argument for on-site generation is to avoid the power lost in transmission. The US Energy Information Administration (EIA) estimates power lost in transmission to be 7%. That’s certainly a non-zero number and the loss is relevant, but I still find myself skeptical that the world leaders in operating efficient and reliable power plants are going to be datacenter operators. And, even if that prospect does excite you, does having a single plant in one location as the sole power source for a datacenter sound like a good solution? I’m finding it hard to get excited. Maintenance, natural disasters, and human error will eventually yield some unintended consequences.

 

A solution that I really like is the Facebook Altoona Iowa facility (Facebook Says Its New Data Center Will Run Entirely on Wind). It is a standard grid-connected datacenter without on-site power generation, but Facebook has partnered to build a 138MW wind project in nearby Wellsburg, Iowa. What I like about this approach: 1) no clear-cutting was required to prepare the land for generation, and the land remains multi-use; 2) it’s not fossil fuel powered; 3) the facility will be run by a major power generation operator rather than as a sideline by the datacenter operator; and 4) far more clean power is being produced than will actually be used by the datacenter, so they are adding more clean power to the grid than they are consuming by a fairly significant margin.

 

Hats off to Facebook for doing clean datacenter energy right. Nice work.

 

James Hamilton 
e: jrh@mvdirona.com 
w: 
http://www.mvdirona.com 
b: 
http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Saturday, November 30, 2013 1:38:31 PM (Pacific Standard Time, UTC-08:00)
Services
 Saturday, November 09, 2013

I frequently get asked “why not just put solar panels on datacenter roofs and run them on that?” The short answer is datacenter roofs are just way too small. In a previous article (I Love Solar But…) I did a quick back-of-envelope calculation and, assuming a conventional single-floor build with current power densities, each square foot of datacenter space would require roughly 362 sq ft of solar panels. The roof would contribute well under 1% of the facility requirements. Quite possibly still worth doing, but there is simply no way a rooftop array is going to power an entire datacenter.

 

There are other issues with rooftop arrays, the two biggest of which are weight and protection against strong winds. A roof requires significant reinforcement to support the weight of an array, and the array needs to be strongly secured against high winds. But both of these are fairly simple and very solvable engineering problems. The key problem with powering datacenters with solar is insufficient solar array power density. If we were to believe datacenter lighting manufacturers’ estimates (Lighting is the Unsung Hero in Data Center Energy Efficiency), a rooftop array wouldn’t even be able to fully power the facility lighting system. Most modern datacenters have moved to more efficient lighting systems, but the difficult fact remains: a rooftop array will not supply even 1% of the overall facility draw.

 

When last discussing the space requirements of a solar array sufficient to power a datacenter in I Love Solar But…, a few folks took exception to my use of sq ft as the measure of solar array size. The legitimate criticism raised was that, in using sq ft and computing that a large datacenter would require 181 million sq ft of solar array, I made the solar farm look unreasonably large. Arguably 4,172 acres is a better unit and produces a much smaller number. All true.
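The arithmetic behind those figures is short enough to write out. The 500,000 sq ft facility size is my assumption, chosen because it reproduces the 181 million sq ft figure from the earlier article:

```python
# Ratio from the earlier back-of-envelope estimate: sq ft of panel
# needed per sq ft of single-floor datacenter space.
PANEL_SQFT_PER_DC_SQFT = 362
SQFT_PER_ACRE = 43_560

# A single-floor facility has about 1 sq ft of roof per sq ft of floor,
# so a rooftop array covers only a tiny fraction of the requirement.
roof_fraction = 1 / PANEL_SQFT_PER_DC_SQFT
print(f"rooftop array covers ~{roof_fraction:.1%} of facility demand")

# The large-datacenter example (assumed 500,000 sq ft of floor space):
dc_sqft = 500_000
array_sqft = dc_sqft * PANEL_SQFT_PER_DC_SQFT
print(f"{dc_sqft:,} sq ft facility needs {array_sqft / 1e6:.0f} million "
      f"sq ft of panels ({array_sqft / SQFT_PER_ACRE:,.0f} acres)")
```

The acre figure comes out near the ~4,172 acres quoted above; the small difference reflects rounding in the 362:1 ratio.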

 

The real challenge for most people trying to understand the practicality of solar-powered datacenters is to get a reasonable feel for how big the land requirements actually would be. They sound big, but datacenters are big and everything associated with them is big. Large numbers aren’t remarkable. One approach to calibrating the “how big is it?” question is to go with a ratio: each square foot of datacenter requires approximately 362 square feet of solar array. That ratio tells us that rooftop solar power is a contributor but not a solution. It also helps explain why solar is not a practical solution in densely packed urban areas, especially where multi-floor facilities are in use.

 

The opening last week of the Kagoshima Nanatsujima Mega Solar Power Plant, a large array at over 70MW, gives us another scaling point on land requirements for large solar plants:

·         Name: Kagoshima Nanatsujima Mega Solar Power Plant

·         Location: 2 Nanatsujima, Kagoshima City, Kagoshima Prefecture, Japan

·         Annual output: Approx. 78,800MWh (projected)

·         Construction timeline: September 2012 through October 2013

·         Total Investment: Approx. 27 billion yen (approx. 275.5 million U.S. dollars*4)

 

This is an impressive deployment with 70MW peak output over 314 acres. That’s roughly 4.5 acres/MW, which is fairly consistent with other solar plants I’ve written up in the past. The estimated annual output for Kagoshima is 78,800MWh which is, on average, 9MW of output. The expected average output is only 13% of the peak output because solar plants aren’t productive 24x7.
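The capacity-factor arithmetic, using the plant figures quoted above:

```python
HOURS_PER_YEAR = 8766          # average year, including leap years

annual_output_mwh = 78_800     # projected annual output (from the spec above)
peak_mw = 70                   # nameplate (peak) capacity
acres = 314

avg_mw = annual_output_mwh / HOURS_PER_YEAR
capacity_factor = avg_mw / peak_mw
print(f"average output:  {avg_mw:.1f} MW")
print(f"capacity factor: {capacity_factor:.0%}")
print(f"land use:        {acres / peak_mw:.1f} acres per peak MW, "
      f"{acres / avg_mw:.0f} acres per average MW")
```

The acres-per-average-MW figure is the one that matters for the datacenter question, since datacenters are near-constant base load while the nameplate rating is only hit in full sun.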

 

Datacenter power consumption tends to be fairly constant day and night, with night somewhat lower due to typically lower cooling requirements. But with only 10 to 15% of the total power in a modern datacenter going to cooling, the difference between day and night is actually not that large. Where there are big differences in datacenter power consumption is between full and partial utilization. Idle critical loads can be as low as 50% of full loads, so some variability in power consumption will be seen in a datacenter, but datacenters are largely classified as base load (near-constant) demand. Using these numbers, the Kagoshima solar facility is large enough to power roughly half of a large datacenter.

 

The Kagoshima Nanatsujima Mega Solar Power Plant is a vast plant, and anything at scale is interesting. It’s well worth learning more about the engineering behind this project, but one aspect that caught my interest is that there are some excellent pictures of this deployment that do a better job of conveying scale than my previous “several million sq ft” reference. The Kagoshima array is 314 acres, which is 13.6 million sq ft or 1.27 million sq meters, but these numbers aren’t that meaningful to most of us and pictures tell a far better story.

 

Above Photos from: http://global.kyocera.com/news/2013/1101_nnms.html

 

 

Photo Credit: http://www.sma.de/en/products/references/kagoshima-nanatsujima-mega-solar-power-plant.html

 

No matter how you look at it, the Kagoshima array is truly vast. I’m super interested in all forms of energy production and consider myself fairly familiar with numbers at this scale, and yet I still find the size of this plant, when shown in its entirety, surprising. It really is big.

 

The cost of solar is falling fast due to deep R&D investment but a facility of this scope and scale is still fairly expensive. At US$275 million the solar plant is more expensive than the datacenter it would be able to power.

 

Despite the shortage of land in Japan, expect to see continued investment in large-scale solar arrays for at least another year or two, driven by the Feed-in Tariff legislation which requires Japanese power companies to buy all the power produced by any solar plant bigger than 10kW at a generous 20-year fixed rate of 42 yen/kWh, which at current exchange rates is US$0.42/kWh (Japan Creates Potential $9.6 Billion Solar Boom with FITs and also 1.8GW of New Solar Power for Japan in Q2). As a point of comparison, in the US, commercial power purchased in large quantities is often available at rates below $0.05/kWh.
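The tariff gap is easy to quantify. The 100 yen/USD exchange rate below is my rough assumption for the period, chosen to match the US$0.42/kWh conversion above:

```python
fit_yen_per_kwh = 42.0         # Japanese FIT rate, from the legislation
yen_per_usd = 100.0            # rough exchange rate assumption for 2013
us_bulk_usd_per_kwh = 0.05     # US large-quantity commercial rate cited above

fit_usd = fit_yen_per_kwh / yen_per_usd
print(f"Japanese FIT: ${fit_usd:.2f}/kWh, guaranteed for 20 years")
print(f"US bulk commercial: ~${us_bulk_usd_per_kwh:.2f}/kWh")
print(f"ratio: {fit_usd / us_bulk_usd_per_kwh:.1f}x")
```

A guaranteed price more than 8x the US bulk rate explains the boom better than any land-availability argument.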

 

Yano Research Institute predicts that the massive Japanese solar boom driven by the FIT legislation will grow until 2014 and then taper off due to land availability pressure: Japan Market for Large-Scale Solar to Drop After 2014, Yano Says.

 

One possible solution to both the solar space consumption problem and the inability to produce power in poor light conditions is orbital solar arrays (Japan Aims to Beam Solar Energy Down From Orbit). This technology is nowhere close to reality today but it is still of interest.

 

More articles on the Kagoshima Nanatsujima Mega Solar Power Plant:

·         KYOCERA Starts Operation of 70MW Solar Power Plant, the Largest in Japan

·         Japan's biggest solar plant starts generating power in SW. Japan

·         Kyocera completes Japan’s largest offshore solar energy plant in Kagoshima

 

For those of you able to attend the AWS re:Invent conference in Vegas, I’ll see you there. I’ll be presenting Why Scale Matters and How the Cloud Really is Different on Thursday at 4:15 in the Delfino room, and I’ll be around the conference all week.

 

James Hamilton 
e: jrh@mvdirona.com 
w: 
http://www.mvdirona.com 
b: 
http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Saturday, November 09, 2013 2:29:22 PM (Pacific Standard Time, UTC-08:00)
Hardware
 Tuesday, July 16, 2013

At the Microsoft World-Wide Partners Conference, Microsoft CEO Steve Ballmer announced that “We have something over a million servers in our data center infrastructure. Google is bigger than we are. Amazon is a little bit smaller. You get Yahoo! and Facebook, and then everybody else is 100,000 units probably or less.”

 

That’s a surprising data point for a variety of reasons. The most surprising is that the data point was released at all. Just about nobody at the top of the server world chooses to boast with a server count, partly because it’s not all that useful a number but mostly because a single data point is open to a lot of misinterpretation by even skilled industry observers. Basically, it’s pretty hard to see the value of talking about server counts, and it is very easy to see the many negative implications that follow from such a number.

 

The first question when thinking about this number is: where does the comparative data actually come from? I know for sure that Amazon has never released server count data. Google hasn’t either, although estimates of their server footprint abound. Interestingly, estimates of Google server counts 5 years ago were 1,000,000 servers, whereas current estimates have them only in the 900k to 1m range.

 

The Microsoft number is surprising when compared against past external estimates, data center build rates, or ramp rates from previous hints and estimates. But, as I said, little data has been released by any of the large players and what’s out there is typically nothing more than speculation. Counting servers is hard and credibly comparing server counts is close to impossible.

 

Assume that each server draws 150 to 300W including all server components, with a weighted average of, say, 250W/server. And as a Power Usage Effectiveness estimate, we will use 1.2 (only 16.7% of the power is lost to datacenter cooling and power distribution losses). With these scaling points, the implied total power consumption is over 300MW. Three hundred million watts or, looking at annual MWh, we get an annual consumption of over 2,629,743 MWh or 2.6 terawatt hours – that’s a hefty slice of power even by my standards. As a point of scale, since these are big numbers, the US Energy Information Administration reports that in 2011 the average US household consumed 11.28MWh. Using that data point, 2.6TWh is just a bit more than the power consumed by 230,000 US homes.
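If you want to check my arithmetic, here’s the whole estimate as a short script (the 250W/server weighted average, the 1.2 PUE, and the 11.28MWh/home figure are the assumptions above):

```python
# Back-of-envelope power estimate for "over 1M servers" using the assumptions above.
servers = 1_000_000
watts_per_server = 250        # assumed weighted average across the fleet
pue = 1.2                     # total facility power / critical IT power

critical_mw = servers * watts_per_server / 1e6   # 250 MW of IT load
total_mw = critical_mw * pue                     # ~300 MW including cooling/distribution

hours_per_year = 8766                            # average year, leap years included
annual_mwh = total_mw * hours_per_year           # ~2,629,800 MWh, i.e. ~2.6 TWh

homes = annual_mwh / 11.28                       # 2011 EIA average US household consumption
print(round(total_mw), round(annual_mwh), round(homes))   # 300 2629800 233138
```

The ~233,000-home figure is where the “just a bit more than 230,000 US homes” comparison comes from.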

 

Continuing through the data and thinking through what follows from “over 1M” servers, the capital expense of the servers alone would be $1.45 billion assuming a very inexpensive $1,450 per server. Assuming a mix of different servers with an average cost of $2,000/server, the overall capital expense would be $2B before looking at the data center infrastructure and networking costs required to house them. With the overall power consumption computed above of 300MW, which is 250MW of critical power (using a PUE of 1.2), and assuming a data center build cost at a low $9/W of critical load (Uptime Institute estimates numbers closer to $12/W), we would have a data center build cost of $2.25B. The implication of “over 1M servers” is an infrastructure investment of $4.25B including the servers. That’s an athletic expense even for Microsoft but certainly possible.
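The capital expense math works the same way (the $2,000/server average, the $9/W build cost, and the 250MW of critical load are the assumed numbers above):

```python
# Implied capital expense of "over 1M servers" using the post's assumed prices.
servers = 1_000_000
avg_server_cost = 2_000                  # $/server, assumed fleet average
server_capex_b = servers * avg_server_cost / 1e9      # $2.0B of servers

critical_w = 250_000_000                 # 250 MW of critical load (300 MW / 1.2 PUE)
build_cost_per_w = 9                     # $/W of critical load, low-end estimate
dc_capex_b = critical_w * build_cost_per_w / 1e9      # $2.25B of data center builds

print(server_capex_b + dc_capex_b)       # 4.25
```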

 

How many datacenters would be implied by “more than one million servers?” Ignoring the small points of presence since they don’t move the needle, and focusing on the big centers, let’s assume 50,000 servers in each facility. That assumption would lead to 20 major facilities. As a cross check, if we instead focus on power consumption as a way to compute facility count and assume a total datacenter power consumption of 20MW each and the previously computed 300MW total power consumption, we would have roughly 15 large facilities. Not an unreasonable range in this context.
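The facility count arithmetic, computed both ways as in the paragraph above (the 50,000 servers/facility and 20MW/facility figures are the assumptions):

```python
# Two independent ways to estimate facility count from "over 1M servers".
servers = 1_000_000
servers_per_facility = 50_000
by_server_count = servers // servers_per_facility   # 20 large facilities

total_mw = 300
mw_per_facility = 20
by_power = total_mw // mw_per_facility              # 15 large facilities

print(by_server_count, by_power)                    # 20 15
```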

 

The summary following from these data and the “over 1M servers” number:

·         Facilities: 15 to 20 large facilities

·         Capital expense: $4.25B

·         Total power: 300MW

·         Power Consumption: 2.6TWh annually

 

Over one million servers is a pretty big number even in a web scaled world.

 

                                                --jrh

 

Data Center Knowledge article: Ballmer: Microsoft has 1 Million Servers

Transcript of Ballmer presentation:  Steve Ballmer: World-Wide Partner Conference Keynote


Additional note: Todd Hoff of High Scalability made a fun observation in his post Steve Ballmer says Microsoft has over 1 million servers. From Todd: “The [Microsoft data center] power consumption is about the same as used by Nicaragua and the capital cost is about a third of what Americans spent on video games in 2012.”


 James Hamilton 

e: jrh@mvdirona.com 
w: 
http://www.mvdirona.com 
b: 
http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

 

Tuesday, July 16, 2013 4:34:17 PM (Pacific Standard Time, UTC-08:00)  #    Comments [11] - Trackback
Services
 Tuesday, June 18, 2013
Back in 2009, in Datacenter Networks are in my way, I argued that the networking world was stuck in the mainframe business model: everything vertically integrated. In most datacenter networking equipment, the core Application Specific Integrated Circuit (ASIC – the heart of a switch or router), the entire hardware platform for the ASIC including power and physical network connections, and the software stack including all the protocols come from a single vendor, and there is no practical mechanism to make different choices. This is how the server world operated 40 years ago and we get much the same result: networking gear is expensive, interoperates poorly, is expensive to manage, and is almost always over-subscribed, constraining the rest of the equipment in the datacenter.

Further exacerbating what is already a serious problem, unlike the mainframe server world of 40 years back, networking equipment is also unreliable. Each device has tens of millions of lines of code under the hood forming frustratingly productive bug farms. Each fault is met with a request from the vendor to “install the latest version” – the new build is usually different from what was previously running but “different” isn’t always good when running production systems. It’s just a new set of bugs to start chasing. The core problem is that many customers ask for complex features they believe will help make their networks easier to manage. The networking vendor knows delivering these features keeps the customer uniquely dependent upon that vendor’s single solution. One obvious downside is the vendor lock-in that follows from these helpful features, but the larger problem is that these extensions are usually not broadly used, not well tested in subsequent releases, and the overall vendor protocol stacks become yet more brittle as they grow more complex aggregating all these unique customer requests. This is an area in desperate need of change.

Networking gear is complex and, despite all vendors implementing the same RFCs, equipment from different vendors (and sometimes from the same vendor) still interoperates poorly. It’s very hard to deliver reliable networks at controllable administration costs while freely mixing and matching equipment from multiple vendors. The customer is locked in, the vendors know it, and the network equipment prices reflect that realization.

Not only is networking gear expensive in absolute terms, but the relative expense of networking is actually increasing over time. Tracking the cost of networking gear as a ratio of all the IT equipment (servers, storage, and networking) in a data center, a terrible reality emerges: for a given spend on servers and storage, the required network cost has been going up each year I have been tracking it. Without a fundamental change in the existing networking equipment business model, there is no reason to expect this trend will change.

Many of the needed ingredients for change actually have been in place for more than half a decade now. We have very high function networking ASICs available from Broadcom, Marvell, Fulcrum (Intel), and many others. Each competes with the others, driving much faster innovation and ensuring that cost decreases are passed on to customers rather than simply driving more profit margin. Each ASIC design house produces reference designs that are built by multiple competing Original Design Manufacturers, each with their own improvements. Taking the widely used Broadcom ASIC as an example, routers based upon this same ASIC are made by Quanta, Accton, DNI, Foxconn, Celestica, and many others. Again, the competition drives faster innovation and ensures that cost decreases reach customers rather than further padding networking equipment vendor margins.

What is missing is high quality control software, management systems, and networking protocol stacks that can run across a broad range of competing, commodity networking hardware. It’s still very hard to take merchant silicon ASICs packaged in ODM produced routers and deploy production networks. Very big datacenter operators actually do it but it’s sufficiently hard that this gear is largely unavailable to the vast majority of networking customers.  

One of my favorite startups, Cumulus Networks, has gone after exactly this problem of making ODM produced commodity networking gear available broadly with high quality software support. Cumulus supports a broad range of ODM produced routing platforms built upon Broadcom networking ASICs. They provide everything it takes above the bare metal router to turn an ODM platform into a production quality router. Included is support for both layer 2 switching and layer 3 routing protocols including OSPF (v2 and v3) and BGP. Because the Cumulus system includes and is hosted on a Linux distribution (Debian), many of the standard tools, management, and monitoring systems just work. For example, they support Puppet, Chef, collectd, SNMP, Nagios, bash, python, perl, and ruby.

Rather than implement a proprietary device with proprietary management as the big networking players typically do, or make it look like a Cisco router as many of the smaller players do, Cumulus makes the switch look like a Linux server with high-performance routing optimizations. Essentially, it’s just a routing-optimized Linux server.

The business model is similar to Red Hat’s where the software and support are available on a subscription model at a price point that makes a standard network support contract look like the hostage payout that it actually is. The subscription includes the entire turnkey stack with everything needed to take one of these ODM produced hardware platforms and deploy a production quality network. Subscriptions will be available directly from Cumulus and through an extensive VAR network.

Cumulus supported platforms include Accton AS4600-54T (48x1G & 4x10G), Accton AS5600-52x (48x10G & 4x40G), Agema (DNI brand) AG-6448CU (48x1G & 4x10G), Agema AG-7448CU (48x10G & 4x40G), Quanta QCT T1048-LB9 (48x1G & 4x10G), and Quanta QCT T-3048-LY2 (48x10G & 4x40G). Here’s a picture of many of these routing platforms from the Cumulus QA lab:



In addition to these single ASIC routing and switching platforms, Cumulus is also working on a chassis-based router to be released later this year:



This platform has all the protocol support outlined above and delivers 512 ports of 10G or 128 ports of 40G in a single chassis. High-port count chassis-based routers have always been exclusively available from the big, vertically integrated networking companies mostly because high-port count routers are expensive to design and are sold in lower volumes than the simpler, single ASIC designs commonly used as Top of Rack or as components of aggregation layer fabrics. Cumulus and their hardware partners are not yet ready to release more details on the chassis but the plans are exciting and the planned price point is game changing.  Expect to see this later in the year.

Cumulus Networks was founded by JR Rivers and Nolan Leake in early 2010. Both are phenomenal engineers and I’ve been a huge fan of their work since meeting them as they were first bringing the company together. They raised seed funding in 2011 from Andreessen Horowitz, Battery Ventures, Peter Wagner, Gaurav Garg, Mendel Rosenblum, Diane Greene, and Ed Bugnion. In mid-2012, they did an A-round from Andreessen Horowitz and Battery Ventures.

The pace of change continues to pick up in the networking world and I’m looking forward to the formal announcement of the Cumulus chassis-based router.

--jrh

James Hamilton 
e: jrh@mvdirona.com 
w: http://www.mvdirona.com 
b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

Tuesday, June 18, 2013 11:15:27 AM (Pacific Standard Time, UTC-08:00)  #    Comments [6] - Trackback
Hardware
 Monday, February 04, 2013

In the data center world, there are few events taken more seriously than power failures and considerable effort is spent to make them rare. When a datacenter experiences a power failure, it’s a really big deal for all involved. But a big deal in the infrastructure world still really isn’t a big deal on the world stage. The Super Bowl absolutely is a big deal by any measure. On average over the last couple of years, the Super Bowl has attracted 111 million viewers and is the most watched television show in North America, eclipsing the final episode of M*A*S*H. World-wide, the Super Bowl is only behind the European Cup (UEFA Champions League) final, which draws 178 million viewers.

 

When the 2013 Super Bowl power event occurred, the Baltimore Ravens had just run back the second half opening kickoff for a touchdown and they were dominating the game with a 28 to 6 lead. The 49ers had already played half the game and failed to score a single touchdown. The Ravens had started the second half by tying the record for the longest kickoff return in NFL history at 108 yards. The game momentum was strongly with Baltimore.

 

With 13:22 left in the third quarter, just 98 seconds into the second half, ½ of the Superdome lost primary power. Fortunately it wasn’t during the runback that started the half. The power failure led to a 34 minute delay to restore full lighting to the field and, when the game restarted, the 49ers were on fire. The game was fundamentally changed by the outage, with the 49ers rallying back to a narrow 3 point defeat. The game ended 34 to 31 and it really did come down to the wire where either team could have won. There is no question the game was exciting and some will argue the power failure actually made the game more exciting. But NFL championships should be decided on the field and not impacted by the electrical system of the host stadium.

 

What happened with 13:22 left in the third quarter when much of the field lighting failed? Entergy, the utility supplying power to the Superdome, reported that their “distribution and transmission feeders that serve the Superdome were never interrupted” (Before Game Is Decided, Superdome Goes Dark). It was a problem at the facility.

 

The joint report from SMG, the company that manages the Superdome, and Entergy, the utility power provider, said:

 

A piece of equipment that is designed to monitor electrical load sensed an abnormality in the system. Once the issue was detected, the sensing equipment operated as designed and opened a breaker, causing power to be partially cut to the Superdome in order to isolate the issue. Backup generators kicked in immediately as designed.

 

Entergy and SMG subsequently coordinated start-up procedures, ensuring that full power was safely restored to the Superdome. The fault-sensing equipment activated where the Superdome equipment intersects with Entergy’s feed into the facility. There were no additional issues detected. Entergy and SMG will continue to investigate the root cause of the abnormality.

 

 

Essentially, the utility circuit breaker detected an “anomaly” and opened the breaker. Modern switchgear has many sensors monitored by firmware running on a programmable logic controller. The advantage of these software systems is they are incredibly flexible and can be configured uniquely for each installation. The disadvantage is that the wide variety of configurations they can support makes them complex, so the default configurations are used perhaps more often than they should be. The default configurations, in a country where legal settlements can be substantial, tend towards the conservative side. We don’t know if that was a factor in this event but we do know that no fault was found and the power was stable for the remainder of the game. This was almost certainly a false trigger.

 

The cause has not yet been reported and, quite often, the underlying root cause is never found. But it’s worth asking: is it possible to avoid long game outages and what would it cost? As when looking at any system fault, the tools we have to mitigate the impact are: 1) avoid the fault entirely, 2) protect against the fault with redundancy, 3) minimize the impact of the fault through small fault zones, and 4) minimize the impact through fast recovery.

 

Fault avoidance: Avoidance starts with using good quality equipment, configuring it properly, maintaining it well, and testing it frequently. Given that the Superdome just went through a $336 million renovation, the switchgear may have been relatively new and, even if it wasn’t, it was almost certainly recently maintained and inspected.

 

Where issues often arise is in configuration. Modern switchgear has an amazingly large number of parameters, many of which interact with each other and, in total, can be difficult to fully understand. And, since switchgear manufacturers know little about the intended end-use application of each unit sold, they ship conservative default settings. Generally, the risk and potential negative impact of a false positive (a breaker that opens when it shouldn’t) is far less than that of a breaker that fails to open. Consequently, conservative settings are common.

 

Another common cause of problems is lack of testing. The best way to verify that equipment works is to test at full production load in a full production environment in a non-mission-critical setting. Then test it just short of overload to ensure that it can still reliably support the full load even though the production design will never run it that close to the limit and, finally, test it into overload to ensure that the equipment opens up on real faults.

 

The first, testing in a full production environment in a non-mission-critical setting, is always done prior to a major event. But the latter two tests are much less common: 1) testing at rated load, and 2) testing beyond rated load. Both require synthetic load banks and skilled electricians, so these tests are often not done. You really can’t beat testing in a non-mission-critical setting as a means of ensuring that things work well in a mission critical setting (game time).

 

Redundancy: If we can’t avoid a fault entirely, the next best thing is to have redundancy to mask the fault. Faults will happen. The electrical fault at the Monday Night Football game back in December of 2011 was caused by a utility sub-station failing. These faults are unavoidable and will happen occasionally. But is protection against utility failure possible and affordable? Sure, absolutely. Let’s use the Superdome fault yesterday as an example.

 

The entire Superdome load is only 4.6MW. This load would be easy to support on two 2.5 to 3.0MW utility feeds, each protected by its own generator. Generators in the 2.5 to 3.0MW range are substantial V16 diesel engines the size of a mid-sized bus. They are expensive, running just under $1M each, but they are also available in mobile form and are inexpensive to rent. The rental option is a no-brainer but let’s ignore that and look at what it would cost to protect the Superdome year round with a permanent installation. We would need 2 generators, the switchgear to connect them to the load, and uninterruptible power supplies to hold the load during the first few seconds of a power failure until the generators start up and are able to pick up the load. To be super safe, we’ll buy a third generator just in case there is a problem and one of the two generators doesn’t start. The generators are under $1M each and the entire redundant power configuration with the extra generator could be had for under $10M. Looking at statistics from the 2012 event, a 30 second commercial costs just over $4M.
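The sizing arithmetic behind this paragraph, as a quick sketch (the generator capacity, the roughly $10M all-in build cost, and the $4M ad rate are the assumed numbers above):

```python
# Back-of-envelope sizing for a fully redundant Superdome power configuration.
import math

load_mw = 4.6                    # entire Superdome load
gen_capacity_mw = 2.5            # low end of the 2.5-3.0 MW generator range
gens_for_load = math.ceil(load_mw / gen_capacity_mw)   # 2 generators carry the load
gens_total = gens_for_load + 1                         # plus one spare in case of a no-start

build_cost_m = 10.0              # $M all-in: generators, switchgear, UPS (post's ceiling)
commercial_30s_m = 4.0           # $M for a 30-second 2012 Super Bowl commercial
ads_equivalent = build_cost_m / commercial_30s_m       # ~2.5 commercials at the ceiling
print(gens_total, ads_equivalent)                      # 3 2.5
```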

 

For the price of just over 60 seconds of commercials the facility could be protected against this fault. And, using rental generators, less than 30 seconds of commercials would provide the needed redundancy to avoid impact from any utility failure. Given how common utility failures are and the negative impact of power disruptions at a professional sporting event, this looks like good value to me. Most sports facilities choose to avoid this “unnecessary” expense and I suspect the Superdome doesn’t have full redundancy for all of its field lighting. But even if it did, this failure mode can sometimes cause the generators to be locked out and not pick up the load during some power events. In this failure mode, when a utility breaker incorrectly senses a ground fault within the facility, it is frequently configured to not put the generator at risk by switching it into a potential ground fault. My take is I would rather run the risk of damaging the generator and avoid the outage, so I’m not a big fan of this “safety” configuration, but it is a common choice.

 

Minimize Fault Zones: Only ½ of the Superdome’s power went down because the system installed at the facility has two fault containment zones. In this design, a single switchgear event can only take down ½ of the facility.

 

Clearly the first choice is to avoid the fault entirely. And, if that doesn’t work, have redundancy take over and completely mask the fault. But, in the rare cases where none of these mitigations work, the next defense is small fault containment zones. Rather than using 2 zones, spend more on utility breakers and have 4 or 6 and, rather than losing ½ the facility, lose ¼ or 1/6. And, if the lighting power is checkerboarded over the facility (lights in a contiguous region are not all powered by the same utility feed; instead, the feeds are distributed over the lights evenly), rather than losing ¼ or 1/6 of the lights in one area of the stadium, we would lose that fraction of the lights evenly over the entire facility. Under these conditions, it might be possible to operate with slightly degraded field lighting and continue the game without waiting for light recovery.
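A tiny sketch of the checkerboarding idea: assign lights to feeds round-robin so that losing any one feed dims the whole field evenly rather than blacking out a region (the light and feed counts here are made up for illustration):

```python
# Round-robin ("checkerboard") assignment of field lights to utility feeds.
def assign_feeds(n_lights, n_feeds):
    return [light % n_feeds for light in range(n_lights)]

feeds = assign_feeds(n_lights=24, n_feeds=4)

# If feed 2 trips, every 4th light goes dark -- the field keeps 75% of its
# lighting everywhere instead of losing 100% in one quarter of the stadium.
surviving = [light for light, feed in enumerate(feeds) if feed != 2]
print(len(surviving) / len(feeds))    # 0.75
```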

 

Fast Recovery: Before we get to this fourth option, we have tried hard to avoid the failure, then used power redundancy to mask it, then used small fault zones to minimize its impact. The next best thing we can do is to recover quickly. Fast recovery depends broadly on two things: 1) if possible, automate recovery so it can happen in seconds rather than at the rate at which humans can act, and 2) if humans are needed, ensure they have access to adequate monitoring and event recording gear so they can quickly see what happened, and that they have trained extensively and are able to act quickly.

 

In this particular event, the recovery was not automated. Skilled electrical technicians were required. They spent nearly 15 minutes checking system state before deciding it was safe to restore power. Generally, 15 minutes on a human-judgment-driven recovery decision isn’t bad. But the overall outage was 34 minutes. If the power was restored in 15 minutes, what happened during the rest? The gas discharge lighting still favored at large sporting venues takes roughly 15 minutes to restart after a momentary outage; even a very short power interruption suffers the same long recovery time. Newer lighting technologies are becoming available that are both more power efficient and don’t suffer from these long warm-up periods.

 

It doesn’t appear that the final victor of Super Bowl XLVII was changed by the power failure but there is no question the game was broadly impacted. If the light failure had happened during the kickoff return that started the third quarter, the game may have been changed in a very fundamental way. Better power distribution architectures are cheap by comparison. Given the value of the game and the relatively low cost of power redundancy equipment, I would argue it’s time to start retrofitting major sporting venues with more redundant designs and employing more aggressive pre-game testing.

 

                                                                --jrh

 

James Hamilton 
e: jrh@mvdirona.com 
w: 
http://www.mvdirona.com 
b: 
http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Monday, February 04, 2013 11:16:06 AM (Pacific Standard Time, UTC-08:00)  #    Comments [18] - Trackback
Hardware | Ramblings
 Monday, January 14, 2013

In the cloud there is nothing more important than customer trust. Without customer trust, a cloud business can’t succeed. When you are taking care of someone else’s assets, you have to treat those assets as more important than your own. Security has to be rock solid and absolutely unassailable. Data loss or data corruption has to be close to impossible and incredibly rare. And all commitments to customers have to be respected through business changes. These are hard standards to meet but, without success against these standards, a cloud service will always fail. Customers can leave at any time and, if they have to leave, they will remember you did this to them.

 

These are facts and anyone working in cloud services labors under these requirements every day. It’s almost reflexive, nearly second nature. What brought this up for me over the weekend was a note I got from one of my cloud service providers, and it reminded me that it really is worth talking more about customer trust.

 

Let’s start with some history. Many years ago, Michael Merhej and Tom Kleinpeter started a company called ByteTaxi that eventually offered a product called Foldershare. It was a simple service with a simple UI but it did peer-to-peer file sync incredibly well, it did it through firewalls, it did it without install confusion and, well, it just worked. It was a simple service but well executed and very useful. In 2005, Microsoft acquired Foldershare and continued to offer the service. It didn’t get enhanced much for years but it remained useful. Then Microsoft came up with a broader plan called Windows Live Mesh and the Foldershare service was renamed. Actually, the core peer-to-peer functionality passed through an array of names and implementations: Foldershare, Windows Live Foldershare, Windows Live Sync, and finally Windows Live Mesh.

 

During the early days at Microsoft, it was virtually uncared for and had little developer attention. As new names and implementations were announced and the feature actually received developer attention, it was getting enhanced but, ironically, it was also getting somewhat harder to use and definitely less stable. Still, it worked and the functionality lived on in Live Mesh. Microsoft has another service called SkyDrive that does the same thing that all the other cloud sync services do: sync files to cloud hosted storage. Unfortunately, it doesn’t include the core peer-to-peer functionality of Live Mesh. Reportedly 40% of the Live Mesh users also use SkyDrive.

 

This is where we get back to customer trust. Over the weekend, Microsoft sent out a note to all Mesh users confirming the service will be shut off next month, following up on the announcement that went out in December that the service would be killed. They explained the reason to terminate the service and remove the peer-to-peer file sync functionality:

 

Currently 40% of Mesh customers are actively using SkyDrive and based on the positive response and our increasing focus on improving personal cloud storage, it makes sense to merge SkyDrive and Mesh into a single product for anytime and anywhere access for files.

 

Live Mesh is being killed without a replacement service. It’s not a big deal but 2 months isn’t a lot of warning. I know this sort of thing can happen to small startups at any time and customers could get left unsupported. But Microsoft seems well beyond the startup phase at this point. I get that strategic decisions have to be made but there are times when I wonder how much thought went into the decision. I suspect it was something like “there are only 3 million Live Mesh customers so it’s really not worth continuing with it.” And it actually may not be worth continuing the service. But there is this customer trust thing, and I just hate to see it violated – it’s bad for all cloud providers when anyone in the industry makes a decision that raises the customer trust question.

 

Fortunately, there is a Mesh replacement service: http://www.cubby.com/. I’ve been using it since the early days when it was in controlled beta. Over the last month or so Cubby has moved to full, unrestricted production. It’s been solid for the period I’ve been using it and, like Foldershare, it’s simple and it works. I really like it. If you are a Mesh user, were a Foldershare user, or just would like to be able to sync your files between your different systems, try Cubby. Cubby also adds support for Android and iOS devices at no extra cost. Cubby is well executed and stable.

 

It must be Cloud Cleaning week at Microsoft.  A friend forwarded the note sent to the millions of active Microsoft Messenger customers this month: the service is being “retired” and users are recommended to consider Skype.

 

If you are interested in reading more on the Live Mesh service elimination, the following is the text of the note sent to all current Mesh users:

Dear Mesh customer,

Recently we released the latest version of SkyDrive, which you can use to:

  • Choose the files and folders on your SkyDrive that sync on each computer.
  • Access your SkyDrive using a brand new app for Android v2.3 or the updated apps for Windows Phone, iPhone, and iPad.
  • Collaborate online with the new Office Web apps, including Excel forms, co-authoring in PowerPoint and embeddable Word documents.

Currently 40% of Mesh customers are actively using SkyDrive and based on the positive response and our increasing focus on improving personal cloud storage, it makes sense to merge SkyDrive and Mesh into a single product for anytime and anywhere access for files. As a result, we will retire Mesh on February 13, 2013. After this date, some Mesh functions, such as remote desktop and peer to peer sync, will no longer be available and any data on the Mesh cloud, called Mesh synced storage or SkyDrive synced storage, will be removed. The folders you synced with Mesh will stop syncing, and you will not be able to connect to your PCs remotely using Mesh.

We encourage you to try out the new SkyDrive to see how it can meet your needs. During the transition period, we suggest that, in addition to using Mesh, you sync your Mesh files using SkyDrive. This way, you can try out SkyDrive without changing your existing Mesh setup. For tips on transitioning to SkyDrive, see SkyDrive for Mesh users on the Windows website. If you have questions, you can post them in the SkyDrive forums.

Mesh customers have been influential and your feedback has helped shape our strategy for Mesh and SkyDrive. We would not be here without your support and hope you continue to give us feedback as you use SkyDrive.

Sincerely,

The Windows Live Mesh and SkyDrive teams

 

There is real danger in thinking of customers as faceless aggregations of hundreds of thousands or even millions of users. We need to think through decisions one user at a time and make them work for each user individually. If millions of active users are on Microsoft Messenger, what would it take to make them want to use Skype? If 60% of the Windows Live Mesh users chose not to use Microsoft SkyDrive, why is that? Considering customers one at a time is clearly the right thing for customers but, long haul, it’s also the right thing for the business. It builds the most important asset in the cloud: customer trust.

 

                                                --jrh

 

James Hamilton 
e: jrh@mvdirona.com 
w: 
http://www.mvdirona.com 
b: 
http://blog.mvdirona.com / http://perspectives.mvdirona.com

Monday, January 14, 2013 8:32:32 PM (Pacific Standard Time, UTC-08:00)  #    Comments [16] - Trackback
Services
 Tuesday, December 11, 2012

Since 2008, I’ve been excited by, working on, and writing about microservers. In those early days, some of the workloads I worked with were I/O bound and didn’t really need or use high single-thread performance. Replacing the server-class processors that supported these applications with high-volume, low-cost client system CPUs yielded both better price/performance and power/performance. Fortunately, at that time, there were good client processors available with ECC enabled (see You Really DO Need ECC) and most embedded system processors also supported ECC.

 

I wrote up some of the advantages of these early microserver deployments and showed performance results from a production deployment in an internet-scale mail processing application in Cooperative, Expendable, Microslice, Servers: Low-Cost, Low-Power Servers for Internet-Scale Services.

 

Intel recognizes the value of low-power, low-cost processors for less CPU-demanding applications and announced this morning the newest members of the Atom family, the S1200 series. These new processors support 2 cores and 4 threads and are available in variants up to 2GHz while staying under 8.5 watts. The lowest power members of the family come in at just over 6W. Intel has demonstrated an S1200 reference board running SPECweb at 7.9W including memory, SATA, networking, BMC, and other on-board components.

 

Unlike past Atom processors, the S1200 series supports full ECC memory. And all members of the family support hardware virtualization (Intel VT-x2), 64 bit addressing, and up to 8GB of memory. These are real server parts.

 

Centerton (S1200 series) features:

One of my favorite Original Design Manufacturers, Quanta Computer, has already produced a shared infrastructure rack design that packs 48 Atom S1200 servers into a 3 rack unit form factor (5.25”).

 

Quanta S900-X31A front and back view:

 

 

Quanta S900-X31A server drawer:

 

Quanta has done a nice job with this shared infrastructure rack. Using this design, they can pack a booming 624 servers into a standard 42 RU rack.
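The rack math implied here works out as follows. A quick sketch, with the assumption (mine, not stated in the announcement) that the 624-server figure leaves a few rack units free for switching and power gear:

```python
# Rack packing arithmetic for the Quanta S900-X31A shared infrastructure
# design: 48 Atom S1200 servers per 3RU drawer in a standard 42RU rack.
SERVERS_PER_DRAWER = 48
DRAWER_HEIGHT_RU = 3
RACK_HEIGHT_RU = 42

max_drawers = RACK_HEIGHT_RU // DRAWER_HEIGHT_RU   # 14 drawers fit physically
full_rack = max_drawers * SERVERS_PER_DRAWER       # 672 servers, theoretical max

# 13 drawers (leaving 3RU free, presumably for networking -- an assumption)
# matches the quoted 624-server configuration.
quoted_config = 13 * SERVERS_PER_DRAWER            # 624 servers

print(f"Theoretical max: {full_rack}, quoted config: {quoted_config}")
```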

 

I’m excited by the S1200 announcement because it’s both a good price/performer and power/performer and shows that Intel is serious about the microserver market. This new Atom gives customers access to microserver pricing without having to change instruction set architectures. The combination of low-cost, low-power, and the familiar Intel ISA with its rich tool chain and broad application availability is a compelling combination. It’s exciting to see the microserver market heating up and I like Intel’s roadmap looking forward.

 

                                                --jrh

 

Related Microserver focused postings:

·         Cooperative Expendable Microslice Servers: Low-cost, Low-power Servers for Internet Scale Services

·         The Case for Low-Cost, Low-Power Servers

·         Low Power Amdahl Blades for Data Intensive Computing

·         Microslice Servers

·         ARM Cortex-A9 Design Announced

·         2010 the Year of the Microslice Server

·         Very Low Power Server Progress

·         Nvidia Project Denver: ARM Powered Servers

·         ARM V8 Architecture

·         AMD Announced Server-Targeted ARM Part

·         Quanta S900-X31A

 

James Hamilton 
e: jrh@mvdirona.com 
w: 
http://www.mvdirona.com 
b: 
http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Tuesday, December 11, 2012 8:55:45 AM (Pacific Standard Time, UTC-08:00)  #    Comments [12] - Trackback
Hardware
 Wednesday, November 28, 2012

I’ve worked in or near the database engine world for more than 25 years. And, ironically, every company I’ve ever worked at has been working on a massive-scale, parallel, clustered RDBMS system. The earliest variant was IBM DB2 Parallel Edition released in the mid-90s. It’s now called the Database Partitioning Feature.

 

Massive, multi-node parallelism is the only way to scale a relational database system, so these systems can be incredibly important. Very high-scale MapReduce systems are an excellent alternative for many workloads. But some customers and workloads want the flexibility and power of being able to run ad hoc SQL queries against petabyte-sized databases. These are the workloads targeted by massive, multi-node relational database clusters, and there are now many solutions out there, with Oracle RAC perhaps the most well-known, along with Vertica, Greenplum, Aster Data, ParAccel, Netezza, and Teradata.

 

What’s common across all these products is that big databases are very expensive. Today, that is changing with the release of Amazon Redshift. It’s a relational, column-oriented, compressed, shared nothing, fully managed, cloud hosted, data warehouse. Each node can store up to 16TB of compressed data and up to 100 nodes are supported in a single cluster.

 

Amazon Redshift manages all the work needed to set up, operate, and scale a data warehouse cluster, from provisioning capacity to monitoring and backing up the cluster, to applying patches and upgrades. Scaling a cluster to improve performance or increase capacity is simple and incurs no downtime. The service continuously monitors the health of the cluster and automatically replaces any component, if needed.

 

The core node on which Redshift clusters are built includes 24 disk drives with an aggregate capacity of 16TB of local storage. Each node has 16 virtual cores and 120GB of memory and is connected via a high-speed, non-blocking 10Gbps network. This is a meaty core node, and Redshift supports up to 100 of these in a single cluster.

 

There are many pricing options available (see http://aws.amazon.com/redshift for more detail) but the most favorable comes in at only $999 per TB per year. I find it amazing to think of having the services of an enterprise-scale data warehouse for under a thousand dollars per terabyte per year. And, this is a fully managed system, so much of the administrative load is taken care of by Amazon Web Services.
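As a sanity check on those numbers, here’s a quick sketch using the rates and node capacities quoted in this post:

```python
# Check the Redshift pricing and capacity claims against each other.
HOURS_PER_YEAR = 24 * 365                # 8,760 hours

# Reserved pricing: $0.228/hour for a 2TB (compressed) XL node.
reserved_rate_per_hour = 0.228
xl_capacity_tb = 2

annual_node_cost = reserved_rate_per_hour * HOURS_PER_YEAR   # $1,997.28
cost_per_tb_year = annual_node_cost / xl_capacity_tb         # $998.64

# Largest cluster: 100 8XL nodes at 16TB of compressed storage each.
max_cluster_tb = 100 * 16                                    # 1,600 TB = 1.6 PB

print(f"${cost_per_tb_year:,.2f} per TB per year")   # just under $1,000
print(f"Largest cluster: {max_cluster_tb} TB (~1.6 PB)")
```

The $0.228/hour reserved rate on a 2TB node really does land just under the $1,000/TB/year mark.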

 

Service highlights from: http://aws.amazon.com/redshift

 

Fast and Powerful – Amazon Redshift uses a variety of innovations to obtain very high query performance on datasets ranging in size from hundreds of gigabytes to a petabyte or more. First, it uses columnar storage and data compression to reduce the amount of IO needed to perform queries. Second, it runs on hardware that is optimized for data warehousing, with local attached storage and 10GigE network connections between nodes. Finally, it has a massively parallel processing (MPP) architecture, which enables you to scale up or down, without downtime, as your performance and storage needs change.


You have a choice of two node types when provisioning your own cluster, an extra large node (XL) with 2TB of compressed storage or an eight extra large node (8XL) with 16TB of compressed storage. You can start with a single XL node and scale up to a 100 node eight extra large cluster. XL clusters can contain 1 to 32 nodes while 8XL clusters can contain 2 to 100 nodes.

 

Scalable – With a few clicks of the AWS Management Console or a simple API call, you can easily scale the number of nodes in your data warehouse to improve performance or increase capacity, without incurring downtime. Amazon Redshift enables you to start with a single 2TB XL node and scale up to a hundred 16TB 8XL nodes for 1.6PB of compressed user data. Resize functionality is not available during the limited preview but will be available when the service launches.

 

Inexpensive – You pay very low rates and only for the resources you actually provision. You benefit from the option of On-Demand pricing with no up-front or long-term commitments, or even lower rates via our reserved pricing option. On-demand pricing starts at just $0.85 per hour for a two terabyte data warehouse, scaling linearly up to a petabyte and more. Reserved Instance pricing lowers the effective price to $0.228 per hour, under $1,000 per terabyte per year.

Fully Managed – Amazon Redshift manages all the work needed to set up, operate, and scale a data warehouse, from provisioning capacity to monitoring and backing up the cluster, and to applying patches and upgrades. By handling all these time consuming, labor-intensive tasks, Amazon Redshift frees you up to focus on your data and business insights.

 

Secure – Amazon Redshift provides a number of mechanisms to secure your data warehouse cluster. It currently supports SSL to encrypt data in transit, includes web service interfaces to configure firewall settings that control network access to your data warehouse, and enables you to create users within your data warehouse cluster. When the service launches, we plan to support encrypting data at rest and Amazon Virtual Private Cloud (Amazon VPC).

 

Reliable – Amazon Redshift has multiple features that enhance the reliability of your data warehouse cluster. All data written to a node in your cluster is automatically replicated to other nodes within the cluster and all data is continuously backed up to Amazon S3. Amazon Redshift continuously monitors the health of the cluster and automatically replaces any component, as necessary.

 

Compatible – Amazon Redshift is certified by Jaspersoft and MicroStrategy, with additional business intelligence tools coming soon. You can connect your SQL client or business intelligence tool to your Amazon Redshift data warehouse cluster using standard PostgreSQL JDBC or ODBC drivers.

 

Designed for use with other AWS Services – Amazon Redshift is integrated with other AWS services and has built in commands to load data in parallel to each node from Amazon Simple Storage Service (S3) and Amazon DynamoDB, with support for Amazon Relational Database Service and Amazon Elastic MapReduce coming soon.

 

Petabyte-scale data warehouses no longer need to command retail prices of upwards of $80,000 per core. You don’t have to negotiate an enterprise deal and work hard to get the 60 to 80% discount that always seems magically possible in the enterprise software world.  You don’t even have to hire a team of administrators. Just load the data and get going. Nice to see.

 

                                                --jrh

 

James Hamilton 
e: jrh@mvdirona.com 
w: 
http://www.mvdirona.com 
b: 
http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Wednesday, November 28, 2012 9:37:51 AM (Pacific Standard Time, UTC-08:00)  #    Comments [6] - Trackback
Services
 Monday, October 29, 2012

I have been interested in, and writing about, microservers since 2007.  Microservers can be built using any instruction set architecture but I’m particularly interested in ARM processors and their application to server-side workloads. Today Advanced Micro Devices announced they are going to build an ARM CPU targeting the server market. This will be a 4-core, 64-bit, more-than-2GHz part that is expected to sample in 2013 and ship in volume in early 2014.

 

AMD is far from new to the microserver market. In fact, much of my past work on microservers has been AMD-powered. What’s different today is that AMD is applying their server processor skills while, at the same time, leveraging the massive ARM processor ecosystem. ARM processors power Apple iPhones, Samsung smartphones, tablets, disk drives, and applications you didn’t even know had computers in them.

 

The defining characteristic of server processor selection is to focus first and most on raw CPU performance and accept the high cost and high power consumption that follow from that goal. The defining characteristic of microservers is that we leverage the high-volume client and connected device ecosystem and make a CPU selection on the basis of price/performance and power/performance, with an emphasis on building balanced servers. The case for microservers is anchored upon these 4 observations:

 

·         Volume economics: Rather than draw on the small-volume economics of the server market, with microservers we leverage the massive volume economics of the smart device world driven by cell phones, tablets, and clients. To give some scale to this observation, IDC reports that there were 7.6M server units sold in 2010. ARM reports that there were 6.1B ARM processors shipped last year. The connected and embedded device market volumes are roughly 1000x larger than those of the server market, and the performance gap is shrinking rapidly. Semiconductor analyst Semicast estimates that by 2015 there will be 2 ARM processors for every person in the world. In 2010, ARM reported that, on average, there were 2.5 ARM-based processors in each smartphone.

 

Having watched and participated in our industry for nearly 3 decades, one reality seems to dominate all others: high-volume economics drives innovation and just about always wins. As an example, IBM mainframes ran just about every important server-side workload in the mid-80s. But, they were largely swept aside by higher-volume RISC servers running UNIX. At the time I loved RISC systems – database systems would just scream on them and they offered customers excellent price/performance. But, the same trend played out again. The higher-volume x86 processors from the client world swept the RISC systems, with their superior raw performance, aside.

 

Invariably what we see happening about once a decade is a high-volume, lower-priced technology takes over the low end of the market. When this happens many engineers correctly point out that these systems can’t hold a candle to the previous generation server technology and then incorrectly believe they won’t get replaced. The new generation is almost never better in absolute terms but they are better price/performers so they first are adopted for the less performance critical applications.  Once this happens, the die is cast and the outcome is just about assured. The high-volume parts move up market and eventually take over even the most performance critical workloads of the previous generation. We see this same scenario play out roughly once a decade.

 

·         Not CPU bound: Most discussion in our industry centers on the more demanding server workloads like databases but, in reality, many workloads are not pushing CPU limits and are instead storage, networking, or memory bound.  There are two major classes of workloads that don’t need or can’t fully utilize more CPU:

1.      Some workloads simply do not require the highest performing CPUs to achieve their SLAs.  You can pay more and buy a higher performing processor but it will achieve little for these applications. Some workloads just don’t require more CPU performance to meet their goals.

2.      This second class of workloads is characterized by being blocked on networking, storage, or memory. And by memory bound I don’t mean the memory is too small. In this case it isn’t the size of the memory that is the problem, but the bandwidth.  The processor looks to be fully utilized from an operating system perspective but the bulk of its cycles are spent waiting for memory. Disk and CPU bound systems are easy to detect by looking for which resource is running close to 100% utilization while the others are far lower. Memory bound is more challenging to detect but it’s super common, so it’s worth talking about.  Most server processors are super-scalar, which is to say they can retire multiple instructions each cycle. On many workloads, less than 1 instruction is retired each cycle (you can see this by monitoring instructions per cycle) because the processor is waiting for memory transfers.

 

If a workload is bound on network, storage, or memory, spending more on a faster CPU will not deliver results. The same is true for non-demanding workloads. They too are not bound on CPU so a faster part won’t help in this case either.
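The IPC check described above can be sketched as follows. The counter values here are hypothetical, illustrative numbers; on Linux they would typically come from a tool like perf:

```python
# A minimal sketch of detecting a memory-bound workload from hardware
# performance counters (instructions retired and CPU cycles).
def instructions_per_cycle(instructions_retired, cpu_cycles):
    """IPC: how many instructions the CPU actually retires per clock cycle."""
    return instructions_retired / cpu_cycles

# A modern superscalar core can retire several instructions per cycle.
# If the OS shows the CPU near 100% busy but IPC is well below 1.0, the
# processor is mostly stalled waiting on memory rather than doing work.
# These counter values are made up for illustration:
ipc = instructions_per_cycle(instructions_retired=2.1e9, cpu_cycles=6.0e9)
print(f"IPC = {ipc:.2f}")            # 0.35 -- likely memory bound
if ipc < 1.0:
    print("CPU appears busy but is likely stalled on memory")
```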

 

·         Price/performance: Device price/performance is far better than current generation server CPUs. Because there is less competition in server processors, prices are far higher and price/performance is relatively low compared to the device world. Using server parts, performance is excellent but price is not.

 

Let’s use an example again: a server CPU is hundreds of dollars, sometimes approaching $1,000, whereas the ARM processor in an iPhone comes in at just under $15. My general rule of thumb in comparing ARM processors with server CPUs is that they are capable of ¼ the processing rate at roughly 1/10th the cost. And, super important, the massive shipping volume of the ARM ecosystem feeds innovation and competition, and this performance gap shrinks with each processor generation. Each generational improvement captures more possible server workloads while further improving price/performance.
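That rule of thumb can be worked through numerically. A sketch using the illustrative figures from this post, not quotes for specific parts:

```python
# Rule of thumb from the text: an ARM part delivers ~1/4 the throughput
# of a server CPU at ~1/10 (or less) of the price.
server_cpu_price = 1000.0   # high-end server CPU, approaching $1,000
arm_price = 15.0            # phone-class ARM part, just under $15

server_perf = 1.0           # normalize server CPU throughput to 1.0
arm_perf = 0.25             # ~1/4 the processing rate

# Performance per dollar (higher is better).
server_perf_per_dollar = server_perf / server_cpu_price
arm_perf_per_dollar = arm_perf / arm_price

advantage = arm_perf_per_dollar / server_perf_per_dollar
print(f"ARM price/performance advantage: ~{advantage:.0f}x")
```

Even granting the server part 4x the raw throughput, the ARM part comes out well over an order of magnitude ahead on performance per dollar.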

 

·         Power/performance: Most modern servers run over 200W, and many are well over 500W, while microservers can weigh in at 10 to 20W. Nowhere is power/performance more important than in portable devices, so the pace of power/performance innovation in the ARM world is incredibly strong. In fact, I’ve long used mobile devices as a window into future innovations coming to the server market. The technologies you see in the current generation of cell phones have a very high probability of being used in a future server CPU generation.

 

This is not the first ARM based server processor that has been announced.  And, even more announcements are coming over the next year. In fact, that is one of the strengths of the ARM ecosystem. The R&D investments can be leveraged over huge shipping volume from many producers to bring more competition, lower costs, more choice, and a faster pace of innovation.

 

This is a good day for customers, a good day for the server ecosystem, and I’m excited to see AMD help drive the next phase in the evolution of the ARM Server market. The pace of innovation continues to accelerate industry-wide and it’s going to be an exciting rest of the decade.

 

Past notes on Microservers:

 

·         http://mvdirona.com/jrh/talksAndPapers/JamesRH_Lisa.pdf

·         http://perspectives.mvdirona.com/2012/01/02/ARMV8Architecture.aspx

·         http://perspectives.mvdirona.com/2011/03/20/IntelAtomWithECCIn2012.aspx

·         http://perspectives.mvdirona.com/2009/10/07/YouReallyDONeedECCMemory.aspx

·         http://perspectives.mvdirona.com/2009/09/16/ARMCortexA9SMPDesignAnnounced.aspx

·         http://perspectives.mvdirona.com/2011/01/16/NVIDIAProjectDenverARMPoweredServers.aspx

·         http://perspectives.mvdirona.com/2009/09/07/LinuxApacheOnARMProcessors.aspx

·         http://perspectives.mvdirona.com/2010/11/20/VeryLowCostLowPowerServers.aspx

 

James Hamilton 
e: jrh@mvdirona.com 
w: 
http://www.mvdirona.com 
b: 
http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Monday, October 29, 2012 1:33:22 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Hardware
 Saturday, October 20, 2012

When I come across interesting innovations or designs notably different from the norm, I love to dig in and learn the details. More often than not I post them here. Earlier this week, Google posted a number of pictures taken from their datacenters (Google Data Center Tech). The pictures are beautiful and of interest to just about anyone, somewhat more interesting to those working in technology, and worthy of detailed study for those working in datacenter design. My general rule with Google has always been that anything they show publicly is at least one generation old and typically more. Nonetheless, the Google team does good work, so the older designs are still worth understanding and I always have a look.

 

Some examples of older but interesting Google data center technology:

·         Efficient Data Center Summit

·         Rough Notes: Data Center Efficiency Summit

·         Rough notes: Data Center Efficiency Summit (posting #3)

·         2011 European Data Center Summit

 

The set of pictures posted last week (Google Data Center Tech) is a bit unusual in that they are showing current pictures of current facilities running their latest work.  What was published was only pictures without explanatory detail but, as the old cliché says, a picture is worth a thousand words. I found the mechanical design to be most notable so I’ll dig into that area a bit but let’s start with showing a conventional datacenter mechanical design as a foil against which to compare the Google approach.

The conventional design has numerous issues, the most obvious being that any design that is 40 years old probably could use some innovation. Notable problems with the conventional design: 1) no hot aisle/cold aisle containment, so there is air leakage and mixing of hot and cold air, 2) air is moved long distances between the Computer Room Air Handlers (CRAHs) and the servers, and air is an expensive fluid to move, and 3) it’s a closed system where hot air is recirculated after cooling rather than released outside with fresh air brought in and cooled if needed.

 

An example of an excellent design that does a modern job of addressing most of these failings is the Facebook Prineville Oregon facility:

 

I’m a big fan of the Facebook facility. In this design they eliminate the chilled water system entirely, have no chillers (expensive to buy and power), have full hot aisle isolation, use outside air with evaporative cooling, and treat the entire building as a giant, high-efficiency air duct. More detail on the Facebook design at: Open Compute Mechanical System Design.

 

Let’s have a look at the Google Council Bluffs Iowa facility:

 

 

You can see they have chosen a very large, single-room approach rather than sub-dividing into pods. As with any good, modern facility, they have hot aisle containment which just about completely eliminates leakage of air around the servers or over the racks. All chilled air passes through the servers and none of the hot air leaks back prior to passing through the heat exchanger. Air containment is a very important efficiency gain and the single largest gain after air-side economization. Air-side economization is the use of outside air rather than taking hot server exhaust and cooling it to the desired inlet temperature (see the diagram above showing the Facebook use of full-building ducting with air-side economization).

 

From the Council Bluffs picture, we see Google has taken a completely different approach. Rather than eliminate the chilled water system and use the entire building as an air duct, they have kept the piped water cooling system and instead focused on making it as efficient as possible and exploiting some of the advantages of water-based systems. This shot from the Google Hamina Finland facility shows the multi-coil heat exchanger at the top of the hot aisle containment system.

 

From inside the hot aisle, in this picture from the Mayes County data center, we can see the water is brought up from below the floor in the hot aisle using steel-braided flexible chilled water hoses. These pipes bring cool water up to the top-of-hot-aisle heat exchangers that cool the server exhaust air before it is released above the racks of servers.

 

One of the key advantages of water cooling is that water is a cheaper fluid to move than air for a given thermal capacity. In the Google design, they exploit this fact by bringing water all the way to the rack. This isn’t an industry first but it is nicely executed in the Google design. IBM iDataPlex brought water directly to the back of the rack and many high power density HPC systems have done this as well.

 

I don’t see the value of the short stacks above the heat exchangers. I would think that any gain in air acceleration through the smoke stack effect would be dwarfed by the losses of having the passive air stacks as restrictions over the heat exchangers.

 

Bringing water directly to the rack is efficient but I still somewhat prefer air-side economization systems. Any system that can reject hot air outside and bring in outside air for cooling (if needed) for delivery to the servers is tough to beat (see the diagram at the top for an example approach). I still prefer the outside-air model but, as server density climbs, we will eventually get to power densities sufficiently high that water is needed either very near the server, as Google has done, or in direct water cooling as used by IBM mainframes in the 80s (thermal conduction module). One very nice contemporary direct water cooling system is the work by Green Revolution Cooling, where they completely immerse otherwise unmodified servers in a bath of chilled oil.

 

Hats off to Google for publishing a very informative set of data center pictures. The pictures are well done and the engineering is very nice. Good work!

·         Here’s a very cool Google Street view based tour of the Google Lenoir NC Datacenter.

·         The detailed pictures released last week: Google Data Center Photo Album

 

                                                --jrh

 

James Hamilton 
e: jrh@mvdirona.com 
w: 
http://www.mvdirona.com 
b: 
http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Saturday, October 20, 2012 2:48:55 PM (Pacific Standard Time, UTC-08:00)  #    Comments [4] - Trackback
Hardware
 Thursday, October 11, 2012

The last few weeks have been busy and it has been way too long since I have blogged. I’m currently thinking through the server tax and what’s wrong with the current server hardware ecosystem but don’t have anything ready to go on that just yet. But, there are a few other things on the go.  I did a talk at Intel a couple of weeks back and last week at the First Round Capital CTO Summit. I’ve summarized what I covered below with pointers to slides. 

 

In addition, I’ll be at the Amazon in Palo Alto event this evening and will do a talk there as well. If you are interested in Amazon in general or in AWS specifically, we have a new office open in Palo Alto and you are welcome to come down this evening to learn more about AWS, have a beer or the refreshment of your choice, and talk about scalable systems. Feel free to attend if you are interested:

 

Amazon in Palo Alto

October 11, 2012 at 5:00 PM - 9:00 PM

Pampas 529 Alma Street Palo Alto, CA 94301

Link: http://events.linkedin.com/amazon-in-palo-alto-1096245

 

First Round Capital CTO Summit:

            http://mvdirona.com/jrh/talksAndPapers/JamesHamilton_FirstRoundCapital20121003.pdf

 

I started this session by arguing that cost or value models are the right way to ensure you are working on the right problem. I come across far too many engineers and even companies that are working on interesting problems but fail the “top 10 problem” test. You never want to first have to explain the problem to a prospective customer before you get a chance to explain your solution. It is way more rewarding to be working on top 10 problems where the value of what you are doing is obvious and you only need to convince someone that your solution actually works.

 

Cost models are a good way to force yourself to really understand all aspects of what the customer is doing and know precisely what savings or advantage you bring. A 25% improvement on an 80% problem is way better than a 50% solution to a 5% problem. Cost or value models are a great way of keeping yourself honest on what the real savings or improvement of your approach actually are. And it’s quantifiable data that you can verify in early tests and prove in alpha or beta deployments.
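The arithmetic behind that comparison, as a quick sketch:

```python
# A 25% improvement on an 80% cost component beats a 50% improvement
# on a 5% component by a wide margin.
def total_savings(component_share, improvement):
    """Fraction of the customer's total cost saved."""
    return component_share * improvement

big_problem = total_savings(component_share=0.80, improvement=0.25)    # 0.20
small_problem = total_savings(component_share=0.05, improvement=0.50)  # 0.025

print(f"25% of an 80% problem saves {big_problem:.1%} of total cost")
print(f"50% of a 5% problem saves {small_problem:.1%} of total cost")
# The "smaller" improvement on the big component is 8x more valuable.
```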

 

I then covered three areas of infrastructure where I see considerable innovation and showed how cost models helped drive me there:

 

·         Networking: The networking eco-system is still operating on the closed, vertically integrated, mainframe model but the ingredients are now in place to change this. See Networking, the Last Bastion of Mainframe Computing for more detail. The industry is currently going through great change. Big change is a hard transition for the established high-margin industry players but it’s a huge opportunity for startups.

·         Storage: The storage (and database) worlds are going through an unprecedented change where all high-performance random access storage is migrating from hard disk drives to flash storage. The early flash storage players have focused on performance over price so there is still considerable room for innovation. Another change happening in the industry is the explosion of cold storage (low I/O density storage that I jokingly refer to as write-only) due to falling prices, increasing compliance requirements, and an industry realization that data has great value. This explosion in cold storage is opening much innovation and many startup opportunities.  The AWS entrant in this market is Glacier, where you can store seldom-accessed data at one penny per GB per month (for more on Glacier: Glacier: Engineering for Cold Storage in the Cloud).

·         Cloud Computing: I used to argue that targeting cloud computing was a terrible idea for startups since the biggest cloud operators like Google and Amazon tend to do all custom hardware and software and purchase very little commercially. I may have been correct initially but, with the cloud market growing so incredibly fast, every telco is entering the market, each colo provider is entering, most hardware providers are entering, … the number of players is going from 10s to 1000s. And, at 1,000s, it’s a great market for a startup to target. Most of these companies are not going to build custom networking, server, and storage hardware but they do have the need to innovate with the rest of the industry.

 

Slides: First Round Capital CTO Summit

 

Intel Distinguished Speaker Series:

http://mvdirona.com/jrh/talksAndPapers/JamesHamilton_IntelDCSGg.pdf

 

In this talk I started with how fast the cloud computing market segment is growing using examples from AWS. I then talked about why cloud computing is such an incredible customer value proposition. This isn’t just a short-term fad that will pass over time. I mostly focused on how a statement I occasionally hear just can’t possibly be correct: “I can run my on-premise computing infrastructure less expensively than hosting it in the cloud”. I walk through some of the reasons why this statement can only be made with partial knowledge. There are reasons why some computing will be in the cloud and some will be hosted locally, and industry transitions absolutely do take time, but cost isn’t one of the reasons that some workloads aren’t in the cloud.

 

I then walked through 5 areas of infrastructure innovation and some of what is happening in each area:

·         Power Distribution

·         Mechanical Systems

·         Data Center Building Design

·         Networking

·         Storage

 

Slides: Intel Distinguished Speaker Series

 

I hope to see you tonight at the Amazon Palo Alto event at Pampas (http://goo.gl/maps/dBZxb). The event starts at 5pm and I’ll do a short talk at 6:35.

 

                                                                --jrh

Thursday, October 11, 2012 10:04:49 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Ramblings | Services
 Tuesday, August 21, 2012

Earlier today Amazon Web Services announced Glacier, a low-cost, cloud-hosted, cold storage solution. Cold storage is a class of storage that is discussed infrequently and yet it is by far the largest storage class of them all. Ironically, the storage we usually talk about, and the storage I’ve worked on for most of my life, is the high-IOPS storage supporting mission critical databases. These systems today are best hosted on NAND flash and I’ve been talking recently about two AWS solutions to address this storage class:

 

 

Cold storage is different. It’s the only product I’ve ever worked on where the customer requirements are single-dimensional. With most products, the solution space is complex and, even when some customers may like a competitive product better for some applications, your product still may win for others. Cold storage is pure and unidimensional.  There is really only one metric of interest: cost per capacity. It’s an undifferentiated requirement that the data be secure and very highly durable. These are essentially table stakes in that no solution is worth considering if it’s not rock solid on durability and security.  But, the only dimension of differentiation is price/GB.

 

Cold storage is unusual because the focus needs to be singular. How can we deliver the best price per capacity now and continue to reduce it over time? The focus on price over performance, price over latency, and price over bandwidth actually made the problem more interesting. With most products and services, it’s usually possible to be the best on at least some dimensions even if not on all. On cold storage, to be successful, the price per capacity target needs to be hit.  On Glacier, the entire project was focused on delivering $0.01/GB/month with high redundancy and security, and on a technology base where the price can keep coming down over time. Cold storage is elegant in its simplicity and, although the margins will be slim, the volume of cold storage data in the world is stupendous. It’s a very large market segment. All storage in all tiers backs up to the cold storage tier so it’s provably bigger than all the rest. Audit logs end up in cold storage, as do web logs, security logs, seldom-accessed compliance data, and all the other data I jokingly refer to as Write Only Storage. It turns out that most files in active storage tiers are actually never accessed (Measurement and Analysis of Large Scale Network File System Workloads). In cold storage, this trend is even more extreme: reading a storage object is the exception. But, the objects absolutely have to be there when needed. Backups aren’t needed often and compliance logs are infrequently accessed but, when they are needed, they need to be there, they absolutely have to be readable, and they must have been stored securely.
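A quick sketch of what the $0.01/GB/month price point means in practice for write-once, rarely read data (the workload sizes below are my own illustrative examples):

```python
# What the Glacier price target means for typical cold data volumes.
PRICE_PER_GB_MONTH = 0.01   # $0.01/GB/month, the target from the post

def annual_storage_cost(gigabytes):
    """Yearly cost to keep data in cold storage at $0.01/GB/month."""
    return gigabytes * PRICE_PER_GB_MONTH * 12

# Keeping 1TB of compliance logs costs about $120/year;
# a 10TB backup archive, about $1,200/year.
print(f"1TB:  ${annual_storage_cost(1_000):,.2f}/year")
print(f"10TB: ${annual_storage_cost(10_000):,.2f}/year")
```

At these rates, retaining backups and compliance data indefinitely becomes a rounding error for most budgets, which is exactly the point of a cold storage tier.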

 

But when cold objects are called for, they don't need to be there instantly. The cold storage tier customer requirement for latency ranges from minutes to hours and, in some cases, even days. Customers are willing to give up access speed to get very low cost. Potentially rapidly required database backups don't get pushed down to cold storage until they are unlikely to be accessed. But, once pushed, it's very inexpensive to store them indefinitely. Tape has long been the medium of choice for very cold workloads and tape remains an excellent choice at scale. What's unfortunate is that the scale point where tape starts to win has been going up over the years. High-scale tape robots are incredibly large and expensive. The good news is that very high-scale storage customers like the Large Hadron Collider (LHC) are very well served by tape. But, over the years, the volume economics of tape have been moving up scale and fewer and fewer customers are cost effectively served by tape.

 

In the 80s, I had a tape storage backup system for my Usenet server and other home computers. At the time, I used tape personally and any small company could afford tape. But this scale point where tape makes economic sense has been moving up.  Small companies are really better off using disk since they don’t have the scale to hit the volume economics of tape. The same has happened at mid-sized companies. Tape usage continues to grow but more and more of the market ends up on disk.

 

What’s wrong with the bulk of the market using disk for cold storage? The problem with disk storage systems is that they are optimized for performance and they are expensive to purchase, to administer, and even to power. Disk storage systems don’t currently target cold storage workloads with that necessary fanatical focus on cost per capacity. What’s broken is that customers end up not keeping data they need to keep, or paying too much to keep it, because the conventional solution to cold storage isn’t available at small and even medium scales.

 

Cold storage is a natural cloud solution in that the cloud can provide the volume economics and allow even small-scale users to have access to low-cost, off-site, multi-datacenter cold storage at a cost previously only possible at very high scale. Implementing cold storage centrally in the cloud makes excellent economic sense in that all customers can gain from the volume economics of the aggregate usage. Amazon Glacier now offers cloud storage where each object is stored redundantly in multiple, independent data centers at $0.01/GB/month. I love the direction and velocity that our industry continues to move.
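To make the headline price concrete, here is a minimal back-of-envelope sketch of what the $0.01/GB/month rate works out to at various capacities. It assumes decimal units (1 TB = 1,000 GB) and ignores request and retrieval charges, so actual bills will differ:

```python
# Back-of-envelope Glacier storage cost at the headline $0.01/GB/month rate.
# Assumes decimal units (1 TB = 1,000 GB); request/retrieval fees not included.

GLACIER_PER_GB_MONTH = 0.01  # USD per GB per month

def monthly_cost_usd(terabytes):
    """Monthly storage-only cost for a given capacity in TB."""
    return terabytes * 1000 * GLACIER_PER_GB_MONTH

print(monthly_cost_usd(10))    # 100.0  -> 10 TB of backups for $100/month
print(monthly_cost_usd(1000))  # 10000.0 -> a petabyte for $10,000/month
```

At these prices, keeping compliance data or old backups indefinitely becomes a rounding error for most businesses, which is exactly the point of the cold tier.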

 

More on Glacier:

 

·         Detail Page: http://aws.amazon.com/glacier

·         Frequently Asked Questions: http://aws.amazon.com/glacier/faqs

·         Console access: https://console.aws.amazon.com/glacier

·         Developers Guide: http://docs.amazonwebservices.com/amazonglacier/latest/dev/introduction.html

·         Getting Started Video: http://www.youtube.com/watch?v=TKz3-PoSL2U&feature=youtu.be

 

By the way, if Glacier has caught your interest and you are an engineer or engineering leader with an interest in massive scale distributed storage systems, we have big plans for Glacier and are hiring. Send your resume to glacier-jobs@amazon.com.

 

                                                                --jrh

 

James Hamilton 
e:
jrh@mvdirona.com 
w: 
http://www.mvdirona.com 
b: 
http://blog.mvdirona.com / http://perspectives.mvdirona.com

Tuesday, August 21, 2012 4:46:33 AM (Pacific Standard Time, UTC-08:00)  #    Comments [6] - Trackback
Services
 Sunday, August 12, 2012

Facebook recently released a detailed report on their energy consumption and carbon footprint: Facebook’s Carbon and Energy Impact. Facebook has always been super open with the details behind their infrastructure. For example, they invited me to tour the Prineville datacenter just prior to its opening:

·         Open Compute Project

·         Open Compute Mechanical System Design

·         Open Compute Server Design

·         Open Compute UPS & Power Supply

 

Reading through the Facebook Carbon and Energy Impact page, we see they consumed 532 million kWh of energy in 2011, of which 509 million kWh went to their datacenters. High-scale data centers have fairly small daily variation in power consumption as server load goes up and down, and there are some variations in power consumption due to external temperature conditions since hot days require more cooling than chilly days. But highly efficient datacenters tend to be affected less by weather, spending only a tiny fraction of their total power on cooling. Assuming a flat consumption model, Facebook is averaging, over the course of the year, 58.07MW of total power delivered to its data centers.
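The conversion from annual consumption to average power is simple to reproduce; a sketch using 8,766 hours in an average year (365.25 days), which matches the 58.07MW figure above:

```python
# Convert annual energy consumption (kWh) to average power draw (MW),
# assuming the flat consumption model described above.

HOURS_PER_YEAR = 8766  # 365.25 days/year * 24 hours/day

def average_power_mw(annual_kwh):
    return annual_kwh / HOURS_PER_YEAR / 1000  # kW -> MW

fb_datacenter_kwh = 509e6  # Facebook 2011 datacenter consumption
print(round(average_power_mw(fb_datacenter_kwh), 2))  # 58.07
```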

 

Facebook reports an unbelievably good 1.07 Power Usage Effectiveness (PUE) which means that for every 1 Watt delivered to their servers they lose only 0.07W in power distribution and mechanical systems. I always take publicly released PUE numbers with a grain of salt in that there has been a bit of a PUE race going on between some of the large operators. It’s just about assured that there are different interpretations and different measurement techniques being employed in computing these numbers so comparing them probably doesn’t tell us much. See PUE is Still Broken but I Still use it and PUE and Total Power Usage Efficiency for more on PUE and some of the issues in using it comparatively.

 

Using the Facebook PUE number of 1.07, we know they are delivering 54.27MW to the IT load (servers and storage). We don’t know the average server draw at Facebook but they have excellent server designs (see Open Compute Server Design) so they likely average at or below 300W per server. Since 300W is an estimate, let’s also look at 250W and 350W per server:

·         250W/server: 217,080 servers

·         300W/server: 180,900 servers

·         350W/server: 155,057 servers

 

As a comparative data point, Google’s data centers consume 260MW in aggregate (Google Details, and Defends, Its Use of Electricity). Google reports their PUE is 1.14 so we know they are delivering 228MW to their IT infrastructure (servers and storage). Google is perhaps the most focused in the industry on low power consuming servers. They invest deeply in custom designs and are willing to spend considerably more to reduce energy consumption. Estimating their average server power draw at 250W and looking at +/-25W about that average consumption rate:

·         225W/server: 1,013,333 servers

·         250W/server: 912,000 servers

·         275W/server: 829,091 servers
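These estimates reduce to simple arithmetic: total facility power divided by PUE gives the IT load, and dividing that by an assumed per-server draw gives a server count. A sketch of the method (the per-server wattages are assumptions, as noted above):

```python
# Estimate server count from total facility power, PUE, and an assumed
# average per-server draw. The per-server wattages are rough assumptions.

def estimated_servers(total_power_mw, pue, watts_per_server):
    it_load_watts = total_power_mw * 1e6 / pue  # power reaching the IT load
    return int(it_load_watts / watts_per_server)

# Facebook: 58.07MW total at PUE 1.07, assuming 300W/server
print(estimated_servers(58.07, 1.07, 300))  # ~180,900 servers

# Google: 260MW total at PUE 1.14, assuming 250W/server
print(estimated_servers(260, 1.14, 250))    # ~912,000 servers
```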

 

I find the Google and Facebook server counts interesting for two reasons. First, Google was estimated to have 1 million servers more than 5 years ago. The number may have been high at the time but it’s very clear that they have been super focused on workload efficiency and infrastructure utilization. To grow search and advertising as much as they have without growing the server count at anywhere close to the same rate (if at all) is impressive. Continuing to add computationally expensive search features and new products and yet still being able to hold the server count near flat is even more impressive.

 

The second notable observation from this data is that the Facebook server count is growing fast. Back in October of 2009, they had 30,000 servers. In June of 2010 the count climbed to 60,000 servers. Today they are over 150k.

 

                                                --jrh

 

James Hamilton 
e: jrh@mvdirona.com 
w: 
http://www.mvdirona.com 
b: 
http://blog.mvdirona.com / http://perspectives.mvdirona.com

Sunday, August 12, 2012 6:14:18 PM (Pacific Standard Time, UTC-08:00)  #    Comments [3] - Trackback
Ramblings
 Wednesday, August 01, 2012

In I/O Performance (no longer) Sucks in the Cloud, I said

 

Many workloads have high I/O rate data stores at the core. The success of the entire application is dependent upon a few servers running MySQL, Oracle, SQL Server, MongoDB, Cassandra, or some other central database.

 

Last week a new Amazon Elastic Compute Cloud (EC2) instance type based upon SSDs was announced that delivers 120k reads per second and 10k to 85k writes per second. This instance type with direct attached SSDs is an incredible I/O machine ideal for database workloads, but most database workloads run on virtual storage today. The administrative and operational advantages of virtual storage are many. You can allocate more storage with a call of an API. Blocks are redundantly stored on multiple servers. It’s easy to checkpoint to S3. Server failures don’t impact storage availability.

 

The AWS virtual block storage solution is the Elastic Block Store (EBS).  Earlier today two key features were released to support high performance databases and other random I/O intensive workloads on EBS. The key observation is that these random I/O-intensive workloads need to have IOPS available whenever they are needed. When a database runs slowly, the entire application runs poorly. Best effort is not enough and competing for resources with other workloads doesn’t work. When high I/O rates are needed, they are needed immediately and must be there reliably.

 

Perhaps the best way to understand the two new features is to look at how demanding database workloads are often hosted on-premise. Typically, large servers are used so the memory and CPU resources are available when needed. Because a high performance storage system is needed, and because it is important to be able to scale the storage capacity and I/O rates during the life of the application, direct attached disk isn’t the common choice. Most enterprise customers put these workloads on Storage Area Network devices which are typically connected to the server by a Fibre Channel network (a private communication channel used only for storage).

 

The aim of the announcement today is to apply some of what has been learned from 30+ years of on-premise storage evolution. Customers want virtualized storage but, at the same time, they need the ability to reserve resources for demanding workloads. In this announcement, we take some of the best aspects of what has emerged in on-premise storage solutions and give EC2 customers the ability to scale high-performance storage as needed, reserve and scale the available I/Os per second (IOPS) as needed, and reserve dedicated network bandwidth to the storage device. The latter is perhaps the most important, and the combination allows workloads to reserve both the IOPS rates at the storage as well as the network channel to get to the storage and be assured it will be there when they need it.

 

The storage, IOPS, and network capacity is there even if you haven’t used it recently. It’s there even if your neighbors are also busy using their respective reservations. It’s even there if you are running full networking traffic load to the EC2 instance. Just as when an on-premise customer allocates a SAN volume with a Fibre Channel attach that doesn’t compete with other network traffic, allocated resources stay reserved and they stay available. Let’s look at the two features that deliver a low-jitter, virtual SAN solution in AWS.


Provisioned IOPS is a feature of Amazon Elastic Block Store. EBS has always allowed customers to allocate storage volumes of the size they need and to attach these virtual volumes to their EC2 instances. Provisioned IOPS allows customers to declare the I/O rate they need the volumes to be able to deliver, up to 1,000 I/Os per second (IOPS) per volume. Volumes can be striped together to achieve reliable, low-latency virtual volumes of 20,000 IOPS or more. The ability to reliably configure and reserve over 10,000 IOPS means the vast majority of database workloads can be supported. And, in the near future, this limit will be raised allowing increasingly demanding workloads to be hosted on EC2 using EBS.
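The striping arithmetic described above is straightforward; a minimal sketch, assuming the 1,000 IOPS per-volume limit at the time of the announcement:

```python
import math

# How many Provisioned IOPS volumes must be striped together to reach a
# target aggregate IOPS rate, given the 1,000 IOPS per-volume limit.

MAX_IOPS_PER_VOLUME = 1000

def volumes_to_stripe(target_iops):
    return math.ceil(target_iops / MAX_IOPS_PER_VOLUME)

print(volumes_to_stripe(20000))  # 20 volumes for a 20,000 IOPS virtual volume
print(volumes_to_stripe(4500))   # 5 volumes
```

As the per-volume limit is raised, the same target IOPS will require proportionally fewer striped volumes.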

 

EBS-Optimized EC2 instances are a feature of EC2 that is the virtual equivalent of installing a dedicated network channel to storage. Depending upon the instance type, 500 Mbps up to a full 1Gbps is allocated and dedicated for storage use only. This storage communications channel is in addition to the network connection to the instance. Storage and network traffic no longer compete and, on large instance types, you can drive full 1Gbps line rate network traffic while, at the same time, also consuming 1Gbps to storage. Essentially, EBS-Optimized instances have a dedicated storage channel that doesn’t compete with instance network traffic.
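It's worth noting how the dedicated channel bounds achievable I/O rates. A rough sketch, assuming a 4KB average I/O size purely for illustration (actual EBS I/O accounting and protocol overhead will change these numbers):

```python
# Upper bound on IOPS a dedicated storage channel can carry, assuming a
# 4KB average I/O size (an illustrative assumption, not an EBS spec).

def max_iops(channel_mbps, io_size_bytes=4096):
    bytes_per_sec = channel_mbps * 1e6 / 8
    return int(bytes_per_sec / io_size_bytes)

print(max_iops(500))   # ~15,258 IOPS on a 500 Mbps EBS-Optimized channel
print(max_iops(1000))  # ~30,517 IOPS on a 1 Gbps channel
```

At a 4KB I/O size, even the 500 Mbps channel comfortably covers a striped 10,000 IOPS volume set; larger I/O sizes consume proportionally more of the channel per operation.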

 

From the EBS detail page:

 

EBS standard volumes offer cost effective storage for applications with light or bursty I/O requirements.  Standard volumes deliver approximately 100 IOPS on average with a best effort ability to burst to hundreds of IOPS.  Standard volumes are also well suited for use as boot volumes, where the burst capability provides fast instance start-up times.

 

Provisioned IOPS volumes are designed to deliver predictable, high performance for I/O intensive workloads such as databases.  With Provisioned IOPS, you specify an IOPS rate when creating a volume, and then Amazon EBS provisions that rate for the lifetime of the volume.  Amazon EBS currently supports up to 1,000 IOPS per Provisioned IOPS volume, with higher limits coming soon.  You can stripe multiple volumes together to deliver thousands of IOPS per Amazon EC2 instance to your application. 

To enable your Amazon EC2 instances to fully utilize the IOPS provisioned on an EBS volume, you can launch selected Amazon EC2 instance types as “EBS-Optimized” instances.  EBS-optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options between 500 Mbps and 1000 Mbps depending on the instance type used.  When attached to EBS-Optimized instances, Provisioned IOPS volumes are designed to deliver within 10% of the provisioned IOPS performance 99.9% of the time.  See Amazon EC2 Instance Types to find out more about instance types that can be launched as EBS-Optimized instances. 

 

Providing scalable block storage at scale, in 8 regions around the world, is one of the most interesting combinations of distributed systems and storage problems we face. The problem has been well solved in high-cost on-premise solutions. We now get to apply what has been learned over the last 30+ years to solve the problem at cloud scale, at low cost, and for hundreds of thousands of concurrent customers. An incredible number of EC2 customers depend upon EBS for their virtual storage needs, the number is growing daily, and we are really only just getting started. If you want to be part of the engineering effort to make Elastic Block Store the virtual storage solution for the cloud, send us a note at ebs-jobs@amazon.com.

 

With the announcement today, EC2 customers now have access to two very high performance storage solutions. The first is the EC2 High I/O Instance type announced last week, which delivers direct attached, SSD-powered 100k IOPS for $3.10/hour. In today’s announcement, this direct attached storage solution is joined by a high-performance virtual storage solution. This new type of EBS storage allows the creation of striped storage volumes that can reliably deliver 10,000 to 20,000 IOPS across a dedicated virtual storage network.

 

Amazon EC2 customers now have both high-performance, direct attached storage and high-performance virtual storage with a dedicated virtual storage connection.

 

                                                --jrh

 

James Hamilton 
e: jrh@mvdirona.com 
w: 
http://www.mvdirona.com 
b: 
http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Wednesday, August 01, 2012 4:57:52 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Services
 Friday, July 20, 2012

Many workloads have high I/O rate data stores at the core. The success of the entire application is dependent upon a few servers running MySQL, Oracle, SQL Server, MongoDB, Cassandra, or some other central database.

 

The best design pattern for any highly reliable and scalable application, whether on-premise or cloud hosted, is to shard the database. You can’t be dependent upon a single server being able to scale sufficiently to hold the entire workload. Theoretically, that’s the solution and all workloads should run well on a sufficiently large fleet even if that fleet has low individual server I/O performance. Unfortunately, few workloads scale as badly as database workloads. Even scalable systems such as MongoDB or Cassandra need a per-server I/O rate that meets some minimum bar to host the workload cost effectively with stable I/O performance.
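The core of the sharding pattern is a deterministic mapping from a record key to one of N database servers. A minimal hash-based sketch (the names here are illustrative, not any specific product's API):

```python
import hashlib

# Minimal hash-based shard routing: deterministically map a record key to
# one of num_shards database servers. hashlib.md5 is used (rather than the
# built-in hash()) so the mapping is stable across processes and restarts.

def shard_for(key: str, num_shards: int) -> int:
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# The same key always routes to the same shard, spreading load evenly.
print(shard_for("customer-42", 8))
```

A simple modulo scheme like this requires moving most keys when the shard count changes; consistent hashing is the usual refinement when shards must be added or removed without a bulk migration.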

 

The easy solution is to depend upon a hosted service like DynamoDB that can transparently scale to order 10^6 transactions per second and deliver low jitter performance. For many workloads, that is the final answer. Take the complexity of configuring and administering a scalable database and give it to a team that focuses on nothing else 24x7 and does it well.

 

Unfortunately,  in the database world, One Size Does Not Fit All. DynamoDB is a great solution for some workloads but many workloads are written to different stores or depend upon features not offered in DynamoDB. What if you have an application written to run on sharded Oracle (or MySQL) servers and each database requires 10s of thousands of I/Os per second? For years, this has been the prototypical “difficult to host in the cloud” workload. All servers in the application are perfect for the cloud but the overall application won’t run unless the central database server can support the workload.

 

Consequently, these workloads have been difficult to host on the major cloud services. They are difficult to scale out to avoid needing very high single node I/O performance and they won’t yield a good customer experience unless the database has the aggregate IOPS needed. 

 

Yesterday an ideal EC2 instance type was announced. It’s the screamer needed by these workloads. The new EC2 High I/O Instance type is a born database machine. Whether you are running Relational or NoSQL, if the workload is I/O intense and difficult to cost effectively scale-out without bound, this instance type is the solution. It will deliver a booming 120,000 4k reads per second and between 10,000 and 85,000 4k random writes per second. The new instance type:

·         60.5 GB of memory

·         35 EC2 Compute Units (8 virtual cores with 4.4 EC2 Compute Units each)

·         2 SSD-based volumes each with 1024 GB of instance storage

·         64-bit platform

·         I/O Performance: 10 Gigabit Ethernet

·         API name: hi1.4xlarge

 

If you have a difficult to host I/O intensive workload, EC2 has the answer for you. 120,000 read IOPS and 10,000 to 85,000 write IOPS for $3.10/hour Linux on demand or $3.58/hour Windows on demand. Because these I/O workloads are seldom scaled up and down in real time, the Heavy Utilization Reserved instance is a good choice where the server capacity can be reserved for $10,960 for a three year term and usage is $0.482/hour.
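The on-demand versus Heavy Utilization Reserved trade-off is easy to compute from the Linux prices quoted above; a sketch over a three-year term:

```python
# Compare 3-year cost of the hi1.4xlarge Linux instance on demand vs. a
# Heavy Utilization Reserved instance, using the prices quoted above.

HOURS_3YR = 3 * 8766          # 365.25-day years
ON_DEMAND = 3.10              # $/hour, Linux on demand
RESERVED_UPFRONT = 10960.0    # $ up front for the three-year term
RESERVED_HOURLY = 0.482       # $/hour usage on the reserved instance

on_demand_total = ON_DEMAND * HOURS_3YR
reserved_total = RESERVED_UPFRONT + RESERVED_HOURLY * HOURS_3YR

# Hours of use over the term at which the reserved instance starts winning
break_even_hours = RESERVED_UPFRONT / (ON_DEMAND - RESERVED_HOURLY)

print(round(on_demand_total))   # ~81,524 for 3 years of 24x7 on demand
print(round(reserved_total))    # ~23,636 reserved, roughly a 71% saving
print(round(break_even_hours))  # ~4,186 hours, about 6 months of 24x7 use
```

For an always-on database host, the reservation pays for itself well inside the first year of the term.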

 

·         Amazon EC2 detail page: http://aws.amazon.com/ec2/

·         Amazon EC2 pricing page: http://aws.amazon.com/ec2/#pricing

·         Amazon EC2 Instance Types: http://aws.amazon.com/ec2/instance-types/

·         Amazon Web Services Blog: http://aws.typepad.com/aws/2012/07/new-high-io-ec2-instance-type-hi14xlarge.html

·         Werner Vogels: http://www.allthingsdistributed.com/2012/07/high-performace-io-instance-amazon-ec2.html

 

Adrian Cockcroft of Netflix wrote an excellent blog on this instance type where he gave benchmarking results from Netflix: Benchmarking High Performance I/O with SSD for Cassandra on AWS.

 

You can now have 100k IOPS for $3.10/hour.

 

                                                                --jrh

 

James Hamilton 
e: jrh@mvdirona.com 
w: 
http://www.mvdirona.com 
b: 
http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Friday, July 20, 2012 5:57:25 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Services
 Friday, July 13, 2012

Why are there so many data centers in New York, Hong Kong, and Tokyo? These urban centers have some of the most expensive real estate in the world. The  cost of labor is high. The tax environment is unfavorable. Power costs are high.  Construction is difficult to permit and expensive. Urban datacenters are incredibly expensive facilities and yet a huge percentage of the world’s computing is done in expensive urban centers.

 

One of my favorite examples is the 111 8th Ave data center in New York. Google bought this datacenter for $1.9B. They already have facilities on the Columbia River where the power and land are cheap. Why go to New York when neither is true? Google is innovating in cooling technologies in their Belgium facility where they are using waste water cooling. Why go to New York, where the facility is conventional, the power source predominantly coal-sourced, and the opportunity for energy innovation restricted by legacy design and the lack of real estate available in the area around the facility? It’s pretty clear that 111 8th Ave isn’t going to be wind farm powered. A solar array could likely be placed on the roof but that wouldn’t have the capacity to run the interior lights in this large facility (see I Love Solar but … for more on the space challenges of solar power at data center power densities). There isn’t space to do anything relevant along these dimensions.

 

Google has some of the most efficient datacenters in the world, running on some of the cleanest power sources in the world, and custom engineered from the ground up to meet their needs. Why would they buy an old facility, in a very expensive metropolitan area, with a legacy design? Are they nuts?  Of course not, Google is in New York because many millions of Google customers are in New York or nearby.

 

Companies site datacenters near the customers of those data centers. Why not serve the planet from Iceland where the power is both cheap and clean? When your latency budget to serve customers is 200 msec, you can’t give up ¾ of that time budget on speed of light delays traveling long distances. Just crossing the continent from California to New York is a 74 msec round trip time (RTT). New York to London is 70 msec RTT. The speed of light is unbending. Actually, it’s even worse than the speed of light in that the speed of light in a fiber is about 2/3 of the speed of light in a vacuum (see Communicating Beyond the Speed of Light).
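The RTT floor imposed by fiber is easy to estimate; a sketch assuming light travels at roughly 2/3 of c in fiber and an approximate New York–London great-circle distance (real routes add path length and equipment delay, which is why observed RTTs run higher):

```python
# Minimum possible round-trip time over fiber between two points,
# ignoring routing detours and equipment latency.

C_VACUUM_KM_S = 300_000
C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3  # ~200,000 km/s in fiber

def min_rtt_ms(distance_km):
    return 2 * distance_km / C_FIBER_KM_S * 1000

# ~5,570 km great-circle New York to London (approximate figure)
print(round(min_rtt_ms(5570), 1))  # ~55.7 ms; observed RTT is ~70 ms
```

Even the theoretical floor consumes more than a quarter of a 200 msec latency budget, before any routing or processing overhead.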

 

Because of the cruel realities of the speed of light, companies must site data centers where their customers are. That’s why companies selling world-wide often need to have datacenters all over the world. That’s why the Akamai content distribution network has over 1,200 points of presence world-wide. To serve customers competitively, you need to be near those customers. The reason datacenters are located in Tokyo, New York, London, Singapore, and other expensive metropolitan locations is that they need to be near customers or near data that is in those locations. It costs considerably more to maintain datacenters all over the world but there is little alternative.

 

Many articles recently have been quoting the Greenpeace open letter asking Ballmer, Bezos, and Cook to “go to Iceland”. See, for example, Letter to Ballmer, Bezos, and Cook: Go to Iceland. Having come across many of these articles recently, it seemed worth stopping and reflecting on why this hasn’t already happened. It’s not like companies just love paying more or using less environmentally friendly power sources for their data centers.

 

Google is in New York because it has millions of customers in New York. If it were physically possible to serve these customers from an already built, hyper-efficient datacenter like Google Dalles, they certainly would. But that facility is a 70 msec round trip away from New York. What about Iceland? Roughly the same distance. It simply doesn’t work competitively. Companies build near their users because the physics of the speed of light is unbending and uncaring.

 

So, what can we do? It turns out that many workloads are not latency sensitive. The right strategy is to house latency sensitive workloads near customers or the data needed at low latency and house latency insensitive workloads optimizing on other dimensions. This is exactly what Google does but, to do that, you need to have many datacenters all over the world so the appropriate facility can be selected on a workload-by-workload basis. This isn’t a practical approach for many smaller companies with only 1 or 2 datacenters to choose from.

 

This is another area where cloud computing can help. Cloud computing can allow mid-sized and even small companies to have many different datacenters optimized for different goals all over the world. Using Amazon Web Services, a company can house workloads near customers in Singapore, Tokyo, Brazil, and Ireland to be close to their international customers. Being close to these customers makes a big difference in the overall quality of customer experience (see The Cost of Latency for more detail on how much latency really matters). As well as allowing a company to cost effectively have an international presence, cloud computing also allows companies to make careful decisions on where they locate workloads in North America. Again using AWS as the example, customers can place workloads in Virginia to serve the east coast or use Northern California to serve the population-dense California region. If the workloads are not latency sensitive or are serving customers near the Pacific Northwest, they can be housed in the AWS Oregon region where the workload can be hosted coal free and less expensively than in Northern California.

 

The reality is that physics is uncaring and many workloads do need to be close to users. Cloud computing allows all companies to have access to datacenters all over the world so they can target individual workloads to the facilities that most closely meet their goals and the needs of their customers. Some computing will have to stay in New York even though it is mostly coal powered, expensive, and difficult to expand. But some workloads will run very economically in the AWS West (Oregon) region where there is no coal power, expansion is cheap, and power is inexpensive.

 

Workload placement decisions are more complex than “move to Iceland.”

 

                                                --jrh

 

James Hamilton 
e: jrh@mvdirona.com 
w: 
http://www.mvdirona.com 
b: 
http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Friday, July 13, 2012 7:03:27 AM (Pacific Standard Time, UTC-08:00)  #    Comments [12] - Trackback
Services
 Monday, July 09, 2012

Last night, Tom Klienpeter sent me The Official Report of the Fukushima Nuclear Accident Independent Investigation Commission Executive Summary. They must have hardy executives in Japan in that the executive summary runs 86 pages in length. Overall, it’s an interesting document but I only managed to read into the first page before starting to feel disappointed. What I was hoping for was a deep dive into why the reactors failed, the root causes of the failures, and what can be done to rectify them.

 

Because of the nature of my job, I’ve spent considerable time investigating hardware and software system failures, and what I find most difficult and really time consuming is getting to the real details. It’s easy to say there was a tsunami and it damaged the reactor complex and loss of power caused radiation release. But why did loss of power cause radiation release? Why didn’t the backup power systems work? Why does the design depend upon the successful operation of backup power systems? Digging to the root cause takes time, requires that all assumptions be challenged, and invariably leads to many issues that need to be addressed. Good post mortems are detailed, get to the root cause, and it’s rare that a detailed investigation of any complex system doesn’t yield a long, detailed list of design and operational changes. The Rogers Commission on the Space Shuttle Challenger failure is perhaps the best example of digging deeply, finding root causes both technical and operational, and making detailed recommendations.

 

On the second page of this report, the committee members are enumerated. The committee included a seismologist, two medical doctors, a chemist, a journalist, two lawyers, a social system designer, and one politician, but no nuclear scientists, no reactor designers, and no reactor operators. The earthquake and subsequent tsunami were clearly the seed for the event but, since we can’t prevent these, I would argue that they should only play a contextual role in the post mortem. What we need to understand is exactly why both the reactor and the nuclear material storage designs were not stable in the presence of cooling system failure. It’s weird that there were no experts in the subject area where the most dangerous technical problems were encountered. Basically, we can’t stop earthquakes and tsunamis, so we need to ensure that systems remain safe in the presence of them.

 

Obviously the investigative team is well qualified to deal with the follow-on events: assessing radiation exposure risk, how the evacuation was carried out, and regulatory effectiveness. And it is clear these factors are all important. But still, it feels like the core problem is that cooling system flow was lost and both the reactors and the nuclear material storage ponds overheated. Using materials that, when overheated, release explosive hydrogen gas is a particularly important area of investigation.

 

Personally, were it my investigation, the largest part of my interest would be focused on achieving designs stable in the presence of failure. Failing that, getting really good at evacuation seems like a good idea but still less important than ensuring these reactors and others in the country fail into a safe state.

 

The report reads like a political document. It’s heavy on blame, light on root cause and the technical details of the failure, and the recommended solution depends upon more regulatory oversight. The document focuses on more oversight by the Japanese Diet (a political body) and regulatory agencies but doesn’t go after the core issues that led to the nuclear release. From my perspective, the first key issue is 1) scramming the reactor has to stop the reaction 100%, and the passive cooling has to be sufficient to ensure the system can cool from full operating load without external power, operational oversight, or other input beyond dropping the rods. Good SCRAM systems automatically deploy and stop the nuclear reaction. This is common. What is uncommon is ensuring the system can successfully cool from a full-load operational state without external input of power, cooling water, or administrative input.

 

The second key point that this nuclear release drove home for me is 2) all nuclear material storage areas must be seismically stable, above flood water height, and able to maintain integrity through natural disasters, and they must be able to stay stable and safe without active input or supervision for long periods of time. They can’t depend upon pumped water cooling and have to be 100% passive and stable for long periods without tending.

 

My third recommendation is arguably less important than my first two but applies to all systems: operators can’t figure out what is happening or take appropriate action without detailed visibility into the state of the system. The monitoring system needs to be independent (power, communications, sensors, …), detailed, and able to operate correctly with large parts of the system destroyed or inoperative.

 

My fourth recommendation is absolutely vital and I would never trust any critical system without this: test failure modes frequently. Shut down all power to the entire facility at full operational load and establish that temperatures fall rather than rise and no containment systems are negatively impacted. Shut off the monitoring system and ensure that the system continues to operate safely. Never trust any system in any mode that hasn’t been tested.

 

The recommendations from the Official Report of the Fukushima Nuclear Accident Independent Investigation Commission Executive Summary follow:

Recommendation 1:

Monitoring of the nuclear regulatory body by the National Diet

A permanent committee to deal with issues regarding nuclear power must be established in the National Diet in order to supervise the regulators to secure the safety of the public. Its responsibilities should be:

1.       To conduct regular investigations and explanatory hearings of regulatory agencies, academics and stakeholders.

2.       To establish an advisory body, including independent experts with a global perspective, to keep the committee’s knowledge updated in its dealings with regulators.

3.       To continue investigations on other relevant issues.

4.       To make regular reports on their activities and the implementation of their recommendations.

Recommendation 2:

Reform the crisis management system

A fundamental reexamination of the crisis management system must be made. The boundaries dividing the responsibilities of the national and local governments and the operators must be made clear. This includes:

1.       A reexamination of the crisis management structure of the government. A structure must be established with a consolidated chain of command and the power to deal with emergency situations.

2.       National and local governments must bear responsibility for the response to off-site radiation release. They must act with public health and safety as the priority.

3.       The operator must assume responsibility for on-site accident response, including the halting of operations, and reactor cooling and containment.

Recommendation 3:

Government responsibility for public health and welfare

Regarding the responsibility to protect public health, the following must be implemented as soon as possible:

1.       A system must be established to deal with long-term public health effects, including stress-related illness. Medical diagnosis and treatment should be covered by state funding. Information should be disclosed with public health and safety as the priority, instead of government convenience. This information must be comprehensive, for use by individual residents to make informed decisions.

2.       Continued monitoring of hotspots and the spread of radioactive contamination must be undertaken to protect communities and the public. Measures to prevent any potential spread should also be implemented.

3.       The government must establish a detailed and transparent program of decontamination and relocation, as well as provide information so that all residents will be knowledgeable about their compensation options. 

Recommendation 4:

Monitoring the operators

TEPCO must undergo fundamental corporate changes, including strengthening its governance, working towards building an organizational culture which prioritizes safety, changing its stance on information disclosure, and establishing a system which prioritizes the site. In order to prevent the Federation of Electric Power Companies (FEPC) from being used as a route for negotiating with regulatory agencies, new relationships among the electric power companies must also be established—built on safety issues, mutual supervision and transparency.

1.       The government must set rules and disclose information regarding its relationship with the operators.

2.       Operators must construct a cross-monitoring system to maintain safety standards at the highest global levels.

3.       TEPCO must undergo dramatic corporate reform, including governance and risk management and information disclosure—with safety as the sole priority.

4.       All operators must accept an agency appointed by the National Diet as a monitoring authority of all aspects of their operations, including risk management, governance and safety standards, with rights to on-site investigations.

Recommendation 5:

Criteria for the new regulatory body

The new regulatory organization must adhere to the following conditions. It must be:

1.       Independent: The chain of command, responsible authority and work processes must be: (i) Independent from organizations promoted by the government (ii) Independent from the operators (iii) Independent from politics.

2.       Transparent: (i) The decision-making process should exclude the involvement of electric power operator stakeholders. (ii) Disclosure of the decision-making process to the National Diet is a must. (iii) The committee must keep minutes of all other negotiations and meetings with promotional organizations, operators and other political organizations and disclose them to the public. (iv) The National Diet shall make the final selection of the commissioners after receiving third-party advice.

3.       Professional: (i) The personnel must meet global standards. Exchange programs with overseas regulatory bodies must be promoted, and interaction and exchange of human resources must be increased. (ii) An advisory organization including knowledgeable personnel must be established. (iii) The no-return rule should be applied without exception.

4.       Consolidated: The functions of the organizations, especially emergency communications, decision-making and control, should be consolidated.

5.       Proactive: The organizations should keep up with the latest knowledge and technology, and undergo continuous reform activities under the supervision of the Diet.

Recommendation 6:

Reforming laws related to nuclear energy

Laws concerning nuclear issues must be thoroughly reformed.

1.       Existing laws should be consolidated and rewritten in order to meet global standards of safety, public health and welfare.

2.       The roles for operators and all government agencies involved in emergency response activities must be clearly defined.

3.       Regular monitoring and updates must be implemented, in order to maintain the highest standards and the highest technological levels of the international nuclear community.

4.       New rules must be created that oversee the backfit operations of old reactors, and set criteria to determine whether reactors should be decommissioned.

Recommendation 7:

Develop a system of independent investigation commissions

A system for appointing independent investigation committees, including experts largely from the private sector, must be developed to deal with unresolved issues, including, but not limited to, the decommissioning process of reactors, dealing with spent fuel issues, limiting accident effects and decontamination.

 

Many of the report's recommendations are useful, but they fall short of addressing the root cause. Here's what I would like to see:

1.       Scramming the reactor has to 100% stop the reaction and the passive cooling has to be sufficient to ensure the system can cool from full operating load without external power, operational oversight, or other input beyond dropping the rods.

2.       All nuclear material storage areas must be seismically stable, above flood water height, maintain integrity through natural disasters, and must be able to stay stable and safe without active input or supervision for long periods of time.

3.       The monitoring system needs to be independent, detailed, and able to operate correctly with large parts of the system destroyed or inoperative.

4.       Test all failure modes frequently. Assume that all systems that haven’t been tested will not work. Surprisingly frequently, they don’t.
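
In software systems, the fourth point is exactly what fault-injection testing does. As a minimal sketch (the model and names are hypothetical, invented here for illustration), the test cuts every active input at full load and asserts that the safety property holds, rather than trusting that an untested passive path will work:

```python
# Hedged fault-injection sketch (toy model, not a real control system):
# verify the fail-safe property by injecting the failure, not by inspection.

class ToyPlant:
    """Temperature must fall once external power is lost (passive safety)."""
    def __init__(self, temp=300.0):
        self.temp = temp
        self.powered = True

    def cut_external_power(self):
        # Failure injection: remove every active input at once.
        self.powered = False

    def step(self):
        if self.powered:
            self.temp += 1.0   # active operation adds heat
        else:
            self.temp -= 5.0   # passive cooling must dominate unpowered

def test_fails_safe():
    plant = ToyPlant()
    start = plant.temp
    plant.cut_external_power()
    for _ in range(10):
        plant.step()
    # Safety property: temperature falls rather than rises after the fault.
    assert plant.temp < start

test_fails_safe()
print("fail-safe property holds")
```

The point of the exercise is the assertion, not the model: a test like this run frequently, against the real system where possible, is the only evidence that an untested failure mode actually behaves as designed.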

 

 

The Official Report of the Fukushima Nuclear Accident Independent Investigation Commission Executive Summary can be found at: http://naiic.go.jp/wp-content/uploads/2012/07/NAIIC_report_lo_res2.pdf.

 

Since our focus here is primarily on building reliable hardware and software systems, this best practices document may be of interest: Designing & Deploying Internet-Scale Services: http://mvdirona.com/jrh/talksAndPapers/JamesRH_Lisa.pdf

 

                                                --jrh

 

James Hamilton 
e: jrh@mvdirona.com 
w: 
http://www.mvdirona.com 
b: 
http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Monday, July 09, 2012 6:39:00 AM (Pacific Standard Time, UTC-08:00)
Hardware | Process | Services

Disclaimer: The opinions expressed here are my own and do not necessarily represent those of current or past employers.
