Sunday, September 21, 2008

Ken Church, Albert Greenberg, and I just finished On Delivering Embarrassingly Distributed Cloud Services, which has been accepted for presentation at ACM HotNets 2008 in Calgary, Alberta, October 6th and 7th.  The paper followed from the discussion and debate around a blog entry that Ken and I wrote some time back, Diseconomies of scale, where we argue that the industry trend towards mega-datacenters needs to be questioned and, in many cases, is simply not cost effective.


There are times when mega-datacenters do make sense. Very large data analysis jobs and large, multi-server workloads with considerable inter-node communications traffic run best against large central data stores. MapReduce jobs are the classic example of this sort of workload.  However, we argue that other types of workloads actually run better in distributed micro-datacenters.  Highly partitionable applications with light inter-partition traffic can be better hosted in distributed micro-datacenters. Highly interactive applications such as Google Docs need to be close to their users, since network round-trip latencies can make highly interactive applications frustrating to use. We collectively refer to applications that can be partitioned effectively and run close to the edge (the users) as embarrassingly distributed. Essentially, these are the easy applications when it comes to running them close to the edge.


In the paper, we argue that the class of applications that are embarrassingly distributed, and therefore run well on distributed micro-datacenters, is large, and we go on to show that distributed micro-datacenters can offer considerable advantages over mega-centers.  Essentially, the point is that many applications can run over distributed micro-datacenters and, if they can, they should.


Micro-datacenters are made possible by containerization, which I wrote about in a 2007 Conference on Innovative Data Systems Research (CIDR) paper: Architecture for Modular Data Centers.  When that paper was published, Rackable Systems had just shipped their first containerized design and Sun Microsystems had announced Black Box but wasn't yet shipping it. Two years later, containerized designs are offered by most of the major datacenter server vendors:

·         IBM Scalable modular data center

·         Rackable ICE Cube™ Modular Data Center

·         Sun Modular Datacenter S20 (project Blackbox)

·         Dell Insight

·         Verari Forest Container Solution


Microsoft recently announced its first containerized data center, in Chicago: First Containerized Data Center Announcement. The Chicago facility is a mega-center, but it does show that containerized designs are now ready for prime time.


Mega-datacenters remain useful and aren't going away any time soon but, in On Delivering Embarrassingly Distributed Cloud Services, we argue that distributed micro-datacenters are appropriate for many workloads and can reduce costs, improve quality of service, and increase the speed of deployment.




James Hamilton, Data Center Futures
Bldg 99/2428, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859


Sunday, September 21, 2008 6:22:13 AM (Pacific Standard Time, UTC-08:00)
 Tuesday, September 16, 2008

Earlier today, I gave a talk at LADIS 2008 (Large Scale Distributed Systems & Middleware) in Yorktown Heights, New York. The program for LADIS is at:  The slides presented are posted to:


The quick summary of the talk: hosted services will be a large part of enterprise information processing and consumer services, with economies of scale of 5 to 10x over small-scale deployments.  Enterprise deployments are very different from high-scale services: the former are people-dominated from a cost perspective, whereas people costs are not among the top four factors in the services world.


The talk looks at limiting factors in the economic application of resources to services, one of which is power.  Looking at power in more detail, we go through where power goes in a modern data center, inventorying power dissipation in power distribution, cooling, and server load in a high-scale data center.


It then steps through a sampling of high-scale services implementation techniques and possible optimizations, including modular data centers, multi-data center failover replacing single data center redundancy, NAND flash bridging the memory-to-disk chasm, graceful degradation mode, admission control, power yield management, and resource consumption shaping.






Tuesday, September 16, 2008 10:06:12 AM (Pacific Standard Time, UTC-08:00)
 Thursday, September 11, 2008

This note describes a conversation I've had multiple times with data center owners and concludes that blade servers frequently don't help and sometimes hurt, that easy data center power utilization improvements are available without paying the blade server premium, and that enterprise data center owners have a tendency to buy gadgets from the big suppliers rather than think through overall data center design. We'll dig into each.


In talking to data center owners, I've learned a lot, but every once in a while I come across a point that just doesn't make sense.  My favorite example is server density.  I've talked to many DC owners (and I'll bet I'll hear from many after this note) who have just purchased blade servers.  The direction of the conversation is always the same: "We just went with blades and now have 25+kW racks." I ask if their data center has open floor space, and it almost always does. We'll come back to that.  Hmmm, I'm thinking: they now have much higher power density racks at higher purchase cost in order to get more computing per square foot, but the data center already has open floor space (since almost all well-designed centers are power and cooling bound rather than floor space bound).  Why?


Earlier, we observed that most well-designed data centers are power and cooling bound rather than space bound.  Why is that anyway?  There is actually very little choice.  Here's the math: power and cooling make up roughly 70% of the cost of the data center while the shell (the building) is just over 10%. As a designer, you need to design a data center to last 15 years. Who has a clue about the needed power density (usually expressed in W/sq ft) 15 years from today? It depends upon the server technology, the storage ratio, and many other factors.  The only thing we know for sure is that we don't know, and almost any choice will inevitably be wrong.  So a designer is going to have too much power and cooling or too much floor space.  One or the other will be wasted no matter what.  Wasting floor space is a 10% mistake whereas stranding power and cooling is a 70% mistake.  (This 10% number applies to large-scale data centers of over 10MW that aren't in the center of New York; we'll come back to that.) Any designer that strands power and cooling by running out of floor space should have been fired years ago.  Most avoid this by providing more floor space than needed under any reasonable usage, and that's why most data centers have vast open spaces. It's insurance against the expensive mistake of stranding power.
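The asymmetry above can be put in rough numbers. This is a minimal sketch using the cost fractions from the paragraph (~70% power and cooling, ~10% shell); the simplification that the whole surplus resource is wasted is mine, for illustration only.

```python
# Rough illustration of the asymmetry argued above, using the cost
# fractions from the text (~70% power and cooling, ~10% shell).
def cost_of_mistake(total_cost, ran_out_of):
    """Dollars at risk when one resource is exhausted before the other."""
    POWER_COOLING_FRACTION = 0.70  # stranded if floor space runs out first
    SHELL_FRACTION = 0.10          # surplus shell if power runs out first
    if ran_out_of == "floor_space":
        return total_cost * POWER_COOLING_FRACTION
    elif ran_out_of == "power_cooling":
        return total_cost * SHELL_FRACTION
    raise ValueError(ran_out_of)

build_cost = 200_000_000  # hypothetical $200M facility
print(cost_of_mistake(build_cost, "floor_space"))    # 140000000.0
print(cost_of_mistake(build_cost, "power_cooling"))  # 20000000.0
```

Either way one resource is wasted; the math just says to make sure the wasted one is the cheap one.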


There are rare exceptions to this rule of well-designed data centers being power and cooling rather than floor space limited. But the common case is that a DC owner just paid the blade server premium to get yet more unused data center floor space. They were power and cooling limited before and now, with the addition of higher density servers, even more so.  No gain visible yet, so the conversation then swings over to efficiency. When talking about the amazing efficiency of the new racks, we usually talk about PUE.  PUE is Power Usage Effectiveness and it's actually simpler than it sounds: it's the total power that comes into the data center divided by the power delivered to the critical load (the servers themselves). As an example, a PUE of 1.7 means that for every watt delivered to the load, 0.7W is lost in power distribution and cooling.  Some data centers, especially those that have accreted over time rather than having been designed as a whole, can be as bad as 3.0, but achieving numbers this bad takes work and focus, so we'll stick with the 1.7 example as a baseline.
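The PUE definition above is simple enough to state directly in code; the values here are the example numbers from the paragraph, not measurements.

```python
# A minimal expression of the PUE definition above.
def pue(total_facility_watts, critical_load_watts):
    """Power Usage Effectiveness: all power entering the data center
    divided by the power delivered to the servers themselves."""
    return total_facility_watts / critical_load_watts

def overhead_watts_per_load_watt(pue_value):
    """Watts lost to power distribution and cooling per watt delivered."""
    return round(pue_value - 1.0, 3)

print(pue(1_700_000, 1_000_000))          # 1.7
print(overhead_watts_per_load_watt(1.7))  # 0.7
```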


So, in this conversation about the efficiency of blade servers, we hear the PUE improved from 1.7 to 1.4. Sounds like a fantastic deal and, if true, that kind of efficiency gain will more than pay the blade premium and is also good for society.  That would be good news all around, but let's dig deeper. I first congratulate them on the excellent PUE and ask if they had data center cooling problems when the new blade racks were first installed.  Usually they experienced exactly that and eventually bought water-cooled racks from APC, Rittal, or others.  Some purchased blade racks with back-of-rack water cooling like the nicely designed IBM iDataPlex. But the story is always the same: they purchased blade servers and, at the same time, moved to water cooling at the rack. New-generation servers can be more efficient than the previous generation, and better cooling designs are more efficient whether or not blade servers are part of the equation. Turning the servers over onto their sides didn't make them more efficient.


The key part of that PUE improvement above is that they replaced the inefficiency of conventional data center cooling with water at the rack. Here's an example of a medium-to-large-scale deployment that went with blades and water-cooled racks: One Datacenter to Rule Them All. There is nothing magical about water-at-the-rack cooling designs.  Many other approaches yield similar or even better efficiency. The important factor is that they used something other than the most common data center cooling system design, which is amazingly inefficient as deployed in most centers. Conventional data centers typically move air from a water-cooled CRAC unit through a narrow raised floor choked with cabling.  The air comes up into the cold aisle through perforated tiles.  In some aisles there are too many perforated tiles and in others too few.  Sometimes someone on the ops staff has put a perforated tile into the hot aisle to "cool things down" or to make it more habitable.  This innocent decision unfortunately reduces cooling efficiency greatly. The cool air that comes up into the cold aisle is pulled through the servers to cool them, but some spills over the top of the rack and some around the ends.  Some goes through open rack positions without blanking panels. All these flows not going through the servers reduce cooling system efficiency.  After flowing through the servers, the air rises to the ceiling and returns to the CRAC. Moving air that distance with so many paths that don't go through the servers is inefficient.  If you move the water directly to the rack in what I call a CRAC-at-the-rack design, the overall cooling design can be made much more efficient, mostly through the avoidance of all these not-through-the-server air paths and avoiding the expense of pumping air long distances.  It's mostly not the blades that are more efficient; it's the cooling system redesign required as a side effect of deploying the high power density servers.


Rather than moving to blades and paying the blade premium, just changing the cooling system design to avoid the problems in the previous paragraph will yield big efficiency improvements.


Why are some data centers in expensive locations?  Sometimes for good reason, in that the communications latency to low-cost real estate is too high for a very small number of applications. But, for most data centers, being in an expensive location is simply a design mistake.  Many times it's to allow easy access to the data center, but you shouldn't need to be in the data center frequently. In fact, if people are in the DC frequently, you are almost assured of mistakes and outages.  Placing DCs in hard-to-get-to locations substantially reduces costs and improves reliability. For those few that need to have them located in New York, Tokyo, London, etc., there aren't very many of you and you all know who you are.  The remainder are spending too much.  Remember my first law of data centers: if you have a window to see in, you are almost certainly paying too much for servers, network gear, etc. Keep it cheap and ugly.


What about data centers that are out of cooling capacity but can't use all their power or floor space?  It's bad design to strand power, and it simply shouldn't happen.  We know that every watt we bring into the building we need to get back out again; it has to go somewhere.  If the cooling system isn't designed to dissipate the power being brought into the building, it's bad design.


A more common cooling system problem is that someone brought a 30kW rack into the data center and an otherwise fine cooling system, appropriately sized overall, can't manage that hot spot. This isn't bad data center design, but it does raise a question: why is a 30kW rack a good idea?  We're now back to asking "why" on the blade server question.  Generally, unless you are getting value for extreme power density, don't buy it. High power density drives more expensive cooling.  Unless you are getting measurable value from the increased density, don't pay for it.


Summary so far: blade servers allow for very high power density, but they cost more than commodity, low power density servers. Why buy blades?  They save space, and there are legitimate reasons to locate data centers where floor space is expensive. For those, more density is good.  However, very few data center owners with expensive locations are able to credibly explain why all their servers NEED to be there.  Many data centers are in poorly chosen locations, driven by excessively manual procedures and the human need to see and touch that for which you paid over $100 million.  Put your servers where humans don't want to be. Don't worry, attrition won't go up: servers really don't care about lifestyle, how good the schools are, and related quality-of-life issues.


We've talked about the increased efficiency possible with blades by bringing water cooling directly to the rack, but this really has nothing to do with blades. Any DC designer can employ this technique, or a myriad of other mechanical designs, and substantially improve their data center's cooling efficiency.  For those choosing modular data centers like the Rackable ICE Cube, you get the efficiency of water at the rack as a side effect of the design. See Architecture for Modular Data Centers for more on container-based approaches and First Containerized Data Center Announced for information on the Microsoft modular DC deployment in Chicago.


We've talked about the high heat density of blade servers and argued that increased heat density increases operational or capital cooling expense, and usually both.  Generally, don't buy increased density unless there is a tangible gain that actually offsets the cooling cost penalty.  Basically, do the math. And then check it. And then make sure that there isn't some cheaper way to get the same gain.


There are many good reasons to want higher density racks.  One good one is that you are using very high speed, low latency communications between servers in the cluster; I know of examples of this from the HPC world, but I've not found them in many commercial data centers.  Another reason to go dense is that the value of floor space is high.  We've argued above that a very small number of centers need to be located in expensive locations due to wide-area communications delays but, again, these are rare. The vast majority of folks buying high density blade servers aren't able to articulate why they are buying them in a way that stands up to scrutiny.  In these usage patterns, blades are not the best price/performance solution.  In fact, that's why the world's largest data center operator, Google, doesn't use blade servers. When you are deploying tens of thousands of servers a month, all that matters is work done per dollar. And, at today's price points, blade servers do not yet make sense for these high-scale, high-efficiency deployments.


I'm not saying that there aren't good reasons to buy high density server designs.  I've seen many. What I'm arguing is that many folks who purchase blades don't need them. The arguments explaining the higher value often don't stand scrutiny.  Many experience cooling problems after purchasing blade racks.  Some experience increased cooling efficiency but, upon digging more deeply, you'll see they made cooling system design changes after installation, and these excellent design changes could have been deployed without paying the blade premium.  In short, many data center purchases don't really get the "work done per dollar" scrutiny that they should.


Density is fine but don’t pay a premium for it unless there is a measurable gain and make sure that the gain can’t be achieved by cheaper means.






Thursday, September 11, 2008 5:01:01 AM (Pacific Standard Time, UTC-08:00)
 Saturday, September 06, 2008

IBM just announced achieving over one million input/output operations per second (IOPS): IBM Breaks Performance Records Through Systems Innovation. That's an impressive number.  To put the achievement in context, a very good (and way too expensive) enterprise disk will deliver somewhere between 180 and just over 200 IOPS. A respectable, but commodity, SATA disk will usually drive somewhere in the 70 to 100 IOPS range.
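To make the gap concrete, here is a back-of-envelope calculation of the time to service one million random I/Os at the rates quoted above, assuming a single device handling requests serially (my simplification; it ignores queuing and parallelism).

```python
# Time to service a fixed number of random I/Os at the rates quoted above.
def seconds_to_serve(num_ios, iops):
    return num_ios / iops

workload = 1_000_000  # one million random I/Os
for device, iops in [("enterprise disk", 200),
                     ("commodity SATA disk", 85),
                     ("IBM/Fusion-io flash result", 1_000_000)]:
    print(f"{device}: {seconds_to_serve(workload, iops):,.0f} s")
```

Under these assumptions the same workload takes well over an hour on a single spinning disk and about a second at the flash result's rate.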


To achieve this result, IBM actually used a Fusion-io NAND flash based storage component.  It's unfortunate that the original press release from IBM didn't mention Fusion-io. However, an excellent blog write-up on the performance run by Barry Whyte of IBM offers the details behind the effort: 1M IOPs from Flash - actions speak louder than words.  The Fusion-io press release is at: Fusion-io and IBM Team to Improve Enterprise Storage Performance.


Fusion-io makes a PCIe storage subsystem based upon NAND flash.  I mentioned them in 100,000 IOPS.  It's a bit expensive at this point but a very nice part nonetheless.  NAND prices continue to free-fall based upon mammoth volumes driven by usage in consumer devices and some over-capacity in the NAND market. As the base technology prices fall and sales of enterprise flash-based storage devices increase, I expect we'll see pricing improvements over the near term.  And, for the very hottest online transaction workloads where IOPS are the primary limiting factor, even current prices work, and we're starting to see some high I/O rate workloads migrate from spinning media to NAND flash.  Some have already moved, and I know of many more that have devices in test.


Digging deeper into the IBM result, we see that the Fusion-io part in this run was mounted behind a SAN.  I've already taken a bit of heat on this point as it's well known that I'm not a lover of SANs. Actually, it's not really true that I hate SANs. What I hate are expensive, scale-up solutions, and it is true that many SANs fall into this category.  I want servers, storage, and networking to all be built from clusters of commodity components.   Quarter-million-dollar network routers just don't make sense to me, and most SANs are not affordable at internet service scale. Essentially, high-end network routers and SAN storage arrays are the last bastion of the mainframe -- very high quality, very expensive, scale-up solutions.  As an example, consider the Symmetrix DMX3000.  At full scale, it has 576 disk drives, ¼ TB of memory, and over 100 1GHz embedded PowerPC processors. When it was announced back in 2003, the starting price was $1.7M in lightly configured form; the sky is the limit.


It's really mainframe-priced storage subsystems that I'm objecting to.  SANs could be great if built from commodity parts and priced to sell in volume. The ability to migrate storage between machines is clearly useful.  I'm not in love with an entire networking and switching infrastructure dedicated to storage (Fibre Channel), but that's not inherently required by SANs either.  FCoE should solve that problem and iSCSI already does.


The IBM million-IOPS number, built upon Fusion-io NAND flash components and a virtual SAN over a cluster of Intel-based servers, is very interesting.






Saturday, September 06, 2008 10:06:02 AM (Pacific Standard Time, UTC-08:00)
 Sunday, August 31, 2008

In Designing and Deploying Internet Scale Services, I've argued that all services should expect to be overloaded and all services should expect mass failures.  Very few do, and I see related downtime in the news every month or so.


The Windows Genuine Advantage failure (WGA Meltdown...) from a year ago is a good example in that the degraded operations modes possible for that service are unusually simple and the problem and causes were well documented.  The obvious degraded operations mode for WGA is to allow users to continue as "WGA Authorized" when the service isn't healthy enough to fully check their OS authenticity.  In the case of WGA, this is in fact the intended operation and the service is designed to do it.  This should have worked, but services rarely have the good sense to fail.  They normally just run very, very slowly or otherwise misbehave.


The actual cause of the WGA issues is presented in detail here: So What Happened?. This excellent post even includes some of the degraded operation modes that the WGA team has implemented.  This is close to the right answer.  However, the problem with the implemented approach is that: 1) it doesn't detect unacceptable rises in latency or failure rate via deep monitoring and automatically fall back to degraded mode, and 2) it doesn't allow the service to be repaired and retested in production selectively with different numbers of users (slow restart).  It's either on or off in this design.  A better model is one where 100% of the load can be directed to a backup service that just says "yes".  Then the real service that actually does the full check can be brought back live incrementally by switching more and more load from the "yes" service to the real, deep-check service.  Here again, deep real-time monitoring is needed to measure whether the service is performing properly.  Implementing and production-testing a degraded operation mode is hard, but I've never talked to a service team that had invested in this work and later regretted it.
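The fallback-plus-slow-restart model above can be sketched as a simple weighted router. The two services here are hypothetical callables: `deep_check` stands in for the real authenticity check and `always_yes` for the degraded backup; in a real system, monitoring (not shown) would move `real_fraction` up or down based on observed latency and error rates.

```python
import random

class SlowRestartRouter:
    """Sketch of the slow-restart idea: route a controllable fraction of
    requests to the real deep-check service and send the rest to a
    degraded backup that simply answers "yes"."""

    def __init__(self, deep_check, always_yes, real_fraction=0.0):
        self.deep_check = deep_check
        self.always_yes = always_yes
        self.real_fraction = real_fraction  # 0.0 = fully degraded mode

    def authorize(self, request):
        # Send a controllable fraction of traffic to the repaired service;
        # everything else gets the degraded "yes" answer.
        if random.random() < self.real_fraction:
            return self.deep_check(request)
        return self.always_yes(request)

router = SlowRestartRouter(deep_check=lambda r: r.get("genuine", False),
                           always_yes=lambda r: True)
print(router.authorize({"genuine": False}))  # True: degraded mode says yes
router.real_fraction = 0.25  # ramp the real check back in on 25% of load
```

The point of the design is that `real_fraction` is a knob operations can turn gradually while watching the health metrics, rather than an all-or-nothing switch.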


Fifteen years ago I worked on a language compiler which, amongst other targets, supported a Navy fire control system.  This embedded system had a large red switch tagged "Battle Ready".  The switch would disable all emergency shutdowns and put the system into a mode where it would continue to run even when the room was on fire or water was rising up the base of the computer.  In this state, the computer runs until it dies.  In the services world, this isn't exactly what we're after, but it's closely related.  We want all systems to be able to drop back to a degraded operation mode that allows them to continue to provide at least a subset of service even when under extreme load or suffering from cascading sub-system failures.  We need to design and, most importantly, we need to test these degraded modes of operation in at least limited production, or they won't work when we really need them.  Unfortunately, all but the least successful services will need these degraded operations modes at least once.


Degraded operation modes are service specific and, for many services, the initial gut reaction is that everything is mission critical and no meaningful degraded modes exist.  But they are always there if you take the problem seriously and look hard.  The first level is to stop all batch processing and periodic jobs.  That's an easy one; almost all services have some batch jobs that are not time critical.  Run them later.  Beyond that, degraded modes are harder to come up with.  It's hard to produce a lower-quality customer experience that is still useful, but I've yet to find an example where none was available. As an example, consider Exchange Hosted Services (EHS).  In that service, the mail must get through.  What is the degraded operation mode?  Degraded modes can be found in mission-critical applications such as EHS as well.  Here are some examples: turn up the aggressiveness of edge blocks, defer processing of mail classified as spam until later, process mail from known users of the service ahead of mail from unknown users, prioritize premium customers ahead of others.  There actually are quite a few options.  The important point is to think through what they should be ahead of time and ensure they are developed and tested before Operations needs them in the middle of the night.
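The prioritization examples above amount to a priority queue in front of the expensive processing. This is a minimal sketch; the sender classes and their ordering are illustrative, not from any real EHS implementation.

```python
import heapq

# Under load, premium and known senders are processed first and
# spam-suspect mail is deferred. Lower number = higher priority.
PRIORITY = {"premium": 0, "known": 1, "unknown": 2, "spam_suspect": 3}

class DegradedModeMailQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # preserves FIFO order within a priority class

    def enqueue(self, message, sender_class):
        heapq.heappush(self._heap, (PRIORITY[sender_class], self._seq, message))
        self._seq += 1

    def dequeue(self):
        _, _, message = heapq.heappop(self._heap)
        return message

q = DegradedModeMailQueue()
q.enqueue("bulk newsletter", "spam_suspect")
q.enqueue("customer invoice", "premium")
q.enqueue("first contact", "unknown")
print(q.dequeue())  # customer invoice -- premium mail jumps the line
```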


Some time back, Skype had a closely related problem where the entire service went down, or mostly down, for more than a day.  What they report happened was that Windows Update forced many reboots, which led to a flood of Skype login requests as the clients came back up, and "that when combined with lack of peer to peer resources had a critical impact" (What Happened on August 16th?).  There are at least two interesting factors here, one generic to all services and one Skype specific.  Generically, it's very common for login operations to be MUCH more expensive than steady-state operation, so all services need to engineer for login storms after service interruption.  The Windows Live Messenger team has given this considerable thought and has considerable experience with the issue.  They know there needs to be an easy way to throttle login requests such that you can control the rate at which they are accepted (fine-grained admission control for login).  All services need this or something like it, but it's surprising how few have actually implemented this protection and tested it to ensure it works in production.  The Skype-specific situation is not widely documented but is hinted at by the "lack of peer-to-peer resources" note in the quote above.  In Skype's implementation, the lack of an available supernode will cause a client to report login failure (this is documented in An Analysis of the Skype Peer-to-Peer Internet Telephony Protocol, which was sent to me by Sharma Kunapalli of the IW Services Marketing team).  This means that nodes can't log in unless they can find a supernode.  This has a nasty side effect: the fewer clients that successfully log in, the more likely it is that other clients won't find a supernode, since a supernode is just a well-connected client.  If they can't find a supernode, they won't be able to log in either.
Basically, the entire network is unstable due to the dependence on finding a supernode to successfully log a client into the network.  For Skype, a great degraded operation mode would be to allow login even when a supernode can't be found: let the client get on and perhaps establish peer connectivity later.
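One common way to implement the fine-grained login admission control argued for above is a token bucket: logins are admitted at a controllable steady rate and the rest are rejected so clients back off and retry, rather than letting a login storm take the service down. This is a sketch with illustrative rates, not any real service's implementation.

```python
import time

class LoginThrottle:
    """Token-bucket admission control for (expensive) login requests."""

    def __init__(self, logins_per_second, burst):
        self.rate = float(logins_per_second)
        self.capacity = float(burst)
        self.tokens = float(burst)
        self.last = time.monotonic()

    def try_admit(self):
        # Refill tokens at the configured rate, capped at the burst size.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True   # proceed with the expensive login
        return False      # shed load: tell the client to retry later

throttle = LoginThrottle(logins_per_second=100, burst=10)
admitted = sum(throttle.try_admit() for _ in range(10_000))
print(admitted)  # close to the burst size: the storm is mostly shed
```

Operations can raise `logins_per_second` as the service recovers, which gives exactly the controllable acceptance rate described above.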


Why wait for the next failure and post-mortem to design in AND production-test degraded operations for your services?






Sunday, August 31, 2008 7:47:58 AM (Pacific Standard Time, UTC-08:00)
 Tuesday, August 26, 2008

Facebook's F8 conference was held last month in San Francisco. During his mid-day keynote, Mark Zuckerberg reported that the Facebook platform now has 400,000 developers and 90 million users, of which 32% are from the United States.  The platform's US user population grew 2.4x last year while the international population grew an astounding 5.1x.

Vladimir Fedorov (Windows Live Mesh) attended F8 and brought together this excellent set of notes on the conference.



I spent the day on Wednesday at the Facebook F8 conference and talked to some of the companies building Facebook applications today. Overall I was pleasantly surprised by the overall sense of direction/messaging and the organization of the conference itself.

There were only 12 talks divided into 3 tracks - Technical/User Experience/Business, so I was able to attend a third of all talks.

The focus throughout the day was on making it easier to build applications that increase the value of the Facebook ecosystem and on stopping abusive applications that detract from it. The event itself was organized through a Facebook application. Here are the main changes in the Facebook application platform:

  1. Improve the visibility of applications and allow users to observe the functionality offered by an application without taking an explicit install action
  2. Lower the barrier to using an application, i.e., remove the necessity of a dialog granting rights to the application before any functionality is available
  3. Make rights granting to applications more granular, i.e., remove the necessity of granting the application an extensive set of rights prior to using it; grant specific permissions at the time the application performs an operation
  4. Allow external websites to act as applications on the Facebook platform by using Facebook as an identity provider, using the social graph from Facebook, and submitting data to the Facebook news feed
  5. Allow the internationalization method used for Facebook itself (translation by users) to be used by applications

The statistics given at the keynote were 400k developers and 90 million users (32% US / 68% international), as compared to 24 million users last year (50% US / 50% international), with $200 million in venture capital invested in Facebook applications. Note that while the number of international users increased 5.1x (by 49.2 million), the number of US users increased only 2.4x (by 16.8 million).

I went through the booths and talked to a number of Facebook application companies. I was primarily focusing on what they do and how they plan to make money. The business models are:

  1. Transaction fees - charging a small percentage per transaction for organizing events or coordinating travel
  2. Software as a service - selling packages to organizations, such as donation drive or car pooling applications
  3. Indirect advertising - large companies want to drive brand awareness through the social graph but don't know how. There were different methods here - branded gifts (e.g., Guinness beer), full-featured brand campaigns, games which incorporate brand info, etc.
  4. Direct advertising - trip planning, activity planning, wedding planning, reviews, etc.

There were a number of companies that didn't have a real business model but are still adding value to the ecosystem, especially when combined with an offering from a different company.

The major features released are the new application authorization model, the new news feed (with a new backend), Facebook Connect, a new look for the site, and the opening of the internationalization support used for Facebook itself to applications.

News Feed/News backend

They decided to do fan-out on read in order to minimize storage costs and maximize the ability to tinker with the algorithm that decides which news events are shown to the user. The backend is made of two classes of machines: transient storage machines and aggregator machines. The users are assigned to buckets using a hashing algorithm, and the buckets are assigned to transient storage machines using a DB table. For each user, they store 30 days of events generated by the user in the transient storage machines. There are two replicas of the data in the transient storage machines (replicas are on different racks). Each transient storage machine has 40GB of RAM, and they use 40 machines for 90 million users. They also use 40 aggregator machines which actually construct the news feed that is shown on the website (in <50ms) by reading the events for each friend of the user from the transient storage machines and aggregating them. There are two racks, each with twenty aggregators and twenty transient storage machines, where each rack has a complete copy of the data. The aggregators have affinity to transient storage machines in the same rack but will go to the other rack when a local machine fails. There is no affinity between users and aggregators. They report they have 8x to 10x extra capacity in this solution. Facebook doesn't have any geo-partitioning, which is interesting given that the majority of users are international. [JRH: they now have some geo-redundancy to serve read-only queries nearer to users and to back up the primary site: Geo-Replication at Facebook]
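A toy model of the fan-out-on-read design described above: events are stored per user in hashed buckets on "transient storage", and an "aggregator" assembles a feed at read time from each friend's events. The bucket count matches the 40-machine figure from the notes; everything else (data shapes, hash choice) is illustrative, not Facebook's actual code.

```python
import hashlib
from collections import defaultdict

NUM_BUCKETS = 40  # one bucket per transient storage machine in the notes

def bucket_for(user_id):
    """Hash a user onto one of the transient storage buckets."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return int(digest, 16) % NUM_BUCKETS

transient_store = defaultdict(list)  # (bucket, user_id) -> [(ts, event)]

def record_event(user_id, timestamp, event):
    transient_store[(bucket_for(user_id), user_id)].append((timestamp, event))

def aggregate_feed(friend_ids, limit=10):
    """Fan-out on read: pull every friend's events, newest first."""
    events = []
    for fid in friend_ids:
        events.extend(transient_store[(bucket_for(fid), fid)])
    return [e for _, e in sorted(events, reverse=True)[:limit]]

record_event(1, 100, "alice posted a photo")
record_event(2, 105, "bob commented on a photo")
record_event(2, 101, "bob joined a group")
print(aggregate_feed([1, 2], limit=2))
```

The write path stays cheap (append one event per action); all the per-viewer work, and thus all the room to tinker with ranking, happens at read time in the aggregator.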

The transient data is updated by another process called the “tailor” which reads the tail of a file on a network file system which actually contains the persisted copy of the data. The “tailor” periodically updates each user in the transient storage system via a system of dirty flags. Any of the transient machines can be restarted and reloaded from the persisted store in 10-20 minutes. This is different from the normal MySQL solution they use for the rest of their metadata.

They now allow comments on the news feed items. They also formalized 3 formats for the entry – one liner, summary, and picture plus text. The developers can register templates for each format (i.e. “author” has listened to “track” on MyMusicFoo) and then post just the data together with the template id instead of the whole message. The coalescing is black box – the system requires the developers to register multiple templates and will choose between them depending on the event volume. Event volume is throttled but the throttles change dynamically on the basis of user behavior, i.e. if your application's events are marked as spam by some percentage of users the throttle is lowered.

Facebook Connect

In order to merge other websites into the ecosystem, Facebook is providing identity services to third party registered websites. A good example is the integration with CitySearch. If you are logged into Facebook, you are automatically logged into CitySearch if you have “CitySearch” enabled your Facebook account. Whenever you do a CitySearch review you have the option of spamming your friends’ news feeds with it. You can also view reviews by your friends who have “CitySearch” enabled their Facebook accounts. Through a system of exchanging hashes for email addresses, there is a UI to invite your friends who are already on CitySearch to “CitySearch” enable their Facebook accounts. The end result is that in addition to providing identity services they also provide social network services and drive extra traffic to your site, making it more desirable for third party web sites to offer integration. In exchange the Facebook pages become more content rich and third party websites start acting almost like Facebook applications.


International Support


Facebook is translated by the users themselves via a voting system, where a user suggests a translation and the rest of the users vote on it. They opened this system up to applications, where application strings can be translated in the same way. While Facebook itself has had success with this model (complete translation to a new language in <24 hours) it is less clear that applications with smaller user bases will be translated quickly.



Tuesday, August 26, 2008 5:09:05 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
 Saturday, August 23, 2008

Kevin Clark, Director of IT Operations at Lucasfilm was interviewed by On-Demand Enterprise in We’ve Come a Long Way Since Star Wars.  His organization owns IT for LucasArts, Lucasfilm, and Industrial Light and Magic.


Lucasfilm runs a 4,500 server dedicated rendering farm and they expand this farm to 5,500 servers in total with workstations when they are not in use.  The servers are dual socket, dual core Opterons with 32GB of memory.  Nothing unusual except the memory configuration is a bit larger than the current average.  They have 400TB of storage and produce 10 to 20TB of new and changed data each day.


Clark expects the big investment next year to be making their datacenter more efficient, partly for environmental reasons and partly because, like all businesses, they are power and cooling rather than floor space constrained. This is becoming the number one issue industry-wide and I’m glad to see it getting attention. Current data center designs leave a lot of room for improvement.  At this year’s Foo Camp, I led a short session on large scale data center power consumption: Where Does the Power Go and What Can We Do About It?


This cluster is medium sized but the data change rate is unusually high at 10 to 20TB a day.  It’s mostly batch work with each job being quite large.  It would be interesting to see more detail on the workload scheduler they have written to manage this workload.  It’s a bit ironic that IBM MVS (now called z/OS) had a great scheduler 40 years ago. In the 10 years I worked for IBM, there were constant requests that a high quality batch scheduler be added to AIX.  And in the 11 years I’ve worked at Microsoft, there has been great interest in improving batch scheduling to the MVS-like levels.  More recently, Apache Hadoop has been used to run mega-jobs and, guess what?  It too needs a high-quality, prioritized, multi-job scheduler.  At the Hadoop Summit, Yahoo said they are working on one.  They typically contribute their Hadoop work to open source so Hadoop may have a better scheduler coming.


Thanks to Jeff Hammerbacher for pointing me to the note on Lucasfilm.




James Hamilton, Data Center Futures
Bldg 99/2428, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859


Saturday, August 23, 2008 6:50:16 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
 Thursday, August 21, 2008

Last Friday I arrived back from vacation (Back from the Outside Passage in BC) to 3,600 email messages.  I’ve been slogging through them from the weekend to now and I’m actually starting to catch up.


Yesterday Tom Kleinpeter pointed me to this excellent posting from Jason Sobel of Facebook: Scaling Out. It describes the geo-replication support recently added to Facebook.  Rather than having a single west coast data center, they added an east coast center, both to be near East Coast users and to provide redundancy for the west coast site.


It’s a cool post for two reasons: 1) it’s a fairly detailed description of how one large scale service implemented geo-redundancy, and 2) they are writing about it externally.  Way too many of these techniques are never discussed outside the implementing company and so everyone needs to keep re-inventing them.  Facebook continues to share how they engineer aspects of their service in addition to contributing some of the code used to do it to open source (e.g. Facebook Releases Cassandra as Open Source).  Good to see.


I’ve long been interested in geo-replication, geo-partitioning, and all other forms of cross data center operation because it’s a problem that every high scale service needs to solve and yet there really are no general, just-do-this recipes to easily operate over multiple data centers. A few common design patterns have emerged but all solutions of reasonable complexity end up being application specific.  I would love to see general infrastructure emerge to support geo-redundancy but it’s a hard problem and solutions invariably exploit knowledge of application semantics. Since most solutions today tend to be ad hoc and application specific, it’s worth studying what they do while looking for common patterns. That was the other reason I enjoyed seeing this posting by Jason. He described how Facebook is currently addressing the issue and it’s always worth understanding as many existing solutions as possible before attempting a generalization.


The solution they have adopted is one where the California data center is primary and all writes are routed to it. The Virginia data center is secondary and serves read-only traffic.  A load balancer does layer 7 packet inspection and routes URIs for pages that support writes to CA.  If the page is read-only, as much Facebook traffic is, it gets routed to the east coast site (assuming you are “near” to it for some definition of near).
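
The layer-7 routing rule is simple enough to sketch. This is only an illustration of the idea, not Facebook's actual configuration; the URI prefixes are made up:

```python
# Hypothetical list of page prefixes that can perform writes; anything on
# this list must be routed to the primary (California) data center.
WRITE_URI_PREFIXES = ("/ajax/updatestatus", "/profile/edit", "/photo/upload")

def route_request(uri, nearest_dc):
    """Route write-capable pages to the primary; serve read-only pages nearby."""
    if uri.startswith(WRITE_URI_PREFIXES):
        return "california"
    return nearest_dc
```

The fragility is obvious from the sketch: the whole scheme depends on that prefix list staying in sync with the site.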


All writes are done through the California data center, which serves as primary for the entire service. When a write is done in the Facebook architecture, the corresponding memcached layer is invalidated after the write is completed.  MySQL replication carries the changes to the remote data center, and they solved the problem of invalidating the remote memcached entries only after the remote MySQL update by modifying MySQL.  They changed MySQL to clear the local memcached of the appropriate keys once the replicated write is complete.
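
A toy model of the scheme, with dicts standing in for MySQL and memcached (the class names and API here are mine, not Facebook's):

```python
class DataCenter:
    def __init__(self, name):
        self.name = name
        self.db = {}     # stands in for MySQL
        self.cache = {}  # stands in for memcached

    def apply_write(self, key, value):
        self.db[key] = value
        # The key point: invalidate the local cache only after the write lands.
        self.cache.pop(key, None)

    def read(self, key):
        if key not in self.cache:
            self.cache[key] = self.db[key]  # cache miss fills from the local DB
        return self.cache[key]

def replicated_write(primary, replicas, key, value):
    primary.apply_write(key, value)
    # MySQL replication ships the change; the modified replica-side MySQL
    # clears the local memcached entry once the replicated write applies.
    for replica in replicas:
        replica.apply_write(key, value)
```

Without the replica-side invalidation, a Virginia read could return a stale cached value indefinitely after a California write.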


It’s a simple and fairly elegant approach and there is no doubt that simple is good. I would prefer an approach that scales out both reads and writes, and there is a slight robustness risk that some engineer in the future may add a page to the site that does a write and forget to update the this-page-writes URI list.  If MySQL supports running a secondary replica in read-only mode, then that potential issue can be quickly and easily detected.


An approach that would allow multi-data center updates is to replicate the entire DB contents everywhere as Jason described in this post but, rather than routing all writes through the California DB, partition the user-base on userID and route each user's traffic to a fixed center. Allow updates for that user at the data center they were routed to and replicate these changes to the other centers.  Essentially it’s the same solution that was described except, rather than routing all requests to the single primary database in CA, the primary database is distributed over multiple datacenters partitioned on userID.  This approach would have many advantages but it is much more difficult if not all the data is cleanly partitioned by userID.  And, in social networks, it isn’t.  I refer to this as the “Hairball Problem” (Scaling LinkedIn).
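
For a service where the data does partition cleanly, the routing is trivial; a sketch (two hypothetical data centers, and modulo assignment for simplicity):

```python
DATA_CENTERS = ["california", "virginia"]

def home_dc(user_id):
    """Each user's writes always go to one fixed primary data center."""
    return DATA_CENTERS[user_id % len(DATA_CENTERS)]

def route_write(user_id):
    return home_dc(user_id)

def route_read(user_id, nearest_dc):
    # Reads can be served from any replica once replication catches up.
    return nearest_dc
```

The hairball problem shows up as soon as a single operation touches state homed in two different centers, say a wall post from a California-homed user to a Virginia-homed friend; then you're back to cross-data-center coordination.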


The alternative approach I describe above of partitioning the primary database by userID would be reasonably easy to do in a service like an email system with most state partitioned cleanly by user but in a social network with lots of cross-user, shared state (the hairball problem), it’s harder to do.  Nonetheless, it’s probably the right place for FB to end up but the current solution is clean and works and it’s hard to argue with that.


If you come across other articles on geo-support or want to contribute one here, drop me a note.






Thursday, August 21, 2008 5:36:23 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
 Thursday, July 17, 2008

Going boating: I’ll be taking a break from blogging until mid-August when I’m back and caught back up.  Enjoy.





Thursday, July 17, 2008 4:53:25 AM (Pacific Standard Time, UTC-08:00)  #    Comments [1] - Trackback
 Wednesday, July 16, 2008

I’ve been collecting scaling stories for some time now and last week I came across the following rundown on Flickr scaling: Federation at Flickr: Doing Billions of Queries Per Day by Dathan Vance Pattishall, the Flickr database guy.


The Flickr DB Architecture is sharded with a PHP access layer to maintain consistency.  Flickr users are randomly assigned to a shard. Each shard is duplicated in another database that is also serving active shards. Each DB needs to be less than 50% loaded to be able to handle failover.


Shards are found via a lookup ring that maps userID or groupID to shardID and photoID to userID.  The DBs are protected by a memcached layer with a 30 minute caching lifetime. Slide 16 says they are maintaining consistency using distributed transactions but I strongly suspect they are actually just running two parallel transactions with application management rather than 2pc.
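
The lookup-ring indirection is worth making concrete. A minimal sketch (the dict-based tables and function names are mine; Flickr's real implementation sits behind a PHP access layer and memcached):

```python
import random

# The lookup ring: userID -> shardID and photoID -> owning userID.
user_to_shard = {}
photo_to_owner = {}

def assign_user(user_id, shard_ids, rng=random):
    """New users are randomly assigned to one of the active shards."""
    user_to_shard[user_id] = rng.choice(list(shard_ids))
    return user_to_shard[user_id]

def shard_for_user(user_id):
    return user_to_shard[user_id]

def shard_for_photo(photo_id):
    # Two-step lookup: photoID -> owning userID -> shardID.
    return shard_for_user(photo_to_owner[photo_id])
```

The indirection is what makes rebalancing possible: moving a user to a new shard only requires copying their data and updating one lookup-ring entry.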


Maintenance is done by bringing down half the DBs; the remaining DBs handle the load, but it appears they have no redundancy (failure protection) during the maintenance periods.


They have 12TB of user data in aggregate and they appear to be using MySQL (slide 25 complains about an INNODB bug).


Other web site scaling stories:

·         Scaling Linkedin:

·         Scaling Amazon:

·         Scaling Second Life:

·         Scaling Technorati:

·         Scaling Flickr:

·         Scaling Craigslist:

·         Scaling Findory:

·         MySpace 2006:

·         MySpace 2007:

·         Twitter, Flickr, Live Journal, Six Apart, Bloglines, SlideShare, and eBay:




Thanks to Kevin Merritt (Blist) and Dave Quick (Microsoft) for sending this my way.




Wednesday, July 16, 2008 5:13:40 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
 Sunday, July 13, 2008

I just got back from O’Reilly’s Foo Camp.   Foo is an interesting conference format in that there is no set agenda. It’s basically self-organized as an open space-type event but that’s not what makes it special.  What makes Foo a very cool conference is the people.  Lots of conferences invite good people but few invite such a diverse set of attendees.  It was a lot of fun.


Here’s a picture from Saturday night of (right to left) Jesse Robbins (Seattle entrepreneur and co-chair of O’Reilly’s Velocity conference), Pat Helland (Microsoft Developer Division Architect) and myself.




Foo is an invitational event loosely organized around folks O’Reilly finds interesting or wants to get to know better.  Tim O’Reilly describes the invite process in this blog comment: .


The conference starts around 5pm on Friday with drinks followed by dinner. After dinner, the sign-up boards for Saturday and Sunday talk sessions are put up. When Tim O’Reilly announced the sign-up boards were up, attendees rose from their tables and casually started ambling towards the sign-up boards as though there really was no rush. We’ll just be doing some signing up over the course of the evening.  We might do it now or perhaps we’ll do it later. No big rush.  And then some folks break into a slight trot, still not really an overt run.  Suddenly, it’s a full raging stampede.  We became more a single flow than a discrete set of individuals. The flow accelerated and then crashed up against the sign-up boards spilling to both sides with folks madly negotiating for pens, better locations, sharing sessions, a shift of 6” to one side or the other, or requesting to join two sessions together. Welcome to foo camp.  It didn’t slow down much from that point for the next 44 hours.


Sessions are 1 hour long but it’s good form to share a session if you don’t need the full hour. Speakers use their judgment to sign up for a small, medium or large room.  Some rooms have AV equipment and some don’t.


I shared a 1 hour session with Jeff Hammerbacher who leads the Facebook data team.  Earlier last week Jeff announced that he will be leaving Facebook in September:  Jeff and I got a medium sized room in a tent beside the main building without A/V.


The title for my session was Where Does the Power go in Data Centers and How to get it Back?  I didn’t show slides but much of what we covered is posted at:  In the session, we talked through how contemporary large data centers work, first looking at power distribution. We tracked the power from the feed to the substation at 115,000 volts through numerous conversions before arriving at the CPU at 1.2 volts. We then talked about power saving server design techniques, and then the mechanical systems used to get the heat back out.  In each section we discussed what could be done to improve the design and how much could be saved.


Our conclusion from the session was that power savings of nearly 4x were both possible and affordable using only current technology.  For those who participated in the session, thanks for your contribution and for your help. It was fun.





Sunday, July 13, 2008 5:58:43 PM (Pacific Standard Time, UTC-08:00)  #    Comments [1] - Trackback
 Saturday, July 12, 2008

Last week the Facebook Data team released Cassandra as open source. Cassandra is a structured store with write-ahead logging and indexing. Jeff Hammerbacher, who leads the Facebook Data team, described Cassandra as a BigTable data model running on a Dynamo-like infrastructure.


Google Code for Cassandra (Apache 2.0 License):


Avinash Lakshman, Prashant Malik, and Karthik Ranganathan presented at SIGMOD 2008 this year: Cassandra: Structured Storage System over a P2P Network.  From the presentation:


Cassandra design goals:

·         High availability

·         Eventual consistency

·         Incremental scalability

·         Optimistic replication

·         Knobs to “tune” tradeoffs between consistency, durability, and latency

·         Low cost of ownership

·         Minimal administration


Write operation: a write can be sent to an arbitrary node in the Cassandra cluster, the request is forwarded to the node owning the data, and that node writes to its log first and then applies the change to its in-memory copy. Properties of the write path: no locks in the critical path, sequential disk accesses, behaves like a write-through cache, atomicity guarantee for a key, and always writable.
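
That write path — any node accepts, the owner logs sequentially, then applies in memory — can be sketched in a few lines (a toy model, nothing like Cassandra's actual code):

```python
class Node:
    def __init__(self):
        self.commit_log = []  # append-only: sequential disk writes in real life
        self.memtable = {}    # the in-memory copy

    def write(self, key, columns):
        self.commit_log.append((key, columns))  # log first, for durability
        self.memtable[key] = columns            # then apply; no locks needed

class Cluster:
    def __init__(self, nodes):
        self.nodes = nodes

    def owner(self, key):
        # Stand-in for Cassandra's partitioner.
        return self.nodes[hash(key) % len(self.nodes)]

    def write(self, key, columns):
        # A write may arrive at any node; it is forwarded to the owner.
        self.owner(key).write(key, columns)
```

The always-sequential log write is the reason the write path stays fast even under load; random I/O is deferred to later compaction of the in-memory data.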


Cluster membership is maintained via gossip protocol.
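
Gossip-based membership is simple at its core: each node periodically merges membership views with a randomly chosen peer, and knowledge spreads epidemically. A sketch (illustrative only; real implementations gossip versioned heartbeat state, not just names):

```python
import random

def gossip_round(views, rng):
    """One round: every node exchanges its membership view with one random peer."""
    for node in list(views):
        peer = rng.choice([n for n in views if n != node])
        merged = views[node] | views[peer]
        views[node] = set(merged)
        views[peer] = set(merged)
```

After a handful of rounds, every node's view converges to the full membership with very high probability, with no central coordinator to fail.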


Lessons learned:

·         Add fancy features only when required

·         Many types of failures are possible

·         Big systems need proper systems-level monitoring

·         Value simple designs


Future work:

·         Atomicity guarantees across multiple keys

·         Distributed transactions (I’ll try to talk them out of this one)

·         Compression support

·         Fine grained security via ACLs


It looks like a well engineered system.




James Hamilton, Data Center Futures
Bldg 99/2428, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 | |  | blog:


Saturday, July 12, 2008 3:17:11 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
 Thursday, July 10, 2008

What follows is a guest posting from Phil Bernstein on the Google Megastore presentation by Jonas Karlsson, Philip Zeyliger at SIGMOD 2008:


Megastore is a transactional indexed record manager built by Google on top of BigTable. It is rumored to be the store behind Google AppEngine but this was not confirmed (or denied) at the talk. [JRH: I certainly recognize many similarities between the Google IO talk on the AppEngine store (see Under the Covers of the App Engine Datastore in Rough notes from Selected Sessions at Google IO Day 1) and Phil’s notes below].


·         A transaction is allowed to read and write data in an entity group.

·         The term “entity group” refers to a set of records, possibly in different BigTable instances. Therefore, different entities in an entity group might not be collocated on the same machine. The entities in an entity group share a common prefix of their primary key.  So in effect, an entity group is a hierarchically-linked set of entities.

·         A per-entity-group transaction log is used. One of the rows that stores the entity group is the entity group’s root. The log is stored with the root, which is replicated like all rows in Big Table.

·         To commit a transaction, its updates are stored in the log and replicated to the other copies of the log. Then they’re copied into the database copy of the entity group.

·         They commit to replicas before acking the caller and use Paxos to deal with replica failures. So it’s an ACID transaction.

·         Optimistic concurrency control is used. Few details were provided, but I assume it’s the same as what they describe for Google Apps.

·         Schemas are supported.

·         They offer vertical partitioning to cluster columns that are frequently accessed together.

·         They don’t support joins except across hierarchical paths within entity groups. I.e., if you want to do arbitrary joins, then you write an application program and there’s no consistency guarantee between the data in different entity groups.

·         Big Table does not support indexes. It simply sorts the rows by primary key. Megastore supports indexes on top. They were vague about the details. It sounds like the index is a binary table with a column that contains the compound key as a slash-separated string and a column containing the primary key of the entity group.

·         Referential integrity between the components of an entity group is not supported.

·         Many-to-many relationships are not supported, though they said they can store the inverse of a functional relationship.  It sounded like a materialized view that is incrementally updated asynchronously.

·         It has been in use by apps for about a year.
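
The hierarchical-key scheme Phil describes is easy to illustrate. A sketch of entity-group keys and the rumored index encoding (the slash-separated encoding is speculation from the talk, and these helper functions are mine):

```python
def entity_key(*path):
    """Entities in a group share a primary-key prefix, e.g. blog -> post -> comment."""
    return "/".join(str(part) for part in path)

def same_entity_group(key_a, key_b, prefix_parts=2):
    # Same high-order key components => same entity group.
    return key_a.split("/")[:prefix_parts] == key_b.split("/")[:prefix_parts]

def index_entry(indexed_values, primary_key):
    # Rumored encoding: a slash-separated compound key column plus the
    # entity's primary key; BigTable's key ordering then gives range scans.
    return ("/".join(str(v) for v in indexed_values), primary_key)
```

Because group membership is decided purely by key prefix, the transaction system can find an entity group's root (and its log) without any extra metadata lookup.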


The following are the notes that I typed while listening to the talk. For the most part, it’s just what was written on the slides and is incomplete. I don’t think it adds much to my summary above.


TITLE: Megastore – Scalable Data System for User-facing Apps

SPEAKERS: Jonas Karlsson, Philip Zeyliger (Google)


User-facing apps have TP-system-like requirements

·         data updated by a few users

·         largely reads, small updates

·         lots of work scaling the data layer

·         users expect consistency


Megastore – scale, RAD, consistency, performance



·         start with Big Table for storage

·         add db technologies that scale: indices, schemas

·         offer transparent replication and failover between data centers

·         support many, frequent reads

·         writes may be more costly, because they’re less frequent

·         with a correct schema, the app should scale naturally


Rapid App Development

·         hide operational aspects of scalability from app code

·         Flexible declarative schemas (MDL) – looks like SQL

o   indices, rich data types, consistency and partitioning

·         offer transactions


Entity Group consistency is supported

·         it’s a logical partition – e.g., blogs, posts, comments, which are all keyed by the owner of the blog

·         all entities in the group have the same high-order key component



·         roll forward transaction log per entity group, no rollbacks

o   pb: It’s unclear to me whether they pool the log across all entity groups in a partition. If not, then they don’t benefit from group commit.

·         A transaction over an entity group is ACID, but not across entity groups

·         optimistic concurrency control

·         updates are available only after commit

·         api: newTransaction, read/write, commit (pb: couldn’t type fast enough for the details)

·         non-blocking, consistent reads (pb: does a transaction see its previous writes?)

·         cross-entity group operations have looser consistency



·         schemas declare their physical data locality

·         optimized to minimize seeks, RPCs, bandwidth, and storage

·         several ways of declaring physical locality

o   entity groups

o   shared primary key prefixes (collocating tables in Big Table)

o   locality groups – i.e., attribute partitioning

·         simple, cost-transparent API primitives imply

o   only add scalable features

o   cost of writes is linear to data/indices

o   avoid scalability surprises


Avoiding joins

·         hierarchical primary keys

·         repeated fields (I guess just like Big Table)

·         store hierarchical data in a column (it’s unclear to me whether this is the whole entity group or only part of it)

·         Syntax looks like SQL: Create Table (with primary key), Create Index (on particular columns), ..


Replication HA

·         uses Paxos-based algorithm, per entity group

·         it was more complicated than they expected

·         writes need a majority of replicas to be up in order to commit

·         most reads are local, consistency ensured

·         replication is consistent and synchronous

·         automatic query failover: individual table servers may fail



·         has been used in production for a year

·         used by several internal tools and projects

·         several large and user-visible apps (they wouldn’t say which ones, except we know them)

·         used for rapidly implementing new features in older projects

·         many other projects are migrating to it


Technical lessons

·         declarative schemas are good

·         cost-transparent APIs good (SQL is not cost-transparent)

·         avoid joins with hierarchical data and indices (if you want a join that isn’t on a hierarchical path, then write a program, e.g., Sawzall)

·         avoid scalability surprises



·         schema reviews helpful

·         consistency is necessary

·         need a mindset of scalability and performance




Thursday, July 10, 2008 1:47:45 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
 Tuesday, July 08, 2008

Jim Gray proposed the original sort benchmark back in his famous Anon et al paper A Measure of Transaction Processing Power originally published in Datamation April 1, 1985. TeraSort is one of the benchmarks that Jim evolved from this original proposal.


TeraSort is essentially a sequential I/O benchmark and the best way to get lots of I/O capacity is to have many servers.  The mainframe engineer-a-bigger-bus technique has produced some nice I/O rates but it doesn’t scale. There have been some very good designs but, in the end, commodity parts in volume always win. The trick is coming up with an understandable programming model that allows thousands of nodes to be harnessed.  MapReduce takes some heat for not being innovative and not having learned enough from the database community (MapReduce – A Major Step Backwards).  However, Google, Microsoft, and Yahoo run the model over thousands of nodes.  And all three have written higher-level language layers above MapReduce, some of which look very SQL-like.


Owen O’Malley of the Yahoo Grid team took a moderate sized Hadoop cluster of 910 nodes and won the TeraSort benchmark.  Owen blogged the result: Apache Hadoop Wins Terabyte Sort Benchmark and provided more details in a short paper: TeraByte Sort on Apache Hadoop. Great result Owen.


Here’s the configuration that won:

  • 910 nodes
  • 4 dual core Xeons @ 2.0ghz per a node
  • 4 SATA disks per a node
  • 8G RAM per a node
  • 1 gigabit ethernet on each node
  • 40 nodes per a rack
  • 8 gigabit ethernet uplinks from each rack to the core
  • Red Hat Enterprise Linux Server Release 5.1 (kernel 2.6.18)
  • Sun Java JDK 1.6.0_05-b13

Yahoo bought expensive 4-socket servers for this configuration but, even then, this effort was won on less than $4 million in hardware.  Let’s assume that their fat nodes are $3.8k each.  They have 40 servers per rack so we’ll need 23 racks. Let’s assume $4.2k per top of rack switch and $100k for core switching.  That’s 910*$3.8k+23*$4.2k+$100k or $3,655k.  That means you can go out and spend roughly $3.7m and have the same resources that won the last sort benchmark. Amazing.  I love what’s happening in our industry.
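
The arithmetic, for anyone who wants to play with the assumptions (all prices are the rough guesses above):

```python
servers = 910
server_cost = 3_800            # assumed cost per 4-socket node
nodes_per_rack = 40
racks = -(-servers // nodes_per_rack)  # ceiling division -> 23 racks
tor_switch_cost = 4_200        # assumed per top-of-rack switch
core_switching = 100_000       # assumed core network budget

total = servers * server_cost + racks * tor_switch_cost + core_switching
print(f"${total:,}")  # $3,654,600 -- roughly $3.7M
```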

Update: Math glitch in original posting fixed above (thanks to Ari Rabkin & Nathan Shrenk).


The next thing I would like to see is this same test run on very low power servers.  Assuming the fairly beefy nodes used above are 350W each (they may well be more), the overall cluster ignoring networking will be 318kW and it ran for 209 seconds which is 18.49kWh. Let’s focus on power and show what can be sorted for 1kWh. The kWh sort.


Congratulations to the Yahoo and Hadoop team for a great result.




Wei Xiao of the Internet Search Research Center sent Owen’s result my way.




Tuesday, July 08, 2008 4:03:17 AM (Pacific Standard Time, UTC-08:00)  #    Comments [10] - Trackback
 Friday, July 04, 2008

Recent results from two academic researchers in Japan may prove significant to the NAND Flash market.  Clearly the trip from laboratory to volume production is often longer than the early estimates but these results look important.


Back in 2006, Jim Gray argued in Tape is Dead, Disk is Tape, Flash is Disk, & Ram Locality is King that we need a new layer in the storage hierarchy between memory and disk and that NAND Flash was an excellent candidate.  Early NAND Flash-based SSDs could sustain read rates well beyond 10x disk random I/O rates but the write rates were terrible. Some were as bad as 1/5 the rate of magnetic disk. Second generation devices are solving the random write problem as expected.  Costs continue to plunge, overall performance continues to improve, and many very high scale server workloads have been deploying flash devices over the past year. A success by most measures but two issues remain. The first issue is that we have one important metric heading in the wrong direction: endurance.  NAND flash can only support a limited number of erase cycles before failing.  The second issue is that many don’t expect the feature size to be reduced below 32 nm and, if scaling stalls there, the improvement rate will slow dramatically.


When I first got interested in single level cell (SLC) NAND Flash most published endurance numbers were typically in the 10^6 cycle range. Most current devices are in the 10^5 range and many see as low as 10^4 cycles on the horizon.  A million cycles is fine and will not restrict the life of the device.  100,000 cycles is closer to the line but my back of envelope numbers suggest 100k will (barely) be acceptable.  10k cycles is a problem and will restrict longevity of the device.
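
To see why the cycle count matters, here's the kind of back-of-envelope I mean, with assumed numbers: a 64GB device, perfect wear leveling, a write amplification of 2, and a heavy server workload rewriting the device ten times a day:

```python
def flash_lifetime_years(capacity_gb, erase_cycles, daily_write_gb, write_amp=2.0):
    """Years until the erase-cycle budget is exhausted, assuming perfect wear leveling."""
    total_budget_gb = capacity_gb * erase_cycles / write_amp
    return total_budget_gb / daily_write_gb / 365

for cycles in (10**6, 10**5, 10**4):
    years = flash_lifetime_years(64, cycles, 640)  # 640GB/day = 10 device writes/day
    print(f"{cycles:>9,} cycles -> {years:6.1f} years")
```

Under these assumptions, 10^6 cycles gives a lifetime measured in centuries, 10^5 gives over a decade, and 10^4 gives under a year and a half — which is why 10^4 restricts the longevity of the device.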


In this research work Shigeki Sakai and Ken Takeuchi show how Ferroelectric-gate Field Effect Transistors can dramatically improve the durability, reduce the required programming voltage, improve performance, and support further generational reductions in feature size.  The device prototype they demonstrated uses 6V to program rather than 20V which may reduce the cost or increase the speed of devices slightly.  What’s most important in the demonstrated results is estimated endurance in the 10^8 cycle range which is at least three orders of magnitude better than most current generation NAND parts.  That would take endurance completely off the NAND Flash worry list.


Potential feature size reduction is the other improvement of interest in this result.  Feature size reduction is the engine of Moore’s law and drives the semiconductor economics we’ve all become used to.  Many experts don’t expect to be able to reduce NAND flash feature size below roughly 30nm.  The Fe-NAND result shows potential for two more generational feature size reductions down to the 10nm range. This is important in that it drives cost reductions and we all want them to continue.


Fe-NAND looks extremely interesting and, if the research can be confirmed and is manufacturable, we have a very significant technology that can address the two major concerns with current generation NAND flash: 1) rapidly falling endurance, and 2) the expected inability to drop below the 32nm feature size.  Flash continues to build industry momentum.




Thanks to Jack Creasey for sending this my way.




Friday, July 04, 2008 5:55:51 AM (Pacific Standard Time, UTC-08:00)  #    Comments [1] - Trackback
 Wednesday, July 02, 2008

Updated below with additional implementation details.


Last week Spansion made an interesting announcement: EcoRAM, a NOR Flash based storage part in a Dual In-line Memory Module (DIMM) package. 


NOR Flash technology growth has been fueled by the NOR support for Execute in Place (XIP).  Unlike the NAND Flash interface, where entire memory pages need to be shifted into memory to be operated upon, NOR flash is directly addressable.  And this direct addressability allows instructions to be read and executed directly from the memory.  There is no need to shift pages out one at a time. Byte addressability and support for XIP makes NOR ideal for boot loaders, ROMs, and the control program store for consumer devices. For example, the iPod Nano uses Silicon Storage Technology 39WF800A 8-Megabyte NOR boot flash (eeTimes).


Since NAND flash is not byte addressable, providing only a block-mode interface, it is typically attached to PCs and servers as an I/O device.  The NOR support for direct byte addressability makes it a candidate for attachment as memory rather than as a block-mode I/O device, and when I first read the press release I thought this was what Spansion had done in partnership with Virident Systems. It’s clear they have NOR flash in a memory (DIMM) package, and they refer to it throughout the press release as “memory extension”. However, upon closer inspection, it appears to require that the NOR memory DIMMs all be installed in a separate gateway server they refer to as a “Green Gateway”.  It looks like the design has all the NOR flash in this separate server with a device driver on the host to virtualize memory on the NOR flash server. Essentially, it still may be accessed via an I/O interface, which is to say it’s not clear why you couldn’t do the same thing with NAND flash.  And it’s not immediately clear what protocol is used, what operating systems are supported, nor the exact performance but, overall, it still looks interesting.
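The access-model difference is easy to illustrate with a rough Python analogy: mmap stands in for NOR's byte addressability, while a page-sized read() mimics the NAND block interface. This is a sketch of the access models only, not of the actual hardware:

```python
import mmap
import os
import tempfile

# Build a 4 KB scratch file standing in for a flash device image.
path = os.path.join(tempfile.mkdtemp(), "flash.img")
with open(path, "wb") as f:
    f.write(bytes(range(256)) * 16)

# NAND-style block access: the whole page is transferred to get one byte.
PAGE_SIZE = 2048
with open(path, "rb") as f:
    page = f.read(PAGE_SIZE)   # shift in the entire page...
    nand_byte = page[100]      # ...just to inspect byte 100

# NOR-style byte addressability: read the byte at its address directly,
# which is what makes execute-in-place (XIP) practical.
with open(path, "rb") as f:
    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    nor_byte = m[100]
    m.close()

print(nand_byte, nor_byte)  # 100 100
```

Both paths return the same byte, but the block-mode path had to move the full page to get it.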


Update: In conversations with Virident, it appears this part is potentially more interesting than I initially speculated. Rather than hosting the memory in an independent server as I speculated, it’s an in-server design, but it does require some BIOS engineering. From Virident: The current interconnect is the HTx bus for AMD servers.  Will be QPI for Intel.  We are doing AMD first.  You should be able to install on a standard two socket board.  DRAM sits behind the processor, and EcoRAM sits behind the controller.  Of course, the BIOS for the system must support the extended memory – we have HP systems up and running as a proof of concept, and Dell should work fairly soon.


HTX and QPI open up big opportunities for hardware startups to innovate.  I know of many startups heading down this path. More innovation coming.


EcoRAM looks like it’s worth investigating in more detail.




A (slightly) more detailed presentation is available at:  Some interesting speeds and feeds from the press release and the presentation:


·         1/8th the power of DRAM at a given capacity,

·         Estimating that 8x power to storage capacity advantage over DRAM will grow to a full 16x by 2012

·         10x the reliability of DRAM,

·         smaller die area per bit,

·         much closer to DRAM access times (a bit vague on this one).


The Achilles heel of NOR flash has been its poor write speed.  The press release claims 2x to 10x better than traditional NOR memories, but this is still considerably slower than DRAM.


We need a lot more technical data and repeatable performance measures but, with what has been published so far, it would appear that the sweet spot for this device is very high random I/O rate, read-mostly workloads.  Potentially fairly interesting.



Thanks to Son VoBa of the Windows Virtualization team for sending this my way.




Wednesday, July 02, 2008 5:14:17 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
 Sunday, June 29, 2008

Title: Needle in a Haystack: Efficient Storage of Billions of Photos

Speaker: Jason Sobel, Manager of the Facebook Infrastructure Group



An excellent talk that I really enjoyed.  I used to lead a much smaller service that also used a lot of NetApp storage, and I recognized many of the problems Jason mentioned.  Throughout the introductory part of the talk I found myself thinking they need to move to a cheap, directly attached blob store. And that’s essentially the topic of the remainder of the talk.  Jason presented Haystack, the Facebook solution to the problem of a filesystem not working terribly well for their high-volume blob storage needs.


The same thing happened when he talked through the Facebook usage of Content Delivery Networks (CDNs).  The CDN stores the data once in a geo-distributed cache, Facebook stores it again in their distributed cache (memcached), and then again in the database tier.  Later in the talk, Jason made exactly this observation and noted that the new design will allow them to use the CDNs less and, as they get a broader geo-diverse data center footprint, they may move to being their own CDN. I 100% agree.
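That triple storage can be pictured as a chain of tier lookups. A toy Python sketch (class and tier names are my own illustration, not Facebook's implementation):

```python
class TieredPhotoStore:
    """Toy model of the CDN -> memcached -> database lookup chain."""
    def __init__(self):
        self.cdn = {}        # geo-distributed CDN cache (Akamai/Limelight)
        self.memcached = {}  # Facebook's own distributed cache tier
        self.database = {}   # authoritative store

    def put(self, key, blob):
        # A popular photo ends up resident in every tier -- the
        # triple storage observed in the talk.
        self.database[key] = blob
        self.memcached[key] = blob
        self.cdn[key] = blob

    def get(self, key):
        # Reads walk the tiers from closest/cheapest to the source of truth.
        for tier in (self.cdn, self.memcached, self.database):
            if key in tier:
                return tier[key]
        raise KeyError(key)

store = TieredPhotoStore()
store.put("profile/42", b"jpeg-bytes")
print(store.get("profile/42"))  # b'jpeg-bytes'
```

The sketch makes the redundancy obvious: every hot blob is paid for three times, which is exactly what a bigger geo footprint would let them reduce.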


My rough notes below with some of what I found most interesting.


Overall Facebook facts:

·         #6 site on the internet

·         500 total employees

o   200 in engineering

o   25 in Infrastructure Engineering

·         One of the largest MySQL installations in the world

·         Big user and contributor to Memcached

·         More than 10k servers in production

·         6,000 logical databases in production


Photo Storage and Management at Facebook:

·         Photo facts:

o   6.5B photos in total

§  4 to 5 sizes of each picture are materialized (30B files)

§  475k images/second

·         Mostly served via CDN (Akamai & Limelight)

·         200k profile photos/second

§  100m uploads/week

o   Stored on NetApp filers

·         First level caching via CDN (Akamai & Limelight)

o   99.8% hit rate for profiles

o   92% hit rate for remainder

·         Second level caching for profile pictures only via Cachr (non-profile goes directly against file handle cache)

o   Based upon a modified version of evhttp using memcached as a “backing” store

o   Since cachr is independent from memcached, cachr failure doesn’t lose state

o   1 TB of cache over 40 servers

o   Delivers microsecond response

o   Redundancy so no loss of cache contents on server failure

·         Photo Servers

o   Non-profile requests go directly against the photo-servers

o   Only profile requests that miss the cachr cache go to the photo servers.

·         File Handle Cache (FHC)

o   Based upon lighttpd and uses memcached as backing store

o   Reduces metadata workload on NetApp servers

o   Issue: filename-to-inode lookup is a serious scaling issue: it either 1) drives many I/Os or 2) wastes too much memory with a very large metadata cache

§  They have extended the Linux kernel to allow NFS file opens via inode number rather than filename to avoid the NetApp scaling issue.

§  The inode numbers are stored in the FHC

§  This technique offloads the NetApp servers dramatically.

§  Note that files are write only.  Mods write a new file and delete the old ones so the handles will fail and a new metadata lookup will be driven.

·         Issues with this architecture:

o   Netapp storage overwhelmed by metadata (3 disk I/Os to read a single photo).

§  The original design required 15 I/Os for a single picture (due to deeper directory hierarchy I’m guessing)

§  Tracking last access time, last modified etc. has no value to Facebook.  They really only need a blob store but they are using a filesystem at additional expense

o   Heavy reliance on CDNs and caches such that NetApp is basically pure backup

§  92% of non-profile and 99.8% of profile pictures are stored in CDN

§  Many of the rest are almost all stored in caching layers

·         Solution: Haystacks

o   Haystacks are a user level abstraction where lots of data is stored in a single file

o   Store an independent index vastly more efficient than the file store

o   1M of metadata/1G of data

§  Order of magnitude better on average than standard NetApp metadata

o   1 disk seek for all reads with any workload

o   Most likely store in XFS

o   Expect each haystack to be about 10G (with an index)

o   Speaker equates a Haystack to be a lot like a LUN and could be implemented on a LUN.  The actual implementation is via NFS onto NetApp as photos were previously stored

o   Net of what’s happening:

§  Haystack always hits on the metadata

o   Plan to replace NetApp

§  Haystack is a win over NetApp but we’ll likely run over XFS (originally done by Silicon Graphics)

§  Want more control of the cache behavior

o   Each Haystack Format:

§  Version number,

§  Magic number,

§  Length,

§  Data,

§  Checksum

o   Index format

§  Version,

§  Photo key,

§  Photo size,

§  Start,

§  Length.

o   Not planning to delete photos at all since the delete rate is VERY low, so the resources that would be recovered are not worth the work to recover them in the Facebook usage.  Deletion just removes the entry from the index, which makes the data unavailable, but they don’t bother to actually remove it from the Haystack bulk storage system.

o   Q: Why not store the index in an RDBMS?  Feels that it’ll drive too many I/Os and have the problems they are trying to avoid (I’m not completely convinced but do understand that simplicity and being in control has value).

·         They still plan to use the CDN but they are hoping to reduce their dependence on CDN. They are considering becoming their own CDN (Facebook is absolutely large enough to be able to do this cost effectively today).

·         They are considering using SSDs in the future.

·         Not interested in hosting with Google or Amazon. Compute is already close to the data and they are working to get both closer to users but don’t see a need/use for GAE or AWS at the Facebook scale.

·         The Facebook default is to use databases.  Photos are the largest exception but most data is stored in DBs. Few actions use transactions and joins though.

·         Almost all data is cached twice: once in memcached and then again in the DBs.

·         Random bits:

o   Canada: 1 out of 3 Canadians use Facebook.

o   Q:What is the strategy in China?  A:“not to do what Google did” :-)

o   Looking at de-duping and other commonality exploiting systems for client to server communications and storage (great idea although not clearly a big win for photos).

o   90% of Indians access the internet via a mobile device.  Facebook is very focused on mobile and international.
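The Haystack record and index layouts in the notes above (version, magic, length, data, checksum; key, start, length) map naturally onto a small append-only store. A toy Python sketch, where field widths, the magic number, and all names are my own illustration rather than Facebook's actual format:

```python
import io
import struct
import zlib

HEADER = "<IIQ"          # version, magic number, data length
VERSION = 1
MAGIC = 0xFACEB00C       # hypothetical magic number

class Haystack:
    """Toy append-only blob store with an in-memory index."""
    def __init__(self):
        self.log = io.BytesIO()   # stands in for the haystack file
        self.index = {}           # photo key -> (record start, data length)

    def write(self, key, data):
        start = self.log.tell()
        self.log.write(struct.pack(HEADER, VERSION, MAGIC, len(data)))
        self.log.write(data)
        self.log.write(struct.pack("<I", zlib.crc32(data)))  # checksum
        self.index[key] = (start, len(data))

    def read(self, key):
        # The index always hits, so a read is one seek into the log --
        # the "1 disk seek for all reads" property from the talk.
        start, length = self.index[key]
        self.log.seek(start + struct.calcsize(HEADER))
        return self.log.read(length)

    def delete(self, key):
        # Deletes only drop the index entry; the bytes stay in the log.
        del self.index[key]

h = Haystack()
h.write("photo1", b"fake-jpeg-bytes")
print(h.read("photo1"))  # b'fake-jpeg-bytes'
```

The index is tiny relative to the data, which is the point: the filesystem metadata I/Os that overwhelmed the filers are replaced by an in-memory lookup.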


Overall, an excellent talk by Jason.




Sent my way by Hitesh Kanwathirtha  of the Windows Live Experience team, Mitch Wyle of Engineering Excellence, and Dave Quick and Alex Mallet both in Windows Live Cloud Storage group. It was originally Slashdotted at: The presentation is posted at:




Sunday, June 29, 2008 8:04:21 PM (Pacific Standard Time, UTC-08:00)  #    Comments [3] - Trackback
 Friday, June 27, 2008

Alex Mallet and Viraj Mody of the Windows Live Mesh team took great notes at the Structure ’08 (Put Cloud Computing to Work) conference (appended below).


Some pre-reading information was made available to all attendees as well: Refresh the Net: Why the Internet needs a Makeover?



-          Interesting mix of attendees from companies in all areas of “cloud computing”

-          The quality of the presentations and panels was somewhat uneven

-          Talks were not very technical

-          Amazon is the clear leader in mindshare; MS isn’t even on the board

-          Lots of speculation about how software-as-a-service, platform-as-a-service, everything-as-a-service is going to play out: who the users will be, how to make money, whether there will be cloud computing standards etc


5 min Nick Carr video [author of “The Big Switch”]

-          Drew symbolic link between BillG retiring and Structure ’08, the first “cloud computing” conference, being the same week, marking the shift of computing from the desktop to the datacenter

-          Generic pontificating on the coming “age of cloud computing”


“The Platform Revolution: a look into disruptive technologies”, Jonathan Yarmis, research analyst

-          Enterprises always lag behind consumers in adoption of new technology, and IT is powerless to stop users from adopting new technology

-          4 big tech trends: social networks, mobility, cloud computing, alternative business models [eg ad-supported]

-          Tech trends mutually reinforcing: mobility leads to more social networking applications, being able to access data/apps in the cloud leads to more mobility

-          Mobile is platform for next-gen and emerging markets: 1.4 billion devices per year, 20% device growth per year, average device lifetime 21 months; opens up market to new users and new uses

-          Claim: “single converged device will never exist”; cloud computing enables independence of device and location

-          Stream computing: rate of data creation is growing at 50-500% per year, and it’s becoming increasingly important to be able to quickly process the data, determine what’s interesting and discard the rest

-          “Economic value of peer relationships hasn’t been realized yet” – Facebook Beacon was a good idea, but poorly realized


“Virtualization and Cloud Computing”, with Mendel Rosenblum, VMWare founder 

-          Virtualization can/should be used to decouple software from hardware even in the datacenter

-          Virtualization is cloud-computing enabler: can decide whether to run your own machines, or use somebody else’s, without having to rewrite everything

-          Coming “age of multicore” makes virtualization even more important/useful

-          Smart software that figures out how to distribute VMs over physical hardware isn’t a commodity yet

-          VMWare is working on merging the various virtualization layers: machine, storage, networks [eg VLANs]

-          HW support for virtualization is mostly being done for server-class machines [?]

-          Rosenblum doesn’t think moving workloads from the datacenter to edge machines, to take advantage of spare cycles, will ever take off – it’s just too much of a pain to try to harness those spare cycles

-          Single-machine hypervisor is becoming commodity, so VMWare is moving to managing the whole datacenter, to stay ahead of the competition


Keynote, Werner Vogels, Amazon CTO:

-          Mostly a pitch for Amazon’s web services: EC2, S3, SQS, SimpleDB

-          Gave example of Animoto, company that merges music + photos to create a video, which has no servers whatsoever: had 25K users, launched a Facebook app, and went from 25K users total to adding 25K users/hour; were able to handle it by moving from 50 EC2 instances to 3000 EC2 instances in 2 days

-          Currently 370K registered AWS developers

-          Bandwidth consumed by AWS is bigger than bandwidth consumed by Amazon e-commerce services

-          Shift to service-oriented architecture occurred as result of being approached by Target in 2001/2002, asking whether Amazon could run their e-commerce for them. Realized that their current architecture wouldn’t scale/work, so they re-engineered it

-          Single Amazon page can depend on hundreds of other services

-          Big barrier between developing web app and operating it at scale: loadbalancing, hardware mgmt, routing, storage management etc. Called this the “undifferentiated heavy lifting” that needs to be done to even get in the game

-          Claim: typical company spends 70% effort/money on “undifferentiated heavy lifting” and 30% on differentiated value creation; AWS is intended to allow companies to focus much more on differentiated value creation

-          SmugMug has been at forefront of companies relying on AWS; currently store 600TB of photos in S3, and have launched an entirely new product, SmugVault, based purely on the existence of S3 => AWS not just replacement for existing infrastructure, but enabling new businesses

-          In 2 years, cloud computing will be evaluated along 5 axes: security, availability, scalability, performance, cost-effectiveness

-          Really plugged the pay-as-you-go model


“Working the Cloud: next-gen infrastructure for new entrepreneurs” panel

-          Q: is lock-in going to be a problem ie how easy will it be to move an app from one cloud computing platform to another ?

o   A: Strong desire for standards that will make it easy to port apps, but not there yet

o   A: To really use the cloud, you need to embed assumptions about it in your code; even bare-metal clouds require intelligence, like scripts to spin up new EC2 instances, so lock-in is a real concern

o   Side thread: Google person on panel claimed that using Google App Engine didn’t lock in developers, because the GAE APIs are well-documented, and he was promptly verbally mugged by just about everyone else on the panel, pointing out things like the use of BigTable in GAE making it difficult to extract data, or replace the underlying storage layer etc.

o   Prediction: there will be convergence to standards, and choice will come down to whether to use a generic cloud, or a more specialized/efficient cloud, eg one targeted at the medical information sector, with features for HIPAA compliance

-          Need new licensing models for cloud computing, to deal with the dynamic increase/decrease in number of application instances/virtual machines as load changes

-          Tidbit: Google has geographically distributed data centers, and geo-replicates [some] data

-          Q: will we be able to use our old “toys” [APIs, programming models etc] in the cloud ?

o   A: Yes, have to be able to, otherwise people won’t adopt it

o   A:  Yes, just have to be smart about replacing the plumbing underneath the various APIs

o   A: Yes, but current API frameworks are lacking some semantics that become important in cloud computing, like ways to specify how many times an object should be replicated, whether it’s ok to lazily replicate some data etc


Mini-note, “Optical networking”, Drew Perkins

-          Video is, by far, the largest consumer of bandwidth on the internet

-          Cost of content is disproportionate to size: 4MB song costs $1, 200MB TV show episode costs $2, 1.5GB movie costs $3-4.

-          Photonic integrated circuits that can be used to build 100GB/s are needed to meet future bandwidth requirements: less power,  need fewer network devices


“Race to the next database” panel

-          Quite poorly organized: panelists each got to give an [uninformative] infomercial for their company, and there was very little time for actual questions and discussion

-          Aster Data Systems is back-end data warehouse and analytics system for MySpace: 1 billion impressions/day, 1 TB of new data per day, new data is loaded into 100-node Aster cluster every hour and needs to be available for ad analytics engine to decide which ads to show

-          SQLStream is company that has built a data stream processing product that collapses the usual processing stages [data staging, cleaning, loading etc] into a pipeline that continuously produces results for “standing” queries; useful for real-time analytics

-          Web causes disruption to traditional DB model because of [10x] larger data volumes, need for high interactivity/turn-around, need to scale out instead of up. For example, GreenPlum is building a 20PB data warehouse for one customer.

-          Can’t rely on all the data being in a single store, so need to be able to do federated/distributed queries


“MS Datacenters”

-          Presentation centered on MS plans for datacenters-in-a-box

-          Datacenters-in-a-container are long-term strategy for MS, not just transient response to high demand and lack of space

-          Container blocks have lower reliability than traditional datacenters, so applications need to be geo-distributed and redundant to handle downtime


“End of boxed software”, Parker Harris, co-founder of salesforce.com

-          Origins of salesforce.com: modeled on consumer internet sites

-          Transition from a client/server site to a platform: the first instinct is to build a platform, but then you lose touch with why you're building it. As they built out their offering, they abstracted away components and started realizing it could become a platform. Revenue comes from the site; the platform is a bonus.

-          Initially scaled by buying bigger [Sun] boxes ie scaled up, not out, and ran into lots of complexity. Unclear whether that’s still the case or whether they’ve re-architected.


“Scaling to satiate demand” panel

-          Q: “When did you first realize your architecture was broken, and couldn’t scale ?”

o   A: When the site started to get slow; eBay: after massive site outages

-          Q: “How do you handle running code supplied by other people on your servers ?”

o   A: Compartmentalize ie isolate apps; have mgmt infrastructure and tooling to be able to monitor and control uploaded apps; provide developers with fixed APIs and tools so you can control what they do

-          Q: “How do Facebook and Slide [builds Facebook apps] figure out where the problems are if Slide starts failing ?”

o   A: Lots of real-time metrics; ops folks from both companies are in IM contact and do co-operative troubleshooting

-          Q: “How should you handle PR around outages ?”

o   A: Be transparent; communicate; set realistic timelines for when site will be back up; set expectations wrt “bakedness” of features

-          Beware of retrying failed operations too soon, since retries may cause an overloaded system to never be able to come up

-          Ebay: each app is instrumented with the same set of logging infrastructure and there’s a real-time OLAP engine that analyzes the logs and does correlation to try to find troubled spots

-          Facebook and Meebo both utilized their user base to translate their sites into multiple languages

-          Need to know which bits of the system you can turn off if you run into trouble

-          Biggest challenge is scaling features that involve many-many links between users; it’s easy to scale a single user + data

-          Keep monitoring: there are always scale issues to find, even without problems/outages

-          Slide: “Firefox 3 broke all of our apps”

-          Facebook has > 10K servers
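One warning in the notes above deserves a sketch: retrying failed operations too soon can keep an overloaded system from ever recovering. A common mitigation is capped exponential backoff with jitter; a minimal sketch (function and parameter names are illustrative):

```python
import random
import time

def call_with_backoff(op, max_attempts=5, base_delay=0.05, cap=2.0,
                      sleep=time.sleep):
    """Retry op() with capped exponential backoff plus full jitter,
    so synchronized retries don't hammer a recovering service."""
    for attempt in range(max_attempts):
        try:
            return op()
        except Exception:
            if attempt == max_attempts - 1:
                raise                        # out of attempts: surface the error
            delay = min(cap, base_delay * (2 ** attempt))
            sleep(random.uniform(0, delay))  # randomize to spread retries out

# A toy operation that fails twice and then succeeds.
calls = {"count": 0}
def flaky():
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("service overloaded")
    return "ok"

result = call_with_backoff(flaky, sleep=lambda s: None)  # stub out sleeping
print(result, calls["count"])  # ok 3
```

The jitter matters as much as the backoff: without it, every client that failed at the same moment retries at the same moment, recreating the overload.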


Mini-note, “Creating fair bandwidth usage on the Internet”, Dr. Lawrence Roberts, leader of the original ARPANET team

-          P2P leads to unfair usage: people not using P2P get less; 5% of users (P2P users) receive 80% capacity

-          Deep packet inspection catches 75% of p2p traffic, but isn’t effective in creating fairness

-          Anagran has flow behavior mgmt: observe per user flow behavior & utilization and then equalize. Equalization is done in memory, on networking infrastructure [routers etc] and at the user level instead of the flow level


Mini-note, “Cloud computing and the mid-market”, Zach Nelson, CEO of NetSuite

-          Mid-market is the last great business applications opportunity

-          Cloud computing makes it economical to reach the “Fortune 5 million”

-          Cloud computing still doesn’t solve problem of application integration

-          Consulting services industry is next to be transformed by cloud computing


Mini-note, “Electricity use in datacenters”, Dr.Jonathan Koomey

-          Site Uptime Network, an organization of data center operators and designers, did a study of 19 datacenters from 1999-2006:

o   Floor area remained relatively constant

o   Power density went from 23 W/sq ft to 35 W/sq ft

-          In 2000, datacenters used 0.5% of world’s electricity; in 2005, used 1%.

-          Cooling and power distribution are largest electricity consumers; servers are second-largest; storage and networking equipment accounts for a small fraction

-          Asia-Pacific region’s use of power is increasing the fastest, over 25% growth per year

-          Lots of inefficiencies in facility design: wrong cost metrics [sq feet versus kW], different budgets and costs borne by different orgs [facilities vs IT], multiple safety factors piled on top of each other

-          Designed Eco-Rack, which, with only a few months of work, reduces power consumption on normalized workload by 16-18%

-          Forecast: datacenter electricity consumption will grow by 76% by 2010, maybe a bit less with virtualization


“VC investment in cloud computing infrastructure” panel

-          Overall thesis of panel was that VCs are not investing in infrastructure

-          VCs disagreed with panel theme, and said that it depended on the definition of infrastructure; said they are investing in infrastructure, but it’s moving higher in the stack, like Heroku [?]

-          HW infrastructure requires serious investment, large teams, and long time-frame – not a good fit with VC investment model

-          Any companies that want to build datacenters or commodity storage and compute services are not  a good investment – there are established, large competitors, and it’s very expensive to compete in that space

-          Infrastructure needed for really large scale [like a 400 Gbit/sec switch] has a pretty small market, which makes it hard to justify the investment. If there’s a small market, the buyers all know they’re the only buyers and exert large downward pressure on price, which makes it hard for company to stay in business

-          Quote: “any company that’s doing something worthwhile, and building something defensible, will take at least 24 months to develop”




Friday, June 27, 2008 5:45:07 AM (Pacific Standard Time, UTC-08:00)  #    Comments [1] - Trackback
 Wednesday, June 25, 2008

John Breslin did an excellent job of writing up Kai-Fu Lee’s Keynote at WWW2008.  John’s post: Dr. Kai-Fu Lee (Google) – “Cloud Computing”. 


There are 235m internet users in China and Kai-Fu believes they want:

1.       Accessibility

2.       Support for sharing

3.       Access data from wherever they are

4.       Simplicity

5.       Security


He argues that Cloud Computing is the best answer for these requirements.  He defined the key components of what he is referring to as the cloud to be: 1) data stored centrally without need for the user to understand where it actually is, 2) software and services also delivered from the central location and delivered via browser, 3) built on open standards and protocols (Linux, AJAX, LAMP, etc.) to avoid control by one company, and 4) accessible from any device especially cell phones.  I don’t follow how the use of Linux in the cloud will improve or in any way change the degree of openness and the ease with which a user could move to a different provider.  The technology base used in the cloud is mostly irrelevant. I agree that open and standard protocols are both helpful and a good thing.


Kai-Fu then argues that what he has defined as the cloud has been technically possible for decades but three main factors make it practical today:

1.       Falling cost of storage

2.       Ubiquitous broadband

3.       Good development tools available cost effectively to all


He enumerated six properties that make this area exciting: 1) user centric, 2) task centric, 3) powerful, 4) accessible, 5) intelligent, and 6) programmable.  He went through each in detail (see John’s posting).  In my read, I focused on the data provided on GFS and BigTable scaling, hardware selection, and failure rates that were sprinkled throughout the remainder of the talk:

·         Scale-out: he argues that when comparing a $42,000 high-end server to the same amount spent on $2,500 servers, the commodity scale-out solution is 33x more efficient.  That seems like a reasonable number, but I would be amazed if Google spent anywhere near $2,500 for a server.  I’m betting on $700 to perhaps as low as $500. See Jeff Dean on Google Hardware Infrastructure for a picture of what Jeff Dean reported to be the current Google internally designed server.

·         Failure management: Kai-Fu stated that a farm of 20,000 servers will have 110 failures per day.  This is a super interesting data point from Google in that failure rates are almost never published by major players. However, 110 per day on a population of 20k servers is ½% a day, which seems impossibly high.  That says, on average, the entire farm is turned over in 181 days.  No servers are anywhere close to that unreliable, so this failure data must cover all types of failures, whether software or hardware. When including all types of issues, the ½% number is perfectly credible.  Assuming their current server population is roughly one million, they are managing 5,500 failures per day requiring some form of intervention.  It’s pretty clear why auto-management systems are needed at anything even hinting at this scale. It would be super interesting to understand how many of these are recoverable software errors, recoverable hardware errors (memory faults etc.), and unrecoverable hardware errors requiring service or replacement.

·         He reports there are “around 200 Google File System (GFS) clusters in operation. Some have over 5 PB of disk space over 200 machines.”   That ratio is about 10TB per machine.  Assuming they are buying 750GB disks, that’s just over 13 disks.  I’ve argued in the past that a good service design point is to build everything on two hardware SKUs: 1) data light, and 2) data heavy.  Web servers and mid-tier boxes run the former and data stores run the latter.  One design I like uses the same server board for both SKUs with 12 SATA disks in SAS-attached disk modules.  Data light is just the server board.  Data heavy is the server board coupled with one or optionally more disk modules to get 12, 24, or even 36 disks for each server. Cheap cold storage needs high disk-to-server ratios.

·         “The largest BigTable cells are 700TB over 2,000 servers.”  I’m surprised to see two thousand reported by Kai-Fu as the largest BigTable cell – in the past I’ve seen references to over 5k. Let’s look at the storage-to-server ratio since he offered both.  700TB of storage spread over 2k servers is only 350GB per node. Given that they are using SATA disks, that would be only a single disk, and a fairly small one at that.  That seems VERY light on storage. BigTable is a semi-structured storage layer over GFS.  I can’t imagine a GFS cluster with only one disk per server, so I suspect the 2,000-node BigTable cluster that Kai-Fu described didn’t include the GFS cluster it’s running over.  That helps, but the numbers are still somewhat hard to make work.  These data don’t line up well with what’s been published in the past, nor do they appear to be the most economic configurations.
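The back-of-envelope arithmetic in these notes is easy to check directly:

```python
# Failure rate: 110 failures/day across a 20,000-server farm.
servers = 20_000
failures_per_day = 110
daily_rate = failures_per_day / servers
print(f"daily failure rate: {daily_rate:.2%}")                    # ~0.55%/day
print(f"farm turnover: {servers / failures_per_day:.1f} days")    # ~181 days

# Scaled to a hypothetical one-million-server fleet:
print(f"fleet-wide: {1_000_000 * daily_rate:.0f} failures/day")   # 5500

# GFS storage ratio: ~10 TB/machine on 750 GB disks.
disks_per_machine = 10_000 / 750
print(f"disks per machine: {disks_per_machine:.1f}")              # ~13.3

# BigTable cell: 700 TB over 2,000 servers.
gb_per_node = 700_000 / 2_000
print(f"BigTable storage per node: {gb_per_node:.0f} GB")         # 350 GB
```

The 350GB-per-node result is what makes the BigTable numbers suspicious: at roughly a single small SATA disk per server, the quoted cell size can't plausibly include its underlying GFS storage.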


Thanks to Zac Leow for sending this pointer my way.






Wednesday, June 25, 2008 8:06:53 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
 Tuesday, June 24, 2008

Earlier today Nokia announced it will acquire the remaining 52% share of Symbian Limited to take controlling interest of the mobile operating system provider with 91% of the outstanding shares.  This alone is interesting, but what is fascinating is they also announced their intention to open source Symbian to create “the most attractive platform for mobile innovation and drive the development of new and compelling web-enabled applications”.  The press release reports the acquisition will be completed at 3.647 EUR/share at a total cost of 264m EUR. The new open source project responsible for the Symbian operating system will be managed by the newly set up Symbian Foundation, with support announced by Nokia, AT&T, Broadcom, Digia, NTT DoCoMo, EA Mobile, Freescale, Fujitsu, LG, Motorola, Orange, Plusmo, Samsung, Sony Ericsson, ST, Symbian, Teleca, Texas Instruments, T-Mobile, Vodafone, and Wipro.


Other articles on the acquisition:




This substantially changes the mobile operating system world with all major offerings other than Windows Mobile, iPhone, and RIM now available in open source form.  The timing of this acquisition strongly suggests that it’s a response to a perceived threat from Google Android ensuring that, even if Android never gets substantial market traction, it’s already had a lasting impact on the market.




Sent my way by Eric Schoonover

Update: Added iPhone to the proprietary mobile O/S list (thanks Dare Obasanjo).




Tuesday, June 24, 2008 4:01:40 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback

Disclaimer: The opinions expressed here are my own and do not necessarily represent those of current or past employers.

All Content © 2014, James Hamilton