Google’s Dr. Kai-Fu Lee on Cloud Computing

John Breslin did an excellent job of writing up Kai-Fu Lee’s Keynote at WWW2008. John’s post: Dr. Kai-Fu Lee (Google) – “Cloud Computing”.

There are 235m internet users in China and Kai-Fu believes they want:

1. Accessibility

2. Support for sharing

3. Access to their data from anywhere

4. Simplicity

5. Security

He argues that Cloud Computing is the best answer to these requirements. He defined the key components of what he is calling the cloud to be: 1) data stored centrally without the user needing to understand where it actually is, 2) software and services also hosted centrally and delivered via the browser, 3) built on open standards and protocols (Linux, AJAX, LAMP, etc.) to avoid control by one company, and 4) accessible from any device, especially cell phones. I don't follow how the use of Linux in the cloud will improve or in any way change the degree of openness and the ease with which a user could move to a different provider. The technology base used in the cloud is mostly irrelevant. I agree that open and standard protocols are both helpful and a good thing.

Kai-Fu then argues that what he has defined as the cloud has been technically possible for decades but three main factors make it practical today:

1. Falling cost of storage

2. Ubiquitous broadband

3. Good development tools available cost effectively to all

He enumerated six properties that make this area exciting: 1) user centric, 2) task centric, 3) powerful, 4) accessible, 5) intelligent, and 6) programmable. He went through each in detail (see John's posting). In my read-through I focused on the data on GFS and BigTable scaling, hardware selection, and failure rates that were sprinkled throughout the remainder of the talk:

· Scale-out: he argues that when comparing a $42,000 high-end server to the same amount spent on $2,500 servers, the commodity scale-out solution is 33x more efficient. That seems like a reasonable number, but I would be amazed if Google spent anywhere near $2,500 per server. I'm betting on $700 to perhaps as low as $500. See Jeff Dean on Google Hardware Infrastructure for a picture of what Jeff Dean reported to be the current internally designed Google server.
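The 33x claim can be sanity-checked against the two prices quoted in the talk. Everything beyond those two prices below is my own arithmetic, so treat it as a sketch of what the claim implies rather than Google's actual comparison:

```python
# Back-of-envelope check on the 33x price/performance claim.
# Only the two server prices come from the talk; the rest is derived.

HIGH_END_COST = 42_000    # $ for one high-end server (quoted)
COMMODITY_COST = 2_500    # $ for one commodity server (quoted)

# The same budget buys this many commodity servers:
servers_per_budget = HIGH_END_COST / COMMODITY_COST
print(servers_per_budget)                    # 16.8

# If the commodity pool is 33x more efficient overall, each commodity
# server must deliver this multiple of the high-end box's throughput:
implied_per_server_perf = 33 / servers_per_budget
print(round(implied_per_server_perf, 2))     # 1.96
```

In other words, the raw price ratio only accounts for a 16.8x advantage; the quoted 33x figure implies each commodity server was also assumed to deliver roughly twice the throughput of the high-end box, however that was measured.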

· Failure management: Kai-Fu stated that a farm of 20,000 servers will have 110 failures per day. This is a super interesting data point from Google in that failure rates are almost never published by major players. However, 110 per day on a population of 20k servers is just over ½% a day, which seems impossibly high. That implies, on average, the entire farm is turned over in 182 days. No servers are anywhere close to that unreliable, so this failure data must cover all types of failures, whether software or hardware. When including all types of issues, the ½% number is perfectly credible. Assuming their current server population is roughly one million, they are managing 5,500 failures per day requiring some form of intervention. It's pretty clear why auto-management systems are needed at anything even hinting at this scale. It would be super interesting to understand how many of these are recoverable software errors, recoverable hardware errors (memory faults, etc.), and unrecoverable hardware errors requiring service or replacement.
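The failure-rate arithmetic above, spelled out. The 110 failures/day on 20,000 servers comes from the talk; the one-million-server fleet size is my assumption, not a Google number:

```python
# Failure-rate back-of-envelope from the quoted figures.

FAILURES_PER_DAY = 110    # quoted in the talk
FARM_SIZE = 20_000        # quoted in the talk

daily_rate = FAILURES_PER_DAY / FARM_SIZE
print(f"{daily_rate:.2%} of the farm fails each day")        # 0.55%

turnover_days = FARM_SIZE / FAILURES_PER_DAY
print(f"whole farm turned over every {turnover_days:.0f} days")  # ~182

# Scaling to an assumed one-million-server fleet:
ASSUMED_FLEET = 1_000_000
print(f"{ASSUMED_FLEET * daily_rate:.0f} failures/day fleet-wide")  # 5500
```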

· He reports there are "around 200 Google File System (GFS) clusters in operation. Some have over 5 PB of disk space over 500 machines." That ratio is about 10TB per machine. Assuming they are buying 750GB disks, that's just over 13 disks per machine. I've argued in the past that a good service design point is to build everything on two hardware SKUs: 1) data light, and 2) data heavy. Web servers and mid-tier boxes run the former and data stores run the latter. One design I like uses the same server board for both SKUs with 12 SATA disks in SAS-attached disk modules. Data light is just the server board. Data heavy is the server board coupled with one or more disk modules to get 12, 24, or even 36 disks per server. Cheap cold storage needs high disk-to-server ratios.
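The disk-count arithmetic for the two-SKU design can be sketched as follows, assuming the 750GB SATA disks and 12-disk modules described above (the helper names are mine):

```python
# Disk and module counts for a data-heavy SKU built from
# 12-disk SAS-attached modules of 750GB SATA disks.

import math

DISK_TB = 0.75       # 750GB SATA disk
MODULE_DISKS = 12    # disks per SAS-attached module

def disks_needed(tb_per_machine: float) -> int:
    """Whole disks needed to hold the target capacity per machine."""
    return math.ceil(tb_per_machine / DISK_TB)

def modules_needed(tb_per_machine: float) -> int:
    """12-disk modules needed on a data-heavy server."""
    return math.ceil(disks_needed(tb_per_machine) / MODULE_DISKS)

# The ~10TB/machine GFS ratio is just over 13 disks, so it rounds
# up to 14 whole disks, i.e. two 12-disk modules on this design:
print(disks_needed(10))     # 14
print(modules_needed(10))   # 2

# Capacity per data-heavy server at 1, 2, or 3 modules:
for modules in (1, 2, 3):
    disks = modules * MODULE_DISKS
    print(disks, "disks =", disks * DISK_TB, "TB")   # 9.0, 18.0, 27.0 TB
```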

· He reports the largest BigTable cells are "700TB over 2,000 servers." I'm surprised to see two thousand reported by Kai-Fu as the largest BigTable cell – in the past I've seen references to over 5k. Let's look at the storage-to-server ratio since he offered both. 700TB spread over 2k servers is only 350GB per node. Given that they are using SATA disks, that would be only a single disk and a fairly small one at that. That seems VERY light on storage. BigTable is a semi-structured storage layer over GFS. I can't imagine a GFS cluster with only one disk per server, so I suspect the 2,000-node BigTable cluster that Kai-Fu described didn't include the GFS cluster that it's running over. That helps, but the numbers are still somewhat hard to make work. These data don't line up well with what's been published in the past, nor do they appear to be the most economic configurations.
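The storage-to-server ratio for the quoted BigTable cell works out as follows (700TB and 2,000 servers from the talk; the 750GB disk size is the same assumption used above):

```python
# Storage-to-server ratio for the quoted largest BigTable cell.

CELL_TB = 700          # quoted cell size
CELL_SERVERS = 2_000   # quoted server count

gb_per_node = CELL_TB * 1_000 / CELL_SERVERS
print(gb_per_node)     # 350.0 GB per node

# With 750GB SATA disks, that is less than half of one disk per server:
DISK_GB = 750
print(round(gb_per_node / DISK_GB, 2))   # 0.47
```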

Thanks to Zac Leow for sending this pointer my way.

–jrh

James Hamilton, Data Center Futures
Bldg 99/2428, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
JamesRH@microsoft.com

H:mvdirona.com | W:research.microsoft.com/~jamesrh | blog:http://perspectives.mvdirona.com

2 comments on “Google’s Dr. Kai-Fu Lee on Cloud Computing”
  1. John Breslin says:

    BTW since I was transcribing on the fly it is possible there may be some errors in the numbers but so far no-one has disputed what I heard so hopefully it is correct in the main…

  2. John Breslin says:

    Glad you found it useful James, all the best – John.
