Friday, March 20, 2009

From Data Center Knowledge yesterday, Rackable Turns up the Heat, we see the beginnings of the next class of server innovations. This one is going to be important and have lasting impact. The industry will save millions of dollars and megawatts of power, even ignoring the capital expense reductions possible. Hats off to Rackable Systems for being the first to deliver. Yesterday they announced the CloudRack C2.  CloudRack is very similar to the MicroSlice offering I mentioned in the Microslice Servers posting. These are very low cost, high efficiency, high density server offerings targeting high-scale services.

 

What makes the CloudRack C2 particularly notable is that they have raised the standard operating temperature range to a full 40C (104F).  Data center mechanical systems consume roughly 1/3 of all power brought into the data center:

Data center power consumption:

·         IT load (servers): 1/1.7 => 59%
·         Distribution losses: 8%
·         Mechanical load (cooling): 33%

From: Where Does the Power Go?
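
As a rough cross-check, these proportions fall out of an overall PUE of about 1.7 with an assumed 8% distribution loss (a sketch, not new measurements):

```python
# Rough cross-check of the breakdown above: with an overall PUE of ~1.7 and an
# assumed 8% power distribution loss, the cooling fraction falls out as the
# remainder. Both inputs are assumptions, not new measurements.
pue = 1.7
it_load = 1 / pue                          # fraction of facility power reaching servers
distribution_losses = 0.08                 # UPS, transformers, wiring
mechanical = 1 - it_load - distribution_losses

print(f"IT load (servers):         {it_load:.0%}")              # ~59%
print(f"Distribution losses:       {distribution_losses:.0%}")  # 8%
print(f"Mechanical load (cooling): {mechanical:.0%}")           # ~33%
```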

 

The best way to make cooling more efficient is to stop doing so much of it.  I’ve been asking all server producers, including Rackable, to commit to full warranty coverage for servers operating at 35C (95F) inlet temperatures.  Some think I’m nuts, but a few innovators like Rackable and Dell fully understand the savings possible. Higher data center temperatures conserve energy and reduce costs. It’s good for the industry and good for the environment.

 

To fully realize these industry-wide savings we need all data center IT equipment certified for high temperature operation, particularly top-of-rack and aggregation switches.

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Friday, March 20, 2009 6:25:39 AM (Pacific Standard Time, UTC-08:00)  #    Comments [6] - Trackback
Hardware
 Thursday, March 19, 2009

HotCloud ’09 is a workshop that will be held at the same time as USENIX ’09 (June 14 through 19, 2009). The CFP:

 

Join us in San Diego, CA, June 15, 2009, for the Workshop on Hot Topics in Cloud Computing. HotCloud '09 seeks to discuss challenges in the Cloud Computing paradigm including the design, implementation, and deployment of virtualized clouds. The workshop provides a forum for academics as well as practitioners in the field to share their experience, leverage each other's perspectives, and identify new and emerging "hot" trends in this area.

HotCloud '09 will be co-located with the 2009 USENIX Annual Technical Conference (USENIX '09), which will take place June 14–19, 2009. The exact date of the workshop will be set soon.

The call for papers is at: http://www.usenix.org/events/hotcloud09/cfp/.

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Thursday, March 19, 2009 4:22:14 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Ramblings
 Wednesday, March 18, 2009

This is the third posting in the series on heterogeneous computing. The first two were:

1.       Heterogeneous Computing using GPGPUs and FPGAs

2.       Heterogeneous Computing using GPGPUs:  NVidia GT200

 

This post looks more deeply at the AMD/ATI RV770.

 

The latest GPU from AMD/ATI is the RV770 architecture.  The processor contains 10 SIMD cores, each with 16 streaming processor (SP) units.   The SIMD cores are similar to NVidia’s Texture Processor Cluster (TPC) units (the NVidia GT200 also has 10 of these), and the 10*16 = 160 SPs are the “execution thread granularity,” similar to NVidia’s SP units (the GT200 has 240 of these).  Unlike NVidia’s design, which executes 1 instruction per thread, each SP on the RV770 executes packed 5-wide VLIW-style instructions.  For graphics and visualization workloads, floating point intensity is high enough to average about 4.2 useful operations per cycle.  On dense data parallel operations (e.g., dense matrix multiply), all 5 ALUs can easily be used.

 

The ALUs in each SP are named x, y, z, w and t.  x, y, z and w are symmetric, and capable of retiring a single precision floating point multiply-add per cycle.  The t unit is a Special Function Unit (SFU) capable of everything an xyzw ALU can do, plus transcendental functions like sin, cos, etc.  There is also a branch unit in each SP to deal with shader program branches.

 

From this information, we can see that when people are talking about 800 “shader cores” or “threads” or “streaming processors”, they are actually referring to the 10*16*5 = 800 xyzwt ALUs.  This can be confusing, because there are really only 160 simultaneous instruction pipelines.  Also, both NVidia and AMD use symmetric single issue streaming multiprocessor architectures, so branches are handled very differently from CPUs. 

 

The RV770 is used in the desktop Radeon 4850 and 4870 video cards, and evidently the “workstation” FireStream 9250 and FirePro V8700.  The Radeon 48x0 X2 “enthusiast desktop” cards have two RV770s on the same card. Like NVidia Quadro cards, the typical difference between the “desktop” and “workstation” cards is that the workstation card has anti-aliased (AA) line capability enabled (primarily for the CAD market) and it costs 5-10 times as much.    

 

[The computing cores always have AA line capability, so it’s probably more accurate to say that the desktop cards have this capability disabled.  Theoretically, foundry binning could sort processors with hard faults in the “anti-aliased line hardware” as “desktop” processors.  However, this probably never really happens since this is just a tiny bit of instruction decode logic or microcode that sends “lines” to shared setup logic that triangles are computed on.  Likewise, the NVidia Tesla boards are just GT200 processors with potentially some extra compliance testing and more (non-ECC) board memory.  Arguably, these artificially maintained high margin product lines are what keep these companies profitable; industrial design subsidizes gamers!]

 

Double precision floating point is accomplished by fusing the xyzw ALUs within an SP into two pairs.  These two double units can perform either multiply or add (but not both) each cycle.  The t unit is unaffected by this fused mode, and ALU/transcendental operations can be co-scheduled alongside the doubles just like with single precision-only VLIW issue.

 

Local card memory is 512MB of GDDR3 for the 4850 and 1GB of GDDR5 for the 4870.  Both use a 256 bit wide bus, but GDDR3 is 2 channel while GDDR5 is 4 channel.

 

Let’s look at peak performance numbers for the Radeon 4870, clocked at the reference 750 MHz.  Keep in mind that all of the ALUs are capable of multiply-add instructions (2 flop/cycle):

= 750 MHz * 10 SIMD cores * 16 SP/core * 5 ALU/SP * 2 flop/cycle per ALU

= 1,200,000 Mflop/s = 1.2 TFlop/s

For double precision:

= 750 MHz * 10 * 16 * 2 “double FPU” * 1 flop/cycle per “double FPU”

= 240 GFlop/s double precision + 240 GFlop/s single precision on the 160 t SFUs

 

Reference memory frequency is 900 MHz:

= 900 MHz * 4 transfers/cycle * 256 bits / 8 bits per byte = 115 GB/s
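
The same arithmetic in a few lines of Python (reference clocks as quoted above):

```python
# Peak throughput arithmetic for the reference-clocked Radeon 4870, following
# the unit counts described above.
clock_hz = 750e6                       # reference core clock
simd_cores, sps_per_core, alus_per_sp = 10, 16, 5
single = clock_hz * simd_cores * sps_per_core * alus_per_sp * 2   # multiply-add = 2 flop/cycle
print(f"single precision: {single / 1e12:.1f} TFlop/s")           # 1.2 TFlop/s

# Double precision: the xyzw ALUs fuse into 2 "double" units, 1 flop/cycle each
double = clock_hz * simd_cores * sps_per_core * 2 * 1
print(f"double precision: {double / 1e9:.0f} GFlop/s")            # 240 GFlop/s

# GDDR5 bandwidth at the 900 MHz reference memory clock (quad data rate, 256-bit bus)
mem_hz, transfers_per_clock, bus_bits = 900e6, 4, 256
print(f"memory bandwidth: {mem_hz * transfers_per_clock * bus_bits / 8 / 1e9:.0f} GB/s")  # ~115 GB/s
```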

 

Here are peak performance numbers for some RV770 cards:

                                Single (GFlop/s)    Double (GFlop/s)    Bandwidth (GB/s)    TDP (W)    Cost

·         Radeon 4850           1000                200                 64                  180        $130
·         Radeon 4870           1200                240                 115                 200        $180
·         4850 X2               2000                400                 127                 230        $255
·         4870 X2               2400                480                 230                 285        $420
·         FireStrm 9250         1000                200                 64                  180        $790     (same as 4850)
·         FirePro V8700         1200                240                 115                 200        $1130    (same as 4870)

 

The Radeon 4850 X2 is the cheapest compute capability per retail dollar available outside of DSPs and fixed function ASICs.  However, its bandwidth is very low compared to its floating point horsepower – if it executes fewer than 63 floating point instructions for every F32 piece of data that must be fetched from memory, then memory bandwidth will be the bottleneck!  The 4870 is better balanced, with a computational intensity breakpoint of 42.  However, NVidia’s cards are applicable to a wider range of workloads; the GTX 285 has a breakpoint of 27 instructions (less compute power, more bandwidth).  For reference, a Core i7 is about 16, and CPU caches are much bigger than GPU “caches” so there is more opportunity to reuse data before fetching off-chip.
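
The breakpoints above are just peak flops divided by the rate at which 4-byte floats can be fetched from card memory. A quick sketch using the table numbers:

```python
# Break-even computational intensity: how many floating point operations a
# kernel must execute per 4-byte float fetched from card memory before compute,
# rather than memory bandwidth, becomes the bottleneck. Peak numbers are taken
# from the tables in this post and the GT200 post.
def breakpoint(peak_gflops, bandwidth_gb_per_s, bytes_per_element=4):
    fetch_rate = bandwidth_gb_per_s / bytes_per_element   # billions of floats fetched/s
    return peak_gflops / fetch_rate

for name, gflops, bw in [("Radeon 4850 X2", 2000, 127),
                         ("Radeon 4870",    1200, 115),
                         ("GTX 285",        1062, 159)]:
    print(f"{name}: ~{breakpoint(gflops, bw):.0f} flops per float fetched")
```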

 

Thanks to Mike Marr for the research and the detailed write-up above. Errors or omissions are mine.

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Wednesday, March 18, 2009 4:09:07 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Hardware
 Monday, March 16, 2009

In the last posting, Heterogeneous Computing using GPGPUs: NVidia GT200, I promised the next post would be a follow-on look at the AMD/ATI RV770.  However, over the weekend, Niraj Tolia of HP Labs sent this my way as a follow-up on the set of articles on GPGPU programming. Prior to reading this note, I hadn’t really been interested in virtualizing GPUs, but the paper caught my interest and I’m posting my notes on it just ahead of the RV770 architectural review that I’ll get up later in the week.

 

The paper GViM: GPU-accelerated Virtual Machines tackles the problem of implementing GPGPU programming in a virtual machine environment. The basic problem is this: if you are running N virtual machines, each of which is running 1 or more GPGPU jobs, and you have fewer than N GPGPUs physically attached to the server, then you need to virtualize the GPGPU. As covered in the last two postings, GPUs are large, very high state devices and, consequently, hard to virtualize efficiently.

 

The approaches discussed in this paper extend a trick that I first saw used in Virtual Interface Adapter communications and that is also supported by InfiniBand.  I’m sure this model appeared elsewhere earlier but these are two good examples. In this networking interface model, the cost of each send and receive passing through the operating system communication path is avoided, without giving up security, by first making operating system calls to set up a communication path and to register buffers and doorbells. The doorbell is a memory location that, when written to, will cause the adapter to send the contents of the send buffer. At this point, the communications channel is set up, and all sends and receives can be done directly in user space without further operating system interactions.  It’s a nice, secure implementation of Remote Direct Memory Access (RDMA).

 

This technique of virtualizing part of a communications adapter and mapping it into the address space of the application program can be played out in the GPGPU world as well to allow efficient sharing of GPUs between guest operating systems in a virtual machine environment.

 

The approach to this problem proposed in the paper is based upon three observations: 1) GPU calls are coarse-grained with considerable work done between each call, so overhead on the calls themselves doesn’t dominate, 2) data transfer in and out of the device is very important and can dominate if not done efficiently, and 3) high level API access to GPUs is common. Building on the third observation, they chose to virtualize at the CUDA API level and implement CUDA over what is called, in the virtual machine world, a split driver model. In the split driver model a front end, or client, device driver is loaded into the guest O/S and it makes calls to the management domain (called dom0 in Xen).  In dom0, the other half of the driver is implemented. This other half of the driver makes standard CUDA calls against the physical GPU device(s).

 

The approach taken by this paper is to implement all calls to CUDA via an interposer library that makes calls to the guest O/S driver, which makes calls to the dom0 component, which makes calls to the GPU. This effectively virtualizes the GPU device, but the required call path is very inefficient.  The authors note that calls to CUDA are coarse-grained and do considerable work, so the per-call inefficiency actually does get amortized out nicely as long as the data is brought to and from the device efficiently. This latter point is the tough one, and it is where the memory mapping tricks I introduced above are used.
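
As a toy illustration of the interposer idea (not the GViM code, and no real CUDA calls – the names below are only illustrative), the guest links against a stub that exposes the API but simply forwards each call to the backend that owns the device:

```python
# Toy illustration of the API-level split driver described above (not the GViM
# code; class and method names are illustrative only, and no real CUDA calls
# are made). The frontend lives in the guest; the backend would live in dom0.
class BackendDriver:
    """Stands in for the dom0 half that would make the real CUDA calls."""
    def __init__(self):
        self.log = []

    def execute(self, call, **args):
        self.log.append((call, args))        # a real backend would invoke CUDA here
        return f"ok:{call}"

class FrontendStub:
    """Stands in for the interposer library loaded into the guest O/S."""
    def __init__(self, backend):
        self.backend = backend

    def malloc(self, nbytes):
        return self.backend.execute("cudaMalloc", nbytes=nbytes)

    def memcpy_to_device(self, buf):
        return self.backend.execute("cudaMemcpy", nbytes=len(buf))

    def launch(self, kernel, grid, block):
        return self.backend.execute("launchKernel", kernel=kernel, grid=grid, block=block)

backend = BackendDriver()
gpu = FrontendStub(backend)                  # what the guest application links against
gpu.malloc(1 << 20)
gpu.memcpy_to_device(b"x" * 4096)
gpu.launch("saxpy", grid=(256,), block=(128,))
print(backend.log)                           # every call crossed the frontend/backend boundary
```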

 

The authors proposed three solutions to getting data to and from the GPU:

1.       2-copy: the user program allocates memory in the guest O/S using malloc.  Memory transferred to the GPGPU must first be copied to the host O/S kernel, then dom0 writes it to the GPU.

2.       1-copy: the user program and the device driver in the guest O/S kernel address space share a mapped memory space to avoid one of the two copies above.

3.       Bypass: exploit the fact that the GPU is 100% managed by the dom0 component of the device driver and have it call cudaMallocHost() to map all GPU memory at start-up time. This maps all GPU memory into its address space. Then employ the mapping trick of point 2 above to selectively map this space into the guest application space. This has the upside of avoiding copies but the downside of statically partitioning the GPU memory space.  Each app gets access to only a portion of it. Less copying and less cost on context switch, but much less memory is available for each application program.

 

Summary: By choosing to virtualize at the API layer rather than at the hardware layer, the task of virtualization was made easier with the downside that only one API is supported on this model. The authors use the split driver model to implement this level of virtualization easily on Xen exploiting the fact that there is considerable work done per CUDA call. Finally, they efficiently manage memory using the three techniques described above.

 

If you are interested in virtualization and GPGPU programming, it’s a good read with a simple and practical approach to virtualizing GPUs: http://www.cc.gatech.edu/~vishakha/files/GViM.pdf.

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Monday, March 16, 2009 6:32:50 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Hardware
 Sunday, March 15, 2009

In Heterogeneous Computing using GPGPUs and FPGAs I looked at heterogeneous computing, the application of multiple instruction set architectures within a single application program under direct programmer control.  Heterogeneous computing has been around for years but usage has been restricted to fairly small niches.  I’m predicting that we’re going to see abrupt and steep growth over the next couple of years. The combination of delivering results for many workloads cheaper, faster, and more power efficiently, coupled with improved programming tools, is going to vault GPGPU programming into being a much more common technique available to everyone.

 

Following on from the previous posting, Heterogeneous Computing using GPGPUs and FPGAs, in this one we’ll take a detailed look at the NVidia GT200 GPU architecture and, in the next, the AMD/ATI RV770.

 

The latest NVidia GPU is called the GT200 (“GT” stands for Graphics Tesla).  The processor contains 10 Texture/Processor Clusters (TPC), each with 3 Single Program Multiple Data (SPMD) computing cores which NVidia calls Streaming Multiprocessors (SM).  Each has two instruction issue ports (I’ll call them Port 0 and Port 1):

·         Port 0 can issue instructions to 1 of 3 groupings of functional units on any given cycle:

o   “SIMT” (Single Instruction Multiple Thread) instructions to 8 single precision floating point units, marketed as “Stream Processors” (SP), a.k.a. thread processors or shader cores

o   a double precision floating point unit

o   an 8-way branch unit that manages state for the SIMT execution (basically, it deals with branch instructions in shader programs)

·         Port 1 can issue instructions to two Special Function Units (SFU) each of which can process packed 4-wide vectors.  The SFUs perform transcendental operations like sin, cos, etc. or single precision multiplies (like the Intel SSE instruction: MULPS)

 

From this information, you can derive some common marketing numbers for this hardware:

·         “240 stream processors” are the 10*3*8 = 240 single precision FPUs on Port 0.

·         “30 double precision pipelines” are the 10*3* 1 = 30 double precision FPUs on Port 0.

·         “dual-issue” is the fact that you can (essentially) co-issue instructions to both Port 0 and Port 1.

 

The GT200 is used in the line of “GeForce GTX 2xx” commodity video cards (ex. GeForce GTX 280) and the Tesla C1060 [there will also be a Quadro NVS part].  The Tesla S1070 is a PCI bridge that packages four Tesla C1060s into a 1U rack unit – since it is just a bridge, it still requires a host rack unit to drive the GPUs.  The GeForce GTX 295 packages two GT200 processors on the same card (similar to AMD Radeon 48xx X2 cards).

 

Total transistor count is 1.4B – about twice the number of an Intel quad Core i7 or AMD RV770.  The GeForce GTX 2x5 parts (ex. GeForce GTX 285) are die shrunk versions of the original core: 55nm vs. 65nm.  On the original 65nm process, the GT200 was 583.2 mm2, or about 6 times the surface area of a dual-core Penryn.  A 300mm wafer produced only 94 processors (where 45nm Atom processors would yield about 2500).

 

Local card memory is GDDR3 configured as 2 channels with a bus width of 512 bits – typically 1GB.

 

The original GTX 260 was a GT200 which disabled 2 of the 10 TPC units (for a total of 24 SMs or 192 SPs) – presumably to deal with manufacturing hard faults in some of the cores.  It also disables part of the memory bus: 448 bits instead of 512, and consequently local memory is only 896MB.  [Disabling parts of a chip is a now common manufacturing strategy to more fully monetize die yields on modular circuit designs – Intel has been doing this for years with L2 caches.]  As the fab process improved, NVidia started shipping the GTX 260-216, which disables only 1 of the TPCs, and is apparently the only GTX 260 part that is actually being manufactured nowadays (216 = 9 TPCs * 3 SMs * 8 SPs, the number of remaining shader cores).

 

Let’s look at peak performance numbers for the GTX 280, reference clocked at 1296 MHz.  Notice that Port 0 instructions can be multiply-adds (2 flop/cycle) and Port 1 instructions are just multiplies (1 flop/cycle):

1296 MHz * 30 SM * (8 SP/SM * 2 flop/cycle per SP + 2 SFU/SM * 4 FPU/SFU * 1 flop/cycle per FPU)

= Port 0 throughput + Port 1 throughput = 622080 Mflop/s + 311040 Mflop/s = 933 GFlop/s single precision

For double precision:

                1296 MHz * 30 SM * 1 double precision FPU/SM * 2 flop/cycle = 78 GFlop/s

The Port 1 units can be co-issued with double precision instructions, so can also process 311GFlop/s of single precision multiplies while doing double precision multiply-adds.  [That’s probably not terribly useful without single precision adds though.]

 

Reference memory frequency is 1107 MHz:

                1107 MHz * 2 transfers/cycle * 512 bits / 8 bits per byte = 142 GB/s
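
The same arithmetic as a short Python sketch, with the two issue ports broken out (reference clocks as above):

```python
# Peak throughput arithmetic for the reference-clocked GTX 280, following the
# port structure described above.
core_hz, sms = 1296e6, 30

port0 = core_hz * sms * 8 * 2          # 8 SPs per SM, multiply-add = 2 flop/cycle
port1 = core_hz * sms * 2 * 4 * 1      # 2 SFUs per SM, 4-wide, multiply = 1 flop/cycle
print(f"single precision: {(port0 + port1) / 1e9:.0f} GFlop/s")   # ~933

double = core_hz * sms * 1 * 2         # one double precision unit per SM, multiply-add
print(f"double precision: {double / 1e9:.0f} GFlop/s")            # ~78

mem_hz, bus_bits = 1107e6, 512         # GDDR3, double data rate, 512-bit bus
print(f"memory bandwidth: {mem_hz * 2 * bus_bits / 8 / 1e9:.0f} GB/s")   # ~142
```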

 

Here are the peak performance numbers for various parts:

                                Single Precision (GFlop/s)    Double Precision (GFlop/s)    Bandwidth (GB/s)

·         GTX 260-216:          805                            67                             112
·         GTX 280:              933                            78                             142
·         GTX 285:              1062                           89                             159
·         GTX 295:              1789                           149                            224
·         Tesla C1060:          933                            78                             102

Notice the GTX 285 breaks the single-GPU 1 TFlop/s barrier.  The Tesla card has the lowest bandwidth; this is presumably because there is 4GB of local memory instead of just 1GB as on the GTX 285 (more memory typically requires a lower bus clock rate).  Finally, notice that even the GTX 285 still gets less than twice the double precision throughput of an AMD Phenom II 940 or Intel Core i7, both of which get about 50 GFlop/s for double precision and don’t require sophisticated latency-hiding data transfer or a complex programming model.

 

Thanks to Mike Marr for the research and the detailed write-up above. Errors or omissions are mine.

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com 

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Sunday, March 15, 2009 5:18:58 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Hardware
 Saturday, March 14, 2009

It’s not at all uncommon to have several different instruction sets employed in a single computer. Decades ago, IBM mainframes had I/O processing systems (channel processors). Most client systems have dedicated graphics processors. Many networking cards offload the transport stack (TCP/IP offload). These are all examples of special purpose processors used to support general computation. The application programmer doesn’t directly write code for them.

 

I define heterogeneous computing as the application of processors with different instruction set architectures (ISAs) under direct application programmer control. Even heterogeneous processing has been around for years in that application programs have long had access to dedicated floating point coprocessors with instructions not found on the main CPU. FPUs were first shipped as coprocessors but have since been integrated on-chip with the general CPU.  FPU complexity has usually been hidden behind compilers that generate FPU instructions when needed or by math libraries that can be called directly by the application program.

 

It’s difficult enough to program symmetric multi-processors (SMPs) where the application program runs over many identical processors in parallel.  Heterogeneous processing typically also employs more than one processor but these different processors don’t all share the same ISA. Why would anyone want to accept this complexity? Speed and efficiency. General purpose processors are, well, general.  And as a rule, general purpose processors are easy to program but considerably less efficient than specialized processors at some operations.  Graphics can be several orders  of magnitude more efficient in silicon than in software and, as a consequence, almost all graphics is done on graphics processors.  Network processing is another example of a very repetitive task where in-silicon implementations are at least an order of magnitude faster. As a consequence, it’s not unusual to see network switches where the control plane is implemented on a general purpose processor but the data plane is all done on an Application Specific Integrated Circuit (ASIC).

 

Looking at still more general systems that employ heterogeneous processing, newer supercomputers like RoadRunner, which took top spot in the supercomputer Top500 list last June, are good examples.  RoadRunner is a massive cluster of 6,562 X86 dual core processors and 12,241 IBM Cell processors. The Cell processor was originally designed by Sony, Toshiba, and IBM and was first commercially used in the Sony Playstation 3. The Cell processors themselves are heterogeneous components made up of 9 processors: 1 control processor called a Power Processing Element (PPE) and 8 Synergistic Processing Elements (SPE). The bulk of the application performance comes from the SPEs but they can’t run without the PPE, which hosts the operating system and manages the SPEs.  Although RoadRunner consumes a prodigious 2.35MW – more than a small power plant – it is actually much more efficient than comparable performing systems not using heterogeneous processing.

 

Hardware specialization can be cheaper, faster, and far more power efficient.  Traits that are hard to ignore.  Heterogeneous systems are beginning to look pretty interesting for some very important commercial workloads.  Over the last 9 months I’ve been interested in two classes of heterogeneous systems and their application to commercial workloads:

·         GPGPU: General Purpose computation on Graphics Unit Processing (GPU)

·         FPGA: Field Programmable Gate Array (FPGA) coprocessors

 

I’ve seen both techniques used experimentally in petroleum exploration (seismic analysis) and in hedge fund analysis clusters (financial calculations). GPGPUs are being used commercially in rendering farms. Research work is active across the board.  Programming tools are emerging to make these systems easier to program.

 

Heterogeneous computing  is being used commercially and usage is spreading rapidly.  In the next two articles I’ll post guest blog entries from Mike Marr describing the hardware architecture for two GPUs, the Nvidia GT200 and the AMD RV770. In a subsequent article I’ll look more closely at a couple of FPGA options available for mainstream heterogeneous programming.

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Saturday, March 14, 2009 4:52:39 PM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Hardware
 Friday, March 13, 2009

Google Maps is a wonderfully useful tool for finding locations around town or around the globe.  Microsoft Live Labs Seadragon is a developer tool-kit for navigating wall-sized or larger displays using pan and zoom. Here’s the same basic tiled picture display technique (different implementation) applied to navigating the Linux kernel: Linux Kernel Map.

 

The kernel map has a component-by-component breakdown of the entire Linux kernel from hardware interfaces up to user space system calls and most of what is in between. And it’s all navigable using zoom and pan. I’m not sure what I would actually use the kernel map for, but it’s kind of cool.  If you could graphically zoom from the map to the source it might actually be a useful day-to-day tool rather than a one-time thing.

 

Originally posted via Slashdot (Navigating the Linux Kernel like Google Maps) and sent my way by John Smiley of Amazon.

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Friday, March 13, 2009 5:36:54 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Ramblings
 Thursday, March 12, 2009

February 28th, Cloud Camp Seattle was held at an Amazon facility in Seattle. Cloud Camp is described by its organizers as an unconference where early adopters of cloud computing technologies exchange ideas. With the rapid change occurring in the industry, we need a place we can meet to share our experiences, challenges and solutions. At CloudCamp, you are encouraged to share your thoughts in several open discussions, as we strive for the advancement of Cloud Computing. End users, IT professionals and vendors are all encouraged to participate.

 

The Cloud Camp schedule is at: http://www.cloudcamp.com/.

 

Jeanine Johnson attended the event and took excellent notes. Jeanine’s notes follow.

 

It began with a series of “lightning presentations” – 5 minute presentations on cloud topics that are now online (http://www.seattle20.com/blog/Can-t-Make-it-to-Cloud-Camp-Watch-LIVE.aspx). Afterwards, there was a Q&A session with participants who volunteered to share their expertise. Then, 12 topics were chosen by popular vote to be discussed in an “open space” format, in which the volunteer who suggested the topic facilitated its 1 hour discussion.

 

Highlights from the lightning presentations:

·         AWS has launched several large data sets (10-220GB) in the cloud and made them publicly available (http://aws.amazon.com/publicdatasets/). Example data sets are the human genome and US census data; large data sets that would take hours, days, or even weeks to download locally even with a fast Internet connection.

·         A pyramid was drawn, with SaaS (e.g. Hotmail, SalesForce) on top, followed by PaaS (e.g. GoogleApp Engine, SalesForce API), IaaS (e.g. Amazon, Azure; which leverages virtualization), and “Traditional hosting” as the pyramid’s foundation, which was a nice and simple rendition of the cloud stack (http://en.wikipedia.org/wiki/Cloud_computing). In addition, SaaS applications were shown to have more functionality, and traveling down that pyramid stack resulted in less functionality, but more flexibility.

 

Other than that info, the lightning presentations were too brief, with no opportunity for Q&A, to learn much. After the lightning presentations, open space discussions were held. I attended three: 1) scaling web apps, 2) scaling MySql, and 3) launching MMOGs (massively multiplayer online games) in the cloud – notes for each session follow.

 

1.       SCALING WEB APPS

One company volunteered themselves as a case study for the group of 20ish people. They run 30 physical servers, with 8 front-end Apache web servers on top of 1 scaled-up MySql database, and they use PHP channels to access their Drupal http://drupal.org content. Their MySql machine has 16 processors and 32GB RAM, but is maxed-out and they’re having trouble scaling it because they currently hover around 30k concurrent connections, and up to 8x that during peak usage. They’re also bottlenecked by their NFS server, and used basic Round Robin for load balancing.

 

Using CloudFront was suggested, instead of Drupal (where they currently store lots of images). Unfortunately, CloudFront takes up to 24 hours to notice content changes, which wouldn’t work for them. So the discussion began around how to scale Drupal, but quickly morphed into key-value-pair storage systems (e.g. SimpleDB http://aws.amazon.com/simpledb/) versus relational databases (e.g. MySql) to store backend data.

 

After some discussion around where business logic should reside, in StoredProcs and Triggers or in the code via an MVC http://en.wikipedia.org/wiki/Model-view-controller paradigm, the group agreed that “you have to know your data: Do you need real-time consistency? Or eventual consistency?”

 

Hadoop http://hadoop.apache.org/core/ was briefly discussed, but once someone said that popular web-development frameworks Rails http://rubyonrails.org/ and  Django http://www.djangoproject.com/ steer folks towards relational databases, the discussion turned to scaling MySql. Best practice tips given to scale MySql were:

·         When scaling up, memory becomes a bottleneck, so use memcached (http://www.danga.com/memcached/) to extend your system’s lifespan.

·         Use MySql cluster http://www.mysql.com/products/database/cluster/.

·         Use MySql proxy (http://forge.mysql.com/wiki/MySQL_Proxy) and shard your database, such that users are associated with a specific cluster (devs turn to sharding because horizontal scaling for WRITES isn’t as effective as it is for READS, i.e., replication processing becomes untenable); see the sketch just below.
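
Here’s a minimal sketch of that user-to-shard routing idea (hypothetical shard names; real deployments usually add a directory service or consistent hashing so shards can be added without re-hashing everything):

```python
# Toy user-to-shard routing: each user's rows always land on the same MySQL
# cluster, so per-user reads and writes touch a single shard. Shard names are
# hypothetical.
import hashlib

SHARDS = ["mysql-shard-0", "mysql-shard-1", "mysql-shard-2", "mysql-shard-3"]

def shard_for_user(user_id: str) -> str:
    # Hash rather than taking a modulo of a numeric id so sequential ids spread evenly.
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

for uid in ("alice", "bob", "carol"):
    print(uid, "->", shard_for_user(uid))
```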

 

Other open source technologies mentioned included:

·         Gallery2 (http://www.gallery2.org/), an open source photo album.

·         Jingle http://www.slideshare.net/stpeter/jingle, Jabber-based VoIP technology.

 

2.       SCALING MYSQL

Someone volunteered from the group of 10ish people to white-board the “ways to scale MySql,” which were:

·         Master / Slave, which can use Dolphin/Sakila (http://forge.mysql.com/wiki/SakilaSampleDB), but becomes inefficient around 8+ machines.

·         MySql proxy, and then replicate each machine behind the proxy.

·         Master : Master topology using sync replication.

·         Master ring topology using MySql proxy. It works well, and the replication overhead can be helped by adding more machines, but several thought it would be hard to implement this setup in the cloud.

·         Mesh topology (if you have the right hardware). This is how a lot of high-performance systems work, but recovery and management are hard.

·         Scale-up and run as few slaves as possible – some felt that this “simple” solution is what generally works best.

 

Someone then drew an “HA Drupal stack in the cloud,” which consisted of 3 front-end load balancers with hot-swap failover to either the 2nd or 3rd machines, followed by 2 web servers and 2 master/slave databases in the backend. If using Drupal, 2 additional NFS servers should be set up for static content storage with hot swap (i.e., fast MAC failover). However, it was recommended that Drupal be replaced with a CDN when the system begins to need scaling up. This configuration in the Amazon cloud costs around $700 monthly to run (plus network traffic).

 

Memcached (http://memcachefs.sourceforge.net/) was mentioned as a possibility as well.

 

3.       LAUNCHING MMOGs IN THE CLOUD

This topic was suggested by a game developer lead. He explained to the crowd of 10ish people that MMOs require persistent connections to servers, and their concurrent connection counts have a relatively high standard deviation daily, with a trend over the week that peaks around Saturday and Sunday. MMO producers must plan their capacity a couple of months in advance of publishing their game. And since up to 50% of an MMO’s subscriber base is active on the first day, they usually end up with left-over capacity after launch, when active subscribers drop to 20% of their base and continue to dwindle down until the end of the game’s lifecycle. As a result, it’d be ideal to get MMOGs into the cloud, but no one in the room knew how to get around the latency induced by virtualization, which is too much for flashy MMOGs (although the 5%-ish perf hit is fine for asynchronous or low-graphics games). On a side note, iGames (http://www.igames.org/) was mentioned as a good way to market games.

 

Afterwards, those people that were left went to the Elysian on 1st for drinks, and continued their cloud discussions.

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Thursday, March 12, 2009 5:06:17 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Services
 Tuesday, March 10, 2009

Whenever I see a huge performance number without the denominator, I shake my head.  It’s easy to get a big performance number on almost any dimension; what is far more difficult is getting great work done per dollar. Performance alone is not interesting.

 

I’m super interested in flash SSDs and see great potential for SSDs in both client and server-side systems. But our industry is somewhat hype driven. When I first started working with SSDs and their application to server workloads, many thought it was a crazy idea, pointing out that the write rates were poor and they would wear out in days.  The former has been fixed in Gen 2 devices and the latter was never true.  Now SSDs are climbing up the hype meter and I find myself arguing on the other side: they don’t solve all problems. I still see the same advantages I saw before, but I keep seeing SSDs proposed for applications where they simply are not the best price/performing solution.

 

Rather than write the article about where SSDs are a poor choice, I wrote two articles on where they were a good one:

·         When SSDs Make Sense in Server Applications

·         When SSDs Make Sense in Client Applications

 

SSDs are really poor choices for large sequential workloads. If you want aggregate sequential bandwidth, disks deliver it far cheaper.

 

In this article and referenced paper (Microslice Servers), I argue in more detail why performance is a poor measure for servers on any dimension. It’s work done per dollar and work done per watt we should be measuring.

 

I recently came across a fun little video, Samsung SSD Awesomeness. It’s actually a Samsung SSD advertisement. Overall, the video is fun. It’s creative and sufficiently effective that I watched the entire thing and you might as well. Clearly it’s a win for Samsung.  However, the core technical premise is broken. What they are showing is that you can get 2 GB/s by RAIDing 24 SSDs together.  This is unquestionably true. However, we can get 2 GB/s by RAIDing together 17 Seagate Barracuda 7200.11s (big, cheap, slow hard drives) at considerably lower cost. The 24 SSDs will produce awe-inspiring random I/O performance and not particularly interesting sequential performance.  24 SSDs is not the cheapest way to get 2 GB/s of sequential I/O.
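
A quick back-of-envelope on that point, using the device counts above and rough early-2009 street prices (the prices are assumptions for illustration only):

```python
# Back-of-envelope cost of ~2 GB/s of sequential bandwidth using the two
# configurations mentioned above. Unit prices are rough early-2009 street
# prices and are assumptions for illustration only.
configs = {
    "24 x consumer SSD (RAID)":      (24, 350),   # ~$350 per SSD assumed
    "17 x Barracuda 7200.11 (RAID)": (17, 130),   # ~$130 per drive assumed
}

for name, (count, unit_price) in configs.items():
    print(f"{name}: ~${count * unit_price:,} for ~2 GB/s sequential")
```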

 

Both Samsung and Intel have excellent price performing SSDs and both can produce great random IOPS/$.  There are faster SSDs out there (e.g. FusionIO) but the Samsung and Intel components are better price/performers and that’s the metric that really matters. However, none of them are good price/performers on pure sequential workloads and yet that’s how I see them described and that’s the basis for many purchasing decisions. 

 

See Annual Fully Burdened Cost of Power for a quick analysis of when an SSD can be a win based upon power savings and IOPS/$.

 

Conclusion: If the workload is large and sequential, use a hard disk drive. If it’s hot and random, consider an SSD-based solution.

 

                                                -jrh

 

Thanks to Sean James of Microsoft for sending the video my way.

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Tuesday, March 10, 2009 5:36:30 AM (Pacific Standard Time, UTC-08:00)  #    Comments [3] - Trackback
Hardware
 Saturday, March 07, 2009

In the current ACM SIGCOMM Computer Communications Review, there is an article on data center networking, Cost of a Cloud: Research Problems in Data Center Networks by Albert Greenberg, David Maltz, Parveen Patel, and myself.

 

Abstract: The data centers used to create cloud services represent a significant investment in capital outlay and ongoing costs. Accordingly, we first examine the costs of cloud service data centers today. The cost breakdown reveals the importance of optimizing work completed per dollar invested. Unfortunately, the resources inside the data centers often operate at low utilization due to resource stranding and fragmentation. To attack this first problem, we propose (1) increasing network agility, and (2) providing appropriate incentives to shape resource consumption. Second, we note that cloud service providers are building out geo-distributed networks of data centers. Geo-diversity lowers latency to users and increases reliability in the presence of an outage taking out an entire site. However, without appropriate design and management, these geo-diverse data center networks can raise the cost of providing service. Moreover, leveraging geo-diversity requires services be designed to benefit from it. To attack this problem, we propose (1) joint optimization of network and data center resources, and (2) new systems and mechanisms for geo-distributing state.

 

Direct link to the paper: http://ccr.sigcomm.org/online/files/p68-v39n1o-greenberg.pdf  (6 pages)

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Saturday, March 07, 2009 2:14:59 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Services
 Wednesday, March 04, 2009

Yesterday Amazon Web Services announced availability of Windows and SQL Server under Elastic Compute Cloud (EC2) in the European region.  Running in the EU is important for workloads that need to be near customers in that region or workloads that operate on data that needs to stay in region.  The AWS Management Console has been extended to support EC2 in the EU region.  The management console supports administration of Linux, Unix, and Windows systems under Elastic Compute Cloud as well as management of Elastic Block Store and Elastic IP. More details up at: http://aws.amazon.com/about-aws/whats-new/2009/03/03/amazon-ec2-running-windows-in-eu-region/.

 

Also yesterday, Microsoft confirmed Windows Azure Cloud Software Set for Release This Year. The InformationWeek article reports that Azure will be released by the end of the year and that SQL Server Data Services will include some relational database capabilities. Details are expected at MIX in Vegas this March.

 

The utility computing world continues to evolve incredibly quickly.

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Wednesday, March 04, 2009 6:01:49 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Services
 Tuesday, March 03, 2009

Earlier this evening I attended the Washington Technology Industry Association event  Scaling into the Cloud with Amazon Web Services. Adam Selipsky, VP of Amazon Web Services gave an overview of AWS and was followed by two AWS customers each of which talked about their services and how they use AWS. My rough notes follow.

 

Adam Selipsky, VP Amazon Web Services

·         490k registered developers

·         Amazon is primarily a technology company.

o   Started experimenting with web services in 2002

o   Each page in the Amazon retail web site makes calls to 200 to 300 services prior to rendering

·         AWS design principles:

o   Reliability

o   Scalability

o   Low-latency

o   Easy to use

o   Inexpensive

·         Enterprises have to provision to peak – 10 to 15% utilization is a pretty common number

·         Amazon web services:

o   Simple Storage Service, Elastic Compute Cloud, SimpleDB, CloudFront, SQS, Flexible Payment Service, & Mechanical Turk

·         SimpleDB: 80/20 rule – most customers don’t need much of the functionality of relational systems most of the time

·         What were the biggest surprises over the last three years:

o   Growth:

§  AWS Developers: 160k in 2006 to 490k in 2008

§  S3 Objects Stored: 200m in 2006 to 40B in 2008

§  S3 Peak request rate: 70k/s

o   Diverse use cases: web site/app hosting, media distribution, storage, backup, disaster recovery, content delivery, HPC, & S/W Dev & Test

o   Diverse customers: Enterprise to well funded startups to individuals

o   Partners: IBM, Oracle, SalesForce, Capgemini, MySQL, Sun, & RedHat

·         Customer technology investment:

o   30% focused on business

o   70% focused on infrastructure

·         AWS offloads this investment in infrastructure and allows time and capital to be invested in your business rather than infrastructure.

o   Lowers costs

o   Faster to market

o   More efficient use of capital

·         Trends being seen by AWS:

o   Multiple services

o   Enterprise adoption

o   Massive datasets and large-scale parallel processing

o   Increased need for support and transparency so customers know what’s happening in the infrastructure:

§  Service health dashboard

§  Premium developer support

o   Running more sophisticated software in AWS

·         Animoto case study

o   Steady state of about 50 EC2 instances

o   Within 3 days they spiked to 5000 EC2 instances

 

Smartsheet: Todd Fasullo

·         Not just an online spreadsheet.  Leverage the spreadsheet paradigm but focused on collaboration

·         Hybrid model AWS and private infrastructure

·         Use CloudFront CDN to get javascript and static content close to users

·         Benefits & savings:

o   S3: 5% of the cost of our initial projects from existing hosting provider

o   CloudFront: <1% cost of traditional CDN

o   No sales negotiations

Picnik and AWS: Mike Harrington

·         Photo-editing awesomeness

·         Built-in editor on Flickr

·         Facebook application

·         About Picnik:

o   Founded in 2005

o   Based in Seattle

o   16 employees

o   No VC

·         Flash based application

·         9m unique visitors per month

·         Hybrid model where base load is internally provided and everything above base load is EC2 hosted.

·         Heavy use of S3

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Tuesday, March 03, 2009 8:08:35 PM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Services
 Sunday, March 01, 2009

I collect postings on high-scale service architectures, scaling war stories, and interesting implementation techniques. For past postings see Scaling Web Sites and Scaling LinkedIn.

 

Last week Bret Taylor posted an interesting description of the FriendFeed backend storage architecture: How FriendFeed uses MySQL to store Schema-less Data.  Friendfeed faces a subset of what I’ve called the hairball problem. The short description of this issue is social networking sites need to be able to access per-user information both by user and also by searching for the data to find the users. For example, group membership. Sometimes we will want to find groups that X is a member of and other times we’ll want to find a given group and all users who are members of that group.  If we partition by user, one access pattern is supported well. If we partition by group, then the other works well.  The hairball problem shows up in many domains – I just focus on social networks as the problem is so common there --  see Scaling LinkedIn.

 

Common design patterns to work around the hairball are: 1) application maintained, asynchronous materialized views, 2) distributed in-memory caching of alternate search paths, and 3) central in-memory caching.  LinkedIn is a prototypical example of central in-memory caching. Facebook is the prototypical example of distributed in-memory caching using memcached.  And, FriendFeed is a good example of the first pattern, application maintained, async materialized views.

 

In Bret’s How FriendFeed uses MySQL to store Schema-less Data he describes how FriendFeed manages the hairball problem. Data is stored in a primary table sharded over the farm.  The primary table can be efficiently accessed on whatever its key is. To access the same data by a different dimension, they would have to search every shard individually. To avoid this, they create a secondary table with the appropriate search key where the “data” is just the primary key of the primary table.  To find entities with some secondary property, they first search the secondary table to get the qualifying entity IDs and then fetch the entities from the primary table.

 

Primary and secondary tables are not updated atomically – that would require two phase commit, the protocol Pat Helland jokingly refers to as the anti-availability protocol.  Since the primary and secondary tables are not updated atomically, a secondary index may point to a primary that actually doesn’t qualify, and some primaries that do qualify may not be found if the secondary hasn’t yet been updated. The latter is simply a reality of this technique and the application has to be tolerant of this short-time-period data integrity anomaly. The former problem can be solved by reapplying the search predicate as a residual (a common RDBMS implementation technique).
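
Here’s a minimal sketch of the pattern with in-memory Python dicts standing in for the sharded MySQL tables (FriendFeed’s actual implementation differs): entities live in a primary store keyed by id, a secondary index table maps a searchable attribute back to ids, and the predicate is re-applied against the fetched entity to drop stale index entries:

```python
# Toy version of the application-maintained secondary index described above.
# Dicts stand in for the sharded MySQL tables; values are schema-less JSON blobs.
import json

primary = {}          # entity_id -> JSON blob (the sharded primary table)
by_user = {}          # user_id -> set of entity_ids (the secondary index table)

def put(entity):
    primary[entity["id"]] = json.dumps(entity)
    # In the real system this index update happens asynchronously, so the two
    # tables can briefly disagree.
    by_user.setdefault(entity["user"], set()).add(entity["id"])

def find_by_user(user_id):
    results = []
    for eid in by_user.get(user_id, ()):
        entity = json.loads(primary[eid])
        if entity["user"] == user_id:      # re-apply the predicate as a residual
            results.append(entity)         # to drop stale index entries
    return results

put({"id": "e1", "user": "u42", "body": "hello"})
put({"id": "e2", "user": "u42", "body": "world"})
print(find_by_user("u42"))
```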

 

The FriendFeed system described in Bret Taylor’s post also addresses the schema change problem. Schema changes can be disruptive and some RDBMSs implement schema change incredibly inefficiently. This, by the way, is completely unnecessary – the solution is well known – but bad implementations persist. The FriendFeed technique to deal with the schema change issue is arguably a bit heavy handed: they simply don’t show the schema to MySQL and, instead, use it as a key-value store where the values are either JSON objects or Python dictionaries.

 

                                                --jrh

 

Thanks to Dave Quick for pointing me to the FriendFeed posting.

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Sunday, March 01, 2009 9:01:10 AM (Pacific Standard Time, UTC-08:00)  #    Comments [4] - Trackback
Services
 Friday, February 27, 2009

Yesterday I presented Service Design Best Practices at an internal Amazon talk series called Principals of Amazon. This talk series is very similar to the weekly Microsoft Enterprise Computing Series that I hosted for 8 years at Microsoft (also an internal series).  Ironically, both series were started by Pat Helland, who is now back at Microsoft.

 

None of the talk content is Amazon internal so I posted the slides at: http://mvdirona.com/jrh/TalksAndPapers/JamesHamilton_POA20090226.pdf. 

 

It’s an update of an earlier talk first presented at LISA 2007:

·         Talk: http://mvdirona.com/jrh/talksAndPapers/JamesRH_CIDR.ppt

·         Paper: http://mvdirona.com/jrh/talksAndPapers/JamesRH_Lisa.pdf

 

--jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Friday, February 27, 2009 6:25:33 PM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Services
 Thursday, February 26, 2009

Google has announced that the App Engine free quota resources will be reduced and pricing has been announced for greater-than-free tier usage. The reduction in free tier will be effective 90 days after the February 24th announcement and reduces CPU and bandwidth allocations by the following amounts:

 

·         CPU time free tier reduced to 6.4 hours/day from 46 hours/day

·         Bandwidth free tier reduced to 1 GB/day from 10 GB/day

 

Also announced February 24th is the charge structure for usage beyond the free tier (a rough cost sketch follows the list):

  • $0.10 per CPU core hour. This covers the actual CPU time an application uses to process a given request, as well as the CPU used for any Datastore usage.
  • $0.10 per GB bandwidth incoming, $0.12 per GB bandwidth outgoing. This covers traffic directly to/from users, traffic between the app and any external servers accessed using the URLFetch API, and data sent via the Email API.
  • $0.15 per GB of data stored by the application per month.
  • $0.0001 per email recipient for emails sent by the application
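
A rough sketch of what a monthly bill would look like under these rates (usage numbers are made up; the free tier is approximated as the post-reduction daily allowances times 30, and how the free bandwidth splits across directions is a simplification):

```python
# Rough monthly bill under the rates announced above. Usage numbers are made
# up, the free tier is approximated as the reduced daily allowance * 30 days,
# and the split of the free bandwidth between in and out is a simplification.
def monthly_cost(cpu_hours, gb_in, gb_out, gb_stored, emails, days=30):
    free_cpu = 6.4 * days            # reduced free CPU tier, hours
    free_bw = 1.0 * days             # reduced free bandwidth tier, GB
    cost = 0.10 * max(0, cpu_hours - free_cpu)
    cost += 0.10 * max(0, gb_in - free_bw)
    cost += 0.12 * max(0, gb_out - free_bw)
    cost += 0.15 * gb_stored         # GB stored for the month
    cost += 0.0001 * emails          # per email recipient
    return cost

print(f"${monthly_cost(cpu_hours=400, gb_in=50, gb_out=120, gb_stored=20, emails=100000):.2f}")
```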

--jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

Thursday, February 26, 2009 6:41:29 AM (Pacific Standard Time, UTC-08:00)  #    Comments [3] - Trackback
Services
 Wednesday, February 25, 2009

This morning Alyssa Henry did the keynote at the USENIX File and Storage Technologies (FAST) conference. Alyssa is General Manager of Amazon Simple Storage Service. Alyssa kicked off the talk by announcing that S3 now has 40B objects under management, which is nearly 3x what was stored in S3 at this time last year. The remainder of the talk focuses first on design goals and then gets into techniques used.

 

Design goals:

·         Durability

·         Availability

·         Scalability

·         Security

·         Performance

·         Simplicity

·         Cost effectiveness

 

Techniques used:

·         Redundancy

·         Retry

·         Surge protection

·         Eventual consistency

·         Routine testing of failure modes

·         Diversity of s/w, h/w, & workloads

·         Data scrubbing

·         Monitoring

·         Auto-management

 

The talk: AlyssaHenry_FAST_Keynote.pdf (729.04 KB)

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

Wednesday, February 25, 2009 11:34:19 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Services
 Monday, February 23, 2009

Building Scalable Web Apps with Google App Engine was presented by Brett Slatkin of Google at Google I/O 2008. The link above points to the video, but Todd Hoff of High Scalability summarized the presentation in a great post, Numbers Everyone Should Know.

 

The talk mostly focused on the Google App Engine and how to use it. For example, Brett shows how to implement a scalable counter and (nearly) ordered comments using the App Engine Megastore. For the former, shard the counter to get write scale and sum the shards on read.
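
The sharded counter idea is simple enough to sketch outside of App Engine (plain Python below; the real implementation uses one datastore entity per shard):

```python
# Sharded counter sketch: writes pick a random shard so concurrent increments
# don't all contend on one row; reads sum every shard. A plain Python list
# stands in for the datastore entities used in the talk.
import random

NUM_SHARDS = 20
shards = [0] * NUM_SHARDS              # one counter entity per shard in the real thing

def increment():
    shards[random.randrange(NUM_SHARDS)] += 1   # spread write load across shards

def value():
    return sum(shards)                 # reads are rarer and can afford the fan-out

for _ in range(1000):
    increment()
print(value())                         # 1000
```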

 

Also included in the presentation were some general rules of thumb from Jeff Dean of Google. Rules of thumb are good because they tell us what to expect and, when we see something different, they tell us to pay attention and look more closely.  When we see an exception, either our rule of thumb has just been proven wrong and we learned something, or the data we’re looking at is wrong and we need to dig deeper. Either one is worth noticing. I use rules of thumb all the time, not as a way of understanding the world (they are sometimes wrong) but as a way of knowing where to look more closely.

 

Check out Todd’s post: http://highscalability.com/numbers-everyone-should-know.

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Monday, February 23, 2009 6:13:41 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Services
 Sunday, February 22, 2009

Richard Jones of Last.fm has compiled an excellent list of key-value stores in Anti-RDBMS: A list of key-value stores.

 

In this post, Richard looks at Project Voldemort, Ringo, Scalaris, Kai, Dynomite, MemcacheDB, ThruDB, CouchDB, Cassandra, HBase and Hypertable. His conclusion for Last.fm use is that Project Voldemort has the most promise with Scalaris being a close second and Dynomite is also interesting.

 

                                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Sunday, February 22, 2009 7:43:13 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Software
 Saturday, February 21, 2009

Back in the early 90’s I attended High Performance Transaction Systems (HPTS) for the first time. I loved it. It’s on the ocean just south of Monterey and some of the best in both industry and academia show up to attend the small, single-track conference. It’s invitational and kept small so it can be interactive. There are lots of discussions during the sessions, everyone eats together, and debates & discussions rage into the night. It’s great.

 

The conference was originally created by Jim Gray and friends with a goal to break the 1,000 transaction/second barrier. At the time, a lofty goal.  Over the years it’s morphed into a general transaction processing and database conference and then again into a high-scale services get together. The sessions I most like today are from leaders at eBay, Amazon, Microsoft, Google, etc. talking about very high scale services and how they work.

 

The next HPTS is October 26 through 28, 2009 and I’ll be there again this year: http://www.eecs.harvard.edu/~margo/HPTS/cfp.html. Consider attending, it’s a great conference.

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

Saturday, February 21, 2009 8:01:00 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Services
 Thursday, February 19, 2009

Earlier today I presented Where Does the Power Go and What to do About it at the Western Washington Chapter of AFCOM. I basically presented the work I wrote up in the CIDR paper: The Case for Low-Cost, Low-Power Servers.

 

The slides are at: JamesHamilton_AFCOM2009.pdf (1.22 MB).

 

The general thesis of the talk is that improving data center efficiency by a factor of 4 to 5 is well within reach without substantial innovation or design risk.

 

                                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Thursday, February 19, 2009 4:56:49 PM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Services

Disclaimer: The opinions expressed here are my own and do not necessarily represent those of current or past employers.
