Saturday, October 09, 2010

Hosting multiple MySQL engines with MySQL Replication between them is a common design pattern for scaling read-heavy MySQL workloads. As with all scaling techniques, there are workloads for which it works very well, but there are also potential issues that need to be understood. In this case, all write traffic is directed to the primary server and, consequently, is not scaled, which is why this technique works best for workloads heavily skewed towards reads. But, for those fairly common read-heavy workloads, the technique works very well and allows scaling the read workload over a fleet of MySQL instances. Of course, as with any asynchronous replication scheme, the read replicas are not transactionally updated, so any application running on MySQL read replicas must be tolerant of eventually consistent updates.
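The read/write split is typically handled in the application's data access layer. A minimal sketch of the idea, with hypothetical hostnames and deliberately simplistic routing logic (not taken from any particular driver):

```python
import random

# Illustrative endpoints; these hostnames are hypothetical.
PRIMARY  = "primary.db.example.com"
REPLICAS = ["replica-1.db.example.com", "replica-2.db.example.com"]

def route(sql: str) -> str:
    """Send writes to the primary; spread reads across the replica fleet.

    Replicas are updated asynchronously, so a read routed here may not
    yet reflect the latest committed write (eventual consistency).
    """
    is_read = sql.lstrip().upper().startswith("SELECT")
    return random.choice(REPLICAS) if is_read else PRIMARY
```

A real implementation also has to handle statements other than plain SELECTs (transactions, SELECT ... FOR UPDATE) and pin read-your-writes sessions to the primary, but the shape is the same.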


Load balancing high read traffic over multiple MySQL instances works very well but this is only one of the possible tools used to scale this type of workload. Another very common technique is to put a scalable caching layer in front of the relational database fleet. By far the most common caching layer used by high-scale services is Memcached.
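The caching layer is usually applied with the cache-aside pattern: check the cache first and fall through to the database only on a miss. A toy sketch of the pattern, with a plain dict standing in for the Memcached client and a hypothetical query_database placeholder (a production client would also pass an expiry time when storing):

```python
# Cache-aside (look-aside) caching as commonly deployed with Memcached.
cache = {}

def query_database(key):
    return f"row-for-{key}"        # stand-in for the real SQL lookup

def get(key):
    value = cache.get(key)
    if value is None:              # miss: fall through to the database
        value = query_database(key)
        cache[key] = value         # populate so subsequent reads are cheap
    return value
```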


Another database scaling technique is to simply not use a relational database. For workloads that don’t need schema enforcement and complex queries, NoSQL databases offer both a cheaper and a simpler approach to hosting the workload. SimpleDB is the AWS-hosted NoSQL database, with Netflix being one of the best-known users (see the slide deck from Netflix’s Adrian Cockcroft). Cassandra is another common RDBMS alternative in heavy use by many high-scale sites, including Facebook where it was originally conceived. Cassandra is also frequently run on AWS, with the Cassandra Wiki offering scripts to make it easy to install and configure on Amazon EC2.


For those workloads where a relational database is the chosen solution, MySQL read replication is a good technique to have in your scaling tool kit. Last week Amazon announced read replica support for the AWS Relational Database Service. The press release is at: Announcing Read Replicas, Lower High Memory DB Instance Price for Amazon AWS.


You can now create one or more replicas of a given “source” DB Instance and serve incoming read traffic from multiple copies of your data. This new database deployment option enables you to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. You can use Read Replicas in conjunction with Multi-AZ replication for scalable, reliable, and highly available production database deployments.


If you are running MySQL and wish you had someone else to manage it for you, check out Amazon RDS. The combination of read replicas to scale read workloads and Multi-AZ support for multi-data center high availability makes it a pretty interesting way to run MySQL.




James Hamilton





Saturday, October 09, 2010 10:05:02 AM (Pacific Standard Time, UTC-08:00)
Saturday, September 18, 2010

A couple of years ago, I did a detailed look at where the costs are in a modern, high-scale data center. The primary motivation behind bringing all the costs together was to understand where the problems are and find those easiest to address. Predictably, when I first brought these numbers together, a few data points just leapt off the page: 1) at scale, servers dominate overall costs, and 2) mechanical system cost and power consumption seemed unreasonably high. Both of these areas have proven to be important technology areas to focus upon and there has been considerable industry-wide innovation, particularly in cooling efficiency, over the last couple of years.


I posted the original model at the Cost of Power in Large-Scale Data Centers. One of the reasons I posted it was to debunk the often-repeated phrase “power is the dominant cost in a large-scale data center”. Servers dominate, with mechanical systems and power distribution close behind. It turns out that power is incredibly important, but it’s not the utility kWh charge that makes power important. It’s the cost of the power distribution equipment required to consume power and the cost of the mechanical systems that take the heat away once the power is consumed. I referred to this as fully burdened power.


Measured this way, power is the second most important cost. Power efficiency is highly leveraged when looking at overall data center costs; it plays an important role in environmental stewardship, and it is one of the areas where substantial gains continue to look quite attainable. As a consequence, this is where I spend a considerable amount of my time – perhaps the majority – but we have to remember that servers still dominate the overall capital cost.


This last point is a frequent source of confusion. When server and other IT equipment capital costs are directly compared with data center capital costs, the data center portion actually is larger. I’ve frequently heard “how can the facility cost more than the servers in the facility – it just doesn’t make sense.” I don’t know whether or not it makes sense, but it actually is true at this point. I could imagine infrastructure costs one day eclipsing those of servers on an amortized basis as server costs continue to decrease, but we’re not there yet. The key point to keep in mind is that the amortization periods are completely different. Data center amortization periods run from 10 to 15 years while server amortizations are typically in the three-year range. Servers are purchased 3 to 5 times during the life of a datacenter so, when amortized properly, they continue to dominate the cost equation.


In the model below, I normalize all costs to a monthly bill by taking consumables like power and billing them monthly by consumption, and by taking capital expenses like servers, networking, and datacenter infrastructure and amortizing them over their useful lifetimes using a 5% cost of money, again billed monthly. This approach allows us to compare otherwise non-comparable costs such as data center infrastructure, servers, and networking gear, each with different lifetimes. The model includes all costs “below the operating system” but doesn’t include software licensing costs, mostly because open source is dominant in high-scale centers and partly because licensing costs can vary so widely. Administrative costs are excluded for the same reason: at scale, hardware administration, security, and other infrastructure-related people costs disappear into the single digits, with the very best services down in the 3% range, but because they vary so greatly, I don’t include them here. On projects with which I’ve been involved, they are insignificantly small so don’t influence my thinking much. I’ve attached the spreadsheet in source form below so you can add in factors such as these if they are more relevant in your environment.
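The amortization itself is just the standard level-payment annuity calculation. A small sketch of the approach described above (the $100M figures are purely illustrative, not the model’s inputs):

```python
def monthly_cost(capital: float, years: float, annual_rate: float = 0.05) -> float:
    """Amortize a capital expense into a level monthly bill (annuity formula)."""
    r = annual_rate / 12.0            # monthly cost of money
    n = years * 12                    # number of monthly payments
    return capital * r / (1.0 - (1.0 + r) ** -n)

# Illustrative only: $100M of servers on a 3-year amortization costs far
# more per month than a $100M facility amortized over 15 years.
servers_monthly  = monthly_cost(100e6, 3)    # ~$3.0M/month
facility_monthly = monthly_cost(100e6, 15)   # ~$0.79M/month
```

This is why equal capital outlays produce very unequal monthly bills, and why servers dominate the monthly cost picture even when facility capital is larger.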


Late last year I updated the model for two reasons: 1) there has been considerable infrastructure innovation over the last couple of years and costs have changed dramatically during that period, and 2) because of the importance of networking gear to the cost model, I factored networking out of overall IT costs. We now have IT costs with servers and storage modeled separately from networking. This helps us understand the impact of networking on overall capital cost and on IT power.


When I redo these data, I keep the facility server count in the 45,000 to 50,000 server range. This makes it a reasonable-scale facility – big enough to enjoy the benefits of scale but nowhere close to the biggest data centers. Two years ago, 50,000 servers required a 15MW facility (25MW total load). Today, due to increased infrastructure efficiency and reduced individual server power draw, we can support 46k servers in an 8MW facility (12MW total load). The current rate of innovation in our industry is substantially higher than it has been at any time in the past, with much of this innovation driven by mega service operators.


Keep in mind, I’m only modeling those techniques well understood and reasonably broadly accepted as good-quality data center design practices. Most of the big operators will be operating at efficiency levels far beyond those used here. For example, in this model we’re using a Power Usage Effectiveness (PUE) of 1.45, but Google reports PUE across its fleet of under 1.2: Data Center Efficiency Measurements. Again, the spreadsheet source is attached below, so feel free to change the PUE used by the model as appropriate.


These are the assumptions used by this year’s model:


Using these assumptions we get the following cost structure:




For those of you interested in playing with different assumptions, the spreadsheet source is here:


If you choose to use this spreadsheet directly or the data above, please reference the source and include the URL to this posting.




James Hamilton





Saturday, September 18, 2010 12:56:19 PM (Pacific Standard Time, UTC-08:00)
Saturday, September 14, 2010

For those of you writing about your work on high-scale cloud computing (and for those interested in a great excuse to visit Anchorage, Alaska), consider submitting a paper to the Workshop on Data Intensive Computing in the Clouds (DataCloud2011). The call for papers is below.





                                           *** Call for Papers ***
                  In conjunction with IPDPS 2011, May 16, Anchorage, Alaska

The First International Workshop on Data Intensive Computing in the Clouds (DataCloud2011) will be held in conjunction with the 25th IEEE International Parallel and Distributed Computing Symposium (IPDPS 2011), in Anchorage, Alaska.

Applications and experiments in all areas of science are becoming increasingly complex and more demanding in terms of their computational and data requirements. Some applications generate data volumes reaching hundreds of terabytes and even petabytes. As scientific applications become more data intensive, the management of data resources and dataflow between the storage and compute resources is becoming the main bottleneck. Analyzing, visualizing, and disseminating these large data sets has become a major challenge, and data intensive computing is now considered the “fourth paradigm” in scientific discovery after theoretical, experimental, and computational science.

DataCloud2011 will provide the scientific community a dedicated forum for discussing new research, development, and deployment efforts in running data-intensive computing workloads on Cloud Computing infrastructures. The DataCloud2011 workshop will focus on the use of cloud-based technologies to meet the new data intensive scientific challenges that are not well served by the current supercomputers, grids or compute-intensive clouds. We believe the workshop will be an excellent place to help the community define the current state, determine future goals, and present architectures and services for future clouds supporting data intensive computing.

Topics of interest include, but are not limited to:

- Data-intensive cloud computing applications, characteristics, challenges
- Case studies of data intensive computing in the clouds
- Performance evaluation of data clouds, data grids, and data centers
- Energy-efficient data cloud design and management
- Data placement, scheduling, and interoperability in the clouds
- Accountability, QoS, and SLAs
- Data privacy and protection in a public cloud environment
- Distributed file systems for clouds
- Data streaming and parallelization
- New programming models for data-intensive cloud computing
- Scalability issues in clouds
- Social computing and massively social gaming
- 3D Internet and implications
- Future research challenges in data-intensive cloud computing

Abstract submission: December 1, 2010
Paper submission: December 8, 2010
Acceptance notification: January 7, 2011
Final papers due: February 1, 2011

DataCloud2011 invites authors to submit original and unpublished technical papers. All submissions will be peer-reviewed and judged on correctness, originality, technical strength, significance, quality of presentation, and relevance to the workshop topics of interest. Submitted papers may not have appeared in or be under consideration for another workshop, conference, or journal, nor may they be under review or submitted to another forum during the DataCloud2011 review process. Submitted papers may not exceed 10 single-spaced double-column pages using 10-point font on 8.5x11 inch pages (IEEE conference style), including figures, tables, and references. DataCloud2011 also requires submission of a one-page (~250 words) abstract one week before the paper submission deadline.

Tevfik Kosar, Louisiana State University
Ioan Raicu, Illinois Institute of Technology

Ian Foster, Univ of Chicago & Argonne National Lab
Geoffrey Fox, Indiana University
James Hamilton, Amazon Web Services
Manish Parashar, Rutgers University & NSF
Dan Reed, Microsoft Research
Rich Wolski, University of California, Santa Barbara
Liang-Jie Zhang, IBM Research

David Abramson, Monash University, Australia
Roger Barga, Microsoft Research
John Bent, Los Alamos National Laboratory
Umit Catalyurek, Ohio State University
Abhishek Chandra, University of Minnesota
Rong N. Chang, IBM Research
Alok Choudhary, Northwestern University
Brian Cooper, Google
Ewa Deelman, University of Southern California
Murat Demirbas, University at Buffalo
Adriana Iamnitchi, University of South Florida
Maria Indrawan, Monash University, Australia
Alexandru Iosup, Delft University of Technology, Netherlands
Peter Kacsuk, Hungarian Academy of Sciences, Hungary
Dan Katz, University of Chicago
Steven Ko, University at Buffalo
Gregor von Laszewski, Rochester Institute of Technology
Erwin Laure, CERN, Switzerland
Ignacio Llorente, Universidad Complutense de Madrid, Spain
Reagan Moore, University of North Carolina
Lavanya Ramakrishnan, Lawrence Berkeley National Laboratory
Ian Taylor, Cardiff University, UK
Douglas Thain, University of Notre Dame
Bernard Traversat, Oracle
Yong Zhao, Univ of Electronic Science & Tech of China



James Hamilton





Tuesday, September 14, 2010 12:45:05 PM (Pacific Standard Time, UTC-08:00)

I’m dragging myself off the floor as I write this having watched this short video: MongoDB is Web Scale.


It won’t improve your datacenter PUE, your servers won’t get more efficient, and it won’t help you scale your databases but, still, you just have to check out that video.


Thanks to Andrew Certain of Amazon for sending it my way.




James Hamilton





Tuesday, September 14, 2010 5:55:41 AM (Pacific Standard Time, UTC-08:00)
Thursday, September 09, 2010

You can now run an EC2 instance 24x7 for under $15/month with the announcement last night of the Micro instance type. The Micro includes 613 MB of RAM and can burst for short periods up to 2 EC2 Compute Units (one EC2 Compute Unit equals a 1.0–1.2 GHz 2007 Opteron or 2007 Xeon processor). They are available in all EC2-supported regions and in 32- or 64-bit flavors.


The design point for Micro instances is to offer a small amount of consistent, always available CPU resources while supporting short bursts above this level. The instance type is well suited for lower throughput applications and web sites that consume significant compute cycles but only periodically.


Micro instances are available for Windows or Linux, with Linux at $0.02/hour and Windows at $0.03/hour. You can quickly start a Micro instance using the EC2 management console: EC2 Console.
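A quick back-of-envelope check of the under-$15/month claim at these prices, using an average month of 8,760 / 12 = 730 hours:

```python
# Monthly cost of running a Micro instance 24x7 at the hourly prices above.
HOURS_PER_MONTH = 8760 / 12                # 730 hours in an average month

linux_monthly   = 0.02 * HOURS_PER_MONTH   # $14.60/month
windows_monthly = 0.03 * HOURS_PER_MONTH   # $21.90/month
```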


The announcement: Announcing Micro Instances for Amazon EC2

More information: Amazon Elastic Compute Cloud (Amazon EC2)




James Hamilton




Thursday, September 09, 2010 4:53:17 AM (Pacific Standard Time, UTC-08:00)
Thursday, August 05, 2010

I’m taking some time off and probably won’t blog again until the first week of September. Jennifer and I are taking the boat north to Alaska. Most summers we spend a bit of time between the northern tip of Vancouver Island and the Alaska border. This year is a little different for two reasons. First, we’re heading further north than in the past and will spend some time in Glacier Bay National Park & Preserve. The second thing that makes this trip a bit different is, weather permitting, we’ll be making the nearly thousand-mile one-way trip as an offshore crossing. It’ll take roughly 5 days to cover the distance running 24x7 off the coast of British Columbia.


You might ask why we would want to make the trip running 24x7 offshore when the shoreline of BC is one of the most beautiful in the world. It truly is wonderful and we do love the area. We’ve even written a book about it (Secret Coast). We’re skipping the coast and heading directly to Alaska as a way to enjoy Alaska by boat when we really can’t get enough time off work to do Alaska at a more conventional, relaxed pace. The other reason to run directly there is that it’s a chance to try running 24x7 and see how it goes. Think of it as an ocean crossing with training wheels. If it gets unpleasant, we can always turn right and head to BC. And, it’ll be an adventure.


We’ll be back the first week of September. Have a good rest of your summer,




James Hamilton





Thursday, August 05, 2010 11:50:16 AM (Pacific Standard Time, UTC-08:00)
Sunday, August 01, 2010

A couple of weeks back Greg Linden sent me an interesting paper called Energy Proportional Datacenter Networks. The principle of energy proportionality was first articulated by Luiz Barroso and Urs Hölzle in an excellent paper titled The Case for Energy-Proportional Computing. The core principle behind energy proportionality is that computing equipment should consume power in proportion to its utilization level. For example, a computing component that consumes N watts at full load should consume (X/100)*N watts when running at X% load. This may seem like an obviously important concept but, when the idea was first proposed back in 2007, it was not uncommon for a server running at 0% load to be consuming 80% of full-load power. Even today, you can occasionally find servers that poor. The incredible difficulty of maintaining near-100% server utilization makes energy proportionality a particularly important concept.
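A simple way to see the gap is to model a server’s power draw as a linear function of load with a fixed idle floor, a common approximation rather than anything from the paper. The 300W full-load figure below is illustrative:

```python
def power_draw(load, full_load_watts, idle_fraction):
    """Linear power model: a fixed idle floor plus a load-proportional part.

    idle_fraction is the share of full-load power consumed at 0% load;
    a perfectly energy-proportional server would have idle_fraction == 0.
    """
    idle = idle_fraction * full_load_watts
    return idle + (full_load_watts - idle) * load

# A 2007-era server with an 80% idle floor vs. the proportional ideal
# (300 W full-load power is illustrative):
server_2007 = power_draw(0.0, 300, 0.80)   # 240 W while doing no work at all
ideal       = power_draw(0.0, 300, 0.0)    # 0 W when idle
```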


One of the wonderful aspects of our industry is, when an attribute becomes important, we focus on it. Power consumption as a whole has become important and, as a consequence, average full-load server power has been declining for the last couple of years. It is now easy to buy a commodity server under 150W. And, with energy proportionality now getting industry attention, this metric is also improving fast. Today a small percentage of the server market has just barely crossed the 50% power proportionality line, which is to say that, if these servers are idle, they consume less than 50% of their full-load power.


This is great progress that we’re all happy to see. However, it’s very clear we are never going to achieve 100% power proportionality, so raising server utilization will remain the most important lever we have in reducing wasted power. This is one of the key advantages of cloud computing. When a large number of workloads are brought together with non-correlated utilization peaks and valleys, the overall peak-to-trough load ratio is dramatically flattened. Looked at simplistically, aggregating workloads with dissimilar peak resource consumption levels tends to reduce the peak-to-trough ratio. There need to be resources available for peak resource consumption, but the utilization level goes up and down with workload, so reducing the difference between peak and trough aggregate load levels cuts costs, saves energy, and is better for the environment.


Understanding the importance of power proportionality, it’s natural to be excited by the Energy Proportional Datacenter Networks paper. In this paper, the authors observe: “if the system is 15% utilized (servers and network) and the servers are fully power-proportional, the network will consume nearly 50% of the overall power.” Without careful reading, this could lead one to believe that network power consumption is a significant industry problem requiring immediate action at top priority. But the statement has two problems. The first is the assumption that “full energy proportionality” is even possible. There will always be some overhead in having a server running. And we are currently so distant from this 100% proportional goal that any conclusion following from this assumption is unrealistic and potentially misleading.


The second issue is perhaps more important: the entire data center might be running at 15% utilization. 15% utilization means that all the energy (and capital) that went into the datacenter power distribution system, all the mechanical systems, and the servers themselves is only being 15% utilized. The power consumption is actually a tiny part of the problem. The real problem is that most resources in a nearly $100M investment are being wasted by low utilization levels. There are many poorly utilized data centers running at 15% or worse utilization, but I argue the solution to this problem is to increase utilization. Power proportionality is very important and many of us are working hard to find ways to improve it. But power proportionality won’t reduce the importance of ensuring that datacenter resources are near fully utilized.


Just as power proportionality will never be fully realized, 100% utilization will never be realized either so clearly we need to do both. However, it’s important to remember that the gain from increasing utilization by 10% is far more than the possible gain to be realized by improving power proportionality by 10%. Both are important but utilization is by far the strongest lever in reducing cost and the impact of computing on the environment.


Returning to the negative impact of networking gear on overall datacenter power consumption, let’s look more closely. It’s easy to get upset when you look at net gear power consumption. It is prodigious. For example, a Juniper EX8200 consumes nearly 10kW. That’s roughly as much power as an entire rack of servers (rack power ranges greatly, but 8 to 10kW is pretty common these days). A fully configured Cisco Nexus 7000 requires 8 full 5kW circuits to be provisioned. That’s 40kW, or roughly as much power provisioned to a single network device as would be required by 4 average racks of servers. These numbers are incredibly large individually, but it’s important to remember that there really isn’t all that much networking gear in a datacenter relative to the number of servers.


To be precise, let’s build a model of the power required in a large datacenter to understand networking gear power consumption relative to the rest of the facility. In this model, we’re going to build a medium-to-large-sized datacenter with 45,000 servers. Let’s assume these servers are operating at an industry-leading 50% power proportionality and consume only 150W each at full load. This facility has a PUE of 1.5, which is to say that for every watt of power delivered to IT equipment (servers, storage, and networking equipment), there is ½ watt lost in power distribution and powering mechanical systems. PUEs as low as 1.2 are possible but rare, and PUEs as poor as 3.0 are possible and unfortunately quite common. The industry average is currently 2.0, which says that in an average datacenter, for every watt delivered to the IT load, 1 watt is spent on overhead (power distribution, cooling, lighting, etc.).


For networking, we’ll first build a conventional, over-subscribed, hierarchical network. In this design, we’ll use the Cisco 3560 as a top-of-rack (TOR) switch. We’ll connect these TORs 2-way redundant to Juniper EX8216s at the aggregation layer. We’ll connect the aggregation layer to Cisco 6509Es in the core, where we need only 1 device for a network of this size but use 2 for redundancy. We also have two border devices in the model:


Using these numbers as input we get the following:


- Total DC Power: 10.74MW (1.5 * IT load)
- IT Load: 7.16MW (networking gear + servers and storage)
  - Servers & Storage: 6.75MW (45,000 * 150W)
  - Networking gear: 0.41MW (from table above)


This would have the networking gear consuming only 3.8% of the overall power consumed by the facility (0.41/10.74). If we were running at 0% utilization, which I truly hope is far worse than anyone’s worst case today, what networking consumption would we see? Using the numbers above with the servers at 50% power proportionality (unusually good), we would have the networking gear at 7.2% of overall facility power (0.41/((6.75*0.5+0.41)*1.5)).
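The arithmetic is easy to reproduce; this sketch recomputes the conventional-network shares from the model’s inputs:

```python
# Recomputing the conventional-network power shares from the figures above.
pue          = 1.5
server_count = 45_000
server_watts = 150
net_mw       = 0.41                              # networking gear, from the table

servers_mw = server_count * server_watts / 1e6   # 6.75 MW
it_mw      = servers_mw + net_mw                 # 7.16 MW
total_mw   = pue * it_mw                         # 10.74 MW

# Share at full load, and at 0% utilization with 50% power-proportional servers.
net_share_busy = net_mw / total_mw                             # ~3.8%
net_share_idle = net_mw / ((servers_mw * 0.5 + net_mw) * pue)  # ~7.2%
```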


This data argues strongly that networking is not the biggest problem or anywhere close. We like improvements wherever we can get them and so I’ll never walk away from a cost effective networking power solution. But, networking power consumption is not even close to our biggest problem so we should not get distracted.


What if we built a modern fat-tree network using commodity high-radix networking gear along the lines alluded to in Data Center Networks are in my Way and covered in detail in the VL2 and PortLand papers? Using 48-port network devices, we would need 1,875 switches in the first tier, 79 in the next, then 4, and 1 at the root. Let’s use 4 at the root to get some additional redundancy, which puts the switch count at 1,962 devices. Each network device dissipates roughly 150W, and driving each of the 48 transceivers requires roughly 5W (a rapidly declining number). This totals 390W per 48-port device. The devices I’ve seen are better, but let’s use these numbers to ensure we are seeing the network in its worst likely form. Using these data we get:


- Total DC Power: 11.28MW
- IT Load: 7.52MW
  - Servers & Storage: 6.75MW
  - Networking gear: 0.77MW


Even assuming very high network power dissipation rates, with 10GigE to every host in the entire datacenter and a constant bisection bandwidth network topology that requires a very large number of devices, we still only have the network at 6.8% of the overall data center power (0.77/11.28). If we assume the very worst case available today, where we have 0% utilization with 50% power-proportional servers, we get 12.4% of power consumed in the network (0.77/((6.75*0.5+0.77)*1.5)).
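The fat-tree numbers can be recomputed the same way from the figures above:

```python
# Recomputing the fat-tree network power shares from the figures above.
switches     = 1875 + 79 + 4 + 4       # root tier bumped from 1 to 4 devices: 1,962
device_watts = 150 + 48 * 5            # 390 W: chassis plus 48 transceivers at 5 W each
net_mw       = switches * device_watts / 1e6    # ~0.77 MW

servers_mw = 45_000 * 150 / 1e6        # 6.75 MW, as before
pue        = 1.5
total_mw   = pue * (servers_mw + net_mw)        # ~11.28 MW

net_share_busy    = net_mw / total_mw                             # ~6.8%
net_share_idle    = net_mw / ((servers_mw * 0.5 + net_mw) * pue)  # ~12.3% (12.4% above, with rounding)
transceiver_share = (48 * 5) / device_watts                       # ~61.5% of device power
```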


12% is clearly enough to worry about but, in order for the network to be that high, we had to be running at 0% utilization, which is to say that all the resources in the entire data center are being wasted. 0% utilization means we are wasting 100% of the servers, all the power distribution, all the mechanical systems, the entire networking system, etc. At 0% utilization, it’s not the network that is the problem. It’s the server utilization that needs attention.


Note that in the above model, more than 60% of the power consumed by the networking devices was consumed by the per-port transceivers. We used 5W/port, but overall transceiver power is expected to drop to 3W or less over the next couple of years, so we should expect network power consumption to drop 30 to 40% in the very near future.


Summarizing the findings: my take from this data is that it’s a rare datacenter where more than 10% of power is going to networking devices, and most datacenters will be under 5%. Power proportionality in the network is of some value, but improving server utilization is a far more powerful lever. In fact, a good argument can be made to spend more on networking gear and networking power if that investment can be used to drive up server utilization by reducing constraints on workload placement.




James Hamilton





Sunday, August 01, 2010 9:44:07 AM (Pacific Standard Time, UTC-08:00)
Tuesday, July 13, 2010

High Performance Computing (HPC) is defined by Wikipedia as:


High-performance computing (HPC) uses supercomputers and computer clusters to solve advanced computation problems. Today, computer systems approaching the teraflops-region are counted as HPC-computers. The term is most commonly associated with computing used for scientific research or computational science. A related term, high-performance technical computing (HPTC), generally refers to the engineering applications of cluster-based computing (such as computational fluid dynamics and the building and testing of virtual prototypes). Recently, HPC has come to be applied to business uses of cluster-based supercomputers, such as data warehouses, line-of-business (LOB) applications, and transaction processing.


Predictably, I use the broadest definition of HPC, including data-intensive computing and all forms of computational science. It still includes the old stalwart applications of weather modeling and weapons research, but the broader definition takes HPC from a niche market to being a big part of the future of server-side computing. Multi-thousand-node clusters, operating at teraflop rates, running simulations over massive data sets is how petroleum exploration is done, it’s how advanced financial instruments are (partly) understood, it’s how brick-and-mortar retailers do shelf-space layout and optimize their logistics chains, it’s how automobile manufacturers design safer cars through crash simulation, it’s how semiconductor designs are simulated, it’s how aircraft engines are engineered to be more fuel efficient, and it’s how credit card companies measure fraud risk. Today, at the core of any well-run business is a massive data store – they all have that. The measure of a truly advanced company is the depth of analysis, simulation, and modeling run against this data store. HPC workloads are incredibly important today and the market segment is growing very quickly, driven by the plunging cost of computing and the business value of understanding large data sets deeply.

High Performance Computing is one of those important workloads that many argue can’t move to the cloud. Interestingly, HPC has had a long history of supposedly not being able to make a transition and then, subsequently, making that transition faster than even the most optimistic would have guessed possible. In the early days of HPC, most of the workloads were run on supercomputers. These are purpose-built, scale-up servers made famous by Control Data Corporation and later by Cray Research, with the Cray 1 broadly covered in the popular press. At that time, many argued that slow processors and poor-performing interconnects would prevent computational clusters from ever being relevant for these workloads. Today more than ¾ of the fastest HPC systems in the world are based upon commodity compute clusters.


The HPC community uses the Top-500 list as the tracking mechanism for the fastest systems in the world. The goal of the Top-500 is to provide a scale and performance metric for a given HPC system. Like all benchmarks, it is a good thing in that it removes some of the manufacturer hype but benchmarks always fail to fully characterize all workloads. They abstract performance to a single or small set of metrics which is useful but this summary data can’t faithfully represent all possible workloads. Nonetheless, in many communities including HPC and Relational Database Management Systems, benchmarks have become quite important. The HPC world uses the Top-500 list which depends upon LINPACK as the benchmark.


Looking at the most recent Top-500 list published in June 2010, we see that Intel processors now dominate the list with 81.6% of the entries. It is very clear that the HPC move to commodity clusters has happened. The move that “couldn’t happen” is near complete and the vast majority of very high scale HPC systems are now based upon commodity processors.


What about HPC in the cloud, the next “it can’t happen” for HPC? In many respects, HPC workloads are a natural for the cloud in that they are incredibly high scale and consume vast machine resources. Some HPC workloads are incredibly spiky, with mammoth clusters being needed for only short periods of time. For example, semiconductor design simulation workloads are incredibly computationally intensive and need to be run at high scale but only during some phases of the design cycle. Having more resources to throw at the problem can get a design completed more quickly and allow just one more verification run that could save millions by avoiding a design flaw. Using cloud resources, this massive fleet of servers can change size over the course of the project or be freed up when no longer productively needed. Cloud computing is ideal for these workloads.


Other HPC uses tend to be more steady state and yet these workloads still gain real economic advantage from the economies of extreme scale available in the cloud. See Cloud Computing Economies of Scale (talk, video) for more detail.


When I dig deeper into “steady state HPC workloads”, I often learn they are steady state as an existing constraint rather than by the fundamental nature of the work. Is there ever value in running one more simulation or one more modeling run a day? If someone on the team got a good idea or had a new approach to the problem, would it be worth being able to test that theory on real data without interrupting the production runs? More resources, if not accompanied by additional capital expense or long term utilization commitment, are often valuable even for what we typically call steady state workloads. For example, I’m guessing BP, as it battles the Gulf of Mexico oil spill, is running more oil well simulations and tidal flow analysis jobs than originally called for in their 2010 server capacity plan.


No workload is flat and unchanging. It’s just a product of a highly constrained model that can’t adapt quickly to changing workload quantities. It’s a model from the past.


There is no question there is value in being able to run HPC workloads in the cloud. What makes many folks view HPC as non-cloud-hostable is that these workloads need high-performance, direct access to the underlying server hardware without the overhead of the virtualization common in most cloud computing offerings, and many of these applications need very high-bandwidth, low-latency networking. A big step towards this goal was made earlier today when Amazon Web Services announced the EC2 Cluster Compute Instance type.


The cc1.4xlarge instance specification:

·         23GB of 1333MHz DDR3 Registered ECC memory

·         64GB/s main memory bandwidth

·         2 x Intel Xeon X5570   (quad-core Nehalem)

·         2 x 845GB 7200RPM HDDs

·         10Gbps Ethernet Network Interface


It’s this last point that I’m particularly excited about. The difference between just a bunch of servers in the cloud and a high performance cluster is the network. Bringing 10GigE direct to the host isn’t that common in the cloud but it’s not particularly remarkable. What is more noteworthy is it is a full bisection bandwidth network within the cluster.  It is common industry practice to statistically multiplex network traffic over an expensive network core with far less than full bisection bandwidth.  Essentially, a gamble is made that not all servers in the cluster will transmit at full interface speed at the same time. For many workloads this actually is a good bet and one that can be safely made. For HPC workloads and other data intensive applications like Hadoop, it’s a poor assumption and leads to vast wasted compute resources waiting on a poor performing network.


Why provide less than full bisection bandwidth? Basically, it’s a cost problem. Because networking gear is still built on a mainframe design point, it’s incredibly expensive. As a consequence, these precious resources need to be very carefully managed and over-subscription levels of 60 to 1 or even over 100 to 1 are common. See Datacenter Networks are in my Way for more on this theme.
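The over-subscription arithmetic is simple enough to sketch. The little function below computes the ratio of offered edge bandwidth to provisioned core bandwidth; the rack configurations are illustrative assumptions, not a description of any particular network:

```python
# Oversubscription ratio: the aggregate bandwidth the hosts could offer,
# divided by the uplink bandwidth actually provisioned toward the core.
# All numbers below are illustrative, not measurements of a real network.

def oversubscription(hosts, host_gbps, uplinks, uplink_gbps):
    """Ratio of offered edge bandwidth to provisioned core bandwidth."""
    edge = hosts * host_gbps
    core = uplinks * uplink_gbps
    return edge / core

# A typical oversubscribed rack: 40 hosts at 1 Gbps behind 4 x 1 Gbps uplinks.
typical = oversubscription(hosts=40, host_gbps=1, uplinks=4, uplink_gbps=1)

# A full bisection bandwidth design provisions core capacity equal to the
# edge, so the ratio is 1:1 and no server ever waits on an oversubscribed core.
full_bisection = oversubscription(hosts=40, host_gbps=10, uplinks=40, uplink_gbps=10)

print(typical)         # 10.0 -> 10:1 oversubscribed
print(full_bisection)  # 1.0  -> non-blocking
```

The gamble described above is exactly the `typical` case: the network bets that no more than a tenth of the hosts transmit at full rate at once.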


For me, the most interesting aspect of the newly announced Cluster Compute instance type is not the instance at all. It’s the network. These servers are on a full bisection bandwidth cluster network. All hosts in a cluster can communicate with other nodes in the cluster at the full capacity of the 10Gbps fabric at the same time without blocking. Clearly not all can communicate with a single member of the fleet at the same time but the network can support all members of the cluster communicating at full bandwidth in unison. It’s a sweet network and it’s the network that makes this a truly interesting HPC solution.


Each Cluster Compute Instance is $1.60 per instance hour. It’s now possible to access millions of dollars of servers connected by a high performance, full bisection bandwidth network inexpensively. An hour with a 1,000 node high performance cluster for $1,600. Amazing.


As a test of the instance type and network prior to going into beta, Matt Klein, one of the HPC team engineers, cranked up LINPACK using an 880-server sub-cluster. It’s a good test in that it stresses the network and yields a comparative performance metric. I’m not sure what Matt expected when he started the run but the result he got just about knocked me off my chair when he sent it to me last Sunday. Matt’s experiment yielded a booming 41.82 TFlop Top-500 run.
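It’s worth sanity-checking that number against theoretical peak. The back-of-envelope below assumes the X5570’s 2.93GHz clock and 4 double-precision flops per cycle per core (standard Nehalem datasheet figures), which are my assumptions rather than published details of Matt’s run:

```python
# Back-of-envelope check on the 41.82 TFlops result from the 880-server run.
# Clock speed and flops/cycle are assumptions about the Xeon X5570
# (2.93 GHz Nehalem, 4 DP flops per cycle per core), not details of the run.

cores_per_server = 2 * 4          # 2 sockets x quad-core
ghz = 2.93                        # assumed base clock
flops_per_cycle = 4               # assumed DP flops/cycle/core

peak_per_server = cores_per_server * ghz * flops_per_cycle      # GFlops
peak_cluster_tflops = 880 * peak_per_server / 1000

measured_tflops = 41.82
efficiency = measured_tflops / peak_cluster_tflops

print(round(peak_cluster_tflops, 1))  # ~82.5 TFlops theoretical peak
print(round(efficiency, 2))           # ~0.51 -> roughly half of peak
```

Achieving around half of theoretical peak on a LINPACK run at this scale is what you would hope to see from a healthy, non-blocking network; a badly oversubscribed fabric would drag that ratio far lower.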


For those of you as excited as I am and interested in the details from the Top-500 LINPACK run:

This is phenomenal performance for a pay-as-you-go EC2 instance. But what makes it much more impressive is that this result would place the EC2 Cluster Compute instance at #146 on the Top-500. It also appears to scale well, which is to say bigger numbers look feasible if more nodes were allocated to LINPACK testing. As fun as that would be, it is time to turn all these servers over to customers, so we won’t get another run, but it was fun.


You can now have one of the biggest supercomputers in the world for your own private use for $1.60 per instance per hour. I love what’s possible these days.


Welcome to the cloud, HPC!




James Hamilton



b: /


Tuesday, July 13, 2010 4:34:30 AM (Pacific Standard Time, UTC-08:00)
 Friday, July 09, 2010

Hierarchical storage management (HSM), also called tiered storage management, is back but in a different form. HSM exploits the access pattern skew across data sets by placing cold, seldom-accessed data on slow, cheap media and frequently accessed data on fast, near media. In the old days, HSM typically referred to systems mixing robotically managed tape libraries with hard disk drive staging areas. HSM never actually went away – it’s just a very old technique to exploit data access pattern skew to reduce storage costs. Here’s an old unit from FermiLab.


Hot data, or data currently being accessed, is stored on disk while old data that has not been recently accessed is stored on tape. It’s a great way to drive costs down below all-disk storage while avoiding the people costs of manual tape management and (slightly) reducing the latency of accessing data on tape.


The basic concept shows up anywhere there is locality or skew in the access patterns – some data is rarely accessed and some data is frequently accessed. Since evenly distributed, non-skewed access patterns only show up in benchmarks, this concept works on a wide variety of workloads. Processors have cache hierarchies where the top of the hierarchy is a very expensive register file with multiple layers of increasingly large caches between the register file and memory. Database management systems have large in-memory buffer pools insulating access to slow disk. Many very high-scale services like Facebook have mammoth in-memory caches insulating access to slow database systems. In the Facebook example, they have 2TB of Memcached in front of their vast MySQL fleet: Velocity 2010.
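The skew that makes every tier of this hierarchy pay off is easy to illustrate. The sketch below uses a Zipf popularity distribution, which is a common modeling assumption for skewed access patterns – the parameters (1,000 items, exponent 1) are illustrative choices, not measurements of any real workload:

```python
# Under a Zipf(s=1) popularity distribution over n items, the item of rank r
# receives accesses proportional to 1/r. A cache holding just the hottest
# 10% of items then absorbs roughly 70% of all accesses -- the skew that CPU
# caches, buffer pools, and memcached all exploit. The n and s values are
# illustrative assumptions, not measurements of a real workload.

def zipf_mass(top_k, n, s=1.0):
    """Fraction of accesses that hit the top_k most popular of n items."""
    weights = [1.0 / (r ** s) for r in range(1, n + 1)]
    return sum(weights[:top_k]) / sum(weights)

n = 1000
hit_fraction = zipf_mass(top_k=n // 10, n=n)
print(round(hit_fraction, 2))  # 0.69: 10% of the items, ~69% of the traffic
```

Flip the parameters to a uniform distribution (s=0) and the same 10% cache captures only 10% of accesses – which is why caching looks useless on evenly distributed benchmark traffic and wonderful on real workloads.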


Flash memory again opens up the opportunity to apply HSM concepts to storage. Rather than using slow tape and fast disk, we use (relatively) slow disk and fast NAND flash. There are many approaches to implementing HSM over flash memory and hard disk drives. Facebook implemented Flashcache, which tracks access patterns at the logical volume layer (below the filesystem) in Linux, with hot pages written to flash and cold pages to disk. LSI’s CacheCade product is a good example of an implementation done at the disk controller level. Others have done it in application-specific logic where hot indexing structures are put on flash and cold data pages are written to disk. Yet another approach that showed up around 3 years ago is the hybrid disk drive.
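The core mechanism is simple enough to sketch in a few lines. Below is a toy two-tier store in the spirit of Flashcache – a small, fast “flash” tier holding recently accessed blocks, with LRU evictions demoted to a large, slow “disk” tier. This is an illustration of the concept, not the actual Flashcache implementation (which operates on real block devices in the Linux kernel):

```python
from collections import OrderedDict

# Toy sketch of flash/disk tiering: a bounded, LRU-ordered "flash" tier in
# front of an unbounded "disk" tier. Hot blocks live in flash; the least
# recently used block is demoted to disk when flash fills. Illustrative
# only -- not the real Flashcache code.

class TieredStore:
    def __init__(self, flash_capacity):
        self.flash = OrderedDict()   # block -> data, kept in LRU order
        self.disk = {}               # unbounded slow tier
        self.flash_capacity = flash_capacity

    def write(self, block, data):
        self._promote(block, data)

    def read(self, block):
        if block in self.flash:                 # flash hit: refresh LRU order
            self.flash.move_to_end(block)
            return self.flash[block]
        data = self.disk[block]                 # flash miss: fetch from disk
        self._promote(block, data)              # it's hot now, pull into flash
        return data

    def _promote(self, block, data):
        self.flash[block] = data
        self.flash.move_to_end(block)
        self.disk.pop(block, None)
        while len(self.flash) > self.flash_capacity:
            cold, cold_data = self.flash.popitem(last=False)  # evict LRU block
            self.disk[cold] = cold_data                       # demote to disk

store = TieredStore(flash_capacity=2)
store.write("a", 1); store.write("b", 2); store.write("c", 3)
print(sorted(store.flash))  # ['b', 'c'] -- "a" was demoted to disk
print(store.read("a"))      # 1, fetched from disk and promoted back to flash
```

Given the skewed access patterns discussed above, a flash tier a small fraction of the disk tier’s size can absorb the bulk of the I/O.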


Hybrid drives combine large, persistent flash memory caches with disk drives in a single package. When they were first announced, I was excited by them but I got over the excitement once I started benchmarking. It looked to be a good idea but the performance really was unimpressive.


Hybrid drives still look like a good idea and now we actually have respectable performance with the Seagate Momentus XT. This part is actually aimed at the laptop market but I’m always watching client progress to understand what can be applied to servers. This finally looks like it’s heading in the right direction. See the AnandTech article on this drive for more performance data: Seagate's Momentus XT Reviewed, Finally a Good Hybrid HDD. I still slightly prefer the Facebook Flashcache approach but these hybrid drives are worth watching.


Thanks to Greg Linden for sending this one my way.




James Hamilton





Friday, July 09, 2010 8:30:14 AM (Pacific Standard Time, UTC-08:00)
 Saturday, July 03, 2010

I didn’t attend the Hadoop Summit this year or last but was at the inaugural event back in 2008 and it was excellent. This year, Hadoop Summit 2010 was held June 29, again in Santa Clara. The agenda for the 2010 event is at: Hadoop Summit 2010 Agenda. Since I wasn’t able to be there, Adam Gray of the Amazon AWS team was kind enough to pass on his notes and let me use them here:


Key Takeaways

·         Yahoo and Facebook operate the world’s largest Hadoop clusters, at 4,000 and 2,300 nodes with 70 and 40 petabytes respectively. They run full cluster replicas to assure availability and data durability. 

·         Yahoo released Hadoop security features with Kerberos integration, which are most useful for long-running, multi-tenant Hadoop clusters. 

·         Cloudera released a paid enterprise version of Hadoop with cluster management tools and several DB connectors and announced support for Hadoop security.

·         Amazon Elastic MapReduce announced expand/shrink cluster functionality and paid support.

·         Many Hadoop users use the service in conjunction with NoSQL DBs like HBase or Cassandra.



Yahoo had the opening keynote with talks by Blake Irving, Chief Products Officer, Shelton Shugar, SVP of Cloud Computing, and Eric Baldeschwieler, VP of Hadoop. They talked about Yahoo’s scale, including 38k Hadoop servers, 70 PB of storage, and more than 1 million monthly jobs, with half of those jobs written in Apache Pig. Further, their agility is improving significantly despite this massive scale – within 7 minutes of a homepage click they have a completely reconfigured preference model for that user and an updated homepage. This would not be possible without Hadoop. Yahoo believes that Hadoop is ready for enterprise use at massive scale and that their use case proves it. Further, a recent study found that 50% of enterprise companies are strongly considering Hadoop, with the most commonly cited reason being agility. Initiatives over the last year include: further investment and improvement in Hadoop 0.20, integration of Hadoop with Kerberos, and the Oozie workflow engine.


Next, Peter Sirota gave a keynote for Amazon Elastic MapReduce that focused on how the service makes combining the massive scalability of MapReduce with the web-scale infrastructure of AWS more accessible, particularly to enterprise customers. He also announced several new features including expanding and shrinking the cluster size of running job flows, support for spot instances, and premium support for Elastic MapReduce. Further, he discussed Elastic MapReduce’s involvement in the ecosystem including integration with Karmasphere and Datameer. Finally, Scott Capiello, Senior Director of Products at Microstrategy, came on stage to discuss their integration with Elastic MapReduce.


Cloudera followed with talks by Doug Cutting, the creator of Hadoop, and Charles Zedlewski, Senior Director of Product Management. They announced Cloudera Enterprise, a version of their software that includes production support and additional management tools. These tools include improved data integration and authorization management that leverages Hadoop’s security updates. And they demoed a WebUI for using these management tools.


The final keynote was given by Mike Schroepfer, VP of Engineering at Facebook. He talked about Facebook’s scale with 36 PB of uncompressed storage, 2,250 machines with 23k processors, and 80-90 TB growth per day. Their biggest challenge is in getting all that data into Hadoop clusters. Once the data is there, 95% of their jobs are Hive-based. In order to ensure reliability they replicate critical clusters in their entirety. As far as traffic goes, the average user spends more time on Facebook than on the next 6 web sites combined. In order to improve user experience Facebook is continually improving the response time of their Hadoop jobs. Currently updates can occur within 5 minutes; however, they see this eventually moving below 1 minute. As this is often an acceptable wait time for changes to occur on a webpage, this will open up a whole new class of applications.


Discussion Tracks

After lunch the conference broke into three distinct discussion tracks: Developers, Applications, and Research. These tracks had several interesting talks including one by Jay Kreps, Principal Engineer at LinkedIn, who discussed LinkedIn’s data applications and infrastructure. He believes that their business data is ideally suited for Hadoop due to its massive scale but relatively static nature. This supports large amounts of  computation being done offline. Further, he talked about their use of machine learning to predict relationships between users. This requires scoring 120 billion relationships each day using 82 Hadoop jobs. Lastly, he talked about LinkedIn’s in-house developed workflow management tool, Azkaban, an alternative to Oozie.


Eric Sammer, Solutions Architect at Cloudera, discussed some best practices for dealing with massive data in Hadoop. Particularly, he discussed the value of using workflows for complex jobs, incremental merges to reduce data transfer, and the use of Sqoop (SQL to Hadoop) for bulk relational database imports and exports. Yahoo’s Amit Phadke discussed using Hadoop to optimize online content. His recommendations included leveraging Pig to abstract out the details of MapReduce for complex jobs and taking advantage of the parallelism of HBase for storage. There was also significant interest in the challenges of using Hadoop for graph algorithms including a talk that was so full that they were unable to let additional people in.


Elastic MapReduce Customer Panel

The final session was a customer panel of current Amazon Elastic MapReduce users chaired by Deepak Singh. Participants included: Netflix,  Razorfish, Coldlight Solutions, and Spiral Genetics. Highlights include:


·         Razorfish discussed a case study in which a combination of Elastic MapReduce and Cascading allowed them to take a customer to market in half the time with a 500% return on ad spend. They discussed how using EMR has given them much better visibility into their costs, allowing them to pass this transparency on to customers.


·         Netflix discussed their use of Elastic MapReduce to set up a Hive-based data warehousing infrastructure. They keep a persistent cluster with data backups in S3 to ensure durability. Further, they also reduce the amount of data transfer through pre-aggregation and preprocessing of data.


·         Spiral Genetics talked about how they had to leverage AWS to reduce capital expenditures. By using Amazon Elastic MapReduce they were able to set up a running job in 3 hours. They are also excited to see spot instance integration.


·         Coldlight Solutions said that buying a half-million dollars in infrastructure wasn’t even an option when they started. Now it is, but they would rather focus on their strength, machine learning, and Amazon Elastic MapReduce allows them to do this.


James Hamilton





Saturday, July 03, 2010 5:39:36 AM (Pacific Standard Time, UTC-08:00)
 Thursday, July 01, 2010

I did a talk at Velocity 2010 last week. The slides are posted at Datacenter Infrastructure Innovation and the video is available at Velocity 2010 Keynote. Urs Hölzle, Google Senior VP of infrastructure, also did a Velocity keynote. It was an excellent talk and is posted at Urs Holzle at Velocity 2010. Jonathan Heiliger, Facebook VP of Technical Operations, spoke at Velocity as well. A talk summary is up at: Managing Epic Growth in Real Time. Tim O’Reilly did a talk: O’Reilly Radar. Velocity really is a great conference.


Last week I posted two quick notes on Facebook: Facebook Software Use and 60,000 Servers at Facebook. Continuing on that theme, a few other Facebook data points that I have been collecting of late:


From Qcon 2010 in Beijing (April 2010): memcached@facebook:

·         How big is Facebook:

o   400m active users

o   60m status updates per day

o   3b photo uploads per month

o   5b pieces of content shared each week

o   50b friend graph edges

§  130 friends per user on average

o   Each user clicks on 9 pieces of content each month

·         Thousands of servers in two regions [jrh: 60,000]

·         Memcached scale:

o   400m gets/second

o   28m sets/second

o   2T cached items

o   Over 200 TB

o   Networking scale:

§  Peak rx: 530m pkts/second (60GB/s)

§  Peak tx: 500m pkts/second (120GB/s)

·         Each memcached server:

o   Rx: 90k pkts/sec (9.7MB/s)

o   Tx 94k pkts/sec (19 MB/s)

o   80k gets/second

o   2k sets/s

o   200m items

·         Phatty Phatty Multiget

o   PHP is single-threaded and synchronous, so requests need to fetch multiple objects in a single round trip to be efficient and fast

·         Cache segregation:

o   Different objects have different lifetimes, so they are kept separate

·         Incast problem:

o   The use of multiget increased performance but led to an incast problem


The talk is full of good data and worth a read.
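The multiget point above is worth a quick sketch. With a single-threaded, synchronous PHP tier, N sequential gets pay N network round trips while one multiget pays one. The client below is a hypothetical stand-in that just counts round trips – it is not the real memcached protocol or Facebook’s code:

```python
# Why multiget matters for a single-threaded, synchronous frontend: fetching
# N objects one at a time costs N round trips; one multiget costs one.
# CountingClient is a hypothetical stand-in that counts round trips over an
# in-memory dict -- not a real memcached client.

class CountingClient:
    def __init__(self, store):
        self.store = store
        self.round_trips = 0

    def get(self, key):
        self.round_trips += 1                   # one request per key
        return self.store.get(key)

    def multiget(self, keys):
        self.round_trips += 1                   # one request, many keys
        return {k: self.store.get(k) for k in keys}

store = {"user:%d" % i: i for i in range(100)}
keys = list(store)

naive = CountingClient(store)
for k in keys:
    naive.get(k)

batched = CountingClient(store)
results = batched.multiget(keys)

print(naive.round_trips, batched.round_trips)  # 100 vs 1
```

The flip side is the incast problem noted in the talk: one multiget fans out to many memcached servers whose responses all converge on the requester’s network link at the same instant.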


From Hadoopblog, Facebook has the world’s Largest Hadoop Cluster:

  • 21 PB of storage in a single HDFS cluster
  • 2000 machines
  • 12 TB per machine (a few machines have 24 TB each)
  • 1200 machines with 8 cores each + 800 machines with 16 cores each
  • 32 GB of RAM per machine
  • 15 map-reduce tasks per machine

The Yahoo Hadoop cluster is reported to be twice the node count of the Facebook cluster at 4,000 nodes: Scaling Hadoop to 4000 nodes at Yahoo!. But, it does have less disk:

·         4000 nodes

·         2 quad-core Xeons @ 2.5GHz per node

·         4x1TB SATA disks per node

·         8G RAM per node

·         1 gigabit ethernet on each node

·         40 nodes per rack

·         4 gigabit ethernet uplinks from each rack to the core

·         Red Hat Enterprise Linux AS release 4 (Nahant Update 5)

·         Sun Java JDK 1.6.0_05-b13

·         Over 30,000 cores with nearly 16PB of raw disk!
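The summary line checks out against the per-node specs above:

```python
# Checking the Yahoo cluster summary ("over 30,000 cores with nearly 16PB of
# raw disk") against the per-node specs: 4,000 nodes, 2 quad-core Xeons and
# 4 x 1TB SATA disks per node.

nodes = 4000
cores = nodes * 2 * 4           # 2 quad-core sockets per node
raw_tb = nodes * 4 * 1          # 4 x 1 TB per node

print(cores)                    # 32000 cores -> "over 30,000"
print(round(raw_tb / 1024, 1))  # ~15.6 PB -> "nearly 16PB"
```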


James Hamilton





Thursday, July 01, 2010 7:36:43 AM (Pacific Standard Time, UTC-08:00)
 Monday, June 28, 2010

Last week, I estimated that Facebook now had 50,000 servers in Facebook Software Use. Rich Miller of Datacenter Knowledge actually managed to sleuth out the accurate server count in: Facebook Server Count: 60,000 or more.


He took Tom Cook’s Velocity 2010 talk from last week, which showed growth without absolute numbers. But Rich noticed it did have dates, and Facebook had previously released a server count of 30k at a known date. With the curve and the previous calibration point, we have the number: 60,000. Not really that large but the growth rate is amazing. Good sleuthing, Rich.




James Hamilton




Monday, June 28, 2010 6:50:24 AM (Pacific Standard Time, UTC-08:00)
 Sunday, June 27, 2010

The NoSQL movement continues to gain momentum. I don’t see these systems as replacing relational systems for all applications but it is also crystal clear that relational systems are a poor choice for some workloads. See One Size Does Not Fit All for my take on the different types of systems that make up the structured storage market.


The Amazon Web Services entrant in the NoSQL market segment is SimpleDB. I’ve posted on SimpleDB in the past, starting back in 2007 with Amazon SimpleDB Announced and more recently in I Love Eventual Consistency but… I recently came across a book by Prabhakar Chaganti and Rich Helms on SimpleDB.


Wait a second, I know that name.  Rich and I worked together more than 20 years ago at the IBM Toronto Software Lab where he was Chief Image Technology Architect and I was lead architect on DB2. It’s been a long time.


The book, Amazon SimpleDB Developers Guide is a detailed guide for developers with examples in PHP, Java, and Python. Very recent features like BatchPutAttributes() are covered. Towards the end of the book, the authors show an application of Memcached with SimpleDB. The table of contents:

Chapter 1: Getting to Know SimpleDB
Chapter 2: Getting Started with SimpleDB
Chapter 3: SimpleDB versus RDBMS
Chapter 4: The SimpleDB Data Model
Chapter 5: Data Types
Chapter 6: Querying
Chapter 7: Storing Data on S3
Chapter 8: Tuning and Usage Costs
Chapter 9: Caching
Chapter 10: Parallel Processing

SimpleDB really does have a developer guide from the Amazon Web Services SimpleDB team but more examples and more data are always good. If you’re interested in SimpleDB, check out: Amazon SimpleDB Developers Guide.
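One practical detail worth knowing about BatchPutAttributes, which the book covers: the SimpleDB API caps each call at 25 items, so any bulk loader has to chunk its item set. A minimal, SDK-agnostic chunking helper (the 25-item limit is from the SimpleDB documentation; the item names below are made up for illustration):

```python
# SimpleDB's BatchPutAttributes accepts at most 25 items per request, so a
# bulk loader chunks its items and issues one call per chunk. The helper is
# generic; the items below are invented for illustration.

def chunks(items, size=25):
    """Yield successive lists of at most `size` items."""
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch           # final partial batch

items = [("item%03d" % i, {"attr": str(i)}) for i in range(60)]
batches = list(chunks(items))
print([len(b) for b in batches])  # [25, 25, 10]
```

Each resulting batch would then be handed to a single BatchPutAttributes call through whatever SDK is in use.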




James Hamilton





Sunday, June 27, 2010 9:12:33 AM (Pacific Standard Time, UTC-08:00)
 Sunday, June 20, 2010

This morning I came across Exploring the software behind Facebook, the World’s Largest Site. The article doesn’t introduce new data not previously reported but it’s a good summary of the software used by Facebook and the current scale of the social networking site:

·         570 billion page views monthly

·         3 billion photo uploads monthly

·         1.2 million photos served per second

·         30k servers


The latter metric, the 30k server count, is pretty old (Facebook has 30,000 servers). I would expect the number to be closer to 50k now based only upon external usage growth.


The article was vague on memcached usage, saying only “terabytes”. I’m pretty interested in memcached and Facebook is, by far, the largest user, so I periodically check their growth rate. They now have 28 terabytes of memcached data behind 800 servers. See Scaling memcached at Facebook for more detail.


The mammoth memcached fleet at Facebook has had me wondering for years: how close is the cache to the size of the entire data store? If you factor out photos and other large objects, how big is the entire remaining user database? Today the design is memcached insulating the fleet of database servers. What is the aggregate memory size of the memcached and database fleets? Would it be cheaper to store the entire database 2-way redundant in memory with changes logged to support recovery in the event of a two-server loss?


Facebook is very close, if not already able, to store the entire data store minus large objects in memory, and within a factor of two of being able to store it in memory twice with memcached as the primary copy, completely omitting the database tier. It would be a fun project.
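A quick back-of-envelope using only the numbers in this post – the size of the database working set itself isn’t public, so this only computes what the cache fleet holds and what 2-way redundancy would require:

```python
# Back-of-envelope on the question above, using only numbers from this post:
# 28 TB of memcached data across 800 servers. The database working set size
# is not public, so this just sizes the cache fleet and the 2-way redundant
# alternative.

memcached_tb = 28
memcached_servers = 800

tb_per_server = memcached_tb / memcached_servers
print(round(tb_per_server * 1024))  # ~36 GB of cache per server

# Holding the same data 2-way redundant in memory doubles the aggregate:
redundant_tb = 2 * memcached_tb
print(redundant_tb)  # 56 TB -- the same fleet with ~72 GB per server
```

At roughly 36GB of cache per server today, doubling memory per server to carry the redundant copy is well within what commodity servers of this era can hold.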




James Hamilton





Sunday, June 20, 2010 6:56:35 AM (Pacific Standard Time, UTC-08:00)
 Sunday, June 13, 2010

I’ve been talking about the application of low-power, low-cost processors to server workloads for years, starting with The Case for Low-Cost, Low-Power Servers. Subsequent articles get into more detail: Microslice Servers, Low-Power Amdahl Blades for Data Intensive Computing, and Successfully Challenging the Server Tax.


Single-dimensional measures of servers like “performance” without regard to server cost or power dissipation are seriously flawed. The right way to measure server performance is work done per dollar and work done per joule. If you adopt these measures of workload performance, we find that cold storage workloads and highly partitionable workloads run very well on low-cost, low-power servers. And we find the converse as well: database workloads run poorly on these servers (see When Very Low-Power, Low-Cost Servers Don’t Make Sense).
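The point about the metrics can be sketched directly. The two server configurations below are invented for illustration – the point is only that the ranking flips once cost and power enter the comparison:

```python
# Work done per dollar and work done per joule, rather than raw performance.
# Both server configurations are invented for illustration; the big server
# has 3x the throughput but 5x the cost and 4x the power of the small one.

def work_per_dollar(work_units, server_cost):
    return work_units / server_cost

def work_per_joule(work_units_per_sec, watts):
    return work_units_per_sec / watts   # watts = joules per second

big = {"throughput": 300, "cost": 5000, "watts": 400}
small = {"throughput": 100, "cost": 1000, "watts": 100}

for s in (big, small):
    s["per_dollar"] = work_per_dollar(s["throughput"], s["cost"])
    s["per_joule"] = work_per_joule(s["throughput"], s["watts"])

# The big server "wins" on raw performance but loses on both real metrics:
print(big["throughput"] > small["throughput"])   # True
print(big["per_dollar"] < small["per_dollar"])   # True: small wins work/$
print(big["per_joule"] < small["per_joule"])     # True: small wins work/J
```

Whether a real workload follows this pattern depends on how well it partitions – which is exactly the boundary the next paragraphs explore.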


The reasons why scale-up workloads in general and database workloads specifically run poorly on low-cost, low-powered servers are fairly obvious. Workloads that don’t scale out need bigger single servers to scale (duh). And workloads that are CPU bound tend to run more cost effectively on higher-powered nodes. The latter isn’t strictly true: even with scale-out losses, many CPU-bound workloads still run efficiently on low-cost, low-powered servers because what is lost in scaling is sometimes more than made up for by lower cost and lower power consumption.


I find the bounds where a technology ceases to work efficiently to be the most interesting area to study for two reasons: 1) these boundaries teach us why current solutions don’t cross the boundary and often give us clues on how to make the technology apply more broadly, and, most important, 2) you really need to know where not to apply a new technology. It is rare that a new technology is a uniform, across-the-board win. For example, many of the current applications of flash memory make very little economic sense. It’s a wonderful solution for hot, I/O-bound workloads where it is far superior to spinning media. But flash is a poor fit for many of the applications where it ends up being applied. You need to know where not to use a technology.


Focusing on the bounds of why low-cost, low-power servers don’t run a far broader class of workloads also teaches us what needs to change to achieve broader applicability. For example, if we ask what would happen if the processor cost and power dissipation were zero, we quickly see that, when scaling down processor costs and power, it is what surrounds the processor that begins to dominate. We need to get enough work done on each node to pay for the cost and power of all the surrounding components, from northbridge through memory, networking, power supply, etc. Each node needs to get enough done to pay for the overhead components.


This shows us an interesting future direction: what if servers shared the infrastructure and the “all except the processor” tax was spread over more servers? It turns out this really is a great approach, and applying this principle expands the set of workloads that can be hosted on low-cost, low-power servers. Two examples of this direction are the Dell Fortuna and the Rackable CloudRack C2. Both of these shared-infrastructure servers take a big step in this direction.


SeaMicro Releases Innovative Intel Atom Server

One of the two server startups I’m currently most excited about is SeaMicro. Up until today, they have been in stealth mode and I haven’t been able to discuss what they are building. It’s been killing me. They are targeting the low-cost, low-power server market, and they have carefully studied the lessons above and applied the learning deeply. SeaMicro has built a deeply integrated, shared-infrastructure, low-cost, low-power server solution with a broader potential market than any I’ve seen so far. They are able to run the Intel x86 instruction set, avoiding the adoption friction of using different ISAs, and they have integrated a massive number of servers very deeply into an incredibly dense package. I continue to point out that rack density for density’s sake is a bug not a feature (see Why Blades aren’t the Answer to All Questions) but the SeaMicro server module density is “good density” that reduces cost and increases efficiency. At under 2kW for a 10RU module, it is neither inefficient nor challenging from a cooling perspective.


Potential downsides of the SeaMicro approach are that the Intel Atom CPU is not quite as power efficient as some of the ARM-based solutions and it doesn’t currently support ECC memory. However, the SeaMicro design is available now and it is a considerable advancement over what is currently in the market. See You Really do Need ECC Memory in Servers for more detail on why ECC can be important. What SeaMicro has built is actually CPU independent and can integrate other CPUs as other choices become available, and the current Intel Atom-based solution will work well for many server workloads. I really like what they have done.


SeaMicro has taken shared infrastructure to an entirely new level in building a 512-server module that takes just 10 RU and dissipates just under 2kW. Four of these modules will fit in an industry standard rack, consume a reasonable 8kW, and deliver more work done per joule, more work done per dollar, and more work done per rack than the more standard approaches currently on the market.

The SeaMicro server module is comprised of:

·         512 1.6Ghz Intel Atoms (2048 CPUs/rack)

·         1 TB DRAM

·         1.28 Tbps networking fabric

·         Up to 16x 10Gbps ingress/egress network or up to 64 1Gbps if running 1GigE

·         0 to 64 SATA SSD or HDDs

·         Standard x86 instruction set architecture (no recompilation)

·         Integrated software load-balancer

·         Integrated layer 2 networking switch

·         Under 2kW power
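The density and power arithmetic from the numbers above is worth spelling out:

```python
# Density and power arithmetic from the SeaMicro specs above: 512 servers in
# 10 RU at just under 2 kW, four modules to an industry standard rack.

servers_per_module = 512
module_watts = 2000        # "just under 2 kW"
modules_per_rack = 4

watts_per_server = module_watts / servers_per_module
rack_servers = servers_per_module * modules_per_rack
rack_kw = modules_per_rack * module_watts / 1000

print(round(watts_per_server, 1))  # ~3.9 W per server, shared overhead included
print(rack_servers, rack_kw)       # 2048 servers in one 8 kW rack
```

Under 4W per server with networking, load balancing, and power conversion amortized across the module is what makes the shared-infrastructure argument above concrete.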


The server nodes are built 8 to a board in one of the nicest designs I’ve seen for years:



The SeaMicro 10 RU chassis can be hosted 4 to an industry standard rack:


This is an important hardware advancement and it is great to see the faster pace of innovation sweeping the server world driven by innovative startups like SeaMicro.




James Hamilton





Sunday, June 13, 2010 8:01:11 PM (Pacific Standard Time, UTC-08:00)
 Thursday, June 10, 2010

A couple of days ago I came across an interesting article by Microsoft Fellow Mark Russinovich. In the article, Mark hunts down a random Internet Explorer crash with his usual tools: The Case of the Random IE Crash. He chases the IE issue down to a Yahoo! Toolbar. This caught my interest for two reasons: 1) the debug technique used to chase it down was interesting, and 2) it was a two-week-old computer with no toolbars ever installed. From Mark’s blog:


This came as a surprise because the system on which the crash occurred was my home gaming system, a computer that I’d only had for a few weeks. The only software I generally install on my gaming systems are Microsoft Office and games. I don’t use browser toolbars and if I did, would obviously use the one from Bing, not Yahoo’s. Further, the date on the DLL showed that it was almost two years old. I’m pretty diligent about looking for opt-out checkboxes on software installers, so the likely explanation was that the toolbar had come onto my system piggybacking on the installation of one of the several video-card stress testing and temperature profiling tools I used while overclocking the system. I find the practice of forcing users to opt-out annoying and not giving them a choice even more so, so was pretty annoyed at this point. A quick trip to the Control Panel and a few minutes later and my system was free from the undesired and out-of-date toolbar.


It’s a messy world out there and it’s very tough to control what software gets installed on a computer. This broad class of problems is generally referred to as Drive-by Downloads:

The expression drive-by download is used in four increasingly strict meanings:

1.       Downloads which the user indirectly authorized but without understanding the consequences (e.g., by installing an unknown ActiveX component or Java applet).

2.       Any download that happens without knowledge of the user.

3.       Download of spyware, a computer virus or any kind of malware that happens without knowledge of the user. Drive-by downloads may happen by visiting a website, viewing an e-mail message or by clicking on a deceptive popup window: the user clicks on the window in the mistaken belief that, for instance, an error report from the PC itself is being acknowledged, or that an innocuous advertisement popup is being dismissed; in such cases, the "supplier" may claim that the user "consented" to the download although actually unaware of having initiated an unwanted or malicious software download.

4.       Download of malware through exploitation of a web browser, e-mail client or operating system vulnerability, without any user intervention whatsoever. Websites that exploit the Windows Metafile vulnerability (eliminated by a Windows update of 5 January 2006) may provide examples of "drive-by downloads" of this sort.


This morning I came across what looks like a serious case of a drive-by download where the weapon of choice was the widely trusted Windows Update: Microsoft Secretly Installs Firefox Extension Through WU.


I’m a huge fan of Windows Update – I think it’s dramatically improved client-side security and reliability. The combination of Windows Error Reporting and Windows Update allows system failures to be statistically tracked, focuses resources on those causing the most problems, and then delivers the fixes broadly and automatically. These two tools are incredibly important to the health of the Windows ecosystem, so I hope this report is inaccurate.




James Hamilton





Thursday, June 10, 2010 5:57:45 AM (Pacific Standard Time, UTC-08:00)

One of the most important attributes needed in a cloud solution is what I call cloud data freedom. Having the ability to move data out of the cloud quickly, efficiently, cheaply, and without restriction is, in my opinion, a mandatory prerequisite to trusting a cloud. In fact, you need the ability to move the data both ways. Moving in cheaply, efficiently, and quickly is often required just to get the work done. And the ability to move out cheaply, efficiently, quickly, and without restriction is the only way to avoid lock-in. Data movement freedom is the most important attribute of an open cloud and a required prerequisite to avoiding provider lock-in.


The issue came up in the comments on this post: Netflix on AWS where Jan Miczaika asked:


James, as long as Amazon is growing constantly and has a great culture of smart people it will work out fine. Should the going ever get tough (what I of course don't hope) these principles may be thrown overboard. It would not be the first time companies sacrifice long-term values for short-term profits.

This is all very hypothetical. Still, for strategic long-term planning, I believe it should be taken into account.


And I responded:

Jan, it is inarguably true that there have been instances of previously good companies making incredibly short-sighted decisions. It has happened before and it could happen again.

The point I'm making is not that there exists any company that is incapable of damn dumb decisions. My point is that the cloud computing model is a huge win economically. I agree with you that no company is assured to be great, customer focused, and thinking clearly forever. Disasters can happen. That's why I would never do business with a cloud provider that didn't have great support for export of LARGE amounts of data cost effectively. It's super important that the data not be locked in. I don't care so much about the low level control plane programming model -- I can change how I call those APIs. But it's super important that the data can be moved to another service easily. And this export service has to be cheap, and there is no way I would use the network for very high scale data movements. You have to assume that the data is going to keep growing, and so it's physical media export that you want. Recall Andrew Tanenbaum's "Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway."

I'm saying you need to use cloud computing, but I'm not saying you should trust one company to be the right answer forever. Don't step in without a good quality export service based upon physical media at a known and reasonable price.
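Tanenbaum's point is easy to check with back-of-envelope arithmetic. The sketch below compares a shipped box of disks against a sustained network transfer; the device capacities, transit time, and link speed are illustrative assumptions, not AWS figures.

```python
# Back-of-envelope: physical media shipment vs. network transfer,
# in the spirit of Tanenbaum's station wagon observation.
# All figures below are illustrative assumptions.

TB = 10**12  # bytes

shipped_bytes = 10 * 2 * TB      # ten 2 TB drives in one box
transit_seconds = 24 * 3600      # overnight courier

effective_bps = shipped_bytes * 8 / transit_seconds
print(f"Shipped media: {effective_bps / 1e9:.1f} Gbit/s effective")

link_bps = 100e6                 # a fast-for-2010 100 Mbit/s link
network_days = shipped_bytes * 8 / link_bps / 86400
print(f"Same 20 TB over a 100 Mbit/s link: {network_days:.0f} days")
```

The shipped box works out to roughly a couple of gigabits per second of effective bandwidth, and the gap only widens as data volumes grow faster than link speeds, which is exactly why physical media export matters.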

This morning AWS Import/Export announced that the service is now out of beta and that it now supports a programmatic web services interface. From the announcement earlier today:

AWS Import/Export accelerates moving large amounts of data into and out of AWS using portable storage devices for transport. The service is exiting beta and is now generally available. Also, a new web service interface augments the email-based interface that was available during the service's beta. Once a storage device is loaded with data for import or formatted for an export, the new web service interface makes it easy to initiate shipment to AWS in minutes, or to check import or export status in real-time.

You can use AWS Import/Export for:

·         Data Migration - If you have data you need to upload into the AWS cloud for the first time, AWS Import/Export is often much faster than transferring that data via the Internet.

·         Content Distribution - Send data you are computing or storing on AWS to your customers on portable storage devices.

·         Direct Data Interchange - If you regularly receive content on portable storage devices from your business associates, you can have the data sent directly to AWS for import into your Amazon S3 buckets.

·         Offsite Backup - Send full or incremental backups to Amazon S3 for reliable and redundant offsite storage.

·         Disaster Recovery - In the event that you need to quickly retrieve a large backup stored in Amazon S3, use AWS Import/Export to transfer the data to a portable storage device and deliver it to your site.

To use AWS Import/Export, you just prepare a portable storage device, and submit a Create Job request with the open source AWS Import/Export command line application, a third party tool, or by programming directly against the web service interface. AWS Import/Export will return a unique identifier for the job, a digital signature for authenticating your device, and an AWS address to which you ship your storage device. After copying the digital signature to your device, ship it along with its interface connectors and power supply to AWS.
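As a rough sketch of what the programmatic path above might look like, the Python below builds a YAML-style job manifest and shows, unexecuted, how a submission could be made with boto3's Import/Export client. The manifest field names, device ID, and address values are assumptions for illustration and should be checked against the service documentation.

```python
# Hedged sketch of preparing an AWS Import/Export job programmatically.
# Field names below follow the general shape of the service's YAML
# manifest but are assumptions to verify against current docs.

manifest_fields = {
    "manifestVersion": "2.0",
    "deviceId": "ABCDE",             # serial number of the storage device
    "eraseDevice": "no",
    "returnAddress": {
        "name": "Jane Operator",     # hypothetical return address
        "street1": "123 Example St",
        "city": "Seattle",
        "stateOrProvince": "WA",
        "postalCode": "98101",
        "country": "USA",
    },
}

def to_yaml(d, indent=0):
    """Minimal YAML serializer, sufficient for this nested-dict manifest."""
    lines = []
    for key, value in d.items():
        if isinstance(value, dict):
            lines.append(" " * indent + f"{key}:")
            lines.append(to_yaml(value, indent + 2))
        else:
            lines.append(" " * indent + f"{key}: {value}")
    return "\n".join(lines)

manifest = to_yaml(manifest_fields)

def submit_job(manifest):
    # Unexecuted sketch: requires AWS credentials and the boto3 package.
    import boto3
    client = boto3.client("importexport")
    resp = client.create_job(JobType="Import", Manifest=manifest,
                             ValidateOnly=False)
    # The response carries the job identifier and the device signature
    # that must be copied onto the device before shipping.
    return resp["JobId"], resp["Signature"]
```

The returned signature file is what authenticates the physical device when it arrives at AWS, matching the workflow described in the announcement.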

You can learn more about AWS Import/Export and get started using the web service at

Also announced this morning: AWS Management Console for S3 and 3 days ago: Amazon Cloudfront adds HTTPS Support, Lower prices, and Opens an NYC Edge Location. Things are moving pretty quickly right now.



James Hamilton





Thursday, June 10, 2010 5:18:00 AM (Pacific Standard Time, UTC-08:00)
 Monday, June 07, 2010

Last month I wrote about Solving World Problems with Economic Incentives. In that post I talked about the power of economic incentives when compared to regulatory body intervention. I’m not really against laws and regulations – the EPA, for example, has done some good work and much of what they do has improved the situation. But 9 times out of 10, good regulation is first blocked and/or watered down by lobby groups, what finally gets enacted is often not fully thought through and brings unintended consequences, it is often overly prescriptive (see Right Problem but Wrong Approach), and regulations are enacted at the speed of government (think continental drift – there is movement but it’s often hard to detect).


If an economic incentive can be carefully crafted such that it squarely targets the desired outcome rather than how to get there, wonderful things can happen. This morning I came across a great example of applying an economic incentive to drive a positive outcome rapidly.


First, some background on the base issue. I believe that web site latency has a fundamental impact on customer satisfaction, and there is considerable evidence that it drives better economic returns. See The Cost of Latency for more detail on this issue. Essentially I’m arguing that there really is an economic argument to reduce web page latency, and astute companies are acting on it today. If I’m right that economic incentives are enough, why isn’t the latency problem behind us?


The problem is that economic incentives only drive desired outcomes when there is a general, widely held belief that there is a direct correlation between the outcome and the improved economic condition. In the case of web page latency, I’ll claim the evidence from Google’s Steve Souders, Jake Brutlag, and Marissa Mayer, Bing’s Eric Schurman (now Amazon), Dave Artz from AOL, Phil Dixon from Shopzilla, and many others is very compelling. But compelling isn’t enough. The reason I still write about it is that it’s not widely believed. Many argue that web page latency isn’t highly correlated with better economic outcomes.
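To make the stakes concrete, here is a hedged bit of arithmetic using the roughly 1%-of-revenue-per-100 ms figure often cited from this body of work; the revenue number is hypothetical and the linear scaling is a simplifying assumption.

```python
# Illustrative latency-cost arithmetic. The 1%-per-100ms sensitivity
# is an assumed round number in the spirit of the cited talks, and
# the revenue figure is hypothetical.

annual_revenue = 1_000_000_000   # hypothetical $1B/year online business
impact_per_100ms = 0.01          # assumed 1% revenue impact per 100 ms

for added_ms in (100, 200, 500):
    loss = annual_revenue * impact_per_100ms * (added_ms / 100)
    print(f"{added_ms} ms slower -> ~${loss:,.0f}/year at risk")
```

Even under far more conservative assumptions, the dollars at risk dwarf the engineering cost of most latency work, which is the core of the economic argument.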


Regardless of your take on this important topic, I urge you to read Steve Souders’ post Velocity and the Bottom Line. By the way, Velocity 2010 is coming up and you should consider making the trip. It’s a good conference; the 2009 event produced some wonderful data and I expect 2010 to be at least as good. I plan to be down for a day to give a talk.


Returning to web latency. I really believe there is an economic incentive to improve web site latency. But, if this belief is not widely held, it has no impact. I think that is about to change. Google recently announced Google Now Counts Site Speed as a Ranking Factor. Silly amounts of money are spent on search engine optimization. Getting to the top of the ranking is worth big bucks. The economic value of improved ranking is widely believed and drives considerable behavior and investment today. It’s a very powerful tool. In fact, it’s so valuable that an entire industry has grown up around helping achieve better ranking. Ranking is a very powerful incentive.


What Google has done here is a tiny first step but it’s a very cool first step with lots of potential upside. If ranking is believed to be materially impacted by site performance, we are going to see the entire web speed up. This could be huge if Google keeps taking steps down this path. Steve Souders’ books High Performance Web Sites and Even Faster Web Sites will continue to have a bright future. Content distribution networks like Limelight, Akamai, and CloudFront will grow even faster. The very large cloud services providers like Amazon Web Services, with data centers all over the world, will continue to grow quickly. We are going to see accelerated datacenter building in Asia. Lots will change.


If Google continues to move down the path of making web site latency a key factor in site ranking, we are going to see a faster web. More! Faster!


Thanks to Todd Hoff of High Scalability for pointing me towards this one in Web Speed Can Push You Off Google’s Search Rankings.




James Hamilton





Monday, June 07, 2010 5:44:22 AM (Pacific Standard Time, UTC-08:00)
 Saturday, June 05, 2010

Industry trends come and go. The ones that stay with us and have lasting impact are those that fundamentally change the cost equation. Public clouds clearly pass this test. The potential savings approach 10x and, in cost sensitive industries, those that move to the cloud fastest will have a substantial cost advantage over those that don’t.


And, as much as I like saving money, the much more important game changer is speed of execution. Those companies depending upon public clouds will be noticeably more nimble. Project approval to delivery times fall dramatically when there is no capital expense to be approved. When the financial risk of new projects is small, riskier projects can be tried. The pace of innovation increases. Companies where innovation is tied to the financial approval cycle and the hardware ordering-to-install lag are at a fundamental disadvantage.


Clouds change companies for the better, clouds drive down costs, and clouds change the competitive landscape in industries. We have started what will be an exciting decade.


Earlier today I ran across a good article by Rodrigo Flores, CTO of newScale. In this article, Rodrigo says:


First, give up the fight: Enable the safe, controlled use of public clouds. There’s plenty of anecdotal and survey data indicating the use of public clouds by developers is large. A newScale informal poll in April found that about 40% of enterprises are using clouds – rogue, uncontrolled, under the covers, maybe. But they are using public clouds.


The move to the cloud is happening now. He also predicts:


IT operations groups are going to be increasingly evaluated against the service and customer satisfaction levels provided by public clouds. One day soon, the CFO may walk into the data center and ask, “What is the cost per hour for internal infrastructure, how do IT operations costs compare to public clouds, and which service levels do IT operations provide?” That day will happen this year.


This is a super important point. It was previously nearly impossible to know what it would cost to bring an application up and host it for its operational life. There was no credible alternative to hosting the application internally. Now, with care and some work, a comparison is possible and I expect that comparison to be made many times this year. This comparison won’t always be made accurately but the question will be asked and every company now has access to the data to be able to credibly make the comparison.  
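Here is a hedged sketch of that cost-per-hour comparison the CFO might ask for. Every input below is hypothetical, and utilization is the term that usually decides the answer, which is also why internal comparisons are so often done wrong.

```python
# Sketch of internal cost-per-useful-hour vs. cloud on-demand pricing.
# All inputs are hypothetical round numbers for illustration.

server_capex = 3000.0        # purchase price, USD
amortization_years = 3
power_space_admin = 0.06     # fully burdened opex, USD per hour
utilization = 0.15           # typical enterprise server utilization

hours = amortization_years * 365 * 24
internal_per_useful_hour = (server_capex / hours + power_space_admin) / utilization

cloud_on_demand = 0.34       # assumed on-demand price per hour

print(f"internal: ${internal_per_useful_hour:.2f} per useful hour")
print(f"cloud:    ${cloud_on_demand:.2f} per hour, paid only when used")
```

The point of the sketch is not the specific numbers but the structure: dividing through by utilization is the step most internal accounting skips, and it is what makes the comparison honest.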


I particularly like his point that self service is much better than “good service”. Folks really don’t want to waste time calling service personnel no matter how well trained those folks are. Customers just want to get their jobs done with as little friction as possible. Fewer phone calls are good.


Think like an ATM: Embrace self-service immediately. Bank tellers may be lovely people, but most consumers prefer ATMs for standard transactions. The same applies to clouds. The ability by the customer to get his or her own resources without an onerous process is critical.


Self service is cheaper, faster, and less frustrating for all involved. I’ve seen considerable confusion on this point. Many people tell me that customers want to be called on by sales representatives and they want the human interaction from the customer service team. To me, it just sounds like living in the past. These are old, slow, and inefficient models.


Public clouds are the new world order.  Read the full article at:   The Competitive Threat of Public Clouds.


James Hamilton





Saturday, June 05, 2010 5:34:58 AM (Pacific Standard Time, UTC-08:00)
 Monday, May 31, 2010

I did a talk at the Usenix Tech conference last year, Where does the Power Go in High Scale Data Centers. After the talk I got into a more detailed discussion with many folks from Netflix and Canada’s Research in Motion, the maker of the BlackBerry. The discussion ended up in a long lunch over a big table with folks from both teams. The common theme of the discussion was, predictably given the companies and folks involved, innovation in high scale services and how to deal with incredible growth rates. Both RIM and Netflix are very successful and, until you have experienced and attempted to manage internet growth rates, you really just don’t know. I'm impressed with what they are doing. Growth brings super interesting problems and I learned from both and really enjoyed spending time with them.


I recently came across an interesting talk by Santosh Rau, the Netflix Cloud Infrastructure Engineering Manager. The fact that Netflix actually has a cloud infrastructure engineering manager is what caught my attention. Netflix continues to innovate quickly and is moving fast with cloud computing.


My notes from Rau’s talk:

·         Details on Netflix

o   More than 10m subscribers

o   Over 100,000 DVD titles

o   50 distribution centers

o   Over 12,000 instant watch titles

·         Why is Netflix going to the cloud

o   Elastic infrastructure

o   Pay for what you use

o   Simple to deploy and maintain

o   Leverage datacenter geo-diversity

o   Leverage application services (queuing, persistence, security, etc.)

·         Why did Netflix choose Amazon Web Services

o   Massive scale

o   More mature services

o   Thriving, active developer community of over 400,000 developers with excellent support

·         Netflix goals for move to the cloud:

o   Improved availability

o   Operational simplicity

o   Architect to exploit the characteristic of the cloud

·         Services in cloud:

o   Streaming control service: stream movie content to customers

§  Architecture: Three Netflix services running in EC2 (replication, queueing, and streaming) with inter-service communication via SQS and persistent state in SimpleDB.

§  Good cloud workload in that usage can vary greatly, there is value in having regional data centers, and a better customer experience is possible by streaming content from locations near users

o   Encoding Service: Encodes movies in format required by diverse set of supported devices.

§  Good cloud workload in that it’s very computationally intense and, as new formats are introduced, massive encoding work needs to be done and there is value in doing it quickly (more servers for less time).

o   AWS Services used by Netflix

§  Elastic compute Cloud

§  Elastic Block Storage

§  Simple Queuing Service

§  SimpleDB

§  Simple Storage Service

§  Elastic Load Balancing

§  Elastic MapReduce

o   Developer Challenges:

§  Reliability and capacity

§  Persistence strategy

·         Oracle on EC2 over EBS vs MySQL vs SimpleDB

·         SimpleDB: Highly available replicating across zones

·         Eventually consistent (now also supports consistent reads; I love eventual consistency but…)

§  Data encryption and key management

§  Data replication and consistency
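The queue-decoupled pattern in the streaming architecture above — services that share no state directly, only a queue (SQS in Netflix's case) and a key-value store (SimpleDB) — can be sketched with in-process stand-ins; the title IDs and message format are invented for illustration.

```python
# Toy sketch of queue-decoupled services. A real deployment would use
# SQS and SimpleDB; here both are stood in for by in-process objects,
# and the message fields are invented for illustration.

import queue

work_queue = queue.Queue()   # stand-in for SQS
state_store = {}             # stand-in for SimpleDB

def encoding_request_producer(title_ids):
    """Enqueue encoding work without knowing who will process it."""
    for tid in title_ids:
        work_queue.put({"title_id": tid, "format": "mp4"})

def encoding_worker():
    """Drain the queue, recording results in the shared store."""
    while not work_queue.empty():
        msg = work_queue.get()
        # ... actual encoding would happen here ...
        state_store[msg["title_id"]] = "encoded:" + msg["format"]
        work_queue.task_done()

encoding_request_producer(["tt001", "tt002"])
encoding_worker()
print(state_store)
```

The design point is that either side can be scaled, restarted, or moved between regions independently, since the queue absorbs bursts and the store holds durable state — the property that makes the encoding workload such a good cloud fit.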


Predictably, the talk ended with “Netflix is hiring” but, in this case, it is actually worth mentioning. They are doing very interesting work and moving lightning fast. RIM is hiring as well:


The slides for the talk are at: slideshare.




James Hamilton





Monday, May 31, 2010 5:59:29 AM (Pacific Standard Time, UTC-08:00)

Disclaimer: The opinions expressed here are my own and do not necessarily represent those of current or past employers.

All Content © 2015, James Hamilton
Theme created by Christoph De Baene / Modified 2007.10.28 by James Hamilton