Sunday, January 09, 2011

Megastore is the data engine supporting Google App Engine. It’s a scalable, structured data store providing full ACID semantics within partitions but lower consistency guarantees across partitions.

 

I wrote up some notes on it back in 2008 Under the Covers of the App Engine Datastore and posted Phil Bernstein’s excellent notes from a 2008 SIGMOD talk: Google Megastore. But there has been remarkably little written about this datastore over the intervening couple of years until this year’s CIDR conference papers were posted. CIDR 2011 includes Megastore: Providing Scalable, Highly Available Storage for Interactive Services.

 

My rough notes from the paper:

·         Megastore is built upon BigTable

·         Bigtable supports fault-tolerant storage within a single datacenter

·         Synchronous replication based upon Paxos and optimized for long distance inter-datacenter links

·         Partitioned into a vast space of small databases each with its own replicated log

·         Each log stored across a Paxos cluster

·         Because they are so aggressively partitioned, each Paxos group only has to accept logs for operations on a small partition. However, the design does serialize updates on each partition

·         3 billion writes and 20 billion read transactions per day

·         Support for consistency unusual for a NoSQL database but driven by (what I believe to be) the correct belief that inconsistent updates make many applications difficult to write (see I Love Eventual Consistency but …)

·         Data Model:

·         The data model is declared in a strongly typed schema

·         There are potentially many tables per schema

·         There are potentially many entities per table

·         There are potentially many strongly typed properties per entity

·         Repeating properties are allowed

·         Tables can be arranged hierarchically where child tables point to root tables

·         Megastore tables are either entity group root tables or child tables

·         The root table and all child tables are stored in the same entity group

·         Secondary indexes are supported

·         Local secondary indexes index a specific entity group and are maintained consistently

·         Global secondary indexes index across entity groups, are updated asynchronously, and are only eventually consistent

·         Repeated indexes: supports indexing repeated values (e.g. photo tags)

·         Inline indexes provide a way to denormalize data from source entities into a related target entity as a virtual repeated column.

·         There are physical storage hints:

·         “IN TABLE” directs Megastore to store two tables in the same underlying BigTable

·         “SCATTER” attribute prepends a 2 byte hash to each key to cool hot spots on tables with monotonically increasing values like dates (e.g. a history table).

·         “STORING” clause on an index supports index-only-access by redundantly storing additional data in an index. This avoids the double access often required of doing a secondary index lookup to find the correct entity and then selecting the correct properties from that entity through a second table access. By pulling values up into the secondary index, the base table doesn’t need to be accessed to obtain these properties.

·         3 levels of read consistency:

·         Current: Last committed value

·         Snapshot: Value as of start of the read transaction

·         Inconsistent reads: used for cross entity group reads

·         Update path:

·         A transaction writes its mutations to the entity group’s write-ahead log and then applies the mutations to the data (write-ahead logging)

·         A write transaction always begins with a current read to determine the next available log position. The commit operation gathers the mutations into a log entry, assigns it an increasing timestamp, and appends it to the log, which is maintained using Paxos (a rough sketch of this write path follows these notes)

·         Update rates within an entity group are seriously limited by:

·         When there is log contention, one wins and the rest fail and must be retried.

·         Paxos only accepts a very limited update rate (order 10^2 updates per second).

·         Paper reports that “limiting updates within an entity group to a few writes per second per entity group yields insignificant write conflicts”

·         Implication: programmers must shard aggressively to get even moderate update rates and consistent update across shards is only supported using two phase commit which is not recommended.

·         Cross entity group updates are supported by:

·         Two-phase commit with the fragility that it brings

·         Queueing and asynchronously applying the changes

·         Excellent support for backup and redundancy:

·         Synchronous replication to protect against media failure

·         Snapshots and incremental log backups
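
To make the update path above concrete, here is a minimal single-process sketch of the per-entity-group commit sequence described in the paper: a current read to find the next available log position, mutations gathered into a log entry, and an append that succeeds only if no other writer claimed that position. All names are mine and a simple compare-and-check stands in for the replicated Paxos round; this is illustrative, not Megastore code.

```python
# Toy model of the Megastore per-entity-group write path (illustrative only).
# A real implementation would replace the append check with a Paxos round
# replicated across datacenters; a compare-and-check stands in for it here.
import time


class EntityGroup:
    def __init__(self):
        self.log = []          # write-ahead log: list of (timestamp, mutations)
        self.data = {}         # current state, built by applying log entries

    def next_log_position(self):
        """'Current read' step: find the next available log position."""
        return len(self.log)

    def try_append(self, position, mutations):
        """Append iff no other writer claimed this position (Paxos stand-in)."""
        if position != len(self.log):
            return False                      # contention: caller must retry
        self.log.append((time.time(), dict(mutations)))
        self.data.update(mutations)           # apply mutations after logging
        return True


def commit(group, mutations, retries=5):
    for _ in range(retries):
        pos = group.next_log_position()       # current read
        if group.try_append(pos, mutations):  # gather + append log entry
            return pos
    raise RuntimeError("write conflict: too much contention in this entity group")


if __name__ == "__main__":
    g = EntityGroup()
    print(commit(g, {"user:1:name": "Ada"}))   # -> 0
    print(commit(g, {"user:1:plan": "pro"}))   # -> 1
```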

 

Overall, an excellent paper with lots of detail on a nicely executed storage system. Supporting consistent reads and full ACID update semantics is impressive, although the restriction that an entity group can only be updated a “few times per second” is a real limitation.

 

The paper: http://www.cidrdb.org/cidr2011/Papers/CIDR11_Paper32.pdf.

 

Thanks to Zhu Han, Reto Kramer, and Chris Newcombe for all sending this paper my way.

 

                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Sunday, January 09, 2011 10:38:39 AM (Pacific Standard Time, UTC-08:00)  #    Comments [3] - Trackback
Services
 Thursday, December 16, 2010

Years ago I incorrectly believed special purpose hardware was a bad idea. What was a bad idea is high-markup, special purpose devices sold at low volume, through expensive channels. Hardware implementations are often best value measured in work done per dollar and work done per joule. The newest breed of commodity networking parts from Broadcom, Fulcrum, Dune (now Broadcom), and others is a beautiful example of Application Specific Integrated Circuits being the right answer for extremely hot code kernels that change rarely.

 

I’ve long been interested in highly parallel systems and in heterogeneous processing. General Purpose Graphics Processors are firmly hitting the mainstream with 17 of the Top 500 now using GPGPUs (Top 500: Chinese Supercomputer Reigns). You can now rent GPGPU clusters from EC2 $2.10/server/hour where each server has dual NVIDIA Tesla M2050 GPUs delivering a TeraFLOP per node. For more on GPGPUs, see HPC in the Cloud with GPGPUs and GPU Clusters in 10 minutes.

 

Some time ago Zach Hill sent me a paper writing up Radix sort using GPGPUs. The paper shows how to achieve a better than 3x speedup on NVIDIA GT200-based systems. For most of us, sort isn’t the most important software kernel we run, but I did find the detail behind the GPGPU-specific optimizations interesting. The paper is at http://www.mvdirona.com/jrh/TalksAndPapers/RadixSortTRv2.pdf and the abstract is below.

 

Abstract.

 

This paper presents efficient strategies for sorting large sequences of fixed-length keys (and values) using GPGPU stream processors. Compared to the state-of-the-art, our radix sorting methods exhibit speedup of at least 2x for all generations of NVIDIA GPGPUs, and up to 3.7x for current GT200-based models. Our implementations demonstrate sorting rates of 482 million key-value pairs per second, and 550 million keys per second (32-bit). For this domain of sorting problems, we believe our sorting primitive to be the fastest available for any fully-programmable microarchitecture. These results motivate a different breed of parallel primitives for GPGPU stream architectures that can better exploit the memory and computational resources while maintaining the flexibility of a reusable component. Our sorting performance is derived from a parallel scan stream primitive that has been generalized in two ways: (1) with local interfaces for producer/consumer operations (visiting logic), and (2) with interfaces for performing multiple related, concurrent prefix scans (multi-scan).

 

As part of this work, we demonstrate a method for encoding multiple compaction problems into a single, composite parallel scan. This technique yields a 2.5x speedup over bitonic sorting networks for small problem instances, i.e., sequences that can be entirely sorted within the shared memory local to a single GPU core.
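
As a rough illustration of why a prefix scan is the natural primitive here, the short Python sketch below implements a least-significant-digit radix sort where an exclusive scan of per-digit counts produces the scatter offsets; the paper’s contribution is doing this counting-and-scanning step in parallel on the GPU. The code below is mine, not from the paper.

```python
# Least-significant-digit radix sort built on an exclusive prefix scan.
# The scan of per-digit counts gives each key its output position, which is
# the step the paper's multi-scan primitive performs in parallel on the GPU.
def exclusive_scan(counts):
    total, out = 0, []
    for c in counts:
        out.append(total)
        total += c
    return out


def radix_sort(keys, bits=32, radix_bits=8):
    radix = 1 << radix_bits
    for shift in range(0, bits, radix_bits):
        counts = [0] * radix
        for k in keys:
            counts[(k >> shift) & (radix - 1)] += 1
        offsets = exclusive_scan(counts)           # scatter positions per digit
        out = [0] * len(keys)
        for k in keys:                             # stable scatter pass
            d = (k >> shift) & (radix - 1)
            out[offsets[d]] = k
            offsets[d] += 1
        keys = out
    return keys


print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
```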

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Thursday, December 16, 2010 9:06:43 AM (Pacific Standard Time, UTC-08:00)  #    Comments [3] - Trackback
Hardware
 Monday, December 06, 2010

Even working in Amazon Web Services, I’m finding the frequency of new product announcements and updates a bit dizzying. It’s amazing how fast the cloud is taking shape and the feature set is filling out. Utility computing has really been on fire over the last 9 months. I’ve never seen an entire new industry created and come fully to life this fast. Fun times.


Before joining AWS, I used to say that I had an inside line on what AWS was working on and what new features were coming in the near future.  My trick? I went to AWS customer meetings and just listened. AWS delivers what customers are asking for with such regularity that it’s really not all that hard to predict new product features soon to be delivered. This trend continues with today’s announcement. Customers have been consistently asking for a Domain Name Service and, today, AWS is announcing the availability of Route 53, a scalable, highly redundant, and reliable global DNS service.

 

The Domain Name System is essentially a global, distributed database that allows various pieces of information to be associated with a domain name.  In the most common case, DNS is used to look up the numeric IP address for a domain name. So, for example, I just looked up Amazon.com and found that one of the addresses being used to host Amazon.com is 207.171.166.252. And, when your browser accessed this blog (assuming you came here directly rather than using RSS) it would have looked up perspectives.mvdirona.com to get an IP address. This mapping is stored in a DNS “A” (address) record. Other popular DNS records are CNAME (canonical name), MX (mail exchange), and SPF (Sender Policy Framework). A full list of DNS record types is at: http://en.wikipedia.org/wiki/List_of_DNS_record_types. Route 53 currently supports:

                    A (address record)

                    AAAA (IPv6 address record)

                    CNAME (canonical name record)

                    MX (mail exchange record)

                    NS (name server record)

                    PTR (pointer record)

                    SOA (start of authority record)

                    SPF (sender policy framework)

                    SRV (service locator)

                    TXT (text record)

 

DNS, on the surface, is fairly simple and is easy to understand. What is difficult with DNS is providing absolute rock-solid stability at scales ranging from a request per day on some domains to billions on others. Running DNS rock-solid, low-latency, and highly reliable is hard.  And it’s just the kind of problem that loves scale. Scale allows more investment in the underlying service and supports a wide, many-datacenter footprint.

 

The AWS Route 53 Service is hosted in a global network of edge locations including the following 16 facilities:

·         United States

                    Ashburn, VA

                    Dallas/Fort Worth, TX

                    Los Angeles, CA

                    Miami, FL

                    New York, NY

                    Newark, NJ

                    Palo Alto, CA

                    Seattle, WA

                    St. Louis, MO

·         Europe

                    Amsterdam

                    Dublin

                    Frankfurt

                    London

·         Asia

                    Hong Kong

                    Tokyo

                    Singapore

 

Many DNS lookups are resolved in local caches but, when there is a cache miss, the request will need to be routed back to an authoritative name server.  The right approach to answering these requests with low latency is to route to the nearest datacenter hosting an appropriate DNS server.  In Route 53 this is done using anycast. Anycast is a cool routing trick where the same IP address range is advertised to be at many different locations. Using this technique, the same IP address range is advertised as being in each of the world-wide fleet of datacenters. This results in the request being routed to the nearest facility from a network perspective.

 

Route 53 routes to the nearest datacenter to deliver low-latency, reliable results. This is good but Route 53 is not the only DNS service that is well implemented over a globally distributed fleet of datacenters. What makes Route 53 unique is it’s a cloud service. Cloud means the price is advertised rather than negotiated.  Cloud means you make an API call rather than talking to a sales representative. Cloud means it’s a simple API and you don’t need professional services or a customer support contact. And cloud means it’s running NOW rather than tomorrow morning when the administration team comes in. Offering a rock-solid service is half the battle but it’s the cloud aspects of Route 53 that are most interesting.

 

Route 53 pricing is advertised and available to all:

·         Hosted Zones: $1 per hosted zone per month

·         Requests: $0.50 per million queries for the first billion queries per month and $0.25 per million queries beyond one billion per month
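
As a quick worked example of the tiered request pricing (my arithmetic, with hypothetical volumes):

```python
# Hypothetical month: 10 hosted zones and 3 billion queries.
zones, queries = 10, 3_000_000_000
first_tier = min(queries, 1_000_000_000)            # billed at $0.50/million
second_tier = queries - first_tier                  # billed at $0.25/million
cost = zones * 1.00 + first_tier / 1e6 * 0.50 + second_tier / 1e6 * 0.25
print(f"${cost:,.2f} per month")                    # -> $1,010.00 per month
```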

 

You can have it running in less time than it took to read this posting. Go to: ROUTE 53 Details. You don’t need to talk to anyone, negotiate a volume discount, hire a professional service team, call the customer support group, or wait until tomorrow. Make the API calls to set it up and, on average, 60 seconds later you are fully operating.
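
For a sense of what “make the API calls” looks like, here is a minimal sketch using the boto3 Python SDK (my choice of tooling, not something from the announcement); the domain name, record name, and IP address are placeholders.

```python
import uuid
import boto3

route53 = boto3.client("route53")

# Create a hosted zone for a placeholder domain.
zone = route53.create_hosted_zone(
    Name="example.com.",
    CallerReference=str(uuid.uuid4()),   # must be unique per create request
)
zone_id = zone["HostedZone"]["Id"]

# Add an A record pointing www at a placeholder address.
route53.change_resource_record_sets(
    HostedZoneId=zone_id,
    ChangeBatch={
        "Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]
    },
)
```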

 

                                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Monday, December 06, 2010 5:37:15 AM (Pacific Standard Time, UTC-08:00)  #    Comments [7] - Trackback
Services
 Monday, November 29, 2010

I love high-scale systems and, more than anything, I love data from real systems. I’ve learned over the years that no environment is crueler, less forgiving, or harder to satisfy than real production workloads. Synthetic tests at scale are instructive but nothing catches my attention like data from real, high-scale, production systems. Consequently, I really liked the disk population studies from Google and CMU at FAST2007 (Failure Trends in a Large Disk Population, Disk Failures in the Real World: What does an MTBF of 1,000,000 hours mean to you).  These two papers presented actual results from independent production disk populations of roughly 100,000 drives each. My quick summary of these 2 papers is basically “all you have learned about disks so far is probably wrong.”

 

Disk failures are often the #1 or #2 failing component in a storage system, usually just ahead of memory. Occasionally fan failures lead disk failures but that isn’t the common case. We now have publicly available data on disk failures but not much has been published on other component failure rates and even less on the overall storage stack failure rates. Cloud storage systems are multi-tiered, distributed systems involving 100s to even 10s of thousands of servers and huge quantities of software. Modeling the failure rates of discrete components in the stack is difficult but, with the large amount of component failure data available to large fleet operators, it can be done. What’s much more difficult to model are correlated failures.

 

Essentially, there are two challenges encountered when attempting to model overall storage system reliability: 1) availability of component failure data and 2) correlated failures. The former is available to very large fleet owners but is often unavailable publicly. Two notable exceptions are the disk reliability data from the two FAST’07 conference papers mentioned above. Other than these two data points, there is little credible component failure data publicly available.  Admittedly, component manufacturers do publish MTBF data but these data are often owned by the marketing rather than engineering teams and they range between optimistic and epic works of fiction.

 

Even with good quality component failure data, modeling storage system failure modes and data durability remains incredibly difficult. What makes this hard is the second issue above: correlated failure. Failures don’t always happen alone; many are correlated, and certain types of rare failures can take down the entire fleet or large parts of it. Just about every model assumes failure independence and then works out data durability to many decimal points. It makes for impressive models with long strings of nines but the challenge is that the model is only as good as its inputs. And one of the most important model inputs is the assumption of component failure independence, which is violated by every real-world system of any complexity. Basically, these failure models are good at telling you when your design is not good enough but they can never tell you how good your design actually is nor whether it is good enough.
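
A tiny simulation makes the point about the independence assumption: with the same per-replica failure probability, adding even a small chance of a correlated event (say, a rack or datacenter failure taking out every replica at once) comes to dominate the loss probability. The failure rates below are invented purely for illustration.

```python
import random

def p_object_loss(trials=200_000, replicas=3, p_disk=0.05, p_correlated=0.0):
    """Estimate the probability of losing every replica of an object in a
    period, with an optional correlated event that fails all replicas at once."""
    losses = 0
    for _ in range(trials):
        if random.random() < p_correlated:                        # correlated failure
            losses += 1
        elif all(random.random() < p_disk for _ in range(replicas)):
            losses += 1                                            # independent failures
    return losses / trials

print(p_object_loss())                       # independence only: ~p_disk**3 = 1.25e-4
print(p_object_loss(p_correlated=0.001))     # small correlated term dominates: ~1.1e-3
```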

 

Where the models break down is in modeling rare events and non-independent failures. The best way to understand common correlated failure modes is to study storage systems at scale over longer periods of time. This won’t help us understand the impact of very rare events. For example, two thousand years of history would not have helped us model or predict that an airplane would be flown into the World Trade Center.  And certainly the odds of it happening again 16 minutes and 20 seconds later would have seemed close to impossible. Studying historical storage system failure data will not help us understand the potential negative impact of very rare black swan events but it does help greatly in understanding the more common failure modes including correlated or non-independent failures.

 

Murray Stokely recently sent me Availability in Globally Distributed Storage Systems which is the work of a team from Google and Columbia University. They look at a high scale storage system at Google that includes multiple clusters of Bigtable, which is layered over GFS, which is implemented as a user-mode application over the Linux file system. You might remember Stokely from a post I did back in March titled Using a Market Economy. In this more recent paper, the authors study 10s of Google storage cells, each of which is comprised of between 1,000 and 7,000 servers, over a 1 year period. The storage cells studied are from multiple datacenters in different regions being used by different projects within Google.

 

I like the paper because it is full of data on a high-scale production system and it reinforces many key distributed storage system design lessons including:

·         Replicating data across multiple datacenters greatly improves availability because it protects against correlated failures.

o    Conclusion: Two way redundancy in two different datacenters is considerably more durable than 4 way redundancy in a single datacenter.

·         Correlation among node failures dwarfs all other contributions to unavailability in production environments.

·         Disk failures can result in permanent data loss but transitory node failures account for the majority of unavailability.

 

To read more: http://research.google.com/pubs/pub36737.html

 

The abstract of the paper:

Highly available cloud storage is often implemented with complex, multi-tiered distributed systems built on top of clusters of commodity servers and disk drives. Sophisticated management, load balancing and recovery techniques are needed to achieve high performance and availability amidst an abundance of failure sources that include software, hardware, network connectivity, and power issues. While there is a relative wealth of failure studies of individual components of storage systems, such as disk drives, relatively little has been reported so far on the overall availability behavior of large cloud-based storage services. We characterize the availability properties of cloud storage systems based on an extensive one year study of Google's main storage infrastructure and present statistical models that enable further insight into the impact of multiple design choices, such as data placement and replication strategies. With these models we compare data availability under a variety of system parameters given the real patterns of failures observed in our fleet.

 

                                                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Monday, November 29, 2010 5:55:51 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Services
 Sunday, November 21, 2010

Achieving a PUE of 1.10 is a challenge under any circumstances but the vast majority of facilities that do approach this mark are using air-side economization, essentially using outside air to cool the facility. Air-side economization brings some complexities such as requiring particulate filters and being less effective in climates that are both hot and humid. Nonetheless, even with the challenges, air-side economization is one of the best techniques, if not the best, for improving datacenter efficiency.

 

As a heat transport medium, water is both effective and efficient. The challenges of using water in open-circuit datacenter cooling designs are largely social and regulatory. Most of the planet is enshrouded by two fluids, air and water. It’s a bit ironic that emitting heat into one of the two is considered uninteresting while emitting heat into the other brings much more scrutiny. Dumping heat into air concerns few and is the cooling technique of choice for nearly all data centers (Google Saint-Ghislain Belgium being a notable exception). However, if the same heat in the same quantities is released into water, it causes considerably more environmental concern even though it’s the same amount of heat being released to the environment. Heat pollution is the primary concern.

 

The obvious technique to avoid some of the impact of thermal pollution is massive dilution. Use seawater or very high volumes of river water such that the local change is immeasurably small. Seawater has been used in industrial and building cooling applications. Seawater brings challenges in that it is incredibly corrosive, which drives up build and maintenance costs, but it has been used in a wide range of high-scale applications with success. Freshwater cooling relieves some of the corrosion concerns and has been used effectively for many large-scale cooling requirements including nuclear power plants. I’ve noticed there is often excellent fishing downstream of these facilities, so there clearly is substantial environmental impact caused by these thermal emissions, but this need not be the case. There exist water cooling techniques with far less environmental impact.

 

For example, the cities of Zurich and Geneva and the universities of ETH and Cornell use water for some of their heating & cooling requirements. This technique is effective and its impact on the environment can be made arbitrarily small. In a slightly different approach, the city of Toronto employs deep lake water cooling to cool buildings in its downtown core. In this design, the cold water intake is drawn in 3.1 miles off shore at a depth of 272’. The city of Toronto avoids any concerns about thermal pollution by using the exhaust water from the cooling system as its utility water intake, so the slightly warmer water is not directly released back into the environment.

 

Given the advantages of water over air in cooling applications and given that the environmental concerns can be mitigated, why not use the technique more broadly in datacenters? One of the prime reasons is that water is not always available. Another is that regulatory concerns bring more scrutiny and even excellent designs without measurable environmental impact will still take longer to get approved than conventional air-cooled approaches. However, it can be done and it does produce a very power efficient facility. The Deepgreen datacenter project in Switzerland is perhaps the best example I’ve seen so far.

Before looking at the innovative mechanical systems used in Deepgreen, the summary statistics look excellent:

·         46MW with 129k sq ft of raised floor (with upgrade to 70MW possible)

·         Estimated PUE of 1.1

·         Hydro and nuclear sourced power

·         356 W/sq ft average

·         5,200 racks with average rack power of 8.8kW and maximum rack power of 20kW

·         Power cost: $0.094/kWh (compares well across the EU)

·         28 mid-voltage 2.5 MW generators with 48 hours of onsite diesel

 

The 46MW facility is located in the heart of Switzerland on Lake Walensee:

 


Google Maps: http://maps.google.com/maps?f=q&source=s_q&hl=de&geocode=&q=Werkhof+Bi%C3%A4sche,+Mollis,+Schweiz&sll=37.0625,-95.677068&sspn=45.601981,84.990234&ie=UTF8&hq=&hnear=Werkhof,+8872+Mollis,+Glarus,+Schweiz&ll=47.129016,9.09462&spn=0.307857,0.663986&t=p&z=11

The overall datacenter is well engineered but it is the mechanical systems that are most worthy of note. Following on the diagram below, this facility is cooled using 43F source water from 197’ below the surface. The source water is brought in through dual redundant intake pipes to the pumping station with 6 high-capacity pumps in a 2*(N+1) configuration. The pumps move 668,000 gallons per hour at full cooling load.
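
As a rough back-of-the-envelope check on those numbers (my arithmetic, not figures from Deepgreen): carrying the full 46MW heat load in 668,000 gallons per hour of water implies a temperature rise of roughly 16C (about 28F) across the facility.

```python
# Q = m_dot * c_p * deltaT, solved for the water temperature rise.
heat_w = 46e6                             # facility heat load at full build-out, W
flow_kg_per_s = 668_000 * 3.785 / 3600    # gallons/hour -> kg/s (1 gal ~ 3.785 kg)
c_p = 4186                                # specific heat of water, J/(kg*K)

delta_t_c = heat_w / (flow_kg_per_s * c_p)
print(f"{delta_t_c:.1f} C rise ({delta_t_c * 9 / 5:.1f} F)")   # ~15.7 C (~28.2 F)
```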

 

The fairly clean lake water is run through a heat exchanger (not shown in the diagram below) to cool the closed-system, chilled water loop used in the datacenter. The use of a heat exchanger avoids bringing impurities or life forms into the datacenter chilled water loop. The chilled water loop forms part of a conventional within-the-datacenter cooling system design. The difference is they have completely eliminated process-based cooling (air conditioning) and cooling towers, avoiding both the purchase cost and the power this equipment would have consumed. In the diagrams below you’ll see the Deepgreen design followed by a conventional datacenter cooling system for comparison purposes:

 

Conventional datacenter mechanical system for comparison:

 

The Deepgreen facility mitigates the impact of thermal pollution through a combination of dilution, low deltaT, and deep water release.

 

I’ve been conversing with Andre Oppermann, the CTO of Deepgreen, for nearly a year on this project. Early on, I was skeptical they would be able to work through the environmental concerns in any reasonable time frame. I wasn’t worried about the design – it’s well engineered. My concerns were primarily centered around slowdowns in permitting and environmental impact studies. They have done a good job of working through those issues and I really like the resultant design. Thanks to Andre for sending all this detail my way. It’s a super interesting project and I’m glad we can now talk about it publicly.

 

If you are interested in a state of the art facility in Switzerland, I recommend you contact Andre, the CTO of Deepgreen, at: oppermann@deepgreen.ch.

 

                                                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Sunday, November 21, 2010 9:19:42 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Hardware
 Saturday, November 20, 2010

I’m interested in low-cost, low-power servers and have been watching the emerging market for these systems since 2008 when I wrote CEMS: Low-Cost, Low-Power Servers for Internet Scale Services (paper, talk).  ZT Systems just announced the R1081e, a new ARM-based server with the following specs:

·         STMicroelectronics SPEAr 1310 with dual ARM® Cortex™-A9 cores

·         1 GB of 1333MHz DDR3 ECC memory embedded

·         1 GB of NAND Flash

·         Ethernet connectivity

·         USB

·         SATA 3.0

 It’s a shared infrastructure design where each 1RU module has 8 of the above servers. Each module includes:

·         8 “System on Modules“ (SOMs)

·         ZT-designed backplane for power and connectivity

·         One 80GB SSD per SOM

·         IPMI system management

·         Two Realtek 4+1 1Gb Ethernet switches on board with external uplinks

·         Standard 1U rack mount form factor

·         Ubuntu Server OS

·         250W 80+ Bronze Power Supply

 

Each module is under 80W, so a rack with 40 compute modules would only draw 3.2kW for 320 low-power servers and a total of 640 cores/rack. Weaknesses of this approach are: only 2 cores per server, only 1GB of memory per server, and the cores appear to run at only 600MHz (http://www.32bitmicro.com/manufacturers/65-st/552-spear-1310-dual-cortex-a9-for-embedded-applications-). Four-core ARM parts and larger physical memory support are both under development.
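
A quick check of that rack-level arithmetic, using the figures above (8 servers per 1U module, dual-core parts, under 80W per module, 40 modules per rack):

```python
modules_per_rack = 40
servers_per_module = 8
cores_per_server = 2
watts_per_module = 80                                   # upper bound per module

servers = modules_per_rack * servers_per_module         # 320 servers per rack
cores = servers * cores_per_server                      # 640 cores per rack
rack_kw = modules_per_rack * watts_per_module / 1000    # 3.2 kW per rack
print(servers, cores, rack_kw)
```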

 

Competitors include SeaMicro with an Atom-based design (SeaMicro Releases Innovative Intel Atom Server) and the recently renamed Calxeda (previously Smooth-Stone), which has an ARM-based product under development.

 

Other notes on low-cost, low-powered servers:

·         SeaMicro Releases Innovative Intel Atom Server

·         When Very Low-Power, Low-Cost Servers don’t make sense

·         Very Low-Power Server Progress

·         The Case for Low-Cost, Low-Power Servers

·         2010 the Year of the Microslice Servers

·         Linux/Apache on ARM processors

·         ARM Cortex-A9 SMP Design Announced

  

From Datacenter Knowledge: New ARM-Based Server from ZT systems

 

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Saturday, November 20, 2010 8:06:34 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Hardware
 Wednesday, November 17, 2010

Earlier this week Clay Magouyrk sent me a pointer to some very interesting work: A Couple More Nails in the Coffin of the Private Compute Cluster: Benchmarks for the Brand New Cluster GPU Instance on Amazon EC2.

 

This article has detailed benchmark results from runs on the new Cluster GPU Instance type and leads in with:

During the past few years it has been no secret that EC2 has been the best cloud provider for massive scale, but loosely connected scientific computing environments. Thankfully, many workflows we have encountered have performed well within the EC2 boundaries. Specifically, those that take advantage of pleasantly parallel, high-throughput computing workflows. Still, the AWS approach to virtualization and available hardware has made it difficult to run workloads which required high bandwidth or low latency communication within a collection of distinct worker nodes. Many of the AWS machines used CPU technology that, while respectable, was not up to par with the current generation of chip architectures. The result? Certain use cases simply were not a good fit for EC2 and were easily beaten by in-house clusters in benchmarking that we conducted within the course of our research. All of that changed when Amazon released their Cluster Compute offering.

 

The author goes on to run the Scalable HeterOgeneous Computing (SHOC) Benchmark Suite, compare EC2 with native performance, and conclude:

 

With this new AWS offering, the line between internal hardware and virtualized, cloud-based hardware for high performance computing using GPUs has indeed been blurred.

 

Finally a run with a Cycle Computing customer workload:

Based on the positive results of our SHOC benchmarking, we approached a Fortune 500 Life Science client and a Finance/Insurance client who develop and use their own GPU-accelerated software, to run their applications on the GPU-enabled Cluster Compute nodes. For both clients, the applications perform a large number of Monte Carlo simulations for a given set of initial data, all pleasantly parallel. The results, similar to the SHOC result, were that the EC2 GPU-enabled Cluster Compute nodes performed as well as, or better than, the in-house hardware maintained by our clients.

 

Even if you only have a second, give the results a scan: http://blog.cyclecomputing.com/2010/11/a-couple-more-nails-in-the-coffin-of-the-private-compute-cluster-gpu-on-cloud.html.

 

                                                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Wednesday, November 17, 2010 6:15:32 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Services
 Monday, November 15, 2010

A year and half ago, I did a blog post titled heterogeneous computing using GPGPUs and FPGAs. In that note I defined heterogeneous processing as the application of processors with different instruction set architectures (ISAs) under direct application programmer control and pointed out that this really isn’t all that new a concept. We have had multiple ISAs in systems for years. IBM mainframes had I/O processors (Channel I/O Processors) with a different ISA than the general CPUs, many client systems have dedicated graphics coprocessors, and floating point units used to be independent from the CPU instruction set before that functionality was pulled up onto the chip. The concept isn’t new.

 

What is fairly new is 1) the practicality of implementing high-use software kernels in hardware and 2) the availability of commodity priced parts capable of vector processing. Looking first at moving custom software kernels into hardware, Field Programmable Gate Arrays (FPGA) are now targeted by some specialized high level programming languages. You can now write in a subset of C++ and directly implement commonly used software kernels in hardware. This is still the very early days of this technology but some financial institutions have been experimenting with moving computationally expensive financial calculations into FPGAs to save power and cost. See Platform-Based Electronic System-Level (ESL) Synthesis for more on this.

 

The second major category of heterogeneous computing is much further evolved and is now beginning to hit the mainstream. Graphics Processing Units (GPUs) essentially are vector processing units in disguise.  They originally evolved as graphics accelerators but it turns out a general purpose graphics processor can form the basis of an incredible Single Instruction Multiple Data (SIMD) processing engine. Commodity GPUs have most of the capabilities of the vector units in early supercomputers. What’s missing is they have been somewhat difficult to program in that the pace of innovation is high and each model of GPU has differences in architecture and programming models. It’s almost impossible to write code that will directly run over a wide variety of different devices. And the large memories in these graphics processors typically are not ECC protected. An occasional pixel or two wrong doesn’t really matter in graphical output but you really do need ECC memory for server side computing.

 

Essentially we have commodity vector processing units that are hard to program and lack ECC. What to do? Add ECC memory and an abstraction layer that hides many of the device-to-device differences. With those two changes, we have amazingly powerful vector units available at commodity prices.  One abstraction layer that is getting fairly broad pickup is Compute Unified Device Architecture, or CUDA, developed by NVIDIA. There are now CUDA runtime support libraries for many programming languages including C, FORTRAN, Python, Java, Ruby, and Perl.

 

Current generation GPUs are amazingly capable devices. I’ve covered the speeds and feeds of a couple in past postings: ATI RV770 and the NVIDIA GT200.

 

Bringing it all together, we have commodity vector units with ECC and an abstraction layer that makes it easier to program them and allows programs to run unchanged as devices are upgraded. Using GPUs to host general compute kernels is generically referred to as General Purpose Computation on Graphics Processing Units.  So what is missing at this point?  The pay-as-you-go economics of cloud computing.

 

You may recall I was excited last July when Amazon Web Services announced the Cluster Compute Instance: High Performance Computing Hits the Cloud. The EC2 Cluster Compute Instance is capable but lacks a GPU:

·         23GB memory with ECC

·         64GB/s main memory bandwidth

·         2 x Intel Xeon X5570 (quad-core Nehalem)

·         2 x 845GB HDDs

·         10Gbps Ethernet Network Interface

 

What Amazon Web Services just announced is a new instance type with the same core instance specifications as the cluster compute instance above, the same high-performance network, but with the addition of two NVIDIA Tesla M2050 GPUs in each server. See supercomputing at 1/10th the Cost. Each of these GPGPUs is capable of over 512 GFLOPs and so, with two of these units per server, there is a booming teraFLOP per node.
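
Working through that arithmetic (treating the “over 512 GFLOPs” figure as the M2050’s roughly 515 GFLOPS double-precision peak, which is my assumption):

```python
gflops_per_gpu = 515          # Tesla M2050 peak double precision, approximate
gpus_per_node = 2
node_hour_usd = 2.10          # cg1.4xlarge on-demand price quoted below

node_tflops = gflops_per_gpu * gpus_per_node / 1000       # ~1.03 TFLOPS per node
usd_per_tflop_hour = node_hour_usd / node_tflops          # ~$2.04 per TFLOPS-hour
print(f"{node_tflops:.2f} TFLOPS/node, ${usd_per_tflop_hour:.2f} per TFLOPS-hour")
```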

 

Each server in the cluster is equipped with a 10Gbps network interface card connected to a constant bisection bandwidth networking fabric. Any node can communicate with any other node at full line rate. It’s a thing of beauty and a forerunner of what every network should look like.

 

There are two full GPUs in each Cluster GPU instance each of which dissipates 225W TDP. This felt high to me when I first saw it but, looking at work done per watt, it’s actually incredibly good for workloads amenable to vector processing.  The key to the power efficiency is the performance. At over 10x the performance of a quad core x86, the package is both power efficient and cost efficient.

 

The new cg1.4xlarge EC2 instance type:

·         2 x  NVIDIA Tesla M2050 GPUs

·         22GB memory with ECC

·         2 x Intel Xeon X5570 (quad-core Nehalem)

·         2 x 845GB HDDs

·         10Gbps Ethernet Network Interface

 

With this most recent announcement, AWS now has dual quad core servers each with dual GPGPUs connected by a 10Gbps full-bisection bandwidth network for $2.10 per node hour. That’s $2.10 per teraFLOP. Wow.

·         The Amazon Cluster GPU Instance type announcement: Announcing Cluster GPU Instances for EC2

·         More information on the EC2 Cluster GPU instances: http://aws.amazon.com/ec2/hpc-applications/

                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

 

Monday, November 15, 2010 4:57:46 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Services
 Sunday, November 14, 2010

The Top 500 Super Computer Sites list was just updated and AWS Compute Cluster is now officially in the top ½ of the list. That means when you line up all the fastest, multi-million dollar, government lab sponsored super computers from #1 through to #500, the AWS Compute Cluster Instance is at  #231. 

 

Rank: 231

Site: Amazon Web Services, United States

System: Amazon EC2 Cluster Compute Instances - Amazon EC2 Cluster, Xeon X5570 2.95 Ghz, 10G Ethernet / 2010 (Self-made)

Cores: 7,040

Rmax: 41.82 TFlop/s

Rpeak: 82.51 TFlop/s

One of the fastest supercomputers in the world for $1.60/node hour. Cloud computing changes the economics in a pretty fundamental way.

More details at: High Performance Computing Hits the Cloud.

 

                                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

Sunday, November 14, 2010 11:21:19 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Services
 Friday, November 12, 2010

Kushagra Vaid presented Datacenter Power Efficiency at HotPower ’10. The URL to the slides is below and my notes follow. Interesting across the board but most notable for all the detail on the four major Microsoft datacenters:

 

·         Services speeds and feeds:

o   Windows LiveID: More than 1B authentications/day

o   Exchange Hosted Services: 2 to 4B email messages per day

o   MSN: 550M unique visitors monthly

o   Hotmail: 400M active users

o   Messenger: 320M active users

o   Bing: >2B queries per month

·         Microsoft Datacenters: 141MW total, 2.25M sq ft

o   Quincy:

§  27MW

§  500k sq ft total

§  54W/sq ft

o   Chicago:

§  60MW

§  707k sq ft total

§  85W/sq ft

§  Over $500M invested

§  $8.30/W (assuming $500M cost)

§  3,400 tons of steel

§  190 miles of conduit

§  2,400 tons of copper

§  26,000 yards of concrete

§  7.5 miles of chilled water piping

§  1.5M man hours of labor

§  Two floors of IT equipment:

·         Lower floor: medium reliability container bay (standard ISO containers supplied by Dell)

·         Upper floor: high reliability traditional colo facility

§  Note the picture of the upper floor shows colo cages suggesting that Microsoft may not be using this facility for their internal workloads.

§  PUE: 1.2 to 1.5

§  3,000 construction related jobs

o   Dublin:

§  27MW

§  570k sq ft total

§  47W/sq ft

o   San Antonio:

§  27MW

§  477k sq ft total

§  57W/sq ft

·         The power densities range between 45 and 85W/sq ft, which is incredibly low for relatively new builds. I would have expected something in the 200W/sq ft to 225W/sq ft range and perhaps higher. I suspect the floor space numbers are gross floor space numbers including mechanical, electrical, and office space rather than raised floor.

·         Kushagra reports typical build costs range from $10 to $15/W

o   Based upon the Chicago data point, the Microsoft build cost is around $8/W (a quick check of these numbers follows these notes). Perhaps a bit more in the other facilities in that Chicago isn’t fully redundant on the lower floor whereas the generator counts at the other facilities suggest they are.

·         Gen 4 design

o   Modular design but the use of ISO standard containers has been discontinued. However, the modules are prefabricated and delivered by flatbed truck

o   No mechanical cooling

§  Very low water consumption

o   30% to 50% lower costs

o   PUE of 1.05 to 1.1

·         Analysis of AC vs DC power distribution concluding that DC is more efficient at low loads and AC at high loads

o   Overall, the best designs are within 1 to 2% of each other

·         Recommends higher datacenter temperatures

·         Key observation:

o   The best price/performance and power/performance is often found in lower-power processors

·         Kushagra found substantial power savings using C6 and showed server idle power can be dropped to 31% of full load power

o   Good servers typically run in the 45 to 50% range and poor designs can be as bad as 80%
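
A quick check of the density and build-cost numbers reported above (simple arithmetic on the figures as given; see the earlier caveat that the floor space is likely gross rather than raised-floor):

```python
facilities = {                      # name: (critical load MW, total sq ft)
    "Quincy":      (27, 500_000),
    "Chicago":     (60, 707_000),
    "Dublin":      (27, 570_000),
    "San Antonio": (27, 477_000),
}
for name, (mw, sqft) in facilities.items():
    print(f"{name:12s} {mw * 1e6 / sqft:5.1f} W/sq ft")   # 54, 85, 47, 57 W/sq ft

print(f"Chicago build cost: ${500e6 / 60e6:.2f}/W")        # ~$8.33/W on the reported $500M
```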

 

The slides: http://mvdirona.com/jrh/TalksAndPapers/KushagraVaidMicrosoftDataCenters.pdf

 

                                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Friday, November 12, 2010 12:02:55 PM (Pacific Standard Time, UTC-08:00)  #    Comments [6] - Trackback
Services
 Sunday, October 31, 2010

I did a talk earlier this week on the sea change currently taking place in datacenter networks. In Datacenter Networks are in my Way I start with an overview of where the costs are in a high scale datacenter. With that backdrop, we note that networks are fairly low power consumers relative to the total facility consumption and not even close to the dominant cost. Are they actually a problem? The rest of the talk argues that networks are actually a huge problem across the board including cost and power. Overall, networking gear lags behind the rest of the high-scale infrastructure world, blocks many key innovations, and actually is both a cost and a power problem when we look deeper.

 

The overall talk agenda:

·         Datacenter Economics

·         Is Net Gear Really the Problem?

·         Workload Placement Restrictions

·         Hierarchical & Over-Subscribed

·         Net Gear: SUV of the Data Center

·         Mainframe Business Model

·         Manually Configured & Fragile at Scale

·         New Architecture for Networking

 

In a classic network design, there is more bandwidth within a rack and more within an aggregation router than across the core. This is because the network is over-subscribed. Consequently, instances of a workload often need to be placed topologically near to other instances of the workload, near storage, near app tiers, or on the same subnet. All these placement restrictions make the already over-constrained workload placement problem even more difficult. The result is either the constraints are not met, which yields poor workload performance, or the constraints are met but overall server utilization is lower due to accepting these constraints. What we want is all points in the datacenter equidistant and no constraints on workload placement.
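
To make “over-subscribed” concrete, here is a toy calculation with invented but typical port counts; the ratios are illustrative only.

```python
# Over-subscription = bandwidth offered into a layer / uplink bandwidth out of it.
servers_per_rack, server_gbps = 40, 1          # 40 x 1GbE hosts per rack
tor_uplinks, uplink_gbps = 4, 10               # 4 x 10GbE uplinks per top-of-rack switch

rack_ingress = servers_per_rack * server_gbps  # 40 Gbps offered by the hosts
rack_egress = tor_uplinks * uplink_gbps        # 40 Gbps leaving the rack
print(f"rack: {rack_ingress / rack_egress:.0f}:1")          # 1:1 at the rack here

racks_per_agg, agg_core_uplinks = 20, 8        # 20 racks into 8 x 10GbE core uplinks
agg_ingress = racks_per_agg * rack_egress      # 800 Gbps offered to the aggregation layer
agg_egress = agg_core_uplinks * uplink_gbps    # 80 Gbps toward the core
print(f"aggregation: {agg_ingress / agg_egress:.0f}:1")     # 10:1 over-subscribed
```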

 

Continuing on the over-subscription problem mentioned above, data intensive workloads like MapReduce and high performance computing workloads run poorly on oversubscribed networks.  It’s not at all uncommon for a MapReduce workload to transport the entire data set at least once over the network during a job run. The cost of providing a flat, all-points-equidistant network is so high that most just accept the constraint and either run MapReduce poorly or only run it in narrow parts of the network (accepting workload placement constraints).

 

Net gear doesn’t consume much power relative to total datacenter power consumption – other gear in the data center is, in aggregate, much worse. However, network equipment power is absolutely massive today and it is trending up fast. A fully configured Cisco Nexus 7000 requires 8 circuits of 5kW each. Admittedly some of that power is for redundancy but how can 120 ports possibly require as much power provisioned as 4 average sized full racks of servers? Net gear is the SUV of the datacenter.

 

The network equipment business model is broken. We love the server business model where we have competition at the CPU level, more competition at the server level, and an open source solution for control software.  In the networking world, it’s a vertically integrated stack and this slows innovation and artificially holds margins high. It’s a mainframe business model.

New solutions are now possible with competing merchant silicon from Broadcom, Marvell, and Fulcrum and competing switch designs built on all three. We don’t yet have the open source software stack but there are some interesting possibilities on the near term horizon with OpenFlow being perhaps the most interesting enabler. More on the business model and why I’m interested in OpenFlow at: Networking: The Last Bastion of Mainframe Computing.

 

Talk slides: http://mvdirona.com/jrh/TalksAndPapers/JamesHamilton_POA20101026_External.pdf

 

                                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Sunday, October 31, 2010 11:30:28 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Hardware | Services
 Thursday, October 21, 2010

What happens when you really, really, focus on efficient infrastructure and driving down costs while delivering a highly available, high performance service? Well, it works.  Costs really do fall and the savings can be passed on to customers.

 

AWS prices have been falling for years but this is different.  It’s now possible to offer a small EC2 instance for free. You can now have 750 hours of EC2 usage per month without charge.

 

The details:

 

·         750 hours of Amazon EC2 Linux Micro Instance usage (613 MB of memory and 32-bit and 64-bit platform support) – enough hours to run continuously each month*

·         750 hours of an Elastic Load Balancer plus 15 GB data processing*

·         10 GB of Amazon Elastic Block Storage, plus 1 million I/Os, 1 GB of snapshot storage, 10,000 snapshot Get Requests and 1,000 snapshot Put Requests*

·         5 GB of Amazon S3 storage, 20,000 Get Requests, and 2,000 Put Requests*

·         30 GB per month of internet data transfer (15 GB of data transfer “in” and 15 GB of data transfer “out” across all services except Amazon CloudFront)*

·         25 Amazon SimpleDB Machine Hours and 1 GB of Storage**

·         100,000 Requests of Amazon Simple Queue Service**

·         100,000 Requests, 100,000 HTTP notifications and 1,000 email notifications for Amazon Simple Notification Service**

 

More information: http://aws.amazon.com/free/

 

                                      --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

 

Thursday, October 21, 2010 4:35:44 PM (Pacific Standard Time, UTC-08:00)  #    Comments [3] - Trackback
Services
 Tuesday, October 19, 2010

Long time Amazon Web Services alum Jeff Barr has written a book on AWS. Jeff’s been with AWS since the very early days and he knows the services well. The new book, Host Your Web Site in the Cloud: Amazon Web Services Made Easy, covers each of the major AWS services and how to write code against them, with code examples in PHP. It covers S3, EC2, SQS, EC2 Monitoring, Auto Scaling, Elastic Load Balancing, and SimpleDB.

 

 Recommended if you are interested in Cloud Computing and AWS: http://www.amazon.com/Host-Your-Web-Site-Cloud/dp/0980576830.

 

                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Tuesday, October 19, 2010 7:21:29 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Services
 Sunday, October 10, 2010

This morning I came across an article written by Sid Anand, an architect at Netflix, that is super interesting. I liked it for two reasons: 1) it talks about the move of substantial portions of a high-scale web site to the cloud, some of how it was done, and why it was done, and 2) it gives best practices on AWS SimpleDB usage.

 

I love articles about how high scale systems work. Some past postings:

 

The article starts off by explaining why Netflix decided to move their infrastructure to the cloud:

 

Circa late 2008, Netflix had a single data center. This single data center raised a few concerns. As a single-point-of-failure, it represented a liability – data center outages meant interruptions to service and negative customer impact. Additionally, with growth in both streaming adoption and subscription levels, Netflix would soon outgrow this data center – we foresaw an imminent need for more power, better cooling, more space, and more hardware.

 

Our option was to build more data centers. Aside from high upfront costs, this endeavor would likely tie up key engineering resources in data center scale out activities, making them unavailable for new product initiatives.  Additionally, we recognized the management of multiple data centers to be a complex task. Building out and managing multiple data centers seemed a risky distraction.

 

Rather than embarking on this path, we chose a more radical one. We decided to leverage one of the leading IAAS offerings at the time, Amazon Web Services. With multiple, large data centers already in operation and multiple levels of redundancy in various web services (e.g. S3 and SimpleDB), AWS promised better availability and scalability in a relatively short amount of time.

 

By migrating various network and back-end operations to a 3rd party cloud provider, Netflix chose to focus on its core competency: to deliver movies and TV shows.

 

I’ve read considerable speculation over the years on the difficulty of moving to cloud services. Some I agree with – these migrations do take engineering investment – while other reports seem less well thought through, focusing mostly on repeating concerns speculated upon by others. Often, the information content is light.

 

I know the move to the cloud can be done and is done frequently because, where I work, I’m lucky enough to see it happen every day. But the Netflix example is particularly interesting in that 1) Netflix is a fairly big enterprise with a market capitalization of $7.83B – moving this infrastructure is substantial and represents considerable complexity. It is a great example of what can be done; 2) Netflix is profitable and has no economic need to make the change – they made the decision to avoid distraction and stay focused on the innovation that made the company as successful as it is; and 3) they are willing to contribute their experiences back to the community. Thanks to Sid and Netflix for the latter.

 

For more detail, check out the more detailed document that Sid Anand has posted: Netflix’s Transition to High-Availability Storage Systems.

 

For those SimpleDB readers, here’s a set of SimpleDB best practices from Sid’s write-up:

 

·         Partial or no SQL support. Loosely-speaking, SimpleDB supports a subset of SQL

o   Do GROUP BY and JOIN operations in the application layer

o   One way to avoid the need for JOINs is to denormalize multiple Oracle tables into a single logical SimpleDB domain.

·         No relations between domains

o   Compose relations in the application layer

·         No transactions

o   Use SimpleDB’s Optimistic Concurrency Control API: ConditionalPut and ConditionalDelete

·         No Triggers

o   Do without

·         No PL/SQL

o   Do without

·         No schema - This is non-obvious. A query for a misspelled attribute name will not fail with an error

o   Implement a schema validator in a common data access layer

·         No sequences

o   Sequences are often used as primary keys

§  In this case, use a naturally occurring unique key. For example, in a Customer Contacts domain, use the customer’s mobile phone number as the item key

§  If no naturally occurring unique key exists, use a UUID

o   Sequences are also often used for ordering

§  Use a distributed sequence generator

·         No clock operations

o   Do without

·         No constraints. Specifically, no uniqueness, no foreign key, no referential & no integrity constraints

o   Application can check constraints on read and repair data as a side effect. This is known as read-repair. Read repair should use the ConditionalPut or ConditionalDelete API to make atomic changes (a sketch of this pattern follows this list)
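
Below is a minimal sketch of that read-repair-with-conditional-put pattern using a version attribute. The boto SimpleDB calls (get_attributes / put_attributes with expected_value) reflect my assumption about tooling and exact parameters rather than anything from Sid’s write-up; domain, item, and attribute names are placeholders.

```python
# Read-repair with optimistic concurrency on SimpleDB (illustrative sketch).
import boto

sdb = boto.connect_sdb()                      # uses ambient AWS credentials
domain = sdb.get_domain("customer_contacts")  # placeholder domain name

def normalize_phone(raw):
    """Example application-level constraint: digits only."""
    return "".join(ch for ch in raw if ch.isdigit())

def repair_contact(item_name):
    item = domain.get_attributes(item_name, consistent_read=True)
    version = int(item.get("version", "0"))

    repaired = dict(item)
    repaired["phone"] = normalize_phone(item.get("phone", ""))
    repaired["version"] = str(version + 1)

    # Conditional put: only apply the repair if nobody changed the item since
    # we read it; this is the ConditionalPut behavior mentioned above.
    domain.put_attributes(
        item_name, repaired, replace=True,
        expected_value=["version", str(version)],
    )
```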

 

If you are interested in high scale web sites or cloud computing in general, this one is worth a read: Netflix’s Transition to High-Availability Storage Systems.

 

                                                --jrh

Update:  Netflix architects Adrian Cockcroft and Sid Anand are both presenting at QconSF between November 1st and 5th in San Francisco: http://qconsf.com/sf2010.

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Sunday, October 10, 2010 11:53:44 AM (Pacific Standard Time, UTC-08:00)  #    Comments [13] - Trackback
Services
 Saturday, October 09, 2010

Hosting multiple MySQL engines with MySQL Replication between them is a common design pattern for scaling read-heavy MySQL workloads. As with all scaling techniques, there are workloads for which it works very well but there are also potential issues that need to be understood. In this case, all write traffic is directed to the primary server and, consequently, is not scaled, which is why this technique works best for workloads heavily skewed towards reads. But, for those fairly common read heavy workloads, the technique works very well and allows scaling the read workload across a fleet of MySQL instances.  Of course, as with any asynchronous replication scheme, the read replicas are not transactionally updated. So any application running on MySQL read replicas must be tolerant of eventually consistent updates.

 

Load balancing high read traffic over multiple MySQL instances works very well but this is only one of the possible tools used to scale this type of workload. Another very common technique is to put a scalable caching layer in front of the relational database fleet. By far the most common caching layer used by high-scale services is Memcached.

 

Another database scaling technique is to simply not use a relational database. For workloads that don’t need schema enforcement and complex queries, NoSQL databases offer both a cheaper and a simpler approach to hosting the workload. SimpleDB is the AWS-hosted NoSQL database, with Netflix being one of the best known users (slide deck from Netflix’s Adrian Cockcroft: http://www.slideshare.net/adrianco/netflix-oncloudteaser). Cassandra is another common RDBMS alternative in heavy use by many high-scale sites including Facebook, where it was originally conceived. Cassandra is also frequently run on AWS, with the Cassandra Wiki offering scripts to make it easy to install and configure on Amazon EC2.

 

For those workloads where a relational database is the chosen solution, MySQL read replication is a good technique to have in your scaling tool kit. Last week Amazon announced read replica support for the AWS Relational Database Service. The press release is at: Announcing Read Replicas, Lower High Memory DB Instance Price for Amazon AWS.

 

You can now create one or more replicas of a given “source” DB Instance and serve incoming read traffic from multiple copies of your data. This new database deployment option enables you to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads. You can use Read Replicas in conjunction with Multi-AZ replication for scalable, reliable, and highly available production database deployments.

 

If you are running MySQL and wish you had someone else to manage it for you, check out Amazon RDS. The combination of read replicas to scale read workloads and Multi-AZ support for multi-data center high availability makes it a pretty interesting way to run MySQL.
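
For reference, creating a read replica is a single API call. The sketch below uses the current boto3 SDK, which postdates this announcement, and placeholder instance identifiers; the management console exposes the same operation.

# Create a MySQL read replica from an existing RDS "source" DB instance (boto3).
import boto3

rds = boto3.client('rds', region_name='us-east-1')

rds.create_db_instance_read_replica(
    DBInstanceIdentifier='myapp-replica-1',      # name for the new replica
    SourceDBInstanceIdentifier='myapp-primary',  # existing source DB instance
)

status = rds.describe_db_instances(DBInstanceIdentifier='myapp-replica-1')
print(status['DBInstances'][0]['DBInstanceStatus'])   # 'creating' until the replica is ready

Once the replica reports available, point read-only connections at its endpoint while continuing to send writes to the source instance.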

 

                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Saturday, October 09, 2010 10:05:02 AM (Pacific Standard Time, UTC-08:00)
Services
 Saturday, September 18, 2010

A couple of years ago, I did a detailed look at where the costs are in a modern, high-scale data center. The primary motivation behind bringing all the costs together was to understand where the problems are and find those easiest to address. Predictably, when I first brought these numbers together, a few data points just leapt off the page: 1) at scale, servers dominate overall costs, and 2) mechanical system cost and power consumption seem unreasonably high. Both of these areas have proven to be important technology areas to focus upon and there has been considerable industry-wide innovation, particularly in cooling efficiency, over the last couple of years.

 

I posted the original model at the Cost of Power in Large-Scale Data Centers. One of the reasons I posted it was to debunk the often repeated phrase “power is the dominant cost in a large-scale data center”. Servers dominate, with mechanical systems and power distribution close behind. It turns out that power is incredibly important but it’s not the utility kWh charge that makes power important. It’s the cost of the power distribution equipment required to consume power and the cost of the mechanical systems that take the heat away once the power is consumed. I referred to this as fully burdened power.

 

Measured this way, power is the second most important cost. Power efficiency is highly leveraged when looking at overall data center costs, it plays an important role in environmental stewardship, and it is one of the areas where substantial gains continue to look quite attainable. As a consequence, this is where I spend a considerable amount of my time – perhaps the majority – but we have to remember that servers still dominate the overall capital cost.

 

This last point is a frequent source of confusion. When server and other IT equipment capital costs are directly compared with data center capital costs, the data center portion actually is larger. I’ve frequently heard “how can the facility cost more than the servers in the facility – it just doesn’t make sense.” Whether or not it makes sense, on a direct, non-amortized comparison it is true at this point. I could imagine infrastructure costs one day eclipsing those of servers even on an amortized basis as server costs continue to decrease, but we’re not there yet. The key point to keep in mind is that the amortization periods are completely different. Data center amortization periods run from 10 to 15 years while server amortizations are typically in the three-year range. Servers are purchased 3 to 5 times during the life of a datacenter so, when amortized properly, they continue to dominate the cost equation.

 

In the model below, I normalize all costs to a monthly bill by taking consumables like power and billing them monthly by consumption, and by taking capital expenses like servers, networking, or datacenter infrastructure and amortizing them over their useful lifetime using a 5% cost of money and, again, billing monthly. This approach allows us to compare otherwise non-comparable costs such as data center infrastructure with servers and networking gear, each with different lifetimes. The model includes all costs “below the operating system” but doesn’t include software licensing costs, mostly because open source is dominant in high-scale centers and partly because licensing costs can vary so widely. Administrative costs are not included for the same reason. At scale, hardware administration, security, and other infrastructure-related people costs disappear into the single digits, with the very best services down in the 3% range. Because administrative costs vary so greatly, I don’t include them here. On projects with which I’ve been involved, they are insignificantly small so don’t influence my thinking much. I’ve attached the spreadsheet in source form below so you can add in factors such as these if they are more relevant in your environment.
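
The normalization itself is just a loan-style annuity calculation at a 5% cost of money. The sketch below shows the idea with round, illustrative capital figures; the actual inputs and formulas live in the attached spreadsheet.

# Convert a capital expense into a monthly charge at a 5% annual cost of money.
def monthly_cost(capital, years, annual_rate=0.05):
    r = annual_rate / 12.0                         # monthly cost of money
    n = years * 12                                 # amortization period in months
    return capital * r / (1.0 - (1.0 + r) ** -n)   # standard annuity payment

# Illustrative only: $100M of servers on a 3-year amortization vs. an $80M
# facility on a 12-year amortization.
print(round(monthly_cost(100e6, 3)))    # ~$3.0M/month for the servers
print(round(monthly_cost(80e6, 12)))    # ~$0.74M/month for the facility

Even when the facility’s capital outlay rivals a single server generation’s, the much shorter server amortization, and the fact that servers are repurchased several times over the facility’s life, is what keeps servers on top of the monthly bill.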

 

Late last year I updated the model for two reasons: 1) there has been considerable infrastructure innovation over the last couple of years and costs have changed dramatically during that period, and 2) because of the importance of networking gear to the cost model, I factored networking out of overall IT costs. We now have IT costs with servers and storage modeled separately from networking. This helps us understand the impact of networking on overall capital cost and on IT power.

 

In redoing the model, I kept the facility server count in the 45,000 to 50,000 server range. This makes it a reasonable-scale facility – big enough to enjoy the benefits of scale but nowhere close to the biggest data centers. Two years ago, 50,000 servers required a 15MW facility (25MW total load). Today, due to increased infrastructure efficiency and reduced individual server power draw, we can support 46k servers in an 8MW facility (12MW total load). The current rate of innovation in our industry is substantially higher than it has been at any time in the past, with much of this innovation driven by mega service operators.

 

Keep in mind, I’m only modeling those techniques that are well understood and reasonably broadly accepted as good quality data center design practices. Most of the big operators will be operating at efficiency levels far beyond those used here. For example, this model uses a Power Usage Effectiveness (PUE) of 1.45, whereas Google reports a fleet-wide PUE of under 1.2: Data Center Efficiency Measurements. Again, the spreadsheet source is attached below, so feel free to change the PUE used by the model as appropriate.
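
PUE is simply total facility power divided by the power delivered to the IT equipment, so changing the assumed PUE directly scales the facility’s power overhead. A tiny sketch with illustrative numbers:

def total_facility_kw(it_kw, pue):
    # PUE = total facility power / IT power
    return it_kw * pue

print(total_facility_kw(8000, 1.45))   # 8 MW of IT load at PUE 1.45 -> 11.6 MW total
print(total_facility_kw(8000, 1.20))   # the same IT load at PUE 1.2  ->  9.6 MW total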

 

These are the assumptions used by this year’s model:

 

Using these assumptions we get the following cost structure:

 

 

 

For those of you interested in playing with different assumptions, the spreadsheet source is here: http://mvdirona.com/jrh/TalksAndPapers/PerspectivesDataCenterCostAndPower.xls.

 

If you choose to use this spreadsheet directly or the data above, please reference the source and include the URL to this posting.

 

                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Saturday, September 18, 2010 12:56:19 PM (Pacific Standard Time, UTC-08:00)
Services
 Tuesday, September 14, 2010

For those of you writing about your work on high-scale cloud computing (and for those interested in a great excuse to visit Anchorage, Alaska), consider submitting a paper to the Workshop on Data Intensive Computing in the Clouds (DataCloud 2011). The call for papers is below.

 

                                                                --jrh

 

-------------------------------------------------------------------------------------------

                                           *** Call for Papers ***
      WORKSHOP ON DATA INTENSIVE COMPUTING IN THE CLOUDS (DATACLOUD 2011)
                  In conjunction with IPDPS 2011, May 16, Anchorage, Alaska
                                
http://www.cct.lsu.edu/~kosar/datacloud2011
-------------------------------------------------------------------------------------------

The First International Workshop on Data Intensive Computing in the Clouds (DataCloud2011) will be held in conjunction with the 25th IEEE International Parallel and Distributed Processing Symposium (IPDPS 2011), in Anchorage, Alaska.

Applications and experiments in all areas of science are becoming increasingly complex and more demanding in terms of their computational and data requirements. Some applications generate data volumes reaching hundreds of terabytes and even petabytes. As scientific applications become more data intensive, the management of data resources and dataflow between the storage and compute resources is becoming the main bottleneck. Analyzing, visualizing, and disseminating these large data sets has become a major challenge and data intensive computing is now considered as the “fourth paradigm” in scientific discovery after theoretical, experimental, and computational science.

DataCloud2011 will provide the scientific community a dedicated forum for discussing new research, development, and deployment efforts in running data-intensive computing workloads on Cloud Computing infrastructures. The DataCloud2011 workshop will focus on the use of cloud-based technologies to meet the new data intensive scientific challenges that are not well served by the current supercomputers, grids or compute-intensive clouds. We believe the workshop will be an excellent place to help the community define the current state, determine future goals, and present architectures and services for future clouds supporting data intensive computing.

Topics of interest include, but are not limited to:

- Data-intensive cloud computing applications, characteristics, challenges
- Case studies of data intensive computing in the clouds
- Performance evaluation of data clouds, data grids, and data centers
- Energy-efficient data cloud design and management
- Data placement, scheduling, and interoperability in the clouds
- Accountability, QoS, and SLAs
- Data privacy and protection in a public cloud environment
- Distributed file systems for clouds
- Data streaming and parallelization
- New programming models for data-intensive cloud computing
- Scalability issues in clouds
- Social computing and massively social gaming
- 3D Internet and implications
- Future research challenges in data-intensive cloud computing

IMPORTANT DATES:
Abstract submission: December 1, 2010
Paper submission: December 8, 2010
Acceptance notification: January 7, 2011
Final papers due: February 1, 2011

PAPER SUBMISSION:
DataCloud2011 invites authors to submit original and unpublished technical papers. All submissions will be peer-reviewed and judged on correctness, originality, technical strength, significance, quality of presentation, and relevance to the workshop topics of interest. Submitted papers may not have appeared in or be under consideration for another workshop, conference or a journal, nor may they be under review or submitted to another forum during the DataCloud2011 review process. Submitted papers may not exceed 10 single-spaced double-column pages using 10-point size font on 8.5x11 inch pages (IEEE conference style), including figures, tables, and references. DataCloud2011 also requires submission of a one-page (~250 words) abstract one week before the paper submission deadline.

WORKSHOP and PROGRAM CHAIRS:
Tevfik Kosar, Louisiana State University
Ioan Raicu, Illinois Institute of Technology

STEERING COMMITTEE:
Ian Foster, Univ of Chicago & Argonne National Lab
Geoffrey Fox, Indiana University
James Hamilton, Amazon Web Services
Manish Parashar, Rutgers University & NSF
Dan Reed, Microsoft Research
Rich Wolski, University of California, Santa Barbara
Liang-Jie Zhang, IBM Research

PROGRAM COMMITTEE:
David Abramson, Monash University, Australia
Roger Barga, Microsoft Research
John Bent, Los Alamos National Laboratory
Umit Catalyurek, Ohio State University
Abhishek Chandra, University of Minnesota
Rong N. Chang, IBM Research
Alok Choudhary, Northwestern University
Brian Cooper, Google
Ewa Deelman, University of Southern California
Murat Demirbas, University at Buffalo
Adriana Iamnitchi, University of South Florida
Maria Indrawan, Monash University, Australia
Alexandru Iosup, Delft University of Technology, Netherlands
Peter Kacsuk, Hungarian Academy of Sciences, Hungary
Dan Katz, University of Chicago
Steven Ko, University at Buffalo
Gregor von Laszewski, Rochester Institute of Technology
Erwin Laure, CERN, Switzerland
Ignacio Llorente, Universidad Complutense de Madrid, Spain
Reagan Moore, University of North Carolina
Lavanya Ramakrishnan, Lawrence Berkeley National Laboratory
Ian Taylor, Cardiff University, UK
Douglas Thain, University of Notre Dame
Bernard Traversat, Oracle
Yong Zhao, Univ of Electronic Science & Tech of China

 

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Tuesday, September 14, 2010 12:45:05 PM (Pacific Standard Time, UTC-08:00)
Ramblings

I’m dragging myself off the floor as I write this having watched this short video: MongoDB is Web Scale.

 

It won’t improve your datacenter PUE, your servers won’t get more efficient, and it won’t help you scale your databases but, still, you just have to check out that video.

 

Thanks to Andrew Certain of Amazon for sending it my way.

 

                                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Tuesday, September 14, 2010 5:55:41 AM (Pacific Standard Time, UTC-08:00)
Ramblings
 Thursday, September 09, 2010

You can now run an EC2 instance 24x7 for under $15/month with last night’s announcement of the Micro instance type. The Micro includes 613 MB of RAM and can burst for short periods up to 2 EC2 Compute Units (one EC2 Compute Unit is equivalent to a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor). Micro instances are available in all EC2 supported regions and in 32- or 64-bit flavors.

 

The design point for Micro instances is to offer a small amount of consistent, always available CPU resources while supporting short bursts above this level. The instance type is well suited for lower throughput applications and web sites that consume significant compute cycles but only periodically.

 

Micro instances are available for Windows or Linux, with Linux at $0.02/hour and Windows at $0.03/hour. You can quickly start a Micro instance using the EC2 management console: EC2 Console.
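
If you would rather script it than use the console, launching a Micro instance is a single API call. This sketch uses the current boto3 SDK (the 2010-era tools differed) and a placeholder AMI ID:

# Launch a single Micro instance (t1.micro) in us-east-1.
import boto3

ec2 = boto3.resource('ec2', region_name='us-east-1')

instances = ec2.create_instances(
    ImageId='ami-xxxxxxxx',    # substitute a real 32- or 64-bit Linux or Windows AMI
    InstanceType='t1.micro',   # Micro: 613 MB RAM, burstable CPU
    MinCount=1,
    MaxCount=1,
)
print(instances[0].id)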

 

The announcement: Announcing Micro Instances for Amazon EC2

More information: Amazon Elastic Compute Cloud (Amazon EC2)

 

                                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

Thursday, September 09, 2010 4:53:17 AM (Pacific Standard Time, UTC-08:00)
Services
 Thursday, August 05, 2010

I’m taking some time off and probably won’t blog again until the first week of September. Jennifer and I are taking the boat north to Alaska. Most summers we spend a bit of time between the northern tip of Vancouver Island and the Alaska border. This year is a little different for two reasons. First, we’re heading further north than in the past and will spend some time in Glacier Bay National Park & Preserve. The second thing that makes this trip a bit different is that, weather permitting, we’ll be making the nearly thousand-mile one-way trip as an offshore crossing. It’ll take roughly 5 days to cover the distance running 24x7 off the coast of British Columbia.

 

You might ask why we would want to make the trip running 24x7 offshore when the shoreline of BC is one of the most beautiful in the world. It truly is wonderful and we do love the area. We’ve even written a book about it (Secret Coast). We’re skipping the coast and heading directly to Alaska as a way to enjoy Alaska by boat when we really can’t get enough time off work to do the trip at a more conventional, relaxed pace. The other reason to run directly there is that it’s a chance to try running 24x7 and see how it goes. Think of it as an ocean crossing with training wheels. If it gets unpleasant, we can always turn right and head to BC. And, it’ll be an adventure.

 

We’ll be back the first week of September. Have a good rest of your summer,

 

                                                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Thursday, August 05, 2010 11:50:16 AM (Pacific Standard Time, UTC-08:00)
Ramblings

Disclaimer: The opinions expressed here are my own and do not necessarily represent those of current or past employers.
