Thursday, September 03, 2009

The server tax is what I call the mark-up applied to servers, enterprise storage, and high-scale networking gear. Client equipment is sold in much higher volumes with more competition and, as a consequence, is priced far more competitively. Server gear, even when using many of the same components as client systems, comes at a significantly higher price. Volumes are lower, there is less competition, and there are often many lock-in features that help maintain the server tax. For example, server memory subsystems support Error Correcting Code (ECC) whereas most client systems do not. Ironically, both are subject to many of the same memory faults, and the cost of data corruption in a client before the data is sent to a server isn’t obviously less than the cost of that same data element being corrupted on the server. Nonetheless, server components typically have ECC while commodity client systems usually do not.

 

Back in 1987 Garth Gibson, Dave Patterson, and Randy Katz invented Redundant Array of Inexpensive Disks (RAID). Their key observation was that commodity disks in aggregate could be more reliable than very large, enterprise-class proprietary disks. Essentially they showed that you didn’t have to pay the server tax to achieve very reliable storage. Over the years, the “inexpensive” component of RAID was rewritten by creative marketing teams as “independent” and high-scale RAID arrays are back to being incredibly expensive. Large Storage Area Networks (SANs) are essentially RAID arrays of “enterprise” class disk, lots of CPU, and huge amounts of cache memory with a Fibre Channel attach. The enterprise tax is back with a vengeance and an EMC NS-960 prices in at $2,800 per terabyte.

 

BackBlaze, a client computer backup company, just took another very innovative swipe at destroying the server tax on storage. Their work shows how to bring the “inexpensive” back to RAID storage arrays and delivers storage at $81/TB. Many services are building secret storage subsystems that deliver super-reliable storage at very low cost. What makes the BackBlaze work unique is that they have published the details of how they built the equipment. It’s really very nice engineering.

 

In Petabytes on a budget: How to Build Cheap Cloud Storage they outline the details of the storage pod:

·         1 storage pod per 4U of standard rack space

·         1 $365 motherboard with 4GB of RAM per storage pod

·         2 non-redundant Power Supplies

·         4 SATA cards

·         Case with 6 fans

·         Boot drive

·         9 SATA port-multiplier backplanes

·         45 1.5 TB commodity hard drives at $120 each.

 

Each storage pod runs Apache Tomcat 5.5 on Debian Linux and implements 3 RAID6 volumes of 15 drives each. They provide a full hardware bill of materials in Appendix A of Petabytes on a budget: How to Build Cheap Cloud Storage.
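
To make the economics concrete, here’s a small back-of-the-envelope sketch of pod capacity and cost per terabyte. The drive count, drive size, drive price, and motherboard price come from the list above; the “other components” figure is a placeholder I’ve assumed, so the resulting $/TB will differ somewhat from the $81/TB quoted above depending on exactly what is counted.

# Back-of-the-envelope storage pod economics (illustrative only).
# Drive count, drive size, drive and motherboard prices are from the post;
# the remaining component cost is an assumed placeholder.
drives = 45
drive_tb = 1.5
drive_cost = 120.0
motherboard_and_ram = 365.0
other_components = 1500.0   # assumed: case, PSUs, SATA cards, backplanes, boot drive

raw_tb = drives * drive_tb                       # 67.5 TB raw
total_cost = drives * drive_cost + motherboard_and_ram + other_components

# Three 15-drive RAID6 volumes each give up 2 drives to parity.
usable_tb = 3 * (15 - 2) * drive_tb              # 58.5 TB usable

print(f"Raw capacity:    {raw_tb:.1f} TB")
print(f"Usable capacity: {usable_tb:.1f} TB")
print(f"Cost per raw TB: ${total_cost / raw_tb:.0f}")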

 

Predictably, some have criticized the design as inappropriate for many workloads and they are right. The I/O bandwidth is low so this storage pod would be a poor choice for data-intensive applications like OLTP databases. But it’s amazingly good for cold storage like the BackBlaze backup application. Some folks have pointed out that the power supplies are fairly inefficient at around 80% peak efficiency and that the configuration chosen will have them running far below peak efficiency. True again, but it wouldn’t be hard to replace these two PSUs with a single, 90+% efficiency, commodity unit. Many are concerned with cooling and vibration. I doubt cooling is an issue and, in the blog posting, they addressed the vibration issue and talked briefly about how they isolated the drives. The technique they chose might not be adequate for high-IOPS arrays but it seems to be working for their workload. Some are concerned by the lack of serviceability in that the drives are not hot swappable and the entire 67TB storage pod has to be brought offline to do drive replacements. Again, this concern is legitimate but I’m actually not a big fan of hot swapping drives – I always recommend bringing down a storage server before service (I hate risk and complexity). And I hate paying for hot-swap gear, and there isn’t space for hot swap in very high-density designs. Personally, I’m fine with a “shut-down to service” model but others will disagree.

 

The authors compared their hardware storage costs to a wide array of storage subsystems from EMC through Sun and NetApp. They also compared to Amazon S3 and made what is a fairly unusual mistake for a service provider: they compared on-premises storage equipment purchase cost (just the hardware) with a general storage service. The storage pod costs include only hardware while the S3 costs include data center rack space, power for the array, cooling, administration, inside-the-data-center networking gear, multi-data center redundancy, a general I/O path rather than one only appropriate for cold storage, and all the software to support a highly reliable, geo-redundant storage service. So I’ll quibble with their benchmarking skills – the comparison is of no value as currently written – but, on the hardware front, it’s very nice work.

 

Good engineering and a very cool contribution to the industry to publish the design. One more powerful tool to challenge the server tax. Well done Backblaze.

 

VentureBeat article: http://venturebeat.com/2009/09/01/backblaze-sets-its-cheap-storage-designs-free/.


 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Thursday, September 03, 2009 8:13:05 AM (Pacific Standard Time, UTC-08:00)  #    Comments [11] - Trackback
Hardware
 Friday, August 28, 2009

We got back from China last Saturday night and, predictably, I’m swamped catching up on three weeks’ worth of queued work. The trip was wonderful (China Trip) but it’s actually good to be back at work. Things are changing incredibly quickly industry-wide and it’s a fun time to be part of AWS.

 

An AWS feature I’ve been particularly looking forward to seeing announced is Virtual Private Cloud (VPC). It went into private beta two nights back. VPC allows customers to extend their private networks to the cloud through a virtual private network (VPN) and access their Amazon Elastic Compute Cloud (EC2) instances with the security they are used to having on their corporate networks. This one is a game changer.

 

Virtual Private Cloud news coverage: http://news.google.com/news/search?pz=1&ned=us&hl=en&q=amazon+virtual+private+cloud.

 

Werner Vogels on VPC: Seamlessly Extending the Data Center – Introducing Amazon Virtual Private Cloud.

 

With VPC, customers can have applications running on EC2 “on” their private corporate networks and accessible only from their corporate networks, just like any other locally hosted application. This is important because it makes it easier to put enterprise applications in the cloud while supporting the same access rights and restrictions that customers are used to enforcing on locally hosted resources. Applications can more easily move between private enterprise data centers and the cloud, and hybrid deployments are easier to create and more transparent.
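
To make the model concrete, here’s a rough sketch of the kind of setup VPC enables, written against today’s boto3 Python SDK, which postdates this post (the private beta used the EC2 query API directly). Treat the region, CIDR block, ASN, and IP address as placeholder assumptions; this is an illustration of the shape of the setup, not a recipe.

# Sketch: carve out a private address range in AWS and connect it to the
# corporate network over an IPsec VPN. All identifiers below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A private address block for cloud-hosted instances.
vpc = ec2.create_vpc(CidrBlock="10.10.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# The AWS-side VPN endpoint, attached to the VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"], VpcId=vpc_id)

# The corporate-side VPN endpoint (the customer gateway device).
cgw = ec2.create_customer_gateway(Type="ipsec.1", PublicIp="203.0.113.10", BgpAsn=65000)

# The IPsec tunnel joining the two; instances launched into the VPC are then
# reachable only over the corporate network, as described above.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
)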

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Friday, August 28, 2009 7:07:48 AM (Pacific Standard Time, UTC-08:00)  #    Comments [5] - Trackback
Services
 Saturday, August 01, 2009

I’ll be taking a brief hiatus from blogging during the first three weeks of August. Tomorrow we leave for China. You might wonder why we would go to China during the hottest time of the year. For example, our first stop, Xiamen, is expected to hit 95F today, which is fairly typical weather for this time of year (actually it’s comparable to the unusual weather we’ve been having in Seattle over the last week). The timing of the trip is driven by a boat we’re buying nearing completion in a Xiamen, China boat yard: Boat Progress. The goal is to see the boat roughly 90% complete so we can catch any issues early and get them fixed before the boat leaves the yard. And part of the adventure of building a boat is getting the chance to visit the yard and see how they are built.

 

We love boating but, having software jobs, we end up working a lot. Consequently, the time we do get off, we spend boating between Olympia, Washington and Alaska. Since we seldom have the time for non-boat related travel, we figured we should take advantage of visiting China and see more than just the boat yard. 

 

After the stop at the boat yard in Xiamen, we’ll visit Hong Kong, Guilin, Yangshuo, and Chengdu, do a cruise of the Yangtze River, and then travel to Xian followed by Beijing before returning home.

 

                                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Saturday, August 01, 2009 3:26:27 PM (Pacific Standard Time, UTC-08:00)  #    Comments [8] - Trackback
Ramblings
 Wednesday, July 29, 2009

Search is a market driven by massive network effects and economies of scale. The big get better, the big get cheaper, and the big just keep getting bigger. Google has 65% of the Search market and continues to grow. In a deal announced yesterday, Microsoft will supply search to Yahoo! and now has a combined share of 28%. For the first time ever, Microsoft has enough market share to justify continuing large investments. And, more importantly, they now have enough market to get good data on usage to tune the ranking engine and drive better quality search. And, although Microsoft and Yahoo! will continue to have separate advertising engines and separate sales forces, they will have more user data available to drive the analytics behind their advertising businesses. The Search world just got more interesting.

 

The market will continue to unequally reward the big player if nothing else changes. Equal focus of skill and investment will continue to yield unequal results. But, at 28% rather than 8%, it’s actually possible to gain share and grow even with the negative network effects and economies of scale. This is good for the Search market, good for the Microsoft Search team, and good for users.

 

NY Times: http://www.nytimes.com/2009/07/30/technology/companies/30soft.html?hpw

WSJ: http://online.wsj.com/article/BT-CO-20090729-709160.html

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Wednesday, July 29, 2009 4:55:40 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Services
 Saturday, July 25, 2009

MapReduce has created some excitement in the relational database community. Dave DeWitt and Michael Stonebraker’s MapReduce: A Major Step Backwards is perhaps the best example. In that posting they argued that MapReduce is a poor structured storage technology, the execution engine doesn’t include many of the advances found in modern, parallel RDBMS execution engines, it’s not novel, and it’s missing features.

 

In MapReduce: A Minor Step Forward I argued that MapReduce is an execution model rather than a storage engine. It is true that it is typically run over a file system like GFS or HDFS or a simple structured storage system like BigTable or HBase. But it could be run over a full relational database.

 

Why would we want to run Hadoop over a full relational database? Hadoop scales: Hadoop has been scaled to 4,000 nodes at Yahoo! (Scaling Hadoop to 4000 nodes at Yahoo!). Scaling a clustered RDBMS to 4k nodes is certainly possible but the highest-scale single-system-image cluster I’ve seen was 512 nodes (what was then called DB2 Parallel Edition). Getting to 4k is big. Hadoop is simple: automatic parallelism has been an industry goal for decades but progress has been limited. There really hasn’t been success in allowing programmers of average skill to write massively parallel programs except for SQL and Hadoop. Programmers of bounded skill can easily write SQL that will be run in parallel over high-scale clusters. Hadoop is the only other example I know of where this is possible and happening regularly.
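
To illustrate why the programming model is so approachable, here’s a minimal, single-process Python sketch of the map and reduce functions a programmer writes for a word count. In a real Hadoop job the framework handles partitioning the input, shuffling by key, and running these functions in parallel across the cluster; the in-process shuffle below just stands in for that machinery.

# Minimal word count in the MapReduce style. The framework normally
# distributes map tasks, shuffles by key, and runs reduces in parallel;
# here the shuffle is simulated in-process.
from collections import defaultdict

def map_fn(line):
    # Emit (word, 1) for every word in an input line.
    for word in line.split():
        yield word.lower(), 1

def reduce_fn(word, counts):
    # Sum all counts observed for a single word.
    return word, sum(counts)

def run_job(lines):
    shuffle = defaultdict(list)
    for line in lines:                                 # map phase
        for key, value in map_fn(line):
            shuffle[key].append(value)
    return dict(reduce_fn(k, v) for k, v in shuffle.items())   # reduce phase

print(run_job(["the quick brown fox", "the lazy dog", "the fox"]))
# {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}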

 

Hadoop makes the application of 100s or even 1000s of commodity computers easy, so why not Hadoop over full RDBMS nodes? Daniel Abadi and team from Yale and Brown have done exactly that. In this case, Hadoop over PostgreSQL. From Daniel’s blog:

 

HadoopDB is:

1.       A hybrid of DBMS and MapReduce technologies targeting analytical query workloads

2.       Designed to run on a shared-nothing cluster of commodity machines, or in the cloud

3.       An attempt to fill the gap in the market for a free and open source parallel DBMS

4.       Much more scalable than currently available parallel database systems and DBMS/MapReduce hybrid systems (see longer blog post).

5.       As scalable as Hadoop, while achieving superior performance on structured data analysis workloads

See: http://dbmsmusings.blogspot.com/2009/07/announcing-release-of-hadoopdb-longer.html for more detail and http://sourceforge.net/projects/hadoopdb/ for source code for HadoopDB.

 

A more detailed paper has been accepted for publication at VLDB: http://db.cs.yale.edu/hadoopdb/hadoopdb.pdf.

 

The development work for HadoopDB was done using AWS Elastic Compute Cloud (EC2). Nice work, Daniel.

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Saturday, July 25, 2009 9:59:47 AM (Pacific Standard Time, UTC-08:00)  #    Comments [5] - Trackback
Services | Software
 Saturday, July 18, 2009

I presented the opening keynote, Where does the Power Go in High Scale Data Centers, at SIGMETRICS/Performance 2009 last month. The video of the talk was just posted: SIGMETRICS 2009 Keynote.

 

The talk starts after the conference kick-off at 12:20. The video appears to be incompatible with at least some versions of Firefox. I was only able to stay for the morning of the conference but I met lots of interesting people and got to catch up with some old friends. Thanks to Albert Greenberg and John Douceur for inviting me.

 

I also did the keynote talk at this year’s USENIX Technical Conference 2009 in San Diego. Man, I love San Diego and USENIX was, as usual, excellent. I particularly enjoyed discussions with the Research in Motion team from Waterloo and the Netflix folks. Both are running high-quality, super-high-growth services with lots of innovation. Thanks to Alec Wolman for inviting me down to this year’s USENIX conference.

 

                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Saturday, July 18, 2009 6:02:13 AM (Pacific Standard Time, UTC-08:00)  #    Comments [1] - Trackback
Services
 Saturday, July 11, 2009

I’m a boater and I view reading about boating accidents as important. The best source I’ve come across is the UK’s Marine Accident Investigation Branch (MAIB). I’m an engineer and, again, I view it as important to read about engineering failures and disasters. One of the best sources I know of is Peter G. Neumann’s RISKS Digest.

 

There is no question that firsthand experience is a powerful teacher but few of us have the time (or enough lives) to make every possible mistake. There are just too many ways to screw up. Clearly, it’s worth learning from others when trying to make our own systems safer or more reliable. On that belief, I’m an avid reader of service post-mortems. I love understanding what went wrong, thinking about whether those same issues could impact a service in which I’m involved, and what should be done to avoid the class of problems under discussion. Some of what I’ve learned around services over the years is written up in this best practices document: http://mvdirona.com/jrh/talksAndPapers/JamesRH_Lisa.pdf, originally published at USENIX LISA.

 

One post-mortem I came across recently and enjoyed was Information Regarding 2 July 2009 Outage. I liked it because there was enough detail to educate and it presented many lessons. If you own or operate a service or mission-critical application, it’s worth a read.

 

                                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Saturday, July 11, 2009 8:15:09 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Services
 Friday, July 10, 2009

There have been many reports of the Fisher Plaza data center fire. An early one was the Data Center Knowledge article Major Outage at Seattle Data Center. Data center fires aren’t as rare as any of us would like, but this one is a bit unusual in that fires normally happen in the electrical equipment or switchgear whereas this one appears to have been a bus duct fire. The bus duct fire triggered the sprinkler system. Several sprinkler heads were triggered and considerable water was sprayed, making it more difficult to get the facility back online quickly.

 

Several good pictures showing the fire damage were recently published in Tech Flash Photos: Inside the Fisher Fire.

 

                                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Friday, July 10, 2009 5:08:58 AM (Pacific Standard Time, UTC-08:00)  #    Comments [1] - Trackback
Ramblings
 Thursday, July 09, 2009

MIT’s Barbara Liskov was awarded the 2008 Association for Computing Machinery Turing Award. The Turing Award is the highest distinction in computer science and is often referred to as the Nobel Prize of computing. Past award winners are listed at: http://en.wikipedia.org/wiki/Turing_Award.

The full award citation:

Barbara Liskov has led important developments in computing by creating and implementing programming languages, operating systems, and innovative systems designs that have advanced the state of the art of data abstraction, modularity, fault tolerance, persistence, and distributed computing systems.

The Venus operating system was an early example of principled operating system design. The CLU programming language was one of the earliest and most complete programming languages based on modules formed from abstract data types and incorporating unique intertwining of both early and late binding mechanisms. ARGUS extended many of the CLU ideas to distributed programming, and incorporated the first versions of nested transactions to maintain predictable consistencies. Other advances include solutions elegantly combining theory and pragmatics in the areas of decentralized information flow, replicated storage and caching of persistent objects, and modular upgrading of distributed systems. Her contributions have been incorporated into the practice of programming, thereby influencing many of the most important systems used today: for programming, specification, systems design, and distributed architectures.

From: http://awards.acm.org/citation.cfm?id=1108679&srt=year&year=2008&aw=140&ao=AMTURING

 

The cover article in the July Communications of the ACM was on the award: http://cacm.acm.org/magazines/2009/7/32083-liskovs-creative-joy/fulltext.

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Thursday, July 09, 2009 8:43:43 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Ramblings
 Wednesday, July 08, 2009

Our industry has always moved quickly but the internet and high-scale services have substantially quickened the pace. Search is an amazingly powerful productivity tool and is effectively free to all. The internet makes nearly all information available to anyone who can obtain time on an internet connection. Social networks and interest-area-specific discussion groups are bringing together individuals of like interest from all over the globe. The cost of computing is falling rapidly and new services are released daily. The startup community has stayed viable through one of the most severe economic downturns since the Great Depression. Infrastructure-as-a-service offerings allow new businesses to be built with very little seed investment. I’m amazed at the quality of companies I’m seeing that have 100% bootstrapped without VC funding. Everything is changing.

 

Netbooks have made low-end computers close to free and, in fact, some are sold on the North American cell phone model where a multi-year service contract subsidizes the device. I’ve seen netbooks offered for free with a wireless contract. This morning I came across yet more evidence of healthy change: a new client operating system alternative. The Wall Street Journal reports that Google Plans to Launch Operating System for PCs (http://online.wsj.com/article/SB124702911173210237.html). Other articles: http://news.google.com/news?q=google+to+launch+operating+system&oe=utf-8&rls=org.mozilla:en-US:official&client=firefox-a&um=1&ie=UTF-8&hl=en&ei=s5hUSsTlO4PUsQPX7dCaDw&sa=X&oi=news_group&ct=title&resnum=1.

 

The new O/S is Linux-based and Linux has long been an option on netbooks. What’s different in this case is that a huge commercial interest is behind advancing the O/S and intends to make it a viable platform on more capable client systems rather than just netbooks. These new lightweight, connected products are made viable by the combination of widespread connectivity and the proliferation of very high-quality, high-function services. Having a new O/S player in the game will almost certainly increase the rate of improvement.

 

Alternatives continue to emerge, the cost of computing continues to fall, the pace of change continues to quicken, and everyone from individual consumers through the largest enterprises are gaining from the increased pace of innovation. It’s a fun time to participate in this industry.

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Wednesday, July 08, 2009 5:16:35 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Services
 Tuesday, June 30, 2009

Microsoft announced yesterday that it was planning to bring both Chicago and Dublin online next month. Chicago is initially to be a 30MW critical load facility with a plan to build out to a booming 60MW. Two-thirds of the facility is a high-scale containerized deployment. It’s great to see the world’s second modular data center going online (see http://perspectives.mvdirona.com/2009/04/01/RoughNotesDataCenterEfficiencySummitPosting3.aspx for details on an earlier Google facility).

 

The containers in Chicago will hold 1,800 to 2,500 servers each. Assuming 200W/server, that’s roughly 1/2 MW for each container and, with 80 containers on the first floor, roughly 40MW of container critical load. The PUE estimate for the containers is 1.22 which is excellent, but it’s very common to include all power conversions below 480VAC and all air-moving equipment in the container as critical load, so these data can end up not meaning much. See: http://perspectives.mvdirona.com/2009/06/15/PUEAndTotalPowerUsageEfficiencyTPUE.aspx for more details on why a better definition of what is infrastructure and what is critical load is needed.
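
A quick sanity check of the container arithmetic above, using the 200W/server assumption (the per-server power is my assumption, not a published figure):

# Rough check of the Chicago container math (200W/server is assumed).
servers_per_container = 2500
watts_per_server = 200
containers_first_floor = 80

container_kw = servers_per_container * watts_per_server / 1000.0
total_mw = containers_first_floor * container_kw / 1000.0

print(f"Per container: {container_kw:.0f} kW")   # ~500 kW, i.e. ~1/2 MW
print(f"First floor:   {total_mw:.0f} MW")       # ~40 MW of critical load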

 

Back on April 10th, Data Center Knowledge asked Is Microsoft still committed to containers?  It looks like the answer is unequivocally YES!

 

Dublin is a non-containerized facility initially 5MW with plans to grow to 22MW as demand requires it. The facility is heavily dependent on air-side economization which should be particularly effective in Dublin.

 

More from:

·         Microsoft Blog: http://blogs.technet.com/msdatacenters/archive/2009/06/29/microsoft-brings-two-more-mega-data-centers-online-in-july.aspx

·         Data Center Knowledge: http://www.datacenterknowledge.com/archives/2009/06/29/microsoft-to-open-two-massive-data-centers/

·         MJF: http://blogs.zdnet.com/microsoft/?p=3200

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Tuesday, June 30, 2009 5:44:41 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Hardware
 Wednesday, June 24, 2009

I presented the keynote at the International Symposium on Computer Architecture 2009 yesterday.  Kathy Yelick kicked off the conference with the other keynote on Monday: How to Waste a Parallel Computer.

 

Thanks to ISCA Program Chair Luiz Barroso for the invitation and for organizing an amazingly successful conference. I’m just sorry I had to leave a day early to attend a customer event this morning. My slides: Internet-Scale Service Infrastructure Efficiency.

 

Abstract: High-scale cloud services provide economies of scale of five to ten over small-scale deployments, and are becoming a large part of both enterprise information processing and consumer services. Even very large enterprise IT deployments have quite different cost drivers and optimization points from internet-scale services. The former are people-dominated from a cost perspective whereas internet-scale service costs are driven by server hardware and infrastructure with people costs fading into the noise at less than 10%.

 

In this talk we inventory where the infrastructure costs are in internet-scale services. We track power distribution from 115kV at the property line through all conversions into the data center, tracking the losses to final delivery at semiconductor voltage levels. We track cooling and all the energy conversions from power dissipation through release to the environment outside of the building. Understanding where the costs and inefficiencies lie, we’ll look more closely at cooling and overall mechanical system design, server hardware design, and software techniques including graceful degradation mode, power yield management, and resource consumption shaping.


James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Wednesday, June 24, 2009 6:21:40 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Hardware
 Monday, June 22, 2009

Title: Ten Ways to Waste a Parallel Computer

Speaker: Katherine Yelick

 

An excellent keynote talk at ISCA 2009 in Austin this morning. My rough notes follow:

·         Moore’s law continues

o   Frequency growth replaced by core count growth

·         HPC has been working on this for more than a decade but HPC is concerned as well

·         New World Order

o   Performance through parallelism

o   Power is overriding h/w concern

o   Performance is now a software concern

·         What follows are Yelick’s top 10 ways to waste a parallel computer

·         #1: Build system with insufficient memory bandwidth

o   Multicore puts us on the wrong side of the memory wall

o   Key metrics to look at:

§  Memory size/bandwidth (time to fill memory)

§  Memory size * algorithmic intensity / ops-per-sec (time to process memory)

·         #2: Don’t Take Advantage of hardware performance features

o   Showed example of speedup from tuning nearest-neighbor 7 point stencil on a 3D array

o   Huge gains but hard to do by hand.  Need to do it automatically at code gen time.

·         #3: Ignore Little’s Law

o   Required concurrency = bandwidth * latency (see the worked example following these notes)

o   Observation is that most apps are running WAY less than full memory bandwidth [jrh: this isn’t because these apps aren’t memory bound. They are waiting on memory with small requests. Essentially they are memory request latency bound rather than bandwidth bound. They need larger requests or more outstanding requests]

o   To make effective use of the machine, you need:

§  S/W prefetch

§  Pass memory around caches in some cases

·         #4: Turn functional problems into performance problems

o   Fault resilience introduces inhomogeneity in execution rates

o   Showed a graph that showed ECC recovery rates (very common) but that the recovery times are substantial and the increased latency of correction is substantially slowing the computation. [jrh: more evidence that non-ECC designs such as current Intel Atom are not workable in server applications.  Given ECC correction rates, I’m increasingly becoming convinced that non-ECC client systems don’t make sense.]

·         #5: Over-Synchronize Applications

o   View parallel executions as directed acyclic graphs of the computation

o   Hiding parallelism in a library tends to over serialize (too many barriers)

o   Showed work from Jack Dongarra on PLASMA as an example

·         #6: Over-synchronize Communications

o   Use a programming model in which you can’t utilize b/w or “low” latency

o   As an example, compared GASNet and MPI with GASNet delivering far higher bandwidth

·         #7: Run Bad Algorithms

o   Algorithmic gains have far outstripped Moore’s law over the last decade

o   Examples: 1) adaptive meshes rather than uniform, 2) sparse matrices rather than dense, and 3) reformulation of problem back to basics.

·         #8: Don’t rethink your algorithms

o   Showed examples of sparse iterative methods and optimizations possible

·         #9: Choose “hard” applications

o   Examples of such systems

§  Elliptic: steady state, global space dependence

§  Hyperbolic: time dependent, local space dependence

§  Parabolic: time dependent, global space dependence

o   There is often no choice – we can’t just ignore hard problems

·         #10: Use heavy-weight cores optimized for serial performance

o   Used Power5 as an example of a poor design by this measure and showed a stack of “better” performance/power alternatives

§  Power5:

·         389 mm^2

·         120W @ 1900 MHz

§  Intel Core2 sc

·         130 mm^2

·         15W @ 1000 MHz

§  PowerPC450 (BlueGene/P)

·         8mm^2

·         3W @ 850

§  Tensilica (cell phone processor)

·         0.8mm^2

·         0.09W @ 650MHz

o   [jrh: This last point is not nearly well enough understood. Far too many systems are purchased on performance when they should be purchased on work done per $ and work done per joule.]

·         Note: Large scale machines have 1 unrecoverable memory error (UME) per day [jrh: again more evidence that no-ECC server designs such as current Intel Atom boards simply won’t be acceptable in server applications, nor embedded, and with memory sizes growing evidence continues to mount that we need to move to ECC on client machines as well]

·         HPC community shows that parallelism is key but serial performance can’t be ignored.

·         Each factor of 10 increase in performance tends to require algorithmic rethinks
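
As promised in the Little’s Law bullet above, here’s a small worked example of the required-concurrency calculation. The bandwidth and latency numbers are illustrative assumptions, not measurements of any particular system.

# Little's Law applied to the memory system: to sustain a given bandwidth
# you need (bandwidth x latency) bytes in flight. Numbers are assumptions.
bandwidth_bytes_per_s = 20e9      # assume 20 GB/s of memory bandwidth
latency_s = 100e-9                # assume 100 ns memory latency
line_bytes = 64                   # typical cache line size

bytes_in_flight = bandwidth_bytes_per_s * latency_s
outstanding_lines = bytes_in_flight / line_bytes

print(f"Bytes in flight needed:      {bytes_in_flight:.0f}")    # 2000 bytes
print(f"Outstanding 64B cache lines: {outstanding_lines:.1f}")  # ~31 misses in flight

A core that can only keep a handful of misses outstanding will be latency bound long before it is bandwidth bound, which is exactly the point in the note above.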

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Monday, June 22, 2009 7:04:50 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Hardware
 Sunday, June 14, 2009

I like Power Usage Effectiveness (PUE) as a coarse measure of data center infrastructure efficiency. It gives us a way of speaking about the efficiency of the data center power distribution and mechanical equipment without having to qualify the discussion on the basis of the servers and storage used, utilization levels, or other issues not directly related to data center design. But there are clear problems with the PUE metric. Any single metric that attempts to reduce a complex system to a single number is going to fail to model important details and is going to be easy to game. PUE suffers from some of both; nonetheless, I find it useful.

 

In what follows, I give an overview of PUE, talk about some of the issues I have with it as currently defined, and then propose some improvements in PUE measurement using a metric called tPUE.

 

What is PUE?

PUE is defined in Christian Belady’s Green Grid Data Center Power Efficiency Metrics: PUE and DCiE. It’s a simple metric; that simplicity is part of why it’s useful and also the source of some of the flaws in the metric. PUE is defined to be

 

                                PUE = Total Facility Power / IT Equipment Power

 

Total Facility Power is defined to be “power as measured at the utility meter”. IT Equipment Power is defined as “the load associated with all of the IT equipment”. Stated simply, PUE is the ratio of the power delivered to the facility divided by the power actually delivered to the servers, storage, and networking gear. It gives us a measure of what percentage of the power actually gets to the servers, with the rest being lost in the infrastructure. These infrastructure losses include power distribution (switch gear, uninterruptible power supplies, Power Distribution Units, Remote Power Plugs, etc.) and mechanical systems (Computer Room Air Handlers/Computer Room Air Conditioners, cooling water pumps, air-moving equipment outside of the servers, chillers, etc.). The inverse of PUE is called Data Center Infrastructure Efficiency (DCiE):

 

                                DCiE = IT Equipment Power / Total Facility Power * 100%

 

So, if we have a PUE of 1.7, that’s a DCiE of 59%. In this example, the data center infrastructure is dissipating 41% of the power and the IT equipment the remaining 59%.
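
As a tiny worked example of the two definitions, using made-up meter readings:

# PUE and DCiE from two (assumed) measurements.
total_facility_kw = 1700.0   # measured at the utility meter (assumed)
it_equipment_kw = 1000.0     # delivered to servers, storage, networking (assumed)

pue = total_facility_kw / it_equipment_kw
dcie = it_equipment_kw / total_facility_kw * 100

print(f"PUE:  {pue:.2f}")    # 1.70
print(f"DCiE: {dcie:.0f}%")  # 59%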

 

This is useful to know in that it allows us to compare different infrastructure designs and understand their relative value. Unfortunately, where money is spent, we often see metrics games and this is no exception. Let’s look at some of the issues with PUE and then propose a partial solution.

 

Issues with PUE

Total Facility Power: The first issue is the definition of total facility power. The original Green Grid document defines total facility power as “power as measured at the utility meter”. This sounds fairly complete at first blush but it’s not nearly tight enough. Many smaller facilities meter at 480VAC but some facilities meter at mid-voltage (around 13.2kVAC in North America). And a few facilities meter at high voltage (~115kVAC in North America). Still others purchase and provide the land for the 115kVAC-to-13.2kVAC step-down transformer layer but still meter at mid-voltage.

 

Some UPSs are installed at medium voltage whereas others are at low voltage (480VAC). Clearly the UPS has to be part of the infrastructure overhead.

 

The implication of the above observations is that some PUE numbers include the losses of two voltage conversion layers getting down to 480VAC, some include one conversion, and some don’t include any of them. This muddies the water considerably, makes small facilities look somewhat better than they should, and is just another opportunity to inflate numbers beyond what the facility can actually produce.

 

Container Game: Many modular data centers are built upon containers that take 480VAC as input. I’ve seen modular data center suppliers that chose to call the connection to the container “IT equipment” which means the normal conversion from 480VAC to 208VAC (or sometimes even to 110VAC) is not included. This seriously skews the metric but the negative impact is even worse on the mechanical side. The containers often have the CRAH or CRAC units in the container. This means that large parts of the mechanical infrastructure are being included under “IT load” and this makes these containers look artificially good. Ironically, the container designs I’m referring to here actually are pretty good. They really don’t need to play metrics games but it is happening so read the fine print.

 

Infrastructure/Server Blur: Many rack-based modular designs use large rack-level fans rather than multiple inefficient fans in the server. For example, the Rackable CloudRack C2 (SGI is still Rackable to me :)) moves the fans out of the servers and puts them at the rack level. This is a wonderful design that is much more efficient than tiny 1RU fans. Normally the server fans are included as “IT load” but in these modern designs that move fans out of the servers, it’s considered infrastructure load.

 

In extreme cases, fan power can be upwards of 100W (please don’t buy these servers). This means a data center running more efficient servers can end up having to report a worse PUE number. We don’t want to push the industry in the wrong direction. Here’s one more: the IT load normally includes the server Power Supply Unit (PSU) but in many designs, such as IBM iDataPlex, the individual PSUs are moved out of the server and placed at the rack level. Again, this is a good design and one we’re going to see a lot more of, but it takes losses that were previously IT load and makes them infrastructure load. PUE doesn’t measure the right thing in these cases.

 

PUE less than 1.0: The Green Grid document says that “the PUE can range from 1.0 to infinity” and goes on to say “… a PUE value approaching 1.0 would indicate 100% efficiency (i.e. all power used by IT equipment only)”. In practice, this is approximately true. But PUEs better than 1.0 are absolutely possible and even a good idea. Let’s use an example to better understand this. I’ll use a 1.2 PUE facility in this case. Some facilities are already exceeding this PUE and there is no controversy on whether it’s achievable.

 

Our example 1.2 PUE facility is dissipating about 17% of the total facility power in power distribution and cooling. Some of this heat may be in transformers outside the building but we know for sure that all the servers are inside, which is to say that at least 83% of the dissipated heat will be inside the shell. Let’s assume that we can recover 30% of this heat and use it for commercial gain. For example, we might use the waste heat to warm crops and allow tomatoes or other high-value crops to be grown in climates that would not normally favor them. Or we could use the heat as part of the process to grow algae for bio-diesel. If we can transport this low-grade heat and net only 30% of the original value, we can achieve a 0.90 PUE. That is to say, if we are only 30% effective at monetizing the low-grade waste heat, we can achieve a better than 1.0 PUE.
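
Here’s the arithmetic behind that claim as I read it, in sketch form. The 30% recovery factor is the assumption from the example above, and the credit is taken against the heat dissipated by the IT load.

# Effective PUE when some waste heat is recovered and monetized.
pue = 1.2
it_load = 1.0                       # normalize IT load to 1.0
total_facility = pue * it_load      # 1.2

infrastructure_share = (total_facility - it_load) / total_facility
print(f"Infrastructure overhead: {infrastructure_share:.1%}")   # ~17% of facility power

recovery_factor = 0.30              # fraction of the IT heat monetized (assumption)
heat_credit = recovery_factor * it_load

effective_pue = (total_facility - heat_credit) / it_load
print(f"Effective PUE: {effective_pue:.2f}")                    # 0.90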

 

PUEs of less than 1.0 are possible and I would love to rally the industry around achieving a less than 1.0 PUE. In the database world years ago, we rallied around achieving 1,000 transactions per second. The High Performance Transaction Systems conference was originally conceived with a goal of achieving this (at the time) incredible result. 1,000 TPS was eclipsed decades ago but HPTS remains a fantastic conference. We need to do the same with PUE and aim to get below 1.0 before 2015. A PUE less than 1.0 is hard but it can and will be done.

 

tPUE Defined

Christian Belady, the editor of the Green Grid document, is well aware of the issues I raise above. He proposes that PUE be replaced over the long haul by the Data Center Productivity (DCP) index. DCP is defined as:

 

                                DCP = Useful Work / Total Facility Power

 

I love the approach but the challenge is defining “useful work” in a general way. How do we come up with a measure of useful work that spans all interesting workloads over all host operating systems? Some workloads use floating point and some don’t. Some use special-purpose ASICs and some run on general-purpose hardware. Some software is efficient and some is very poorly written. I think the goal is the right one but there never will be a way to measure it in a fully general way. We might be able to define DCP for a given workload type but I can’t see a way to use it to speak about infrastructure efficiency in a fully general way.

 

Instead I propose tPUE, a modification of PUE that mitigates some of the issues above. Admittedly it is more complex than PUE but it has the advantage of equalizing different infrastructure designs and allowing comparison across workload types. Using tPUE, an HPC facility can compare how they are doing against commercial data processing facilities.

 

tPUE standardizes where the total facility power is to be measured and precisely where the IT equipment starts and what portions of the load are infrastructure vs. server. With tPUE we attempt to remove some of the negative incentive created by the blurring of the lines between IT equipment and infrastructure. Generally, this blurring is a very good thing. 1RU fans are incredibly inefficient so replacing them with large rack or container-level impellers is a good thing. Multiple central PSUs can be more efficient, so moving the PSU from the server out to the module or rack again is a good thing. We want a metric that measures the efficiency of these changes correctly. PUE, as currently defined, will actually show a negative “gain” in both examples.

 

We define tPUE as:

 

tPUE = Total Facility Power / Productive IT Equipment Power

 

This is almost identical to PUE. It’s the next level of definitions that is important. The tPUE definition of “Total Facility Power” is fairly simple: it’s the power delivered to the medium voltage (~13.2kVAC) source prior to any UPS or power conditioning. Most big facilities are delivered power at this voltage level or higher. Smaller facilities may get 480VAC delivered, in which case this number is harder to get. We solve the problem by using a transformer-manufacturer-specified number if measurement is not possible. Fortunately, the efficiency numbers for high voltage transformers are accurately specified by manufacturers.

 

For tPUE the facility power must actually be measured at medium voltage if possible. If not possible, it is permissible to measure at low voltage (480VAC in North America and 400VAC in many other geographies) as long as the efficiency loss of the medium voltage transformer(s) is included. Of course, all measurements must be before the UPS or any form of power conditioning. This definition permits using a non-measured, manufacturer-specified efficiency number for the medium-to-low-voltage transformer but it does ensure that all measurements use medium voltage as the baseline.

 

The tPUE definition of “Productive IT Equipment Power” is somewhat more complex. PUE measures IT load as the power delivered to the IT equipment. But, in high-scale data centers, the IT equipment is breaking the rules. Some servers have fans inside and some use the infrastructure fans. Some have no PSU and are delivered 12VDC by the infrastructure whereas most still have some form of PSU. tPUE “charges” all fans and all power conversions to the infrastructure component. I define “Productive IT Equipment Power” to be all power delivered to semiconductors (memory, CPU, northbridge, southbridge, NICs), disks, ASICs, FPGAs, etc. Essentially we’re moving the PSU losses, the voltage regulator down (VRD) and/or voltage regulator module (VRM) losses, and cooling fans from “IT load” to infrastructure. In this definition, infrastructure losses unambiguously include all power conversions, UPS, switch gear, and other losses in distribution. And they include all cooling costs whether they are in the server or not.

 

The hard part is how to measure tPUE. The metric achieves our goal of being comparable, since everyone would be using the same definitions, and it doesn’t penalize innovative designs that blur the conventional lines between server and infrastructure. I would argue we have a better metric, but the challenge is how to measure it. Will data center operators be able to measure it, track improvements in their facilities, and understand how they compare with others?

 

We’ve discussed how to measure total facility power. The short summary is that it must be measured prior to all UPS and power conditioning at medium voltage. If high voltage is delivered directly to your facility, you should measure after the first step-down transformer. If your facility is delivered low voltage, then ask your power supplier, whether it be the utility, the colo facility owner, or your company’s infrastructure group, for the efficiency of the medium-to-low step-down transformer at your average load. Add this value in mathematically. This is not perfect but it’s better than where we are right now when we look at a PUE.

 

At the low voltage end where we are measuring “productive IT equipment power” we’re also forced to use estimates along with our measurements. What we want to measure is the power delivered to individual components: the power delivered to memory, CPU, etc. Our goal is to get power after the last conversion and this is quite difficult since VRDs are often on the board near the component they are supplying. Given that non-destructive power measurement at this level is not easy, we use an inductive ammeter on each conductor delivering power to the board. Then we get the VRD efficiencies from the system manufacturer (you should be asking for these anyway – they are an important factor in server efficiency). In this case, we often can only get efficiency at rated power and the actual efficiency of the VRD will be less in your usage. Nonetheless, we use this single efficiency number since it at least is an approximation and more detailed data is either unavailable or very difficult to obtain. We don’t include fan power (server fans typically run on a 12 volt rail). Essentially what we are doing is taking the definition of IT Equipment load used by the PUE definition and subtracting off VRD, PSU, and fan losses. These measurements need to be taken at full server load.
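
Pulling the measurement procedure together, here’s a sketch of the tPUE bookkeeping for a single hypothetical server and facility. Every number below is an assumed placeholder; the per-rail powers would come from the clamp-on ammeter readings and the VRD efficiencies from the manufacturer data described above.

# Sketch of the tPUE calculation for one (hypothetical) server, then a facility.
# All numbers are assumed placeholders for illustration.

# Measured power into the board on each supply rail, in watts (assumed).
rail_power_w = {"cpu": 95.0, "memory": 40.0, "chipset_nics": 25.0, "disks": 30.0}
vrd_efficiency = {"cpu": 0.87, "memory": 0.90, "chipset_nics": 0.90, "disks": 1.00}
fan_power_w = 18.0               # charged to infrastructure under tPUE
psu_loss_w = 22.0                # charged to infrastructure under tPUE

# Productive IT power: what actually reaches the semiconductors and disks.
productive_w = sum(rail_power_w[r] * vrd_efficiency[r] for r in rail_power_w)

# Conventional PUE "IT load" would also count PSU, VRD, and fan losses.
it_load_w = sum(rail_power_w.values()) + fan_power_w + psu_loss_w

servers = 10_000
total_facility_kw = 3_200.0      # measured at medium voltage, before UPS (assumed)

pue = total_facility_kw * 1000 / (it_load_w * servers)
tpue = total_facility_kw * 1000 / (productive_w * servers)

print(f"Productive IT power per server: {productive_w:.0f} W")
print(f"PUE:  {pue:.2f}")
print(f"tPUE: {tpue:.2f}")       # always >= PUE, since the denominator is smaller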

 

The measurements above are not as precise as we might like but I argue these techniques will produce a much more accurate picture of infrastructure efficiency than the current PUE definition, and yet the metric remains both measurable and workload independent.

 

Summary:

We have defined tPUE to be:

 

tPUE = Total Facility Power / Productive IT Equipment Power

 

We defined total facility power to be measured before all UPS and power conditioning at medium voltage. And we defined Productive IT Equipment Power to be server power not including PSU, VRD, and other conversion losses, nor fan or cooling power consumption.

 

Please consider helping to evangelize and use tPUE. And, for you folks designing and building commercial servers, if you can help by measuring the Productive IT Equipment Power for one or more of your SKUs, I would love to publish your results. If you can supply a Productive IT Equipment Power measurement for one of your newer servers, I’ll publish it here with a picture of the server.

 

Let’s make the new infrastructure rallying cry achieving a tPUE<1.0.

 

                                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Sunday, June 14, 2009 4:53:34 PM (Pacific Standard Time, UTC-08:00)  #    Comments [9] - Trackback
Hardware
 Saturday, June 13, 2009

Erasure coding provides redundancy for greater-than-single-disk failure without 3x or higher redundancy. I still like full mirroring for hot data but the vast majority of the world’s data is cold and much of it never gets referenced after it is written: Measurement and Analysis of Large-Scale Network File System Workloads. For less-than-hot workloads, erasure coding is an excellent solution. Companies such as EMC, Data Domain, Maidsafe, Allmydata, Cleversafe, and Panasas are all building products based upon erasure coding.

 

At FAST 2009 in late February, A Performance Evaluation and Examination of Open-Source Erasure Coding Libraries For Storage will be presented. This paper looks at five open source erasure coding libraries and compares their relative performance. The open source erasure coding packages implement Reed-Solomon, Cauchy Reed-Solomon, EVENODD, Row-Diagonal Parity (RDP), and Minimal Density RAID-6 codes.
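
As a toy illustration of the idea behind these codes, here’s a single-parity XOR sketch in Python: k data blocks plus one parity block survive the loss of any one block. The real codes studied in the paper (Reed-Solomon and the RAID-6 codes) generalize this to tolerate two or more failures, with far more interesting arithmetic.

# Toy erasure code: k data blocks + 1 XOR parity block tolerates any single
# block loss. Real Reed-Solomon / RAID-6 codes tolerate >= 2 losses.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def encode(data_blocks):
    return data_blocks + [xor_blocks(data_blocks)]   # append parity block

def recover(blocks, lost_index):
    # The missing block is the XOR of every surviving block.
    survivors = [b for i, b in enumerate(blocks) if i != lost_index]
    return xor_blocks(survivors)

data = [b"4KB..chunk.1", b"4KB..chunk.2", b"4KB..chunk.3"]   # equal-sized blocks
stored = encode(data)

lost = 1                         # pretend the disk holding block 1 failed
rebuilt = recover(stored, lost)
assert rebuilt == data[lost]
print("recovered:", rebuilt)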

 

The authors found:

·         The special-purpose RAID-6 codes vastly outperform their general-purpose counterparts. RDP performs the best of these by a narrow margin.

·         Cauchy Reed-Solomon coding outperforms classic Reed-Solomon coding significantly, as long as attention is paid to generating good encoding matrices.

·         An optimization called Code-Specific Hybrid Reconstruction  is necessary to achieve good decoding speeds in many of the codes.

·         Parameter selection can have a huge impact on how well an implementation performs. Not only must the number of computational operations be considered, but also how the code interacts with the memory hierarchy, especially the caches.

·         There is a need to achieve the levels of improvement that the RAID-6 codes show for higher numbers of failures.

 

The paper also provides a good introduction to how erasure coding works. Recommended. I expect erasure codes to spring up in many more applications in the near future.

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Saturday, June 13, 2009 9:42:58 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Software
 Wednesday, June 03, 2009

Don MacAskill did one of his usual excellent talks at MySQL Conf 09. My rough notes follow.

 

Speaker: Don MacAskill

Video at: http://mysqlconf.blip.tv/file/2037101

·         SmugMug:

o   Bootstrapped in ’02 and still operating without external funding

o   Profitable and without debt

o   Top 400 website

o   Doubling yearly

·         SmugMug Challenge:

o   Users get unlimited storage & bandwidth            

o   Photos up to 48Mpix (more than 500m)

o   Video up to 1920x1080p

·         300+ four core hosts (mostly diskless)

o   Mostly AMD but really excited by Intel Nehalem [JRH: so am I]

·         5 datacenters (3 in Silicon Valley, 1 in Seattle, and 1 in Virginia) [JRH: corrected from 4 to 5 -- thanks Modesto Alexandre]

·         Only 2 ops guys

·         Lots of AWS use (Simple Storage Service, Elastic Compute Cloud, etc.)

·         Service deployment model: servers automatically load their config from a central role database. On reboot, the configured role is loaded.  Role change is a DB update followed by a reboot. [JRH: very nice]

·         Binary data all stored in Amazon S3 (PB of data at this point)

·         Akamai for content distribution network

·         Structured data

o   MySQL (InnoDB mostly)

o   Scaled up and out using cheap multi-core CPUs with lots of memory

o   4+ cores, 64GB memory, >2TB storage

·         Heavy use of MemcacheD (over 1TB of memory)

o   Over 96% hit rate, falling back to MySQL for cold data access (see the cache-aside sketch following these notes)

o   Been using it since first released 4 to 5 years back

·         Compute:

o   Amazon EC2 for photo and video processing and encoding

o   Depend upon EC2 for scaling up to high-traffic times and, more importantly, being able to scale down at low-traffic times such as the middle of the night (SmugMug is predominantly a North American service at this point). During scale-down periods they run 10s of cores and during scale-up periods 100s if not 1000s of cores.

§  Totally autonomous scaling up and down using SkyNet (written by SmugMug)

·         Web Servers:

o   Diskless with PXE boot

·         MySQL:

o   Most important technology in use at SmugMug

o   Super dependent on replication for performance, reliability, and high availability

o   No data loss in over 7 years

o   No joins or other 4.x+ features

§  Like the Drizzle project (http://en.wikipedia.org/wiki/Drizzle_(database_server)) since it re-focuses MySQL on the core they actually use – lean and mean.

o   Vertically partitioned. They have looked at sharding several times but have always managed to find a way to avoid it so far

·         InnoDB

o   Running 1.0.3+ patches (Percona XtraDB) in production (great for concurrency bound issues)

§  Great relationship with Percona (“Crazy concentration of talent under 1 roof”) who does MySQL support

·         MySQL Details:

o   Data integrity is number 1 issue

o   Next most important is write latency since scaling reads is relatively easy.

o   Replication kept at less than 1sec behind

o   Big RAM (64GB+) to keep indexes in memory

o   Previously had many concurrency issues (better now).

·         MySQL Usage:

o   Not very relational. Mostly a key-value store

o   Very denormalized

o   No  joins or complex selects

o   96% MemcacheD hit rate to cool MySQL

·         MySQL Issues:

o   Need a better filesystem:

§  They use the CentOS linux distro

§  MySQL is storage intensive (IOPS & capacity)

§  Ext3 is broken and sucks. Fsck sucks as well

§  Ext4 is also old and busted

§  Want good volume management

§  Ext3 serialized writes to a given file

§  Love ZFS

·         Transactional, copy-on-write, end-to-end data integrity, on the fly corruption detection and repair, integrated volume management, snapshots and clones supported, and open source software

·         Unfortunately ZFS doesn’t run on Linux and SmugMug is a Linux shop

o   Replication:

§  Unknown state on crash

§  Did *.info get written at commit or 2 months out of date (in one instance)?

·         Transactional replication to the rescue

§  Bringing up TB+ slaves is slow

§  Backups using LVM/ZFS a pain

§  Single thread for replication can fall behind

§  Transactional replication patches from Google are GREAT and solves these issues

·         InnoDB only

·         Taking these patches to production next week.

·         Sun Sushi Toro aka S7410

o   NAS box with a few twists:

§  2x quad-core Opterons with 64GB RAM

§  100GB Readzilla SSD

§  2x 18GB Writezilla SSDs (20k write IOPS)

§  22x 1TB 7200 RPM HDD

§  Clustered for HA

§  SSD performance with HDD economy

§  Toro supports ZFS on Linux

§  Can access using : NFS, iSCSI, CIFS, HTTP, FTP, etc.

§  Supports compression (1.5 compression ratio on their workload)

§  Cost: $80k ($142k clustered) – nobody pays list price though

§  SmugMug has 5 of these devices

§  5 different MySQL workloads hosted on a single shared cluster

§  Backups are a breeze (great snapshot support with roll back)

·         Rollback can selectively skip operations

·         Investigating 10GigE and actively testing

o   Intel NICs with Arista switches at less than $500/port

o   Using copper twinax SFP+

·         Expect 100% SSD in the future (not for bulk data)

·         Excited about Drizzle (scaled down MySQL)

·         Request from Oracle:

o   MySQL is a crown jewel – take care of it

o   GPL ZFS (lots of applause)
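
As referenced in the MemcacheD bullet above, here’s a sketch of the cache-aside pattern that produces that kind of hit rate: check memcached first and only fall through to MySQL on a miss, repopulating the cache afterwards. It assumes the python-memcached and PyMySQL client libraries, and the table, columns, connection details, and TTL are hypothetical; none of this is SmugMug’s actual code.

# Cache-aside read path: try memcached, fall back to MySQL on a miss and
# repopulate the cache. Table/column names, credentials, and TTL are hypothetical.
import json
import memcache   # python-memcached
import pymysql

cache = memcache.Client(["127.0.0.1:11211"])
db = pymysql.connect(host="127.0.0.1", user="app", password="secret",
                     database="smug_example", cursorclass=pymysql.cursors.DictCursor)

def get_photo(photo_id, ttl=300):
    key = f"photo:{photo_id}"
    cached = cache.get(key)
    if cached is not None:                   # cache hit (the ~96% case)
        return json.loads(cached)

    with db.cursor() as cur:                 # cache miss: go to the database
        cur.execute("SELECT id, owner, caption FROM photos WHERE id = %s", (photo_id,))
        row = cur.fetchone()

    if row is not None:
        cache.set(key, json.dumps(row), time=ttl)   # repopulate for future reads
    return row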

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Wednesday, June 03, 2009 6:57:34 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Services
 Thursday, May 28, 2009

I’ve brought together links to select past postings and posted them to: http://mvdirona.com/jrh/AboutPerspectives/. It’s linked to the blog front page off the “about” link. I’ll add to this list over time. If there is a Perspectives article not included that you think should be, add a comment or send me email.

 

Talks and Presentations

Data Center Architecture and Efficiency

Service Architectures

Storage

Server Hardware

High-Scale Service Optimizations, Techniques, & Random Observations

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Thursday, May 28, 2009 4:45:07 AM (Pacific Standard Time, UTC-08:00)  #    Comments [5] - Trackback
Ramblings
 Friday, May 22, 2009

Two years ago I met with the leaders of the newly formed Dell Data Center Solutions team and they explained they were going to invest deeply in R&D to meet the needs of very high scale data center solutions.  Essentially Dell was going to invest in R&D for a fairly narrow market segment. “Yeah, right” was my first thought but I’ve been increasingly impressed since then. Dell is doing very good work and the announcement of Fortuna this week is worthy of mention.

  

Fortuna, the Dell XS11-VX8, is an innovative server design. I actually like the name as proof that the DCS team is an engineering group rather than a marketing team. What marketing team would choose XS11-VX8 as a name unless they just didn’t like the product?

 

The name aside, this server is excellent work. It is based on the VIA Nano and the entire server draws just over 15W at idle and just under 30W at full load. It’s a real server with 1GigE ports and full remote management via IPMI 2.0 (stick with the DCMI subset). In a fully configured rack, they can house 252 servers requiring only 7.3kW. Nice work DCS!
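
A quick check of the rack-level arithmetic, using the just-under-30W full-load figure from above:

# Fully configured rack: 252 of these servers at roughly full load.
servers = 252
watts_full_load = 29          # "just under 30W at full load"

rack_kw = servers * watts_full_load / 1000.0
print(f"Rack power: {rack_kw:.1f} kW")   # ~7.3 kW, matching the figure above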

 

 

6 min video with more data: http://www.youtube.com/watch?v=QT8wEgjwr7k.

 

                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

Friday, May 22, 2009 10:15:01 AM (Pacific Standard Time, UTC-08:00)  #    Comments [7] - Trackback
Hardware
 Thursday, May 21, 2009

Cloud services provide excellent value but it’s easy to underestimate the challenge of getting large quantities of data to the cloud. When moving very large quantities of data, even the fastest networks are surprisingly slow. And many companies have incredibly slow internet connections. Back in 1996, Minix author and networking expert Andrew Tanenbaum said, “Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.” For large data transfers, it’s faster (and often cheaper) to write to local media and ship the media via courier.

 

This morning the beta release of AWS Import/Export was announced. This service essentially implements sneakernet, allowing the efficient transfer of very large quantities of data into or out of the AWS Simple Storage Service (S3). This initial beta release only supports import but the announcement reports that “the service will be expanded to include export in the coming months”.

 

To use the service, the data is copied to a portable storage device formatted with the NTFS, FAT, ext2, or ext3 file system. The manifest that describes the data load job is digitally signed using the sending user’s AWS secret access key and shipped to Amazon for loading. Load charges are:

Device Handling

·         $80.00 per storage device handled.

Data Loading Time

·         $2.49 per data-loading-hour. Partial data-loading-hours are billed as full hours.

Amazon S3 Charges

·         Standard Amazon S3 Request and Storage pricing applies.

·         Data transferred between AWS Import/Export and Amazon S3 is free of charge (i.e. $0.00 per GB).

In addition to allowing much faster data ingestion, AWS Import/Export reduces networking costs since there is no charge for the transfer of data between the Import/Export service and S3. A calculator is provided to compare estimated electronic transfer costs vs. import/export costs. It’s a clear win for larger data sets.
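
The station wagon point is easy to quantify. Here’s a small sketch comparing electronic transfer time against shipping a device; the data size, link speed, courier time, and loading time are assumptions for illustration, and the dollar figures are the import charges listed above.

# Network transfer vs. shipping a drive (illustrative assumptions only).
data_tb = 2.0                       # data set size (assumed)
link_mbps = 10.0                    # effective upload bandwidth (assumed)
courier_days = 2.0                  # shipping time to the loading dock (assumed)
load_hours = 8.0                    # time to load the device at AWS (assumed)

data_bits = data_tb * 1e12 * 8
network_days = data_bits / (link_mbps * 1e6) / 3600 / 24

import_cost = 80.00 + 2.49 * load_hours          # device handling + loading hours

print(f"Network transfer: {network_days:.1f} days")     # ~18.5 days at 10 Mbps
print(f"Import/Export:    {courier_days + load_hours / 24:.1f} days, ${import_cost:.2f}")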

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Thursday, May 21, 2009 5:49:27 AM (Pacific Standard Time, UTC-08:00)  #    Comments [4] - Trackback
Services
 Wednesday, May 20, 2009

From an interesting article in Data Center Knowledge Who has the Most Web Servers:

The article goes on to speculate on server counts at the companies that don’t publicly disclose them but are likely over 50k. Google is likely around a million servers, Microsoft is over 200k, and “Amazon says very little about its data center operations”.

 

                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Wednesday, May 20, 2009 4:45:41 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Services

Disclaimer: The opinions expressed here are my own and do not necessarily represent those of current or past employers.
