Saturday, March 17, 2012

I love solar power but, reflecting carefully on a couple of high profile datacenter deployments of solar power, I’ve developed serious reservations that this is the right path to reducing data center environmental impact. I just can’t make the math work and find myself wondering if these large solar farms are really somewhere between a bad idea and pure marketing, where the environmental impact is purely optical.

 

Facebook Prineville

The first of my two examples is the high profile installation of a large solar array at the Facebook Prineville, Oregon facility. The installation of 100 kilowatts of solar power was the culmination of the Unfriend Coal campaign run by Greenpeace. Many in the industry believe the campaign worked. In the purest sense, I suppose it did. But let’s look at the data more closely and make sure this really is environmental progress. What was installed in Prineville was a 100 kilowatt solar array at a more than 25 megawatt facility (Facebook Installs Solar Panels at New Data Center). Even though this is actually a fairly large solar array, it’s only providing 0.4% of the overall facility power.

 

Unfortunately, the actual numbers are further negatively impacted by weather and high latitude. Solar arrays produce far less than their rated capacity due to night duration, cloud cover, and other negative impacts from weather. I really don’t want to screw up my Seattle recruiting pitch too much but let’s just say that occasionally there are clouds in the Pacific Northwest :-). Clearly there are fewer clouds at 2,868’ elevation in the Oregon desert but, even at that elevation, the sun spends the bulk of the time poorly positioned for power generation.

 

Using this solar panel output estimator, we can see that the panels at this location and elevation yield an effective output of 13.75%. That means that, on average, this array will only put out 13.75 kilowatts. That would have this array contributing 0.055% of the facility power or, worded differently, it might run the lights in the datacenter but it has no measurable impact on the overall energy consumed. Although this is pointed to as an environmentally conscious decision, it has close to no influence on the overall environmental impact of this facility. As a point of comparison, this entire solar farm produces approximately as much output as one high density rack of servers consumes. Powering just one rack of servers is not success: it doesn’t measurably change the coal consumption and it almost certainly isn’t good price/performance.
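
If you want to check the arithmetic, here’s a minimal Python sketch using the numbers quoted above. The 13.75% capacity factor is the estimator’s output for this location, not a measured value, and the 25MW facility figure is the estimate from the article referenced above.

# Back-of-envelope check of the Prineville numbers quoted above.
array_rated_kw = 100.0      # installed solar capacity (kW)
facility_mw = 25.0          # estimated facility power (MW)
capacity_factor = 0.1375    # effective output from the panel estimator

avg_output_kw = array_rated_kw * capacity_factor        # ~13.75 kW average
share = avg_output_kw / (facility_mw * 1000.0) * 100.0  # share of facility load

print("average array output: %.2f kW" % avg_output_kw)
print("share of facility power: %.3f%%" % share)        # ~0.055%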

 

Having said that the Facebook solar array is very close to a pure marketing expense, I hasten to add that Facebook is one of the most power-efficient and environmentally-focused large datacenter operators. Ironically, they are in fact very good environmental stewards, but the solar array isn’t really a material contributor to what they are achieving.

 

Apple iDataCenter, Maiden, North Carolina

The second example I wanted to look at is Apple’s facility at Maiden, North Carolina, often referred to as the iDataCenter. In the Facebook example discussed above, the solar array was so small as to have nearly no impact on the composition or amount of power consumed by the facility. In this example, however, the solar farm deployed at the Apple Maiden facility is absolutely massive. In fact, this photovoltaic deployment is reported to be the largest commercial deployment in the US at 20 megawatts. Given the scale of this deployment, it has a far better chance of working economically.

 

The Apple Maiden facility is reported to cost $1B for the 500,000 sq ft datacenter. Apple wisely chose not to publicly announce their power consumption numbers but estimates have been as high as 100 megawatts. If you conservatively assume that only 60% of the square footage is raised floor and they are averaging a fairly low 200W/sq ft, the critical load would still be 60MW (the same as the 700,000 sq ft Microsoft Chicago datacenter). At a moderate Power Usage Effectiveness (PUE) of 1.3, Apple Maiden would be at 78MW of total power. Even using these fairly conservative numbers for a modern datacenter build, that is huge, and the actual number is likely somewhat higher.

 

Apple elected to put in a 20MW solar array at this facility. Again, using the location and elevation data from Wikipedia and the solar array output model referenced above, we see that the Apple location is more solar friendly than Oregon. Using this model, the 20MW photovoltaic deployment has an average output of 15.8%, which yields 3.2MW.

 

The solar array requires 171 acres of land, which is 7.4 million sq ft. What if we were to build a solar array large enough to power the entire facility using these solar and land consumption numbers? To supply all the power of the facility, the array would need to be 24.4 times larger: a 488 megawatt capacity array requiring 4,172 acres, which is 181 million sq ft. That means a 500,000 sq ft facility would require 181 million sq ft of power generation or, converted to a ratio, each data center sq ft would require 362 sq ft of land.
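
Here’s the same scaling calculation as a small Python sketch. All of the inputs are the estimates stated above (60% raised floor, 200W/sq ft, a PUE of 1.3, a 15.8% capacity factor), not Apple-published figures, and the outputs match the numbers in the paragraph above to within rounding.

# Rough model of the Apple Maiden estimates above.
raised_floor_sqft = 500_000 * 0.60                 # assume 60% of the building is raised floor
critical_load_mw = raised_floor_sqft * 200 / 1e6   # 200 W/sq ft -> 60 MW critical load
total_power_mw = critical_load_mw * 1.3            # PUE of 1.3 -> 78 MW total

avg_solar_mw = 3.2                                 # 20 MW array at a 15.8% capacity factor, rounded as above
array_acres = 171

scale = total_power_mw / avg_solar_mw              # ~24.4 times larger
full_array_acres = array_acres * scale             # ~4,200 acres
full_array_sqft = full_array_acres * 43_560        # ~181 million sq ft
sqft_per_dc_sqft = full_array_sqft / 500_000       # ~360+ sq ft of land per datacenter sq ft

print(round(total_power_mw), round(scale, 1), round(full_array_acres), round(sqft_per_dc_sqft))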

 

Do we really want to give up that much space at each data center? Most data centers are in highly populated areas, where a ratio of 1 sq ft of datacenter floor space to 362 sq ft of power generation space is ridiculous on its own and made close to impossible by the power generation space needing to be un-shadowed. There isn’t enough rooftop space across all of New York to take this approach. It is simply not possible in that venue.

 

Let’s focus instead on large datacenters in rural areas where the space can be found. Apple is reported to have cleared trees off of 171 acres of land in order to provide photovoltaic power for 4% of their overall estimated data center consumption. Is that gain worth clearing and consuming 171 acres? In Apple Planning Solar Array Near iDataCenter, the author Rich Miller of Data Center Knowledge quotes local North Carolina media reporting that “local residents are complaining about smoke in the area from fires to burn off cleared trees and debris on the Apple property.”

 

I’m personally not crazy about clearing 171 acres in order to supply only 4% of the power at this facility. There are many ways to radically reduce aggregate data center environmental impact without as much land consumption. Personally, I look first to increasing the efficiency of power distribution, cooling, storage, networking, and servers, and to increasing overall utilization, as the best routes to lowering industry environmental impact.

 

Looking more deeply at the solar array at Apple Maiden, the panels are built by SunPower. SunPower is reportedly carrying $820m in debt and has received a $1.2B federal government loan guarantee. The panels are built on taxpayer guarantees and installed using taxpayer-funded tax incentives. It might possibly be a win for the overall economy but, as I work through the numbers, it seems less clear. And, after the spectacular failure of solar cell producer Solyndra, which went bankrupt with a $535 million federal loan guarantee, it’s obvious there are large costs being carried by taxpayers in these deployments. Generally, as much as I like data centers, I’m not convinced that taxpayers should be paying to power them.

 

As I work through the numbers from two of the most widely reported datacenter solar array deployments, they just don’t balance out positively without tax incentives. I’m not convinced that having the tax base fund datacenter deployments is a scalable solution. And, even if it could be shown that this will eventually become tax neutral, I’m not convinced we want to see datacenter deployments consuming hundreds of acres of land for power generation. And, when trees are taken down to allow the solar deployment, it’s even harder to feel good about it. From what I have seen so far, this is not heading in the right direction. If we had $x dollars to invest in lowering datacenter environmental impact and the marketing department was not involved in the decision, I’m not convinced the right next step would be solar.

 

James Hamilton
e: jrh@mvdirona.com
w: http://www.mvdirona.com
b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Saturday, March 17, 2012 12:22:58 PM (Pacific Standard Time, UTC-08:00)
Hardware
 Sunday, February 26, 2012

Every couple of weeks I get questions along the lines of “should I checksum application files, given that the disk already has error correction?” or “given that TCP/IP has error correction on every communications packet, why do I need to have application level network error detection?” Another frequent question is “non-ECC motherboards are much cheaper -- do we really need ECC on memory?” The answer is always yes. At scale, error detection and correction at lower levels fails to correct or even detect some problems. Software stacks above introduce errors. Hardware introduces more errors. Firmware introduces errors. Errors creep in everywhere and absolutely nobody and nothing can be trusted.

Over the years, each time I have had an opportunity to see the impact of adding a new layer of error detection, the result has been the same. It fires fast and it fires frequently. In each of these cases, I predicted we would find issues at scale. But, even starting from that perspective, each time I was amazed at the frequency with which the error correction code fired.

On one high scale, on-premise server product I worked upon, page checksums were temporarily added to detect issues during a limited beta release. The code fired constantly, and customers were complaining that the new beta version was “so buggy they couldn’t use it”. Upon deep investigation at some customer sites, we found the software was fine, but each customer had one, and sometimes several, latent data corruptions on disk. Perhaps it was introduced by hardware, perhaps firmware, or possibly software. It could even have been corruption introduced by one of our previous releases when those pages were last written. Some of these pages may not have been written for years.

I was amazed at the amount of corruption we found and started reflecting on how often I had seen “index corruption” or other reported product problems that were probably corruption introduced in the software and hardware stacks below us. The disk has complex hardware and hundreds of thousands of lines of code, while the storage area network has complex data paths and over a million lines of code. The device driver has tens of thousands of lines of code. The operating system has millions of lines of code. And our application had millions of lines of code. Any of us can screw up, each has an opportunity to corrupt, and it’s highly likely that the entire aggregated millions of lines of code have never been tested in precisely the combination and on the hardware that any specific customer is actually currently running.
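
As a concrete illustration of the kind of check described above (a minimal Python sketch, not the actual product code), a page-level checksum can be as simple as storing a CRC alongside each page on write and verifying it on every read. Anything below the application that silently flips bits then shows up as a hard error rather than as quietly corrupt data.

import zlib

PAGE_SIZE = 8192  # hypothetical page size; real systems vary

def write_page(f, page_no, payload):
    # Store a CRC32 of the payload with the page so later reads can detect
    # corruption introduced anywhere in the stack below the application.
    assert len(payload) == PAGE_SIZE - 4
    crc = zlib.crc32(payload) & 0xFFFFFFFF
    f.seek(page_no * PAGE_SIZE)
    f.write(payload + crc.to_bytes(4, "little"))

def read_page(f, page_no):
    f.seek(page_no * PAGE_SIZE)
    raw = f.read(PAGE_SIZE)
    payload, stored = raw[:-4], int.from_bytes(raw[-4:], "little")
    if (zlib.crc32(payload) & 0xFFFFFFFF) != stored:
        raise IOError("latent corruption detected in page %d" % page_no)
    return payload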

Another example. In this case, a fleet of tens of thousands of servers was instrumented to monitor how frequently the DRAM ECC was correcting. Over the course of several months, the result was somewhere between amazing and frightening. ECC is firing constantly.

The immediate lesson is you absolutely do need ECC in server applications and it is just about crazy to even contemplate running valuable applications without it. The extension of that learning is to ask what is really different about clients? Servers mostly have ECC but most clients don’t. On a client, each of these corrections would instead be a corruption. Client DRAM is not better and, in fact, is often worse on some dimensions. These data corruptions are happening out there on client systems every day. Each day client data is silently corrupted. Each day applications crash without obvious explanation. At scale, the additional cost of ECC asymptotically approaches the cost of the additional memory to store the ECC. I’ve argued for years that Microsoft should require ECC for Windows Hardware Certification on all systems including clients. It would be good for the ecosystem and remove a substantial source of customer frustration. In fact, it’s that observation that leads most embedded systems parts to support ECC. Nobody wants their car, camera, or TV crashing. Given the cost at scale is low, ECC memory should be part of all client systems.

Here’s an interesting example from the space flight world. It caught my attention and I ended up digging ever deeper into the details last week, learning at each step. The Russian space mission Phobos-Grunt (also written Fobos-Grunt, both of which roughly translate to Phobos Ground) was a space mission designed to, amongst other objectives, return soil samples from the Martian moon Phobos. This mission was launched atop the Zenit-2SB launch vehicle, taking off from the Baikonur Cosmodrome at 2:16am on November 9th, 2011. On November 24th it was officially reported that the mission had failed and the vehicle was stuck in low earth orbit. Orbital decay subsequently sent the satellite plunging to earth in a fiery end to what was a very expensive mission.

What went wrong aboard Phobos-Grunt? On February 3rd the official accident report was released: The main conclusions of the Interdepartmental Commission for the analysis of the causes of abnormal situations arising in the course of flight testing of the spacecraft "Phobos-Grunt". Of course, this document is released in Russian but Google Translate actually does a very good job with it. And IEEE Spectrum Magazine reported on the failure as well. The IEEE article, Did Bad Memory Chips Down Russia’s Mars Probe, is a good summary and the translated Russian article offers more detail if you are interested in digging deeper.

The conclusion of the report is that there was a double memory fault on board Phobos-Grunt. Essentially both computers in a dual-redundant set failed at the same or similar times with a Static Random Access Memory failure. The computer was part of a newly-developed flight control system that had focused on dropping the mass of the flight control systems from 30 kg (66 lbs) to 1.5 kg (3.3 lbs). Less weight in flight control is more weight that can be in payload, so these gains are important. However, this new flight control system was blamed for delaying the mission by 2 years and for the eventual demise of the mission.

The two flight control computers are both identical TsM22 computer systems supplied by Techcom, a spin-off of the Argon Design Bureau (Phobos Grunt Design). The official postmortem reports that both computers suffered an SRAM failure in a WS512K32V20G24M SRAM. These SRAMs are manufactured by White Electronic Design and the model number can be decoded as “W” for White Electronic Design, “S” for SRAM, “512K32” for a 512k memory by 32 bit wide access, “V” is the improvement mark, “20” for 20ns memory access time, “G24” is the package type, and “M” indicates it is a military grade part.

In the paper “Extreme latchup susceptibility in modern commercial-off-the-shelf (COTS) monolithic 1M and 4M CMOS static random-access memory (SRAM) devices”, Joe Benedetto reports that these SRAM packages are very susceptible to “latchup”, a condition which requires power recycling to return to operation and can be permanent in some cases. Steven McClure of NASA Jet Propulsion Laboratory is the leader of the Radiation Effects Group. He reports these SRAM parts would be very unlikely to be approved for use at JPL (Did Bad Memory Chips Down Russia’s Mars Probe).

It is rare that even two failures will lead to disaster, and this case is no exception. Upon double failure of the flight control systems, the spacecraft autonomously goes into “safe mode” where the vehicle attempts to stay stable in low-earth orbit and orients its solar cells towards the sun so that it continues to have sufficient power. This is a common design pattern where the system stabilizes itself in an extreme condition to allow flight control personnel back on earth to figure out what steps to take to mitigate the problem. In this case, the mitigation would likely have been fairly simple: restarting both computers (which probably happened automatically) and resuming the mission would likely have been sufficient.

Unfortunately there was still one more failure, this one a design fault. When the spacecraft goes into safe mode, it is incapable of communicating with earth stations, probably due to spacecraft orientation. Essentially if the system needs to go into safe mode while it is still in earth orbit, the mission is lost because ground control will never be able to command it out of safe mode.

I find this last fault fascinating. Smart people could never make such an obviously incorrect mistake, and yet this sort of design flaw shows up all the time in large systems. Experts in each vertical area or component do good work. But the interactions across vertical areas are complex and, if there is not sufficiently deep, cross-vertical-area technical expertise, these design flaws may go unseen. Good people design good components and yet there often exist obvious fault modes across components that get missed.

Systems sufficiently complex to require deep vertical technical specialization risk complexity blindness. Each vertical team knows their component well but nobody understands the interactions of all the components. The two solutions are 1) well-defined and well-documented interfaces between components, be they hardware or software, and 2) very experienced, highly-skilled engineer(s) on the team focusing on understanding inter-component interaction and overall system operation, especially in fault modes. Assigning this responsibility to a senior manager often isn’t sufficiently effective.

The faults that follow from complexity blindness are often serious and depressingly easy to see in retrospect, as was the case in this example.

Summarizing some of the lessons from this loss: The SRAM chip probably was a poor choice. The computer systems should restart, scrub memory for faults, and be able to detect corrupt code and load it from secondary locations before going into safe mode. Safe mode has to actually allow mitigating actions to be taken from a ground station or it is useless. Software systems should be constantly scrubbing memory for faults and checksumming the running software for corruption. A tiny amount of processor power spent on continuous, redundant checking and a few more lines of code to implement simple recovery paths when a fault is encountered may have saved the mission. Finally, we all have to remember the old adage “nothing works if it is not tested.” Every major fault has to be tested. Error paths are the ones most commonly left untested so it is particularly important to focus on them. The general rule is to keep error paths simple, use the fewest possible, and test them frequently.

Back in 2007, I wrote up a set of best practices on software design, testing, and operations of high scale systems: On Designing and Deploying Internet-Scale Services. This paper targets large-scale services but it’s surprising to me that some, and perhaps many, of the suggestions could be applied successfully to a complex space flight system. The common theme across these two only partly-related domains is that the biggest enemy is complexity, and the exploding number of failure modes that follow from that complexity.

This incident reminds us of the importance of never trusting anything from any component in a multi-component system. Checksum every data block and have well-designed and well-tested failure modes for even unlikely events. Rather than have complex recovery logic for the near infinite number of faults possible, have simple, brute-force recovery paths that you can use broadly and test frequently. Remember that all hardware, all firmware, and all software have faults and introduce errors. Don’t trust anyone or anything. Have test systems that inject bit flips and corruption and ensure the production system can operate through these faults – at scale, rare events are amazingly common.
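
A small sketch of the “checksum everything and test with injected corruption” idea from the last two paragraphs; the block layout and the choice of CRC32 here are illustrative, not drawn from any particular system.

import os, random, zlib

def make_block(data):
    # Append a CRC32 so every block is self-checking.
    return data + (zlib.crc32(data) & 0xFFFFFFFF).to_bytes(4, "little")

def verify_block(block):
    data, stored = block[:-4], int.from_bytes(block[-4:], "little")
    return (zlib.crc32(data) & 0xFFFFFFFF) == stored

def flip_random_bit(block):
    # Simulate the kind of silent single-bit corruption ECC catches on servers.
    corrupted = bytearray(block)
    corrupted[random.randrange(len(block))] ^= 1 << random.randrange(8)
    return bytes(corrupted)

# Fault-injection test: every clean block verifies, every injected flip is caught.
for _ in range(1000):
    block = make_block(os.urandom(4096))
    assert verify_block(block)
    assert not verify_block(flip_random_bit(block))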

To dig deeper into the Phobos-Grunt loss:


James Hamilton
e: jrh@mvdirona.com
w: http://www.mvdirona.com
b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

Sunday, February 26, 2012 10:48:54 AM (Pacific Standard Time, UTC-08:00)
Ramblings
 Tuesday, February 21, 2012

In the past, I’ve written about the cost of latency and how reducing latency can drive more customer engagement and increase revenue. Two examples of this are: 1) The Cost of Latency and 2) Economic Incentives applied to Web Latency. Nowhere is latency reduction more valuable than in high frequency trading applications. Because these trades can be incredibly valuable, the cost of the infrastructure on which they trade is more or less an afterthought. Good people at the major trading firms work hard to minimize costs but, if the cost of infrastructure were to double tomorrow, high frequency trading would continue unabated.

 

High frequency trading is very sensitive to latency and nearly insensitive to cost. That makes it an interesting application area and it’s one I watch reasonably closely. It’s a great domain to test ideas that might not yet make economic sense more broadly. Some of these ideas will never see more general use but many get proved out in high frequency trading and can be applied to more cost sensitive application areas once the techniques have been refined or there is more volume.

 

One suggestion that comes up in jest on nearly every team upon which I have worked is the need to move bits faster than the speed of light. Faster than the speed of light communications would help cloud hosted applications and cloud computing in general but physics blocks progress in this area resolutely.

 

What if it really were possible to move data between two points in roughly 33% less time than light takes to cover the same distance in an optical fiber? It turns out this is actually possible and may even make economic sense in high frequency trading. Before you cancel your RSS feed to this blog, let’s look more deeply at what is being sped up, how much, and why it really is possible to substantially beat today’s optical communication links.

 

When you get into the details, every “law” is actually more complex than the simple statement that gets repeated over and over. This is one of the reasons I tell anyone who joins Amazon that the only engineering law around here is there are no unchallengeable laws. It’s all about understanding the details and applying good engineering judgment.

 

For example, the speed of light is 186,000 miles per second, right? Absolutely. But the fine print is that this is the speed of light in a vacuum. The actual speed of light is dependent upon the medium in which the light is propagating. In an optical fiber, the speed of light is roughly 33% slower than in a vacuum. More specifically, the index of refraction of most common optical fibers is 1.52. What this means is that the speed of light in a fiber is actually just over 122,000 miles per second.

 

The index of refraction of air is very close to 1, which is to say that the speed of light in air is just about the same as the speed of light in a vacuum. This means that free space optics -- the use of light for data communications without a fiber wave guide -- is roughly 50% faster than the speed of light in a fiber. The speed-up only matters over long distances, but free space optics is only practical over short distances. There have been test deployments over metro-area distances – we actually have one where I work – but, generally, it’s a niche technology that hasn’t proven practical and widely applicable. I’m not particularly excited about this approach.

 

Continuing this search for low refractive index data communications, we find that microwaves transmitted through air again have a refractive index near 1, which is to say that microwave is around 50% faster than light in a fiber. As before, this is only of interest over longer distances but, unlike free space optics, microwave is very practical over longer distances. On longer runs, the signal needs to be received and retransmitted periodically but this is practical, cost effective, and fairly heavily used in the telecom industry. What hasn’t been exploited in the past is that microwave is actually faster than the speed of light in a fiber.

 

The 50% speed-up of microwave over fiber optics seems exploitable, and an enterprising set of entrepreneurs is doing exactly that. This plan was outlined in yesterday’s Gigaom article titled Wall Street gains edge by trading over microwave.

 

In this approach, McKay Brothers are planning on linking New York City with Chicago using microwave transmission. This is a 790 mile distance but fiber seldom takes the most direct route. Let’s assume a fiber path distance of 850 miles, which yields 6.9 msec of propagation delay if there are no routers or other networking gear in the way. Given that both optical and microwave require repeaters, I’m not including their impact in this analysis. Covering the 790 miles using microwave will require 4.2 msec. Using these data, the microwave link would be a full 2.7 msec faster. That’s a very substantial time difference and, in the world of high frequency trading, 2.7 msec is very monetizable. In fact, I’ve seen HFT customers extremely excited about very small fractions of a msec. Getting 2.7 msec back is potentially a very big deal.
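
The latency arithmetic is easy to reproduce. Here’s a short Python sketch using the path distances assumed above and the 1.52 refractive index; the 850 mile fiber path is my assumption from the paragraph above, not a published route length.

C_MILES_PER_SEC = 186_000        # speed of light in a vacuum (approximate)
FIBER_INDEX = 1.52               # refractive index of common optical fiber

fiber_speed = C_MILES_PER_SEC / FIBER_INDEX      # ~122,000 miles/second in fiber
fiber_ms = 850 / fiber_speed * 1000              # assumed 850-mile fiber path
microwave_ms = 790 / C_MILES_PER_SEC * 1000      # near-straight 790-mile microwave path

print("fiber: %.1f msec, microwave: %.1f msec, advantage: %.1f msec"
      % (fiber_ms, microwave_ms, fiber_ms - microwave_ms))
# roughly 6.9 vs 4.2 msec, a ~2.7 msec one-way advantage before repeater delays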

 

From the McKay Brothers web site:

 

Profitability in High Frequency Trading (“HFT”) is about being the first to respond to market events. Events which occur in Chicago markets impact New York markets. The first to learn about this information in New York can take appropriate positions and benefit. There is nothing new in this principle. Paul Reuters, founder of the Reuters news agency, used carrier pigeons to fill a gap in the telegraph lines and bring financial news from Berlin to Paris. The groundbreaking idea of the time was to use an old technology – the carrier pigeon – to fill a gap. What Paul Reuters did 160 years ago is being done again.

 

Today, we are revisiting an old technology, microwave transmission, to connect Chicago and New York at speeds faster than fiber optic transmission will ever be able to deliver.

 

This technology is emerging just two years after Spread Networks is reported to have spent 300 million dollars developing a low latency fiber optic connection between Chicago and New York. Spread’s fiber connection will soon be much slower than routes available by microwave.

 

The Gigaom article is at: http://gigaom.com/broadband/wall-street-gains-an-edge-by-trading-over-microwaves. The McKay Brothers web site is at: http://www.mckay-brothers.com/. Thanks to Amazon’s Alan Judge for pointing me to this one.

 

                                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Tuesday, February 21, 2012 3:41:23 PM (Pacific Standard Time, UTC-08:00)
Hardware | Services
 Saturday, February 11, 2012

Last week I wrote up Studying the Costa Concordia Grounding. Many folks sent me mail with interesting perspectives. Two were sufficiently interesting that I wanted to repeat them here. The first is from someone who was actually on the ship on that final cruise. The second is from a professional captain with over 35 years’ experience as a certified Ocean Master.

 

Experiences From a Costa Concordia Passenger

 

One of the engineers I work with at Amazon was actually on the Costa Concordia when it grounded. Rory Browne works in the Amazon.com Dublin office and he gave an excellent and very detailed presentation on what took place on that final trip. He asked me not to post his slides but was fine with me posting my notes from his presentation.

 

Here are my notes from Rory Browne’s experiences on the final cruise of the Costa Concordia:

·         Boarded the ship at 1400

·         Went to bed at 1700 (long trip from Ireland)

·         Woke at 2140 and started getting dressed

·         Fell towards mirror a few minutes later and the lights went out

·         The next hour:

o   Public address announcement stating that an electrical fault had been experienced but the situation was under control

o   Explored the ship and noticed some “foam or froth on one side of the boat” – thought it might be a maneuvering thruster but, in retrospect, this was likely the side of the boat that had been ripped open by the grounding

o   Noticed the crew had asked restaurant customers to put their dishes on the floor

o   Returned to cabin to get out of the way of the crew

·         Seven whistles were subsequently sounded indicating abandon ship

o   Proceeded to muster station #4

o   People still blocking stairwells and pushing to get onto lifeboats

o   Lifeboat entrances were very crowded with long lines but I noticed a second lifeboat entrance with only a couple of people in line and was able to get on quickly

o   People on lifeboat didn’t move away from the entrance but it was easy to slip past them to the far corner

o   Estimate that they could easily have fit another 10 on the lifeboat (there were roughly 25 on it)

·         When the lifeboat was lowered, the roof hit something and the fiberglass roof was bashed in behind my head. Now slightly worried about lifeboat integrity

 

Observations from a Licensed Master

 

What follows is one of the more interesting notes I got after blogging the Costa Concordia incident. This one from a professional captain. He’s given me permission to reprint it here but preferred not to include his name:

 

The original Letter:

Your January 29th blog discussing the Costa Concordia incident was an excellent presentation, and the links you provided were excellent as well.  Because of your boating experience you have an understanding better than most of what took place with the cruise ship.

 

Like you, I often look further into incidents and disasters in order to have a better understanding of what actually took place, primarily because I know press releases and news reports of an incident rarely if ever delve into the underlying facts of a case. More often than not the media isn’t interested in much beyond sensationalism. Costa Concordia is the perfect example, as was Fukushima Dai-1. The miracle of Costa Concordia of course was that more lives weren’t lost.

 

There were two points you made in your discussion that might not be correct.  Notice that I say, “might.”  The first point being your mention that Captain Schettino was “clearly very experienced,” and the second point you made was that Captain Schettino’s ship handling after the initial grounding “appeared excellent.”

 

Regarding the first point, I’m not sure much is known about the quality and extent of Schettino’s actual hands-on experience at sea. At this point I think about all we can safely assume is that Schettino’s personality and demeanor were well suited to representing the cruise line to the paying passengers. Beyond that, I think we know little. It’s one thing to sit for and obtain a Master’s license, but it’s quite another to have the practical experience to captain a 114,000 GT vessel.

 

An experienced captain of a vessel, no matter what size, would never approach landfall at night (or even in daylight with good visibility) without repeatedly checking his radar.  An experienced captain would know the maneuvering characteristics of his vessel, the turn radius, the advance and transfer when making a turn, the use and calculation of turn bearings, etc.  On the other hand, I’m not sure at this point we know which officer actually had command of the vessel during the interval leading up to the initial grounding. 

 

The second point you touched on was that Schettino’s handling of the vessel after the initial grounding “appeared excellent.” It’s well that you included the qualifier “appeared.” I’m not sure we know or will ever know what Schettino’s thinking was after the grounding, so at this point I believe all we can do is speculate about what he was doing based upon the available AIS data. Schettino might have been taking the action a prudent seaman would take, once propulsion power was lost; however, I’m not sure we know yet what effects the wind, the current and the attitude of the vessel were having. Perhaps there wasn’t enough force to overcome these and other outside influences on the maneuverability of the vessel, so perhaps the vessel, once it went almost dead in the water, was at the mercy of influences outside the control of the captain. Perhaps the vessel was simply lucky to have found itself grounded back on the island.

 

One of the many things that haven’t been explored fully regarding the Costa Concordia is the vessel’s stability, and in particular the stability after the ingress of water began when she was initially holed on the port side. At some point in time there will be a computerized animation showing the progressive changes to her stability, the free surface effects, which compartments were impacted by the initial flooding, how the flooding progressed through the vessel, the effects of maintaining or not maintaining water tight integrity in her various compartments, the effects of wind and current, etc. That will be interesting.

 

I have over 35 years experience on the water and at sea and was a licensed oceans Master, so I have a little understanding of how this ship stuff works.

 

Again, I want to complement you on your Costa Concordia blog.  You did a super job.

My response:

A super interesting note. I really enjoyed your background points.

 

One point you argued was whether he had experience at anything beyond essentially being the front man for a 1,500 room hotel. Specifically you said “An experienced captain of a vessel, no matter what size, would never approach landfall at night (or even in daylight with good visibility) without repeatedly checking his radar. An experienced captain would know the maneuvering characteristics of his vessel, the turn radius, the advance and transfer when making a turn, the use and calculation of turn bearings, etc. On the other hand, I’m not sure at this point we know which officer actually had command of the vessel during the interval leading up to the initial grounding.”

 

It’s hard not to agree with your conclusion. Bringing that large a ship that near the rocks at over 15 kts is incredibly bad judgment. But that is my point. Very experienced operators sometimes make catastrophically bad judgments, lapses that are incredibly hard to explain. For example, the Captain of the Washington State Ferry Elwha went on a 15 mile unauthorized pleasure cruise that ended in grounding (http://www.emd.wa.gov/hazards/haz_transportation.shtml). The captain of the Valdez was drunk, not at the helm, trusting his 3rd mate to take the ship through the most dangerous part of their entire trip. I have been to Bligh Reef in Prince William Sound and it’s a LOOONG way from the shipping lanes. Even the 3rd mate had too much experience to have put the boat there. There are many, many stories of operators “buzzing the tower” even though they have experience and should absolutely know better.

 

My conclusion is that experience is not a cure. Perhaps it’s because bad judgment isn’t expressed frequently enough to get filtered out before the person has a significant command. Or perhaps the bad judgment actually comes from the over-confidence that experience can bring.

 

I’m not debating your point that it was crazy to head for the rocks at 15kts but I am arguing that very experienced people really do make some incredibly bad judgments.  

 

Your point on boat handling is well taken. It’s not possible to establish whether the captain made good decisions after his one catastrophically bad one. The helm orders appear correct for the conditions. The use of the thruster seemed to work. But some have speculated the ship would have been better out in the channel so it could launch life rafts (they are speculating that it wouldn’t have developed the significant list so quickly). And, you are right, current conditions and other factors may have put the ship where it landed, with commands from the Captain not being the dominant influence. Certainly all possible.

 

My conclusion in the article was “pilot error” and my main point is that experience is either not a solution or perhaps it was a contributor to what was very poor judgment that led to loss of life.

 

Thanks for your observations from your experience with commercial vessels.

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Saturday, February 11, 2012 3:36:19 PM (Pacific Standard Time, UTC-08:00)

 Sunday, January 29, 2012

Don't be a show-off. Never be too proud to turn back. There are old pilots and bold pilots, but no old, bold pilots.

 

I first heard the latter part of this famous quote made by US Airmail Pilot E. Hamilton Lee back when I raced cars. At that time, one of the better drivers in town, Gordon Monroe, used a variant of that quote (with pilots replaced by racers) when giving me driving advice. Gord’s basic message was that it is impossible to win a race if you crash out of it.

 

Nearly all of us have taken the odd chance and made some decisions that, in retrospect, just didn’t make sense from a risk vs reward perspective. Age and experience clearly help but mistakes still get made and none of us are exempt. Most people’s mistakes at work don’t have life safety consequences and are not typically picked up widely by the world news services, as was the case in the recent grounding of the Costa Concordia cruise ship. But we all make mistakes.

 

I often study engineering disasters and accidents in the belief that understanding mistakes, failures, and accidents deeply is a much lower cost way of learning. My last note on this topic was What Went Wrong at Fukushima Dai-1, where we looked at the nuclear release following the 2011 Tohoku Earthquake and Tsunami.

 

Living on a boat and cruising extensively (our boat blog is at http://blog.mvdirona.com/) makes me particularly interested in the Costa Concordia incident of January 13th, 2012. The Concordia is a 114,137 gross ton floating city that cost $570m when it was delivered in 2006. It is 952’ long, has 17 decks, and is powered by 6 Wartsila diesel engines with a combined output of 101,400 horsepower. The ship is capable of 23 kts (26.5 mph) and has a service speed of 21 kts. At capacity, it carries 3,780 passengers with a crew of 1,100.

 

From: http://en.wikipedia.org/wiki/Costa_Concordia_disaster:

 

The Italian cruise ship Costa Concordia partially sank on Friday the 13th of January 2012 after hitting a reef off the Italian coast and running aground at Isola del Giglio, Tuscany, requiring the evacuation of 4,197 people on board. At least 16 people died, including 15 passengers and one crewman; 64 others were injured (three seriously) and 17 are missing. Two passengers and a crewmember trapped below deck were rescued.

 

The captain, Francesco Schettino, had deviated from the ship's computer-programmed route in order to treat people on Giglio Island to the spectacle of a close sail-past. He was later arrested on preliminary charges of multiple manslaughter, failure to assist passengers in need and abandonment of ship. First Officer Ciro Ambrosio was also arrested.

 

It is far too early to know exactly what happened on the Costa Concordia and, because there was loss of life and considerable property damage, the legal proceedings will almost certainly run for years. Unfortunately, rather than illuminating the mistakes and failures and helping us avoid them in the future, these proceedings typically focus on culpability and distributing blame. That’s not our interest here. I’m mostly focused on what happened and getting all the data I could find on the table to see what lessons the situation yields.

 

A fellow boater, Milt Baker, pointed me towards an excellent video that offers considerable insight into exactly what happened in the final hour and a half. You can find the video at: Grounding of Costa Concordia. Another interesting data source is the video commentary available at: John Konrad Narrates the Final Maneuvers of the Costa Concordia. In what follows, I’ve combined snapshots of the first video intermixed with data available from other sources including the second video.

 

The source data for the two videos above is a wonderful safety system called the Automatic Identification System. AIS is a safety system required on larger commercial craft and used on many recreational boats as well. AIS works by frequently transmitting (up to every 2 seconds for fast moving ships) via VHF radio the ship’s GPS position, course, speed, name, and other pertinent navigational data. Receiving stations on other ships automatically plot transmitting AIS targets on electronic charts. Some receiving systems are also able to plot an expected target course and compute the time and location of the estimated closest point of approach. AIS is an excellent tool to help reduce the frequency of ship-to-ship collisions.
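
To make the closest-point-of-approach idea concrete, here’s a minimal Python sketch of the standard CPA/TCPA calculation a receiver might run on two AIS tracks. It assumes positions and velocities have already been projected onto a flat local x/y grid in consistent units; a real implementation would also handle geodesy and stale data.

def cpa(own_pos, own_vel, tgt_pos, tgt_vel):
    # Relative position and velocity of the target with respect to own ship.
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                                # same course and speed: range never changes
        return (rx * rx + ry * ry) ** 0.5, 0.0
    tcpa = max(-(rx * vx + ry * vy) / v2, 0.0)   # time at which separation is smallest
    dx, dy = rx + vx * tcpa, ry + vy * tcpa
    return (dx * dx + dy * dy) ** 0.5, tcpa      # (closest distance, time until it)

# Example: target 5 nm due north heading south at 20 kts, own ship northbound at 15 kts.
dist, hours = cpa((0, 0), (0, 15), (0, 5), (0, -20))
print(dist, hours)   # 0.0 nm separation in ~0.14 hours if neither vessel alters course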

 

Since AIS data is broadcast over VHF radio, it is widely available to both ships and land stations and this data can be used in many ways. For example, if you are interested in the boats in Seattle’s Elliott Bay, have a look at MarineTraffic.com and enter “Seattle” as the port in the data entry box near the top left corner of the screen (you might see our boat Dirona there as well).

 

AIS data is often archived and, because of that, we have a very precise record of the Costa Concordia’s course as well as core navigational data as it proceeded towards the rocks. In the pictures that follow, the red images of the ship are at the ship’s position as transmitted by the Costa Concordia’s AIS system. The black line between these images is the interpolated course between these known locations. The video itself (Costa Concordia Interpolated.wmv) uses a roughly 5:1 time compression.
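
The interpolated course is just a linear blend between timestamped AIS fixes. A tiny sketch of how the black line in the video could be generated (the tuple layout is illustrative); linear interpolation is reasonable here because the fixes are only seconds apart.

def interpolate_track(fixes, t):
    # fixes: list of (timestamp_seconds, lat, lon) tuples sorted by time.
    for (t0, lat0, lon0), (t1, lat1, lon1) in zip(fixes, fixes[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return lat0 + f * (lat1 - lat0), lon0 + f * (lon1 - lon0)
    raise ValueError("requested time is outside the recorded track")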

In this screen shot, you can see the Concordia already very close to the Italian island of Isola del Giglio. From the BBC report, the Captain has said he turned too late (Costa Concordia: Captain Schettino ‘Turned Too Late’). From that article:

 

According to the leaked transcript quoted by Italian media, Capt Schettino said the route of the Costa Concordia on the first day of its Mediterranean cruise had been decided as it left the port of Civitavecchia, near Rome, on Friday.

 

The captain reportedly told the investigating judge in the city of Grosseto that he had decided to sail close to Giglio to salute a former captain who had a home on the Tuscan island. "I was navigating by sight because I knew the depths well and I had done this maneuver three or four times," he reportedly said.

 

"But this time I ordered the turn too late and I ended up in water that was too shallow. I don't know why it happened."

 

In this screen shot of the boat at 20:44:47, just prior to the grounding, you can see the boat turned to 348.8 degrees but the massive 114,137 gross ton vessel is essentially plowing sideways through the water on a course of 332.7 degrees. The Captain can and has turned the ship with the rudder but, at 15.6 kts, it does not follow the exact course steered, with inertia tending to widen and straighten the intended turn.

 

Given the speed of the boat and nearness of shore at this point, the die is cast and the ship is going to hit ground.

 

This screen shot was taken just past the point of impact. You will note that the ship has slowed to 14.0 kts. You might also notice the Captain is turning aggressively to starboard. He has the ship turned to an 8.9 degree heading whereas the actual ship’s course lags behind at 356.2 degrees.

 

This screen shot is only 44 seconds after the previous one but the boat has already slowed from 14.0 kts to 8.1 and is still slowing quickly. Some of the slowing will have come from the grounding itself but passengers report hearing the engines go hard astern after the grounding.

 

You can also see the captain has swung the helm over from the starboard course he was steering trying to avoid the rocks over to a port course now that he has struck them. This is almost certainly in an effort to minimize damage. What makes this (possibly counter-intuitive) decision a good one is the ship’s pivot point is approximately 1/3 of the way back from the bow, so turning to port (towards the shore) will actually cause the stern to rotate away from the rocks they just struck.

 

The ship decelerated quickly to just under 6.0 knots but, in the two minutes prior to this screen shot, it has only slowed a further 0.9 kts down to 5.1. There were reports of a loss of power on the Concordia. Likely what happened is the ship was hard astern taking off speed until a couple of minutes prior to this screen shot, when water intrusion caused a power failure. The ship is a diesel-electric and likely lost power to its main prop due to rapid water ingress.

 

At 5 kts and very likely without main engine power, the Concordia is still going much too quickly to risk running into the mud and sand shore, so the Captain now turns hard away from shore and heads back out into the open channel.

 

With the helm hard over to starboard and with the likely assistance of the bow thrusters, the ship is turning hard, which is pulling speed off fairly quickly. It is now down to 3.0 kts and it continues to slow.

 

The Concordia is now down to 1.6 kts and the Captain is clearly using the bow thrusters heavily as the bow continues to rotate quickly. He has now turned to a 41 degree heading.

 

It has now been just over 29 min since the ship first struck the rocks. The ship has essentially stopped and the bow is being brought all the way back round using bow thrusters in an effort to drive the ship back in towards shore, presumably because the Captain believes it is at risk of sinking and is seeking shallow water.

 

The Captain continues to force the Concordia to shore under bow thruster power. In this video narrative (John Konrad Narrates the Final Maneuvers of the Costa Concordia), the commentator reported that the bow thrusters and the prevailing currents were being used in combination by the Captain to drive the boat into shore.

 

A further 11 min and 22 seconds have passed and the ship has now accelerated back up to 0.9 kts, heading towards shore.

 

It has been more than an hour and 11 minutes since the original contact with the rocks and the Costa Concordia is now at rest in its final grounding point.

 

The Coast Guard transcript of the radio communications with the Captain is at Costa Concordia Transcript: Coastguard Orders Captain to Return to Stricken Ship. In the following text De Falco is the Coast Guard Commander and Schettino is the Captain of the Costa Concordia:

 

De Falco: "This is De Falco speaking from Livorno. Am I speaking with the commander?"

Schettino: "Yes. Good evening, Cmdr De Falco."

De Falco: "Please tell me your name."

Schettino: "I'm Cmdr Schettino, commander."

De Falco: "Schettino? Listen Schettino. There are people trapped on board. Now you go with your boat under the prow on the starboard side. There is a pilot ladder. You will climb that ladder and go on board. You go on board and then you will tell me how many people there are. Is that clear? I'm recording this conversation, Cmdr Schettino …"

Schettino: "Commander, let me tell you one thing …"

De Falco: "Speak up! Put your hand in front of the microphone and speak more loudly, is that clear?"

Schettino: "In this moment, the boat is tipping …"

De Falco: "I understand that, listen, there are people that are coming down the pilot ladder of the prow. You go up that pilot ladder, get on that ship and tell me how many people are still on board. And what they need. Is that clear? You need to tell me if there are children, women or people in need of assistance. And tell me the exact number of each of these categories. Is that clear? Listen Schettino, that you saved yourself from the sea, but I am going to … really do something bad to you … I am going to make you pay for this. Go on board, (expletive)!"

Schettino: "Commander, please …"

De Falco: "No, please. You now get up and go on board. They are telling me that on board there are still …"

Schettino: "I am here with the rescue boats, I am here, I am not going anywhere, I am here …"

De Falco: "What are you doing, commander?"

Schettino: "I am here to co-ordinate the rescue …"

De Falco: "What are you co-ordinating there? Go on board! Co-ordinate the rescue from aboard the ship. Are you refusing?"

Schettino: "No, I am not refusing."

De Falco: "Are you refusing to go aboard, commander? Can you tell me the reason why you are not going?"

Schettino: "I am not going because the other lifeboat is stopped."

De Falco: "You go aboard. It is an order. Don't make any more excuses. You have declared 'abandon ship'. Now I am in charge. You go on board! Is that clear? Do you hear me? Go, and call me when you are aboard. My air rescue crew is there."

Schettino: "Where are your rescuers?"

De Falco: "My air rescue is on the prow. Go. There are already bodies, Schettino."

Schettino: "How many bodies are there?"

De Falco: "I don't know. I have heard of one. You are the one who has to tell me how many there are. Christ!"

Schettino: "But do you realize it is dark and here we can't see anything …"

De Falco: "And so what? You want to go home, Schettino? It is dark and you want to go home? Get on that prow of the boat using the pilot ladder and tell me what can be done, how many people there are and what their needs are. Now!"

Schettino: "… I am with my second in command."

De Falco: "So both of you go up then … You and your second go on board now. Is that clear?"

Schettino: "Commander, I want to go on board, but it is simply that the other boat here … there are other rescuers. It has stopped and is waiting …"

De Falco: "It has been an hour that you have been telling me the same thing. Now, go on board. Go on board! And then tell me immediately how many people there are there."

Schettino: "OK, commander."

De Falco: "Go, immediately!"

 

At least 16 died in the accident and 17 were still missing when this was written (Costa Concordia Disaster). The Captain of the Costa Concordia, Francesco Schettino, has been charged with manslaughter and abandoning ship.

 

At the time of the grounding, the ship was carrying 2,200 metric tons of heavy fuel oil and 185 metric tons of diesel, and the environmental risk remains (Costa Concordia Salvage Experts Ready to Begin Pumping Fuel from Capsized Cruise Ship Off Coast of Italy). The 170-year-old salvage firm Smit Salvage will be leading the operation.

 

All situations are complex and few disasters have only a single cause. However, the facts presented so far point pretty strongly towards pilot error as the primary contributor in this event. The Captain is clearly very experienced and his ship handling after the original grounding appears excellent. But it’s hard to explain why the ship was that close to the rocks, the captain has reported that he turned too late, and public reports have him on the phone at or near the time of the original grounding.

 

What I take away from the data points presented here is that experience, ironically, can be our biggest enemy. As we get increasingly proficient at a task, we often stop paying as much attention. And, with less dedicated focus, over time we run the risk of a crucial mistake that we probably wouldn’t have made when we were less experienced and perhaps less skilled. There is danger in becoming comfortable.

 

The videos referenced above can be found at:

·         Grounding of Costa Concordia Interpolated

·         gCaptain’s John Konrad Narrates the Final Maneuvers of the Costa Concordia

 

If you are interested in reading more:

·         http://www.masslive.com/news/index.ssf/2012/01/costa_concordia_salvage_expert.html

·         http://www.bbc.co.uk/news/world-europe-16620807

·         http://www.washingtonpost.com/world/divers-in-grounded-costa-concordia-112/2012/01/25/gIQAOkD2PQ_video.html

·         http://www.foxnews.com/slideshow/world/2012/01/14/luxury-ship-runs-aground-off-italy-bodies-found/#slide=22

·         http://www.telegraph.co.uk/news/worldnews/europe/italy/9042826/Wife-of-Costa-Concordia-captain-says-it-is-not-for-those-on-land-to-judge-her-husband.html

·         http://www.telegraph.co.uk/news/interactive-graphics/9018076/Concordia-How-the-disaster-unfolded.html

·         http://news.qps.nl.s3.amazonaws.com/Grounding+Costa+Concordia.pdf

·         http://www.bellenews.com/2012/01/14/world/europe-news/italian-captain-of-costa-concordia-cruise-ship-has-been-arrested/

·         http://en.wikipedia.org/wiki/Costa_Concordia

·         http://en.wikipedia.org/wiki/Costa_Concordia_disaster

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Sunday, January 29, 2012 11:24:25 AM (Pacific Standard Time, UTC-08:00)
Ramblings
 Thursday, January 26, 2012

Ordinarily I focus this blog on the areas of computing where I spend most of my time, from high performance computing to database internals and cloud computing. An area that interests me greatly but about which I’ve seldom written is entrepreneurship and startups.

 

One of the Seattle-area startups with which I stay in touch is Socrata. They are focused on enabling federal, state, and local governments to improve the reach, usability and social utility of their public information assets -- essentially making public information available and useful to their constituents. They are used by the World Bank, the United Nations, the World Economic Forum, the US Data.gov, Health & Human Services, the Centers for Disease Control, most major cities including NYC, Seattle, Chicago, San Francisco and Austin, and many county and state governments. Even foreign governments like Kenya have adopted Socrata.

 

I first met Kevin Merritt, the founder and CEO of Socrata, back in 2005 when I was doing technical diligence for the Microsoft acquisition of the LA-based Frontbridge Technologies. I love doing diligence on startups because it’s an opportunity to dive in and spend a day or more digging deeply and understanding what smart people have produced, where things worked really well, and areas where things didn’t pan out as well as they could have. I’ve learned a lot in these roles and I’m  lucky to have been able to do many of them first at IBM, later at Microsoft, and now at Amazon.

 

What made this one a bit different is that I got a call shortly after the deal closed asking if I wanted to be the General Manager of the Microsoft subsidiary formed in the acquisition. An opportunity to run a mid-sized business in its entirety: development, test, operations, and customer support. Absolutely! I’ve never learned so much as I did in the first year or so at what would become Microsoft Exchange Hosted Services.

 

It was a great experience and I’ve been 100% focused on cloud services since that time. And, as a consequence of leading Frontbridge, I got to know Kevin Merritt well. He is an excellent strategic thinker and an even better operator. Whenever Kevin was involved, customers were happy and the service was rapidly improving and expanding.  Kevin eventually left to form Socrata and he and I have stayed in touch since then. He knows I’m a sucker for a beer and some wings :-).

 

Based in Seattle, Socrata is venture-backed with a small and talented engineering team. They are enjoying strong customer demand and their market success is fueling growth in the engineering team. They are currently looking for a CTO and, if I didn’t already have one of the best jobs out there, I would seriously consider joining Kevin and the team. If you are a technology leader interested in big data, cloud computing, architecture of distributed systems, ops automation, and the user experience of making data easy to find and use, you should send Kevin, their founder and CEO, a note at kevin.merritt@socrata.com.

 

                                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

Thursday, January 26, 2012 5:13:24 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Ramblings
 Wednesday, January 18, 2012

Finally! I’ve been dying to talk about DynamoDB since work began on this scalable, low-latency, high-performance NoSQL service at AWS. This morning, AWS announced availability of DynamoDB: Amazon Web Services Launches Amazon DynamoDB – A New NoSQL Database Service Designed for the Scale of the Internet.

 

In a past blog entry, One Size Does Not Fit All, I offered a taxonomy of 4 different types of structured storage system, argued that Relational Database Management Systems alone are not sufficient, and walked through some of the reasons why NoSQL databases have emerged and continue to grow market share quickly. The four database categories I introduced were: 1) features-first, 2) scale-first, 3) simple structured storage, and 4) purpose-optimized stores. RDBMSs own the first category.

 

DynamoDB targets workloads fitting into the Scale-First and Simple Structured storage categories where NoSQL database systems have been so popular over the last few years.  Looking at these two categories in more detail, Scale-First is:

 

Scale-first applications are those that absolutely must scale without bound and being able to do this without restriction is much more important than more features. These applications are exemplified by very high scale web sites such as Facebook, MySpace, Gmail, Yahoo, and Amazon.com. Some of these sites actually do make use of relational databases but many do not. The common theme across all of these services is that scale is more important than features and none of them could possibly run on a single RDBMS. As soon as a single RDBMS instance won’t handle the workload, there are two broad possibilities: 1) shard the application data over a large number of RDBMS systems, or 2) use a highly scalable key-value store.

 

And, Simple Structured Storage:

 

There are many applications that have a structured storage requirement but they really don’t need the features, cost, or complexity of an RDBMS. Nor are they focused on the scale required by the scale-first structured storage segment. They just need a simple key value store. A file system or BLOB-store is not sufficiently rich in that simple query and index access is needed but nothing even close to the full set of RDBMS features is needed. Simple, cheap, fast, and low operational burden are the most important requirements of this segment of the market.

 

More detail at: One Size Does Not Fit All.

 

The DynamoDB service is a unified purpose-built hardware platform and software offering. The hardware is based upon a custom server design using Flash Storage spread over a scalable high speed network joining multiple data centers.

 

DynamoDB supports a provisioned throughput model. A DynamoDB application programmer decides the number of database requests per second their application should be capable of supporting, and DynamoDB automatically spreads the table over an appropriate number of servers. At the same time, it reserves the required network, server, and flash memory capacity to ensure that the request rate can be reliably delivered day and night, week after week, and year after year. There is no need to worry about a neighboring application getting busy or running wild and taking all the needed resources. They are reserved and there whenever needed.
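
To make the provisioning model concrete, here is a minimal sketch of creating a table with a reserved request rate. It uses today's boto3 SDK purely as an illustration (the SDK post-dates this post), and the table name, key schema, and capacity numbers are hypothetical.

```python
# Minimal sketch: create a DynamoDB table with provisioned throughput using
# boto3. Table name, key schema, and capacity numbers are illustrative only.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.create_table(
    TableName="user-sessions",                      # hypothetical table
    AttributeDefinitions=[
        {"AttributeName": "session_id", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "session_id", "KeyType": "HASH"},
    ],
    ProvisionedThroughput={                         # the reserved request rate
        "ReadCapacityUnits": 1000,                  # reads/sec the app needs
        "WriteCapacityUnits": 500,                  # writes/sec the app needs
    },
)
```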

 

The sharding techniques needed to achieve high request rates are well understood industry-wide, but implementing them does take some work. Reliably reserving capacity so it is always there when you need it takes yet more work. Supporting the ability to allocate more resources, or even fewer, while online and without disturbing the current request rate takes still more work. DynamoDB makes all this easy. It supports online scaling from very low transaction rates up to applications requiring millions of requests per second, with no downtime and no disturbance to the currently configured application request rate while resharding. These changes are made online simply by changing the DynamoDB provisioned request rate up or down through an API call.
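
Scaling that reservation up or down later is just another API call. The same caveats apply: boto3 is used only as an illustration and the table name and numbers are hypothetical.

```python
# Minimal sketch: resize provisioned throughput online with boto3. DynamoDB
# reshards behind the scenes without taking the table offline.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

dynamodb.update_table(
    TableName="user-sessions",
    ProvisionedThroughput={
        "ReadCapacityUnits": 50000,    # scale reads up for a traffic spike
        "WriteCapacityUnits": 20000,   # scale writes up as well
    },
)
```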

 

In addition to supporting transparent, online scaling of provisioned request rates up and down over 6+ orders of magnitude with resource reservation, DynamoDB is also both consistent and multi-datacenter redundant. Eventual consistency is a fine programming model for some applications but it can yield confusing results under some circumstances. For example, if you set a value to 3 and then later set it to 4, then read it back, 3 can be returned. Worse, the value could be set to 4, verified to be 4 by reading it, and yet 3 could be returned later. It's a tough programming model for some applications and it tends to be overused in an effort to achieve low latency and high throughput. DynamoDB avoids forcing this tradeoff by supporting low latency and high throughput while offering full consistency. It also offers eventual consistency at lower request cost for those applications that run well with that model. Both consistency models are supported.
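
A sketch of the two read models side by side, again using boto3 as an illustration with hypothetical names:

```python
# Contrast the two DynamoDB read models (names are illustrative).
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")
key = {"session_id": {"S": "abc123"}}

# Eventually consistent read: lower request cost, may briefly return stale data.
stale_ok = dynamodb.get_item(TableName="user-sessions", Key=key)

# Strongly consistent read: reflects all writes acknowledged before the read.
current = dynamodb.get_item(TableName="user-sessions", Key=key,
                            ConsistentRead=True)
```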

 

It is not unusual for a NoSQL store to be able to support high transaction rates. What is somewhat unusual is to be able to scale the provisioned rate up and down while on-line. Achieving that while, at the same time, maintaining synchronous, multi-datacenter redundancy is where I start to get excited.

 

Clearly nobody wants to run the risk of losing data, but NoSQL systems are scale-first by definition. If the only way to achieve high throughput and scale is to take the risk of not committing the data to persistent storage at commit time, that is exactly what is often done. This is where DynamoDB really shines. When data is sent to DynamoDB, it is committed to persistent and reliable storage before the request is acknowledged. Again, this is easy to do, but doing it with average latencies in the low single-digit milliseconds is both harder and requires better hardware. Hard disk drives can't do it and in-memory systems are not persistent, so flash memory is the most cost effective solution.
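
The principle at work is durable-before-acknowledge. The toy sketch below is not DynamoDB's implementation, just the general idea: the acknowledgement is withheld until the write has been forced to stable storage, which is why media with fast sync latency (flash) matters.

```python
# Generic illustration of the durable-before-acknowledge principle described
# above; not DynamoDB's implementation.
import os

class DurableLog:
    def __init__(self, path: str):
        self.f = open(path, "ab")

    def write(self, record: bytes) -> str:
        self.f.write(record + b"\n")
        self.f.flush()
        os.fsync(self.f.fileno())   # force the record to persistent media
        return "ACK"                # acknowledge only after the data is durable

log = DurableLog("commit.log")
print(log.write(b"set key=4"))      # caller sees ACK only once the data is safe
```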

 

But what if the server to which the data was committed fails, or the storage fails, or the datacenter is destroyed? On most NoSQL systems you would lose your most recent changes. On the better implementations, the data might be saved but could be offline and unavailable. With DynamoDB, if data is committed just as the entire datacenter burns to the ground, the data is safe, and the application can continue to run without negative impact at exactly the same provisioned throughput rate. The loss of an entire datacenter isn't even inconvenient (unless you work at Amazon :-)) and has no impact on your running application performance.

 

Combining rock-solid synchronous, multi-datacenter redundancy with average latencies in the single-digit milliseconds and throughput scaling to millions of requests per second is both an excellent engineering challenge and one not often achieved.

 

More information on DynamoDB:

·         Press Release: http://phx.corporate-ir.net/phoenix.zhtml?c=176060&p=irol-newsArticle&ID=1649209&highlight=

·         DynamoDB detail Page: http://aws.amazon.com/dynamodb/

·         DynamoDB Developer Guide: http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/

·         Blog entries:

o     Werner: http://www.allthingsdistributed.com/2012/01/amazon-dynamodb.html

o    Jeff Barr: http://aws.typepad.com/aws/2012/01/amazon-dynamodb-internet-scale-data-storage-the-nosql-way.html

·         DynamoDB Frequently Asked Questions: http://aws.amazon.com/dynamodb/faqs/

·         DynamoDB Pricing: http://aws.amazon.com/dynamodb/pricing/

·         GigaOM: http://gigaom.com/cloud/amazons-dynamodb-shows-hardware-as-mean-to-an-end/

·         eWeek: http://www.eweek.com/c/a/Database/Amazon-Web-Services-Launches-DynamoDB-a-New-NoSQL-Database-Service-874019/

·         Seattle Times: http://seattletimes.nwsource.com/html/technologybrierdudleysblog/2017268136_amazon_unveils_dynamodb_databa.html

 

Relational systems remain an excellent solution for applications requiring Features-First structured storage. The AWS Relational Database Service supports both the MySQL and Oracle relational database management systems: http://aws.amazon.com/rds/.

 

Just as I was blown away when I saw it possible to create the world’s 42nd most powerful super computer with a few API calls to AWS (42: the Answer to the Ultimate Question of Life, the Universe and Everything), it is truly cool to see a couple of API calls to DynamoDB be all that it takes to get a scalable, consistent, low-latency, multi-datacenter redundant, NoSQL service configured, operational and online.

 

                                                --jrh

  

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Wednesday, January 18, 2012 1:00:06 PM (Pacific Standard Time, UTC-08:00)  #    Comments [6] - Trackback
Services
 Monday, January 16, 2012

Occasionally I come across a noteworthy datacenter design that is worth covering. Late last year a very interesting Japanese facility was brought to my attention by Mikio Uzawa, an IT consultant who authors the Agile Cat blog. I know Mikio because he occasionally translates Perspectives articles for publication in Japan.

 

Mikio pointed me to the Ishikari Datacenter in Ishikari City, Hokkaido Japan. Phase I of this facility was just completed in November 2011. This facility is interesting for a variety of reasons but the design features I found most interesting are: 1) High voltage direct current power distribution, 2) whole building ductless cooling, and 3) aggressive free air cooling.

 

High Voltage Direct Current Power Distribution

I first came across the use of direct current when Annabel Pratt took me through the joint work Intel was doing with Lawrence Berkeley National Lab on datacenter HVDC distribution (Evaluation of Direct Current Distribution in Data Centers to Improve Energy Efficiency). In this approach they distribute 400V direct current rather than the more conventional 208V to 240V alternating current used in most facilities today.

 

High voltage direct current work in datacenters has been around for about a decade and it is in extensive testing at many facilities worldwide. Many companies are 100% focused on HVDC design consulting, with Validus being one of the better known.

 

The savings potential of HVDC is often shown to be very exciting, with numbers beyond 30% frequently quoted. But the marketing material I've gone through in detail compares excellent HVDC designs with very poor AC designs. Predictably the savings are around 30%. Unfortunately, the difference between good AC and bad AC designs is also around 30% :-).

 

When I look closely at HVDC distribution, I see slight improvements in efficiency at around 3 to 5%, somewhat higher costs of equipment since it is less broadly used, less equipment availability and longer delivery times, and somewhat more complex jurisdictional issues with permitting and other approvals taking longer in some regions. Nonetheless, the picture continues to improve, the industry as a whole continues to learn, and I think there is a good chance that high voltage DC distribution will end up becoming a more common choice in modern datacenters.

 

The Ishikari facility is a high voltage DC distribution design. I’m looking forward to learning more about this aspect of the facility and watching how the system performs.

 

Whole Building Ductless Cooling

Air handling ducts cost money and restrict flow, so why not recognize that the entire purpose of a datacenter shell is to keep the equipment dry and secure and to transport heat? Instead of installing extensive duct work, just treat the entire building as a very large air duct.

 

Perhaps the nicest mechanical design I’ve come across based upon ductless cooling is the Facebook Prineville facility. In this design, they use the entire second floor of the building for air handling and the lower floor for the server rooms.

The Ishikari design shares many design aspects with the Intel Jones Farms facility where the IT equipment is on the second floor and the electrical equipment is on the first.

 


Aggressive Free-Air Cooling

Looking at the air flow diagram above, you can see that the Ishikari Datacenter is making good use of the datacenter friendly climate of Japan and aggressively using free-air cooling. Free-air cooling, often called air side economization, is one of the most effective ways of driving down datacenter costs and substantially increasing overall efficiency. It’s good to see this design point spreading rapidly.

 

More information is available at: http://ishikari.sakura.ad.jp/index_eng.html

 

Some datacenter designs I’ve covered in the past:

·         Facebook Prineville Mechanical Design

·         Facebook Prineville UPS & Power Supply

·         Example of Efficient Mechanical Design

·         46MW with Water Cooling at a PUE of 1.10

·         Yahoo! Compute Coop Design

·         Microsoft Gen 4 Modular Data Centers

 

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Monday, January 16, 2012 10:00:38 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Hardware
 Monday, January 02, 2012

Years ago, Dave Patterson remarked that most server innovations were coming from the mobile device world. He's right. Commodity system innovation is driven by volume and nowhere is there more volume than in the mobile device world. The power management techniques applied fairly successfully over the last 5 years had their genesis in the mobile world. And, as processor power efficiency improves, memory is on track to become the biggest power consumer in the data center. I expect the ideas to rein in memory power consumption will again come from the mobile device world. Just as Eskimos are reported (apparently incorrectly) to have 7 words for snow, mobile memory systems have a large array of low power states with subtly different power dissipations and recovery times. I expect the same techniques will arrive fairly quickly in the server world.

 

ARM processors are used extensively in cell phones and embedded devices. I’ve written frequently of the possible impact of ARM on the server-side computing world.

·         Linux/Apache on ARM Processors

·         ARM Cortex-A9 SMP Design Announced

·         Very Low-Cost, Low-Power Servers

·         NVIDIA Project Denver: ARM Powered Servers

 

ARM processors remain power efficient while at the same time rapidly gaining the performance and features needed to run demanding server-side workloads. A key next step was taken late last year when ARM announced the ARM V8 architecture. Key attributes of the new ARM architecture are:

·         64 bit virtual addressing

·         40 bit physical addresses

·         HW virtualization support

The first implementation of the ARM V8 architecture was announced the same day by AppliedMicro. The APM design is available in an FPGA implementation for development work this month and is expected to be in final system-on-a-chip form in 2H2012. The APM X-Gene offers:

 

·         64bit addressing

·         3 Ghz

·         Up to 128 cores

·         Super-scalar, quad issue processor

·         CPU and I/O virtualization support

·         Out of order processing

·         80 GB/sec memory throughput

·         Integrated Ethernet and PCIe

·         Full LAMP software stack port

 

APM X-Gene announcement:

·         Press Release: AppliedMicro Showcases World’s First 64-bit ARM v8 Core

·         Slides: Applied Micro Announces X-Gene

 

More ARM and low power servers reading:

·         ARM V8 Press Release: http://www.arm.com/about/newsroom/arm-discloses-technical-details-of-the-next-version-of-the-arm-architecture.php

·         AnandTech: http://www.anandtech.com/show/5027/appliedmicro-announces-xgene-arm-based-socs-for-cloud-computing

·         Ars technica: http://arstechnica.com/business/news/2011/10/arm-aims-for-the-server-room-with-its-new-64-bit-armv8-architecture.ars

·         CIDR Paper on low power computing: http://mvdirona.com/jrh/talksandpapers/jameshamilton_cems.pdf

·         The Case for Energy Proportional Computing: http://research.google.com/pubs/pub33387.html

·         ARM V8 Architecture: http://www.arm.com/files/downloads/ARMv8_Architecture.pdf

 

In the 2nd half of 2012 we will have a very capable, 64bit, server-targeted ARM processor implementation available to systems builders.

 

                                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Monday, January 02, 2012 9:21:02 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Hardware
 Wednesday, December 14, 2011

If you work in the database world, you already know Phil Bernstein. He's the author of Principles of Transaction Processing and has a long track record as a successful and prolific database researcher. Past readers of this blog may remember Phil's guest blog posting on Google Megastore. Over the past few years, Phil has been working on an innovative NoSQL system based upon flash storage. I like the work because it pushes the limit of what can be done on a single server, with transaction rates approaching 400,000 per second; it leverages the characteristics of flash storage in a thought provoking way; and it employs interesting techniques such as log-only storage.

 

Phil presented Hyder at the Amazon ECS series a couple of weeks back (a past ECS presentation: High Availability for Cloud Computing Database Systems).

 

In the Hyder system, all cores operate on a single shared transaction log. Each core (or thread) processes Optimistic Concurrency Control (OCC) database transactions one at a time. Each transaction posts its after-image to the shared log. One core does OCC and rolls forward the log. The database is a binary search tree serialized into the log (a B-tree would work equally well in this application). Because the log is effectively a no-overwrite, log-only datastore, a changed node requires that its parent now point to the new node, which forces the parent to be updated as well. That parent's parent then needs updating, and this cascading set of changes proceeds up to the root on each update.

 

The tree is maintained via copy-on-write semantics where updates are written to the front of the log with references to unchanged tree nodes pointing back to the appropriate locations in the log. Whenever a node changes, the changed node is written to the front of the log. Consequently, every database change results in new copies of all nodes on the path up to the root of the search tree.

 

This has the downside of requiring many tree nodes to be updated on each database update but has the upside of the writes all being sequential at the front of the log. Since it is a no-overwrite store, when an update is made, the old nodes remain, so transactional time travel is easy. The old search tree root still points to a complete tree that was current as of the point in time when that root was the current root of the search tree. As new nodes are written, some old nodes are no longer part of the current search tree and can be garbage collected over time.
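
A toy model may make the path-copying idea concrete. This is not Hyder's code; it is a minimal sketch, assuming the log is an in-memory Python list and nodes are immutable, showing how an update appends a new copy of every node from the changed position up to the root while old roots remain valid snapshots:

```python
# Toy model of a copy-on-write binary search tree serialized into an
# append-only log (not Hyder's implementation). Nodes are immutable; an update
# appends new copies of every node on the path from the change to the root.
from collections import namedtuple

Node = namedtuple("Node", "key value left right")   # left/right are log offsets
log = []                                            # the append-only log

def append(node):
    log.append(node)
    return len(log) - 1                             # offset of the new node

def insert(root_off, key, value):
    """Return the offset of a new root reflecting the insert/update."""
    if root_off is None:
        return append(Node(key, value, None, None))
    n = log[root_off]
    if key == n.key:
        return append(Node(key, value, n.left, n.right))
    if key < n.key:
        return append(Node(n.key, n.value, insert(n.left, key, value), n.right))
    return append(Node(n.key, n.value, n.left, insert(n.right, key, value)))

def get(root_off, key):
    while root_off is not None:
        n = log[root_off]
        if key == n.key:
            return n.value
        root_off = n.left if key < n.key else n.right
    return None

root_v1 = insert(None, "a", 1)
root_v2 = insert(root_v1, "b", 2)       # appends a new path; root_v1 untouched
assert get(root_v2, "b") == 2
assert get(root_v1, "b") is None        # time travel: old root is a snapshot
```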

Transactions are implemented by writing an intention log record to the front of the log with all changes required by the transaction; these tree nodes point either to other nodes within the intention record or to unchanged nodes further back in the log. This can be done quickly and all updates can proceed in parallel without the need for locking or synchronization.

 

Before the transaction can be completed, it must be checked for conflicts using Optimistic Concurrency Control. If there are no conflicts, the root of the search tree is atomically moved to point to the new root and the transaction is acknowledged as successful. If the transaction is in conflict, it is failed, the tree root is not advanced, and the intention record becomes garbage.
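
The commit step can be sketched in the same toy style. This is a deliberate simplification of Hyder's meld algorithm (the papers linked below describe the real thing): a single melder, modeled here with a lock, checks whether anything the transaction read changed after its snapshot and, if not, atomically advances the root.

```python
# Toy optimistic-concurrency commit over a shared root pointer (a deliberate
# simplification of Hyder's meld; keys read and written are tracked explicitly).
import threading

class Store:
    def __init__(self):
        self.root = 0                 # version of the current database root
        self.versions = {}            # key -> root version that last wrote it
        self.lock = threading.Lock()  # stands in for the single melder core

    def begin(self):
        return {"snapshot": self.root, "reads": set(), "writes": {}}

    def commit(self, tx):
        with self.lock:
            # Conflict check: did any key this transaction read change after
            # the transaction's snapshot was taken?
            for key in tx["reads"]:
                if self.versions.get(key, 0) > tx["snapshot"]:
                    return False                   # abort: intention is garbage
            self.root += 1                         # atomically advance the root
            for key in tx["writes"]:
                self.versions[key] = self.root
            return True

store = Store()
t1, t2 = store.begin(), store.begin()
t1["reads"].add("x"); t1["writes"]["x"] = 1
t2["reads"].add("x"); t2["writes"]["x"] = 2
assert store.commit(t1) is True      # first committer wins
assert store.commit(t2) is False     # t2 read "x" at a stale snapshot: abort
```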

 

Most of the transactional update work can be done concurrently without locks but two issues come to mind quickly:

 

1)      Garbage collection: because the system is constantly rewriting large portions of the search tree, old versions of the tree are spread throughout the log and need to be recovered.

2)      Transaction Rate: The transaction rate is limited by the rate at which conflicts can be checked and the tree root advanced.

 

The latter is the biggest concern and the rest of the presentation focuses on the rate at which this bottleneck can be processed. The presenter showed that rates in the 400,000 transactions per second range were obtained in performance testing, so this is a hard limit but it is a fairly high hard limit. This design can go a long way before partitioning is required.

 

If you want to dig deeper, the Hyder presentation is at:

http://mvdirona.com/jrh/TalksAndPapers/Hyder4Amazon5Dec2011.pdf

 

More detailed papers can be found at:

 

Philip A. Bernstein, Colin W. Reid, Sudipto Das: Hyder - A Transactional Record Manager for Shared Flash. CIDR 2011: 9-20

http://www.cidrdb.org/cidr2011/Papers/CIDR11_Paper2.pdf

 

Philip A. Bernstein, Colin W. Reid, Ming Wu, Xinhao Yuan: Optimistic Concurrency Control by Melding Trees. PVLDB 4(11): 944-955 (2011)

http://www.vldb.org/pvldb/vol4/p944-bernstein.pdf

 

Colin W. Reid, Philip A. Bernstein: Implementing an Append-Only Interface for Semiconductor Storage. IEEE Data Eng. Bull. 33(4): 14-20 (2010)

http://sites.computer.org/debull/A10dec/hyder.pdf

 

Mahesh Balakrishnan, Philip A. Bernstein, Dahlia Malkhi, Vijayan Prabhakaran, Colin W. Reid: Brief Announcement: Flash-Log - A High Throughput Log. DISC 2010: 401-403

http://www.springerlink.com/content/c732l27h3mrn3170/

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Wednesday, December 14, 2011 9:43:25 AM (Pacific Standard Time, UTC-08:00)  #    Comments [4] - Trackback
Software
 Sunday, November 27, 2011

While at Microsoft I hosted a weekly talk series called the Enterprise Computing Series (ECS) where I mostly scheduled technical talks on server and high-scale service topics. I said “mostly” because the series occasionally roamed as far afield as having an ex-member of the Ferrari Formula 1 team present. Client-side topics are also occasionally on the list either because I particularly liked the work or technology behind it or thought it was a broadly relevant topic.

 

The Enterprise Computing Series has an interesting history. It was started by Jim Gray at Tandem.  Pat Helland picked up the mantle from Jim and ran it for years before Pat moved to Andy Heller’s Hal Computer Systems. He continued the ECS at HAL and then brought it with him when he joined Microsoft where he continued to run it for years. Pat eventually passed it to me and I hosted the ECS series for 8 or 9 years myself before moving to Amazon Web Services. Ironically when I arrived at Amazon, I found that Pat Helland had again created a series in the same vein as the ECS called the Principals of Amazon (PoA) series.

 

The PoA series is excellent but it doesn’t include external speakers and is hosted on a fixed day of the week so I occasionally come across a talk that I would like to host at Amazon that doesn’t fit the PoA. For those occasions, the Enterprise Computing Series lives on!

 

In this ECS talk, Ashraf Aboulnaga of the University of Waterloo presented High Availability for Database Systems in Cloud Computing Environments. Ashraf presented two topics: 1) RemusDB: database high availability using virtualization, and 2) DBECS: database high availability using eventually consistent cloud storage. The first topic was based upon the VLDB 2011 Best Paper Award winner "RemusDB: Transparent High Availability for Database Systems" by Umar Farooq Minhas, Shriram Rajagopalan, Brendan Cully, Ashraf Aboulnaga, Ken Salem, and Andrew Warfield. The second topic is work that is not yet published nor as fully developed.

 

Focusing on the first paper, they built an active/standby database system using Remus. Remus implements transparent high availability for Xen VMs. It does this by reflecting all writes to memory in the active virtual machine to the non-active, backup VM.  Remus keeps the backup VM ready to take over with exactly the same memory state as the primary server. On failover, it can take over with the same memory contents including an already warm cache.

Remus is a simple and easy to understand approach to getting very fast takeover from a primary VM. The challenge is that memory write latencies are a fraction of network latencies, so any solution that turns memory write latencies into network write latencies simply will not perform adequately for most workloads. Remus tackles this problem using the expected solution: batching many changes into a single network transfer. By default, every 25msec Remus suspends the primary VM, copies all changed pages to a Dom0 (hypervisor) buffer, and then allows the VM to continue. The Dom0 buffer is used to minimize the length of time that the guest VM needs to be suspended, but comes at the expense of requiring sufficient Dom0 memory for the largest group of changed pages in 25msec.
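
A runnable toy simulation of that epoch cycle, under the stated assumptions (this is not Xen or Remus code; the classes are invented for illustration):

```python
# Toy simulation of the Remus epoch-based checkpoint cycle described above:
# suspend, copy dirty pages to a Dom0-style buffer, resume, then ship the
# buffer to the backup. Not Xen/Remus code; classes are illustrative only.
import time

class ToyVM:
    def __init__(self):
        self.memory = {}          # page_number -> contents
        self.dirty = set()        # pages written since the last checkpoint

    def write(self, page, data):
        self.memory[page] = data
        self.dirty.add(page)

    def checkpoint(self):
        """(Suspend) copy dirty pages to a Dom0-style buffer, then (resume)."""
        buffer = {p: self.memory[p] for p in self.dirty}   # guest paused here
        self.dirty.clear()
        return buffer                                      # guest runs again

class ToyBackup:
    def __init__(self):
        self.memory = {}

    def apply(self, buffer):
        self.memory.update(buffer)   # backup now mirrors the primary's state

primary, backup = ToyVM(), ToyBackup()
EPOCH_SECONDS = 0.025                # Remus defaults to 25 msec epochs

for epoch in range(3):
    primary.write(page=epoch, data=f"epoch-{epoch}")
    time.sleep(EPOCH_SECONDS)
    backup.apply(primary.checkpoint())   # shipped after resume in real Remus

assert backup.memory == primary.memory   # backup is ready for fast failover
```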

 

Once the guest machine's changed pages are copied to Dom0, the primary VM is released from the suspended state and the changes just copied to Dom0 are then transferred to the secondary system and applied to the ready-to-run backup VM.

 

The downsides to the Remus approach are: 1) a potentially large Dom0 buffer is required, 2) up to 25msec of forward progress can be lost on failover, and 3) the checkpoint work consumes considerable resources, including time. The time to copy the changed pages may be acceptable but the other overheads are sufficiently high that it is very difficult to host demanding workloads like database workloads on Remus.

 

The authors tackle this problem by noting that Remus actually does more than is needed for database workloads. Or, worded differently, a Remus optimized for database workloads can dramatically reduce the implementation overhead. They introduced the following optimizations:

·         Asynchronous checkpoint compression: Maintain an LRU buffer of recent pages and only ship a delta of these pages. This optimization is based upon the assumption that DB systems modify some pages frequently and typically only change a small part of these pages between checkpoints.

·         Disk read tracking: don't mark pages read from disk as dirty since the backup can obtain them through its own disk I/O rather than over the replication channel

·         Memory deprotection: allows DB to declare regions of memory that don’t need to be replicated. This turned out not to be as powerful an optimization as the others and had the further downside of requiring database engine changes

·         Network optimization/Commit protection: Remus buffers every outgoing network packet to ensure clients never see the results of unsafe execution, but this increases latency by not allowing any response back to the client until the next Remus checkpoint. Because DBs can fail and transactions can be aborted, the DB optimization is to send all packets back to the client in real time except for commit, abort, or other transaction-state-changing operations. On failover, any client in an unprotected network state (changes have been sent since the last checkpoint) has its transaction failed. A correct client will re-run the transaction and proceed without issue. (A small sketch of this policy follows the list.)
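
Here is a toy sketch of the commit protection decision rule described in the last bullet. It is not the RemusDB implementation; packet types and class names are invented for illustration: ordinary result packets flow to the client immediately, while transaction-state-changing packets are held until the backup acknowledges the current checkpoint.

```python
# Toy sketch of the commit protection policy described above (illustrative
# only): release ordinary result packets immediately, but hold
# transaction-state-changing packets until the current epoch is checkpointed.
TX_STATE_CHANGING = {"COMMIT_OK", "ABORT_OK"}

class OutputFilter:
    def __init__(self):
        self.held = []                     # packets awaiting the next checkpoint

    def send(self, packet_type, payload, client):
        if packet_type in TX_STATE_CHANGING:
            self.held.append((payload, client))   # protect until checkpointed
        else:
            client.deliver(payload)               # low latency for normal results

    def on_checkpoint_acked(self):
        """Backup now has the state that produced these packets; safe to release."""
        for payload, client in self.held:
            client.deliver(payload)
        self.held.clear()

class Client:
    def __init__(self):
        self.received = []
    def deliver(self, payload):
        self.received.append(payload)

client, out = Client(), OutputFilter()
out.send("ROW", "result row 1", client)            # delivered immediately
out.send("COMMIT_OK", "tx 17 committed", client)   # held
assert client.received == ["result row 1"]
out.on_checkpoint_acked()                          # next checkpoint completes
assert client.received == ["result row 1", "tx 17 committed"]
```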

 

What was achieved is Remus-style fast-failover protection for database workloads at far lower replication overhead. The authors used the TPC-C database transaction benchmark to show that Remus with the DB optimizations offers all the protection of Remus but with roughly 1/10th the overhead.

 

                Slides: http://mvdirona.com/jrh/TalksAndPapers/AshrafAboulnaga20111114.pdf

                VLDB Paper: http://www.cs.uwaterloo.ca/~ashraf/pubs/pvldb11remusdb.pdf

 

I'm not 100% convinced Remus is the best solution to the database high availability problem but I like the solution, learned from the proposed optimizations, and enjoyed the talk. Thanks to Pradeep Madhavarapu, who leads part of the Amazon database kernel engineering team (and is hiring :-)), for organizing this talk and to  Ashraf Aboulnaga for doing it.

 

                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Sunday, November 27, 2011 12:50:18 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Software
 Tuesday, November 22, 2011

Netflix is super interesting in that they are running at extraordinary scale, they are a leader in the move to the cloud, and Adrian Cockcroft, the Netflix Director of Cloud Architecture, is always interesting in presentations. In this presentation Adrian covers material similar to his HPTS 2011 talk that I saw last month.

 

His slides are up at: http://www.slideshare.net/adrianco/global-netflix-platform and my rough notes follow:

·         Netflix has 20 million streaming members

o    Currently in US, Canada, and Latin America

o    Soon to be in the UK and Ireland

·         Netflix is 100% public cloud hosted

·         Why did Netflix move from their own high-scale facility to a public cloud?

o    Better business agility

o    Netflix was unable to build datacenters fast enough

o    Capacity growth was both accelerating and unpredictable

o    Product launch spikes require massive new capacity (iPhone, Wii, PS3, & Xbox)

Netflix grew 37x from Jan 2010 through Jan 2011

 

·         Why did Netflix choose AWS as their cloud solution?

o    Chose AWS using Netflix own platform and tools

o    Netflix has unique platform requirements and extreme scale needing both agility & flexibility

o    Chose AWS partly because it was the biggest public cloud

§  Wanted to leverage AWS investment in features and automation

§  Wanted to use AWS availability zones and regions for availability, scalability, and global deployment

§  Didn’t want to be the biggest customer on a small cloud

o    But isn’t Amazon a competitor?

§  Many products that compete with Amazon run on AWS

§  Netflix is the “poster child” for the AWS Architecture

§  One of the biggest AWS customers

§  Netflix strategy: turn competitors into partners

o    Could Netflix use a different cloud from AWS

§  Would be nice and Netflix already uses 3 interchangeable CDN vendors

§  But no one else has the scale and features of AWS

·         “you have to be tall to ride this ride”

·         Perhaps in 2 to 3 years?

o    “We want to use cloud, we don’t want to build them”

§  Public clouds for agility and scale

§  We use electricity too but we don’t want to build a power station

§  AWS because they are big enough to allocate thousands of instances per hour when needed

 

 

o    Netflix Global PaaS

§  Supports all AWS Availability Zones and Regions

§  Supports multiple AWS accounts (test, prod, etc.)

§  Supports cross Regions and cross account data replication & archiving

§  Supports fine grained security with dynamic AWS keys

§  Autoscales to thousands of instances

§  Monitoring for millions of metrics

o    Portals and explorers:

§  Netflix Application Console (NAC): Primary AWS provisioning & config interface

§  AWS Usage Analyzer: cost breakdown by application and resource

§  SimpleDB Explorer: browse domains, items, attributes, values,…

§  Cassandra Explorer: browse clusters, keyspaces, column families, …

§  Base Service Explorer: browse endpoints, configs, perf metrics, …

o    Netflix Platform Services:

§  Discovery: Service Register for applications

§  Introspections: Endpoints

§  Cryptex: Dynamic security key management

§  Geo: Geographic IP lookup engine

§  Platform Service: Dynamic property configuration

§  Localization: manage and lookup local translations

§  EVcache: Eccentric Volatile (mem)Cache

§  Cassandra: Persistence

§  Zookeeper: Coordination

o    Netflix Persistence Services:

§  SimpleDB: Netflix moving to Cassandra

·         Latencies typically over 10msec

§  S3: using the JetS3t based interface with Netflix changes and updates

§  Eccentric Volatile Cache (evcache)

·         Discovery aware memcached based backend

·         Client abstractions for zone aware replication

·         Supports option to write to all zones, fast read from local

·         On average, latencies of under 1 msec

§  Cassandra

·         Chose because they value availability over consistency

·         On average, latency of “few microseconds”

§  MongoDB

§  MySQL: supports hard-to-scale, legacy, and small relational models

o    Implemented a Multi-Regional Data Replication system:

§  Oracle to SimpleDB and queued reverse path using SQS

o    High Availability:

§  Cassandra stores 3 local copies, 1 per availability zone

§  Each AWS availability zone is a separate building with separate power etc. but still fairly close together so synchronous access is practical

§  Synchronous access, durable, and highly available

 

Adrian’s slide deck is posted at: http://www.slideshare.net/adrianco/global-netflix-platform.

 

                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

 

Tuesday, November 22, 2011 1:09:55 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Services
 Friday, November 18, 2011

I seldom write consumer product reviews and this blog is about the furthest thing from a consumer focused site but, every so often, I come across a notable tidbit that is worthy of mention. A few weeks ago, it was Sprint unilaterally changing the terms of their wireless contracts (Sprint is Giving Free Customer Service Lessons). It just seemed a sufficiently confused decision that it was worthy of mention.

 

Here's one that just nails it on the other side of the equation by obsessing over the customer experience: Roku. I've long known about Roku but I'm not a huge TV watcher so I've only been peripherally interested in the product. But we are both Netflix and Amazon Prime Instant Video customers and Roku supports both. And the entry level Roku streaming appliance is only $49 so we figured let's give it a try. It actually ended up a bit more than $49 in that we first managed to upsell ourselves to a $59 Roku 2 to get HD, then to a $79 device to get 1080P, and then to a $99 device to get 1080P HD with a hardwired Ethernet connection. So we ended up with a $100 device. I think $50 is close to where this class of devices needs to end up but $100 is reasonable as well.

The device is amazing and shows what can be done with a focus on clean industrial design. It is incredibly small at only 3" square. I plugged it in, it booted up, updated its software, found its remote, upgraded the software on the remote, and went live without any user interaction. I set up a Roku account, linked my Amazon account for access to Prime Instant Video, linked our Netflix account, and it was ready to go.

 

The device is tiny, produces close to no heat, you don’t have to read the manual, the user interface is clean and notable for its snappiness. I expected a sluggish UI as many companies scrimp on processing power to get costs down but it is very snappy. In fact Netflix on a Roku is faster than the same support on an Xbox. The UI is clean, simple, snappy, and very elegant.

 

I love where consumer appliances are heading: simple, cheap, dedicated, purpose-built devices with clean user interfaces, and the hybrid delivery model where the user interface is delivered by the appliance but most of the functionality is hosted in the cloud. The combination of cheap microelectronics, open source operating systems, and cloud hosting allows incredibly high function devices to be delivered at low cost.

 

The Kindle Fire takes the hybrid cloud connected model a long way where the Fire’s Silk browser UI runs directly on the device close to the user where it can be highly interactive and responsive. But the power and network-bandwidth hungry browser backend is hosted on Amazon EC2 where connectivity is awesome and compute power is not battery constrained. I love the hybrid model and we are going to see more and more devices delivering a hybrid user experience where the compute intensive components are cloud hosted and user interface is in the device. My belief is that this is the future of consumer electronics and, as prices drop to the $30 to $50 range, everyone will have 10s of these special-purpose, cloud-connected devices.

 

For the first time in my life, I’m super interested in consumer devices and the possibilities of what can be done in the hybrid cloud-connected appliance model.

 

                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Friday, November 18, 2011 7:44:52 AM (Pacific Standard Time, UTC-08:00)  #    Comments [4] - Trackback
Services
 Tuesday, November 15, 2011

Yesterday the Top 500 Supercomputer Sites was announced. The Top500 list shows the most powerful commercially available supercomputer systems in the world. This list represents the very outside of what supercomputer performance is possible when cost is no object. The top placement on the list is always owned by a sovereign funded laboratory. These are the systems that only government funded agencies can purchase. But they have great interest for me because, as the cost of computing continues to fall, these performance levels will become commercially available to companies wanting to run high scale models and data intensive computing. In effect, the Top500 predicts the future so I’m always interested in the systems on the list.

 


What makes this list of the fastest supercomputers in the world released yesterday particularly unusual can be found at position #42. 42 is an anomaly of the first order. In fact, #42 is an anomaly across enough dimensions that it's worth digging much deeper.

 

Virtualization Tax is Now Affordable:

I remember reading through the detailed specifications when the Cray 1 supercomputer was announced and marveling that it didn’t even use virtual memory. It was believed at the time that only real-mode memory access could deliver the performance needed.

We have come a long way in the nearly 40 years since the Cray 1 was announced. This #42 result was run not just using virtual memory but with virtual memory in a guest operating system running under a hypervisor. This is the only fully virtualized, multi-tenant supercomputer on the Top500 and it shows what is possible as the virtualization tax continues to fall. This is an awesome result and many more virtualization improvements are coming over the next 2 to 3 years.

 

Commodity Networks can Compete at the Top of the Performance Spectrum:

This is the only Top500 entrant below number 128 on the list that is not running either Infiniband or a proprietary, purpose-built network. This result at #42 is an all Ethernet network showing that a commodity network, if done right, can produce industry leading performance numbers.

 

What's the secret? 10Gbps directly to the host is the first part. The second is a fully non-blocking networking fabric where all systems can communicate at full line rate at the same time. Worded differently, the network is not oversubscribed. See Datacenter Networks are in my Way for more on the problems with existing datacenter networks.
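
For readers less familiar with the term, oversubscription is just the ratio of the bandwidth the hosts could offer to the uplink bandwidth actually provisioned. The numbers below are hypothetical and do not describe the EC2 cluster network; they only illustrate the arithmetic.

```python
# Hypothetical oversubscription arithmetic (these numbers do not describe the
# actual EC2 cluster network; they only illustrate the ratio being discussed).
hosts_per_rack = 40
host_nic_gbps = 10            # 10Gbps directly to each host
uplinks_per_rack = 4
uplink_gbps = 40

demand = hosts_per_rack * host_nic_gbps          # 400 Gbps of possible traffic
capacity = uplinks_per_rack * uplink_gbps        # 160 Gbps leaving the rack
print(f"oversubscription ratio: {demand / capacity:.1f}:1")   # 2.5:1

# A non-blocking (non-oversubscribed) fabric is the 1:1 case: every host can
# drive its full line rate at the same time.
```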

 

Commodity Ethernet networks continue to borrow more and more implementation approaches and good network architecture ideas from Infiniband, scale economics continues to drive down costs so non-blocking networks are now practical and affordable, and scale economics are pushing rapid innovation. Commodity equipment in a well-engineered overall service is where I see the future of networking continuing to head.

 

Anyone can own a Supercomputer for an hour:

You can’t rent supercomputing time by the hour from Lawrence Livermore National Laboratory. Sandia is not doing it either. But you can have a top50 supercomputer for under $2,600/hour. That is one of the world’s most powerful high performance computing systems  with 1,064 nodes and 8,512 cores for under $3k/hour. For those of you not needing quite this much power at one time, that’s $0.05/core hour which is ½ of the previous Amazon Web Services HPC system cost.

 

Single node speeds and feeds:

·         Processors: 8-core, 2 socket Intel Xeon @ 2.6 Ghz with hyperthreading

·         Memory: 60.5GB

·         Storage: 3.37TB direct attached and Elastic Block Store for remote storage

·         Networking: 10Gbps Ethernet with full bisection bandwidth within the placement group

·         Virtualized: Hardware Assisted Virtualization

·         API: cc2.8xlarge

 

Overall Top500 Result:

·         1064 nodes of cc2.8xlarge

·         240.09 TFlops at an excellent 67.8% efficiency

·         $2.40/node hour on demand

·         10Gbps non-blocking Ethernet networking fabric

 

Database Intensive Computing:

This is a database machine masquerading as a supercomputer. You don’t have to use the floating point units to get full value from renting time on this cluster. It’s absolutely a screamer as an HPC system. But it also has the potential to be the world’s highest performing MapReduce system (Elastic Map Reduce) with a full bisection bandwidth 10Gbps network directly to each node.  Any database or general data intensive workload with high per-node computational costs and/or high inter-node traffic will run well on this new instance type. 

 

If you are network bound, compute bound, or both, the EC2 cc2.8xlarge instance type could be the right answer. And, the amazing thing is that the cc2 instance type is ½ the cost per core of the cc1 instance.

 

Supercomputing is now available to anyone for $0.05/core hour. Go to http://aws.amazon.com/hpc-applications/ and give it a try. You no longer need to be a national lab or a government agency to be able run one of the biggest supercomputers in the world.

 

                                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Tuesday, November 15, 2011 10:21:38 AM (Pacific Standard Time, UTC-08:00)  #    Comments [5] - Trackback
Services
 Monday, November 14, 2011

Last week I got to participate in one of my favorite days each year, serving on the judging panel for the AWS Startup Challenge. The event is a fairly intense day where our first meeting starts at 6:45am and the event closes at 9pm that evening. But it is an easy day to love in that the entire day is spent with innovative startups who have built their companies on cloud computing.

 

I’m a huge believer in the way cloud computing is changing the computing landscape and that’s all I’ve worked on for many years now. But I have still not tired of hearing “Without AWS, we wouldn’t have even been able to think about launching this business.”

 

Cloud computing is allowing significant businesses to be conceived and delivered at scale with only tiny amounts of seed funding or completely bootstrapped. Many of the finalists we looked at during last week's event had taken less than $200k of seed funding and yet already had thousands of users. That simply wouldn't have been possible 10 years ago and I just love to see it.

 

The finalists for this year's AWS Startup Challenge were:

Booshaka - United States (Sunnyvale, California)

Booshaka simplifies advocacy marketing for brands and businesses by making sense of large amounts of social data and providing an easy to use software-as-a-service solution. In an era where people are bombarded by media, advertisers face significant challenges in reaching and engaging their customers. Booshaka combines the social graph and big data technology to help advertisers turn customers into their best marketers.

Deputy.com - Australia (Sydney)

Deputy.com is an online business management solution specifically addressing the HR department. The powerful online and mobile platform engages all staff across an enterprise, builds positive culture and drives business growth.

Fantasy Shopper - UK (Exeter)

Fantasy Shopper is a social shopping game. The shopping platform centralizes, socializes and “gamifies” online shopping to provide a real-world experience.

Flixlab - United States (Palo Alto, California)

With Flixlab, people can instantly and automatically transform raw videos and photos from their smartphone or their friends’ smartphones, into fun, compelling stories with just a few taps and immediately share them online. After creation, viewers can then interact with these movies by remixing them and creating personally relevant movies from the shared pictures and videos.

Getaround - United States (San Francisco, California)

Getaround is a peer-to-peer car sharing marketplace that enables car owners to rent their cars to qualified drivers by the hour, day, or week. Getaround facilitates payment, provides 24/7 roadside assistance, and provides complete insurance backed by Berkshire Hathaway with each rental.

Intervention Insights - United States (Grand Rapids, Michigan)

Intervention Insights provides a medical information service that combines cutting edge bioinformatics tools with disease information to deliver molecular insights to oncologists describing an individual’s unique tumor at a genomic level. The company then provides a report with an evidenced-based list of therapies that target the unique molecular basis of the cancer.

Localytics - United States (Cambridge, Massachusetts)

Localytics is a real-time mobile application analytics service that provides app developers with tools to measure usage, engagement and custom events in their applications. All data is stored at a per-event level instead of as aggregated counts. This allows app publishers, for example, to create more accurately targeted advertising and promotional campaigns from detailed segmentation of their dedicated customers.

Judging this year's competition was even more difficult than last year because of the high quality of the field. Rather than a clear winner just jumping out, nearly all the finalists were viable winners and each clearly led in some dimension.

 

As I write this and reflect on the field of finalists, some notable aspects of the list: 1) It is truly international, with several very strong entrants from outside the US and more than half of the finalists coming from outside of Silicon Valley. The combination of two trends is powerful: first, the economics of cloud computing support successful startups without venture funding and, second, venture and angel funding has spread throughout the world. Both trends make for a very strong field. 2) Very early stage startups are getting traction incredibly quickly; cloud computing allows companies to go to beta without having to grow a huge company. And 3) diversity: there were consumer offerings, developer offerings, and services aimed at highly skilled professionals.

The winner of the AWS Startup Challenge this year was Fantasy Shopper from Exeter, United Kingdom. Fantasy Shopper is a small, mostly bootstrapped startup led by CEO Chris Prescott and CTO Dan Noz with two other engineers. Fantasy Shopper is a social shopping game. They just went into beta on October 18th and already have thousands of incredibly engaged users. My favorite example of which is this video blog posted to YouTube November 6th: http://www.youtube.com/watch?v=h_sKDgdEexk. Watch the first 60 to 90 seconds and you’ll see what I mean.

 

Congratulations to Chris, Dan, Brendan, and Findlay at Fantasy Shopper and keep up the great work.

 

                                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Monday, November 14, 2011 7:24:52 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Ramblings
 Thursday, November 03, 2011

As rescue and relief operations continue in response to the serious flooding in Thailand, the focus has correctly been on human health and safety. Early reports estimated 317 fatalities, with 700,000 homes and 14,000 factories impacted and over 660,000 people unable to work. Good coverage, mostly from the Bangkok Post, is available at Newley.com, authored by a reporter in the region. For example: http://newley.com/2011/11/02/thailand-flooding-update-november-2-2011-front-page-of-todays-bangkok-post/.

 

The floods are far from over and, as we look beyond the immediate problem in country, the impact on the technology world is expected to continue for just over a year even if the floods do recede in 3 to 4 weeks as expected. Disk drives are particularly hard hit with Digitimes Research reporting that the flood will create a 12% HDD supply gap in the 4th quarter of 2011 and the gap may increase into 2012.  Digitimes estimates the 4Q11 hard disk drive shortage to reach 19 million units.

Western Digital was hit the hardest by the floods, with Tim Leyden, WD COO, describing the situation in the last quarterly investor report as follows:

 

The flooded buildings in Thailand include our HDD assembly, test and slider facilities where a substantial majority of our slider fabrication capacity resides. In parallel with the internal slider shortages resulting from the above disruption, we are also experiencing other shortages on component parts from vendors located in several Thai industrial parks that have already been inundated by the floods, or have been affected by protective plant shutdowns.  We are evaluating the situation on a continuous basis, but  in order to get these facilities back up and running, we need the water level to stabilize, after which point it will take some period of time for the floods to recede. We are assessing our options so that we can safely begin working to accelerate the water removal and either extract and transfer the equipment to clean-rooms in other locations or prepare it for operation on-site. As a result of these activities, at this point in time, we estimate that our regular capacity and possibly our suppliers capacity will be significantly constrained for several quarters.

 

Toshiba reports in Impact of the Floods in Thailand that they were seriously impacted as well:

 

Location: Navanakorn Industrial Estate Zone, Pathumtani, Thailand
Main Product: Hard Disk Drive

·         Damage status: The water is 2 meters high on the site and the surrounding area and more than 1 meter deep in the buildings. Facilities are damaged but no employees have been injured in the factory.

·         Alternative sites: We have started alternative production at other factories, but the production volume will be limited by available capacity.

·         Operation: All the employees have been evacuated from the industrial zone, at the order of the Thai government. With the water at its current level, we anticipate a long-term shutdown. The date of resumption of operation is unpredictable.

 

Because the hard disk supply chain is heavily represented in this region, many hard disk manufacturers with unaffected plants will still lose capacity. Noble Financial Equity Research made the following 4th quarter shipped volume estimates:

 

Continuing with data from Noble Financial Equity Research:

·         Due to the effects of flooding, we do not expect the industry to return to normalcy for 3 to 4 quarters

·         We see only 120M drives shipped this quarter versus the TAM (total addressable market) of 175M to 180M units

·         Due to lack of channel and finished goods inventory, the supply shortfall in the March quarter is also expected to be severe despite higher expected drive shipments and component availability

·         By shifting production out of Asian plants, critical component supplier Nidec believes it can ramp to an output of 170 million drive motors by the March quarter

·         We see significantly higher drive and component prices persisting into the summer months of 2012

·         Seagate will be the principal beneficiary of the supply shortage and higher pricing

·         We believe Hutchinson (drive suspension manufacturer) will be able to rapidly ramp its US assembly operations and higher suspension prices will offset the reduced business from Western Digital

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Thursday, November 03, 2011 7:41:22 AM (Pacific Standard Time, UTC-08:00)  #    Comments [3] - Trackback
Hardware
 Monday, October 31, 2011

I’m not sure why it all happens at once but it often does.  Last Monday I kicked off HPTS 2011 in Asilomar California and then flew to New York City to present at the Open Compute Summit. 

 

I love HPTS. It’s a once every 2 year invitational workshop that I’ve been attending since 1989. The workshop attracts a great set of presenters and attendees: HPTS 2011 agenda. I blogged a couple of the sessions if you are interested:

·         Microsoft COSMOS at HPTS

·         Storage Infrastructure Behind Facebook Messages

 

The Open Compute Summit was kicked off by Frank Frankovsky of Facebook followed by the legendary Andy Bechtolsheim of Arista Networks.  I did a talk after Andy which was a subset of the talk I had done earlier in the week at HPTS.

·         HPTS Slides: Internet-Scale Data Center Economics: Costs and Opportunities

·         OCP Summit: Internet Scale Infrastructure Innovation

 

Tomorrow I'll be at the University of Washington presenting Internet Scale Storage at the University of Washington Computer Science and Engineering Distinguished Lecturer Series (agenda). It's open to the public so, if you are in the Seattle area and interested, feel free to drop by EEB-105 at 3:30pm (map and directions).

 

                                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

Monday, October 31, 2011 6:07:27 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Ramblings
 Saturday, October 29, 2011

Sometimes the most educational lessons are on what not to do rather than what to do. Failure and disaster can be extraordinarily educational as long as the reason behind the failure is well understood. I study large system outages and infrastructure failures, love reading post mortems (when they actually have content), and always watch carefully how companies communicate with their customers during and right after large scale customer-impacting events. I don't do it because I enjoy failure; these things all scare me. But in each there are lessons to be learned.

 

 

Sprint advertising from: http://unlimited.sprint.com/?pid=10 (2011/10/29).

 

I typically point out the best example rather than the worst, but every once in a while you see a blunder so big it just can't be ignored. Sprint is the 3rd place wireless company in an industry where backbreaking infrastructure costs strongly point towards there only being a small number of surviving companies unless services are well differentiated. All the big wireless players work hard on differentiation but it's a challenge and, over time, the biggest revenue supports the biggest infrastructure investment, and it gets harder and harder to be successful as a #3 player.

 

Sprint markets that they are better than the #1 and #2 carrier because they really have unlimited data rather than merely using the word “unlimited” in the marketing literature. They say “at Sprint you get unlimited data, no overage charges, and no slowing you down” (http://unlimited.sprint.com/?pid=10).

 

We live on a boat and so 4G cellular is about as close as we can get to broadband. I like to do all I can to encourage broad competition because it is good for the industry and good for customers.  That is one of the reasons we are Sprint customers today.  We use Sprint because they offer unlimited 4G and I really would like there to be more than 2 surviving North American wireless providers.

 

Unfortunately, Sprint seems less committed to my goal of keeping the #3 wireless player healthy and solvent. Looking at Sprint’s primary differentiating feature, unlimited data: they plan to shut it off this month. That’s a tough decision, but presumably it was made with care and there exists a plan to grow the company with some other feature or features making them a better choice than Verizon or AT&T. Just being a 3rd choice with a less well-developed network and with less capital to invest in that network doesn’t feel like a good long-term strategy for Sprint.

 

What makes Sprint’s decision notable is the way the plan was rolled out. Sprint has many customers under 2-year, unlimited-data contracts. Rather than risk the negative repercussions and customer churn from communicating the change, they went the stealth route. The only notification was buried in the fine print of the October bill:

 

Mobile Broadband Data Allowance Change
Eff. on your next bill, Mobile Broadband Data Plan 4G usage will be combined with your current 3G monthly data allowance and no longer be unlimited. On-network data overage rate for 3G/4G is $.05/MB. Monitor combined data usage at sprint.com. Please visit sprint.com/servicechange for details.

In November, many of us are going to get charged an overage fee of $0.05/MB on what has been advertised heavily as the only “real” unlimited plan. For many customers, the only reason they have a Sprint contract is that the data plan was uncapped. Both my phone and Jennifer’s are with AT&T; the only reason we are using Sprint for connectivity from the boat WiFi system is that Sprint offered unbounded data. Attempting a stealth change of the primary advertised characteristic of a service shows very little respect for customers, even when compared with other telcos, an industry not generally known for customer empathy.

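To put that overage rate in perspective, a quick bit of arithmetic (mine, not Sprint’s; decimal MB/GB assumed):

overage_per_mb = 0.05                        # Sprint's published 3G/4G overage rate
per_gb = overage_per_mb * 1000               # decimal units: 1 GB = 1000 MB
print(f"per GB: ${per_gb:.2f}")              # $50.00 per GB
print(f"20 GB over the cap: ${per_gb * 20:,.2f}")   # $1,000.00

For anyone using the connection as their primary broadband link, that rate adds up very quickly.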
 

I agree that almost nobody is going to read the bill and I suppose some won’t notice when subsequent bills are higher. But many eventually will. And even those that don’t notice and are silently getting charged more are going to be unhappy when they do notice. No matter how you cut it, the experience is going to be hard on customer trust. And, at the same time they are showing this little respect for customers, they are releasing them all from contract. Any Sprint customer is now welcome to leave without a termination charge.

 

Some analysts have speculated that Sprint doesn’t have the bandwidth to support their launch of the iPhone. This billing structure change strongly suggests that Sprint really does have a bandwidth problem. I’ve still not figured out why an iPhone is more desirable at Sprint than it is at Verizon or AT&T. And I still can’t figure out why the #3 provider with the same data caps is more desirable than the big 2, but it’s not important that I understand. That’s a Sprint leadership decision.

 

Let’s assume that the Sprint network is in capacity trouble and they have no choice but to cap the data plans even though they are changing the very terms they advertised as their primary advantage. Even if that is necessary, I’m 100% convinced the right way to do it is to support the existing contract terms for the duration of those contracts. If the company really is teetering on failure and is unable to honor the commitments they agreed to, then they need to be upfront with customers. You can’t quietly slip new contract terms into the statement and hope nobody notices. Showing that little respect for customers is usually rewarded by high churn rates and a continuing decline in market share. Poor approach.

 

I called Sprint and pointed out they were not honoring the original contract terms. They said “there was nothing they could do”; however, they would be willing to offer a $100 credit if we would agree to another 2-year contract term. Paying only $100 to get a customer signed up for another 2 years would be an incredible bargain for Sprint. Most North American carriers spend at least that on device subsidies when getting customers committed to an additional 2-year term. This would be cheap for Sprint and would get customers back under contract after this term change effectively released them. The Sprint customer service representative did correctly offer to waive early cancellation fees since they were changing the terms of the original contract.

 

Sprint customers are now all able to walk away today from the remaining months in their wireless contracts without any cost. They are all free to leave. From my perspective, it is just plain nutty for Sprint to give their entire subscription base the freedom to walk away from contracts without charge while, at the same time, treating them poorly. It’s a recipe for industry-leading churn.

 

                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Saturday, October 29, 2011 6:40:18 AM (Pacific Standard Time, UTC-08:00)
Ramblings
 Tuesday, October 25, 2011

One of the talks that I particularly enjoyed yesterday at HPTS 2011 was Storage Infrastructure Behind Facebook Messages by Kannan Muthukkaruppan. In this talk, Kannan described the Facebook store for chats, email, SMS, and messages.

 

This high-scale storage system is based upon HBase and Haystack. HBase is a non-relational, distributed database very similar to Google’s Big Table. Haystack is a simple file system designed by Facebook for efficient photo storage and delivery. More on Haystack at: Facebook Needle in a Haystack.

 

In this Facebook Messages store, Haystack is used to store attachments and large messages. HBase is used for message metadata, search indexes, and small messages (avoiding a second I/O to Haystack for small messages, like most SMS).

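To make the split concrete, here is a minimal sketch of that routing decision. The 64KB cutoff and the hbase/haystack client objects are hypothetical illustrations of the approach, not Facebook’s actual APIs or threshold:

SMALL_MESSAGE_THRESHOLD = 64 * 1024  # hypothetical cutoff, in bytes

def store_message(hbase, haystack, msg_id, metadata, body):
    """Keep metadata (and small bodies) in HBase; push large bodies to Haystack."""
    row = dict(metadata)
    if len(body) <= SMALL_MESSAGE_THRESHOLD:
        # Small messages (most SMS) are stored inline in HBase, avoiding a
        # second I/O to Haystack on the read path.
        row["body"] = body
    else:
        # Large messages and attachments go to the blob store; HBase keeps
        # only a reference to the Haystack needle.
        row["body_ref"] = haystack.put(body)
    hbase.put(row_key=msg_id, row=row)

def read_message(hbase, haystack, msg_id):
    row = hbase.get(row_key=msg_id)
    if "body" in row:
        return row["body"]                    # one I/O: metadata and body together
    return haystack.get(row["body_ref"])      # second I/O only for large messages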
 

Facebook Messages handles 6B+ messages a day. Summarizing the HBase traffic (a quick back-of-envelope check on these numbers follows the list):

·         75B+ R+W ops/day with 1.5M ops/sec at peak

·         The average write operation inserts 16 records across multiple column families

·         2PB+ of cooked online data in HBase. Over 6PB including replication but not backups

·         All data is LZO compressed

·         Growing at 250TB/month

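As promised above, a quick back-of-envelope check on those numbers (my arithmetic, not from the talk; decimal units assumed):

ops_per_day      = 75e9       # combined read + write ops/day
peak_ops_per_sec = 1.5e6
messages_per_day = 6e9
online_data_tb   = 2000       # ~2 PB of cooked data in HBase
growth_tb_month  = 250

avg_ops_per_sec = ops_per_day / 86_400
print(f"average ops/sec:       {avg_ops_per_sec:,.0f}")                      # ~868,000
print(f"peak-to-average ratio: {peak_ops_per_sec / avg_ops_per_sec:.1f}x")   # ~1.7x
print(f"HBase ops per message: {ops_per_day / messages_per_day:.1f}")        # ~12.5
print(f"monthly data growth:   {growth_tb_month / online_data_tb:.1%}")      # ~12.5%

Note that these are combined read and write operations, so the ~12 ops-per-message figure covers both serving and insert traffic, not just the multi-record writes noted above.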
 

The Facebook Messages project timeline:

·         2009/12: Project started

·         2010/11: Initial rollout began

·         2011/07: Rollout completed with 1B+ accounts migrated to new store

·         Production changes:

o   2 schema changes

o   Upgraded to HFile 2.0

 

They implemented a very nice approach to testing: prior to release, they shadowed the production workload to the test servers and, after going into production, they continued the practice of shadowing the real production workload into the test cluster to validate changes before rolling them out.
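Below is a minimal sketch of what shadowing a write path can look like, assuming hypothetical prod_cluster and shadow_cluster client objects; it illustrates the technique rather than Facebook’s implementation:

import logging
from concurrent.futures import ThreadPoolExecutor

log = logging.getLogger("shadow")
_shadow_pool = ThreadPoolExecutor(max_workers=8)

def handle_write(prod_cluster, shadow_cluster, request):
    result = prod_cluster.apply(request)                  # the real write, on the real path
    _shadow_pool.submit(_shadow_write, shadow_cluster, request)
    return result                                         # user latency unaffected by the shadow copy

def _shadow_write(shadow_cluster, request):
    try:
        shadow_cluster.apply(request)
    except Exception:
        # Shadow failures are logged and alerted on but never surfaced to
        # the production caller.
        log.exception("shadow cluster write failed")

The key property is that the shadow path is asynchronous and failure-isolated: problems in the test cluster generate alerts (treated as high priority, per the list below) without slowing down or failing the production request.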

 

The list of scares and scars from Kannan:

·         Not without our share of scares and incidents:

o   s/w bugs. (e.g., deadlocks, incompatible LZO used for bulk imported data, etc.)

§  found an edge-case bug in log recovery as recently as last week!

·         performance spikes every 6 hours (even off-peak)!

o   cleanup of HDFS’s Recycle bin was sub-optimal! Needed code and config fix.

·         transient rack switch failures

·         Zookeeper leader election took more than 10 minutes when one member of the quorum died. Fixed in a more recent version of ZK.

·         HDFS Namenode – SPOF

·         flapping servers (repeated failures)

·         Sometimes, tried things which hadn’t been tested in dark launch!

o   Added a rack of servers to help with performance issue

§  Pegged the top-of-rack network bandwidth!

§  Had to add the servers at a much slower pace. Very manual.

§  Intelligent load balancing needed to make this more automated.

·         A high % of issues caught in shadow/stress testing

·         Lots of alerting mechanisms in place to detect failure cases

o   Automated recovery for a lot of the common ones

o   Treat alerts on shadow cluster as hi-pri too!

·         Sharding the service across multiple HBase cells also paid off

 

Kannan’s slides are posted at: http://mvdirona.com/jrh/TalksAndPapers/KannanMuthukkaruppan_StorageInfraBehindMessages.pdf

 

                                                                --jrh

 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Tuesday, October 25, 2011 1:03:10 PM (Pacific Standard Time, UTC-08:00)
Services

Rough notes from a talk on COSMOS, Microsoft’s internal MapReduce system, from HPTS 2011. This is the service Microsoft uses internally to run MapReduce jobs. Interestingly, Microsoft plans to use Hadoop in the external Azure service even though COSMOS looks quite good: Microsoft Announces Open Source Based Cloud Service. Rough notes below:

 

Talk: COSMOS: Big Data and Big Challenges

Speaker: Ed Harris

·         Petabyte-scale storage and computation system

·         Used primarily by search and advertising inside Microsoft

·         Operated as a service with just over 4 9s of availability

·         Massively parallel processing based upon Dryad

o   Dryad is very similar to MapReduce

·         Use SCOPE (Structured Computations Optimized for Parallel Execution) over Dryad

o   A SQL-like language with an optimizer, implemented over Dryad

·         They run hundreds of virtual clusters. In this model, internal Microsoft teams buy servers and give them to COSMOS and are subsequently assured at least those resources

o   Average 85% CPU utilization over the cluster

·         Ingest 1 to 2 PB/day

·         Roughly 30% of the Search fleet is running COSMOS

·         Architecture:

o   Store Layer

§  Many extent nodes store and compress streams

§  Streams are sequences of extents

§  CSM: Cosmos Store Layer handles names, streams, and replication

·         First-level compression is light. Data kept more than a week is recompressed more aggressively, on the assumption that data that lives a week will likely live longer (a small sketch of this policy follows these notes)

o   Execution Layer:

§  Jobs queue up on virtual clusters and are then executed

o   SCOPE Layer

§  Compiler and optimizer for SCOPE

§  Ed said that the optimizer is a branch of the SQL Server optimizer

·         They have 60+ PhD interns each year and hire ~30 a year

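As referenced in the compression note above, here is a small sketch of an age-based recompression policy. The one-week threshold is from the talk; the codec names and the extent interface are assumptions for illustration, not the COSMOS implementation:

import time

LIGHT_CODEC = "lz4"                         # hypothetical: cheap compression applied on ingest
HEAVY_CODEC = "zstd-19"                     # hypothetical: aggressive compression for older data
RECOMPRESS_AFTER_SECONDS = 7 * 24 * 3600    # one week

def maybe_recompress(extent, now=None):
    """Recompress extents older than a week with the heavier codec."""
    now = now if now is not None else time.time()
    age = now - extent.created_at
    if age >= RECOMPRESS_AFTER_SECONDS and extent.codec == LIGHT_CODEC:
        data = extent.read()                      # decompress with the light codec
        extent.rewrite(data, codec=HEAVY_CODEC)   # rewrite with the heavy codec

The bet is simply that data surviving a week is likely to live longer, so spending more CPU once to shrink it pays off in storage.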
 

James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

 

Tuesday, October 25, 2011 8:37:20 AM (Pacific Standard Time, UTC-08:00)
Services

Disclaimer: The opinions expressed here are my own and do not necessarily represent those of current or past employers.

All Content © 2014, James Hamilton