Tuesday, April 28, 2009

Earlier this week I got a thought provoking comment from Rick Cockrell in response to the posting: 32C (90F) in the Data Center. I found the points raised interesting and worthy of more general discussion so I pulled the thread out from the comments into a separate blog entry. Rick posted:

 

Guys, to be honest I am in the HVAC industry. Now, what the Intel study told us is that yes this way of cooling could cut energy use, but what it also said is that there was more than a 100% increase in server component failure in 8 months (2.45% to 4.46%) over the control study with cooling... Now with that said, if anybody has been watching the news lately or Wall-E, we know that e-waste is overwhelming most third world nations that we ship to and even Arizona. Think?

I see all kinds of competitions for energy efficiency, there should be a challenge to create sustainable data center. You see data centers use over 61 billion kWh annually (EPA and DOE), more than 120 billion gallons of water at the power plant (NREL), more than 60 billion gallons of water onsite (BAC) while producing more than 200,000 tons of e-waste annually (EPA). So for this to be a fair game we can't just look at the efficiency. It's SUSTAINABILITY!

It would be easy to just remove the mechanical cooling (I.E. Intel) and run the facility hotter, but the e-waste goes up by more than 100% (Intel Report and Fujitsu hard drive testing), It would be easy to not use water cooled equipment, to reduce water onsite use but the water at the power plant level goes up, as well as the energy use. The total solution has to be a solution of providing the perfect environment, the proper temperatures, while reducing e-waste.

People really need to do more thinking and less talking. There is a solution out there that can do almost everything that needs to be done for the industry. You just have to look! Or maybe call me I'll show you.

 

Rick, you commented that "it's time to do more thinking and less talking" and argued that the additional server failures seen in the Intel report created 100% more e-waste, so running hotter simply wouldn't make sense. I'm willing to do some thinking with you on this one.

 

I see two potential issues with your assumption. The first is that the Intel report showed "100% more e-waste". What they saw in an 8-rack test was a server mortality rate of 4.46%, whereas their standard data centers were at 3.83%. This is far from double and, with only 8 racks, may not be statistically significant. As further evidence that the difference may not be significant, the control experiment, where they had 8 racks in the other half of the container running on DX cooling, showed a failure rate of 2.45%. It may be noise, given that the control differed from the standard data centers by about as much as the test data set did. And it's a small sample.

 

Let's assume for a second that the increase in failure rates actually was significant. Neither the investigators nor I are convinced this is the case, but let's make the assumption and see where it takes us. The economizer-cooled servers failed at a rate 0.63% higher than their normal data centers and 2.01% higher than the control. Let's take the 2% number and think it through, assuming these are annualized numbers. The most important observation I'll make is that 85% to 90% of servers are replaced BEFORE they fail, which is to say that obsolescence is the leading cause of server replacement. Servers stop being power efficient relative to newer hardware and get replaced after 3 to 5 years. Would I accept an additional 2% in server failures each year in exchange for saving 10% of the overall data center capital expense and 25%+ of the operating expense? Absolutely yes. Further driving this answer home, Dell, Rackable, and ZT Systems will replace early failures under warranty as long as the servers are run under 35C (95F).
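
To make the tradeoff concrete, here is a minimal back-of-envelope sketch in Python. The 2% incremental failure rate and the 10% capex / 25% opex savings come from the discussion above; the fleet size, server price, and facility operating expense are invented purely for illustration.

```python
# Back-of-envelope sketch of the hotter-running tradeoff discussed above.
# Fleet size, server price, and opex are assumptions; only the 2%, 10%,
# and 25% figures come from the post.

servers = 10_000              # assumed fleet size
server_price = 2_000.0        # assumed average server price ($)
annual_opex = 4_000_000.0     # assumed annual facility operating expense ($)

extra_failure_rate = 0.02     # ~2% additional annualized failures
capex_saving_rate = 0.10      # 10% capital expense saving
opex_saving_rate = 0.25       # 25%+ operating expense saving

extra_failure_cost = servers * extra_failure_rate * server_price
capex_saving = servers * server_price * capex_saving_rate
opex_saving = annual_opex * opex_saving_rate

print(f"extra failure cost: ${extra_failure_cost:,.0f}/year")
print(f"capital saving:     ${capex_saving:,.0f}")
print(f"operating saving:   ${opex_saving:,.0f}/year")
# With these assumptions, the savings dwarf the incremental failure cost,
# and during the warranty period the early failures are covered anyway.
```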

 

So, the increased server mortality rate is actually free during the warranty period, but let's ignore that and focus on what's better for the environment. If 2% of the servers need repair early and I spend the carbon footprint to buy replacement parts but save 25%+ of my overall data center power consumption, is that a gain for the environment? I don't have a great way to estimate the true carbon footprint of repair parts, but it sure looks like a clear win to me.

 

On the basis of the small increase in server mortality weighed against the capital and operating expense savings, running hotter looks like a clear win to me. I suspect we’ll see at least a 10F average rise over the next 5 years and I’ll be looking for ways to make that number bigger. I’m arguing it’s a substantial expense reduction and great for the environment.

 

                                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Tuesday, April 28, 2009 8:01:18 AM (Pacific Standard Time, UTC-08:00)  #    Comments [20] - Trackback
Hardware
 Saturday, April 25, 2009

This IEEE Spectrum article was published in February but I've been busy and haven't had a chance to blog it. The author, Randy Katz, is a UC Berkeley researcher and member of the Reliable Adaptive Distributed Systems Laboratory (RAD Lab). Katz was a coauthor on the recently published RAD Lab article on cloud computing: Berkeley Above the Clouds.

 

The IEEE Spectrum article focuses on data center infrastructure: Tech Titans Building Boom. In this article, Katz looks at the Google, Microsoft, Amazon, and Yahoo data center building boom. Some highlights from my read:

·         Microsoft Quincy is 48MW total load with 48,600 sq m of space. It includes 4.8 km of chiller pipe, 965 km of electrical wire, 92,900 m2 of drywall, and 1.5 metric tons of backup batteries.

·         Yahoo Quincy is somewhat smaller at 13,000 m2. This not-yet-complete facility will include free air cooling.

·         Google's data center in The Dalles is a two-building facility on the Columbia River, each building at 6,500 m2. I've been told that this facility does make use of air-side economization but, carefully studying all the pictures I've come across, I can't find air intakes or louvers, so I'm skeptical. From the outside, the facilities look fairly conventional.

·         Google is also building in Pryor, Okla.; Council Bluffs, Iowa; Lenoir, N.C.; and Goose Creek, S.C.

·         Aerial picture of Google Dalles: http://www.spectrum.ieee.org/feb09/7327/2

·         McKinsey estimates that the world has 44M servers and that they consume 0.5% of all electricity and produce 0.2% of all carbon dioxide. However, in a separate article McKinsey also speculates that Cloud Computing may be more expensive for enterprise customers, a claim that most of the community had trouble understanding or finding data to support.

·         Google uses conventional multicore processors. To reduce the machines’ energy appetite, Google fitted them with high-efficiency power supplies and voltage regulators, variable-speed fans, and system boards stripped of all unnecessary components like graphics chips. Google has also experimented with a CPU power-management feature called dynamic voltage/frequency scaling. It reduces a processor’s voltage or frequency during certain periods (for example, when you don’t need the results of a computing task right away). The server executes its work more slowly, thus reducing power consumption. Google engineers have reported energy savings of around 20 percent on some of their tests. For more recently released data on Google’s servers, see Data Center Efficiency Summit (Posting #4).

·         Katz reports that the average data center runs at 14C and that newer centers are pushing to 27C. I'm interested in going to 35C and eliminating process-based cooling: Data Center Efficiency Best Practices.

·         Containers: The most radical change taking place in some of today's mega data centers is the adoption of containers to house servers. Instead of building raised-floor rooms, installing air-conditioning systems, and mounting rack after rack, wouldn't it be great if you could expand your facility by simply adding identical building blocks that integrate computing, power, and cooling systems all in one module? That's exactly what vendors like IBM, HP, Sun Microsystems, Rackable Systems, and Verari Systems have come up with. These modules consist of standard shipping containers, which can house some 3000 servers, or more than 10 times as many as a conventional data center could pack in the same space. Their main advantage is that they're fast to deploy. You just roll these modules into the building, lower them to the floor, and power them up. And they also let you refresh your technology more easily—just truck them back to the vendor and wait for the upgraded version to arrive.

·         Microsoft Chicago will have containers on its lower floor (it's a two-floor facility). It's expected to be well over 45MW and will reach 75MW if built out to the full 200 containers planned (First Containerized Data Center Announcement). The Chicago, Dublin, and Des Moines facilities have all been delayed by Microsoft, presumably due to economic conditions: Microsoft Delays Chicago, Dublin, and Des Moines Data Centers.

 

Check out Tech Titans Building Boom: http://www.spectrum.ieee.org/feb09/7327.

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Saturday, April 25, 2009 6:40:10 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Hardware
 Monday, April 20, 2009

I’m always interested in research on cloud service efficiency, and last week, at the Uptime Institute IT Symposium in New York City, management consultancy McKinsey published a report entitled Clearing the air on Cloud Computing. McKinsey is a well respected professional services company that describes itself as “a management consulting firm advising leading companies on organization, technology, and operations”.  Over the first 22 years of my career in server-side computing at Microsoft and IBM, I’ve met McKinsey consultants frequently, although they were typically working on management issues and organizational design rather than technology. This particular report focuses more on technology, where the authors investigate the economics of very high scale data centers and cloud computing. This has been my prime area of interest for the last 5 years, and my first observation is the authors are taking on an incredibly tough challenge.

 

Gaining a complete inventory of the costs of internal IT is very difficult. The costs hide everywhere.  Some are in central IT teams, some are in central procurement groups, some with the legal and contract teams, and some in departmental teams doing IT work although not part of corporate IT. It’s incredibly difficult to get a full, accurate, and unassailable inventory into the costs of internal IT. Further complicating the equation, internal IT is often also responsible for mission-critical tasks that have nothing to do with comparing internal IT with cloud services offerings. Internal IT is often responsible for internal telco and for writing many of the applications that actually run the business.  Basically, it’s very hard to first find all the comparable internal IT costs and, even with a complete inventory of IT costs, it’s then even harder to separate out mission-critical tasks that internal IT teams own that have nothing to do with whether the applications are cloud or internally hosted. I’m arguing that this report’s intent, of comparing costs in a generally applicable way, across all industries, is probably not possible to do accurately and may not be a good idea.

 

In the report, the authors conclude that current cloud computing offerings “are not cost-effective compared to large enterprise data centers.”  They argue that cloud offerings are most attractive for small and medium sized enterprises. The former is a pretty strong statement, and contradicts most of what I’ve learned about high scale service, so it’s definitely worth digging deeper.

 

It's not clear that a credible, detailed accounting of all comparable IT costs that generalizes across all industries can be produced. Each company is different, and these costs are both incredibly hard to find and entangled with many other mission-critical tasks the internal IT team owns that have nothing to do with whether applications are internally hosted or in the cloud. From all the work I've done around high-scale services, it's inarguably true that some internal IT tasks are very leveraged. These tasks form the core competency of the business and are usually at least developed internally if not hosted internally. In what follows, I'll argue that non-differentiated services -- services that need to be good but aren't the company's competitive advantage -- are much more economically hosted in very high-scale cloud computing environments. The hosting decision should be driven by company strategy and a decision to concentrate investment capital where it has the most impact. The savings available using a shared cloud for non-differentiated services are dramatic, and are available for all companies, from the smallest startup to the largest enterprise. I'll look at some of these advantages below.

 

In this report, the authors conclude that cloud computing makes sense for small and medium enterprises but is not cost-effective for large enterprises. The authors argue there are economies of scale that work for small and medium-sized businesses, but that the cost advantages break down at very large scale. Essentially, they are arguing that big companies already have all the economies of scale available to internet-scale services. On the face of it, this appears unlikely. And, upon further digging, we'll see it's simply incorrect across many dimensions.

 

Let's think about economies of scale. Large power plants produce lower-cost power than small regional plants. Very large retail chains spend huge amounts on optimizing all aspects of their businesses, from supply chain optimization through customer understanding and, as a consequence, can offer lower prices. There are exceptions to be sure but, generally, we see a pretty sharp trend towards economies of scale across a wide range of businesses. There will always be big, dumb, poorly run players and there will always be nimble but small innovators. The one constant is that those who understand how to grow large, capture the economies of scale, and yet still stay nimble often deliver very high quality products at much lower cost to the customer.

 

Perhaps the economies of scale don't apply to the services world? Looking at services such as payroll and internal security, we see that almost no companies choose to run their own internally. These services clearly need to be done well, but they are not differentiated. It's hard to be so good at payroll that it yields a competitive advantage, unless your company actually specializes in payroll. Internal operations such as payroll and security are often sublet to very large services companies that focus on them. ADP, for example, has been successful at providing a very high scale service that makes sense for even the biggest companies. I actually think it's a good thing that the companies I've worked for over the last twenty years didn't do their own payroll and instead focused their investment capital on technology opportunities that grow the business and help customers. It's the right answer.

 

We find another example in enterprise software.  When I started my career, nearly all large companies developed their own internal IT applications. At the time, most industry experts speculated that none of the big companies would ever move to packaged ERP systems. But, the economies of scale of the large ERP development shops are substantial and, today, very few companies develop their own ERP or CRM systems.  The big companies like SAP can afford to invest in the software base at rates even the largest enterprise couldn’t afford. Fifteen years ago SAP had 4,200 engineers working on their ERP system. Even the largest enterprise could never economically justify spending a fraction of that.  Large central investments at scale typically make better economic sense unless the system in question is one of a company’s core strategic assets.

 

I’ve argued that smart, big players willing to invest deeply in innovating at scale can produce huge cost advantages and we’ve gone through examples from power generation, through retail sales, payroll, security, and even internal IT software. The authors of the McKinsey study are essentially arguing that, although all major companies have chosen to enjoy the large economies of scale offered by packaged software products over internal development, this same trend won’t extend to cloud hosted solutions. Let’s look closely at the economics to see if this conclusion is credible.

 

In the enterprise, most studies report that the cost of people dominates the cost of servers and data center infrastructure. In the cloud services world, we see a very different trend. Here we find that the cost of servers dominates, followed by mechanical systems, and then power distribution (see the Cost of Power in Large Data Centers). As an example, looking at all aspects of operational costs in a mid-sized service I led years ago, the human administrative costs were under 10% of the overall operational costs. I've seen very large, extremely well run services where the people costs have been driven below 4%. Given that people costs dominate many enterprise deployments, how do high-scale cloud services get these costs so low? There are many factors contributing, but the most important two are 1) cloud services run at very high scale and can afford to invest more in automation, amortizing that investment across a much larger server population, and 2) service teams can specialize, focusing on doing one thing and doing it very well. This kind of specialization yields efficiency gains, but it is only affordable at multi-tenant scale. The core argument here is that the number one cost in the enterprise is people whereas, in high-scale services, these costs have been amortized down to sub-10%. Arguing there are no economies at cloud scale is the complete opposite of my experience and observations.

 

Page 25 of the study shows a "disguised client example" where the example company had 1,704 people working in IT before the move to cloud services and still required 1,448 after the move. I'm very skeptical that any company with 1,704 people working in IT – clearly a large company – would move to cloud computing in one, single discrete step. It's close to impossible and would be foolhardy. Consequently, I suspect the data either represents a partial move to the cloud or is only a paper exercise. If the former, the data is incomplete and, if the latter, the data is speculative. The story is clouded further by including in the headcount inventory desktop support, real estate, telecommunications, and many other responsibilities that wouldn't be impacted by the move to cloud services. Adding extraneous costs in large numbers dilutes the savings realized by this disguised customer. Overall, this slide doesn't appear informative.

 

We've shown that at very high scale the dominant costs are server hardware and data center infrastructure. Very high scale services hire server designers and have entire teams focused on the acquisition of some of the most efficient server designs in the world. Google goes so far as to design custom servers (see Jeff Dean on Google Infrastructure), something very hard to do economically at less than internet scale. I've personally done joint design work with Rackable Systems in producing servers optimized for cloud services workloads (Microslice Servers). When servers are the dominant cost and you are running at 10^5 to 10^6 server scale, considerable effort can and should be spent on obtaining the most cost effective servers possible for the workload. This is hard to do economically at lower scale.

 

We’ve shown that people costs are largely automated out of very high scale services and that the server hardware is either custom, jointly developed, or specifically targeted to the workload.  What about data center infrastructure?  The Uptime Institute reports that the average data center Power Usage Effectiveness is 2.0 (smaller is better). What this number means is that for every 1W of power that goes to a server in an enterprise data center, a matching watt is lost to power distribution and cooling overhead. Microsoft reports that its newer designs are achieving a PUE of 1.22 (Out of the box paradox…). All high scale services are well under 1.7 and most, including Amazon, are under 1.5. High scale services can invest much more in infrastructure innovations by spreading this large investment out over a large number of data centers. As a consequence, these internet-scale services are a factor of 2 more efficient than the average enterprise. This is good for the environment and, with power being such a substantial part of the cost of high-scale computing, it substantially reduces costs as well.
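
To make the PUE arithmetic concrete, here is a minimal sketch. The 2.0 and 1.22 PUE figures come from the paragraph above; the IT load and electricity price are assumptions for illustration.

```python
# What a PUE difference means in energy cost on the same IT load.
# PUE = total facility power / IT equipment power (lower is better).

it_load_kw = 1_000.0      # assumed critical IT load (1 MW)
price_per_kwh = 0.07      # assumed electricity price ($/kWh)
hours_per_year = 8_760

def annual_power_cost(pue):
    total_kw = it_load_kw * pue    # includes cooling and distribution overhead
    return total_kw * hours_per_year * price_per_kwh

average_enterprise = annual_power_cost(2.0)    # Uptime Institute average cited above
efficient_service = annual_power_cost(1.22)    # Microsoft figure cited above

print(f"PUE 2.00: ${average_enterprise:,.0f}/year")
print(f"PUE 1.22: ${efficient_service:,.0f}/year")
print(f"saving:   ${average_enterprise - efficient_service:,.0f}/year per MW of IT load")
```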

 

Utilization is the factor that many in the industry hate talking about because the industry-wide story is so poor. The McKinsey report says that enterprise server utilization is down around 10%, which is approximately consistent with what I've seen working with enterprise customers over the years. The implication is that the servers and the facilities that house them are only 10% used. This sounds like the beginning of an incredibly strong argument for cloud services, but the authors take a different path and argue it would be easy to increase enterprise utilization far higher than 10%. With an aggressive application of virtualization and related technologies, they feel utilizations as high as 35% are possible. That conclusion is possibly correct, but it's worth spending a minute on this point. At 35% utilization, a full 2/3 of the capacity is still wasted, which seems unfortunate, unnecessary, and hard on the environment. Improving from 10% to 35% will require time, new software, new training, etc., but it may be possible. What's missing in this observation is that 1) cloud services can invest more in these efficiency innovations and they are already substantially down that path, 2) large user populations justify a greater investment in infrastructure efficiency, and 3) not all workloads have correlated peaks, so larger, heterogeneous populations offer substantially larger optimization possibilities than most enterprises can achieve alone (see: resource consumption shaping).
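
The third point, uncorrelated peaks, is worth a small illustration. The workload shapes below are invented purely for illustration; the point is only that the peak of a pooled load is less than the sum of the individual peaks.

```python
# Illustrative only: why pooling workloads with uncorrelated peaks needs
# less capacity than provisioning each workload for its own peak.
import math

HOURS = range(24)

def diurnal(peak_hour, base=20.0, amplitude=80.0):
    # A toy daily load curve in arbitrary "server" units.
    return [base + amplitude * max(0.0, math.cos((h - peak_hour) * math.pi / 12))
            for h in HOURS]

workloads = [
    diurnal(peak_hour=14),   # e.g., daytime interactive traffic
    diurnal(peak_hour=2),    # e.g., overnight batch processing
    diurnal(peak_hour=20),   # e.g., evening consumer traffic
]

separate_capacity = sum(max(w) for w in workloads)                   # each provisioned for its own peak
pooled_capacity = max(sum(w[h] for w in workloads) for h in HOURS)   # provision the pooled peak

print(f"provisioned separately: {separate_capacity:.0f}")
print(f"provisioned pooled:     {pooled_capacity:.0f}")
# Because the peaks don't line up, the pooled fleet runs at higher utilization
# for the same work, which is the resource consumption shaping point above.
```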

 

In the discussion above, we focused on the costs "below" the software (data center infrastructure and servers) and found a substantial and sustainable competitive advantage in high-scale deployments. Looking at people costs, we see the same advantage again. On the software side, the cost picture ranges from lower in the cloud to roughly equal, but it isn't higher. There doesn't seem to be a dimension that supports the claim of this report. I just can't find the data to support the claim that enterprises shouldn't consider cloud service deployments. Looking at the slides in the McKinsey presentation that make the cost argument in detail, the graphs on slides 22, 23, and 24 just don't make sense to me. I've spent considerable time on the data but just can't get it to line up with the AWS price sheet or any other measure of reality. The limitation might be mine, but it seems others are having trouble matching this data to reality as well.

 

My conclusion: any company not fully understanding cloud computing economics and not having cloud computing as a tool to deploy where it makes sense is giving up a very valuable competitive edge. No matter how large the IT group, if I led the team, I would be experimenting with cloud computing and deploying where it makes sense. I would want my team to know it well and to be deploying to the cloud when the work being done is not differentiated or when the capital is better leveraged elsewhere.

 

IT is complex and a single glib answer is almost always wrong.  My recommendation is to start testing and learning about cloud services, to take a closer look at your current IT costs, and to compare the advantages of using a cloud service offering with both internal hosting and mixed hosting models.

 

                                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Monday, April 20, 2009 4:53:21 PM (Pacific Standard Time, UTC-08:00)  #    Comments [15] - Trackback
Services
 Saturday, April 18, 2009

In Where SSDs Don't Make Sense in Server Applications, we looked at the results of an HDD-to-SSD comparison test done by the Microsoft Cambridge Research team. Vijay Rao of AMD recently sent me a pointer to an excellent comparison test done by AnandTech. In SSD versus Enterprise SAS and SATA disks, AnandTech compares one of my favorite SSDs, the Intel X25-E SLC 64GB, with a couple of good HDDs. The Intel SSD can deliver 7,000 random IOPS and the 64GB component is priced in the $800 range.

 

The full AnandTech comparison is worth reading, but I found the pricing along with the sequential and random I/O performance data particularly interesting. I've brought this data together into the table below:

 

Drive | Capacity | Pricing | $/GB | $/Seq Read ($/MB/s) | $/Seq Write ($/MB/s) | Seq I/O Density | $/Rdm Read ($/MB/s) | $/Rdm Write ($/MB/s) | Rdm I/O Density
Intel X25-E SLC | 64GB | $795-$900 | $13.24 | $3.28 | $4.28 | 3.563 | $17.66 | $9.02 | 1.109
Cheetah 15k | 300GB | $270-$300 | $0.95 | $2.28 | $2.24 | 0.420 | $142.50 | $57.00 | 0.012
WD 1000FYPS | 1TB | $190-$200 | $0.20 | $2.71 | $2.50 | 0.075 | $195.00 | $65.00 | 0.002

 

Notes:

 

All I/O measurements were obtained using SQLIO.

Random I/O measurements used 8KB pages.

Sequential measurements used 64KB I/Os.

I/O density is the average of read and write performance (MB/s) divided by capacity (GB).

Price calculations are based on the average of the selling price range listed.

Source: Anandtech (http://it.anandtech.com/IT/showdoc.aspx?i=3532&p=1)

 

Looking at this data in detail, we see the Intel SSD produces extremely good random I/O rates, but we should all know that raw performance is the wrong measure. We should be looking at dollars per unit of performance. By this more useful metric, the Intel SSD continues to look very good at $17.66/MB/s on 8K read I/Os, whereas the HDDs are at $142.50/MB/s and $195.00/MB/s respectively. For hot, random workloads, SSDs are a clear win.

 

What do I mean by "hot random workloads"? By hot, I mean a high number of random IOPS per GB. But, for a given storage technology, what constitutes hot? I like to look at I/O density, which marks the cutoff between a given device being capacity bound or I/O rate bound for a given workload. For example, looking at the table above, we see the random I/O density for the 64GB Intel device is 1.109 MB/s per GB. If you are storing data where you need 1.109 MB/s of 8K I/Os per GB of capacity or better, then the Intel device will be I/O bound and you won't be able to use all the capacity. If the workload requires less than this number, then it is capacity bound and you won't be able to use all the IOPS the device can deliver. For very low access rate data, HDDs are a win. For very high access rate data, SSDs will be the better price performer.
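
Here is a minimal sketch of that cutoff check. The 1.109 MB/s/GB density comes from the table above; the workload's capacity and random I/O requirement are assumptions for illustration.

```python
# Decide whether a workload is I/O bound or capacity bound on a given device.
# The device density comes from the table above; the workload is assumed.

x25e_random_density = 1.109      # MB/s per GB, from the table above

# Assumed workload: a 500GB database needing 200 MB/s of random 8K I/O.
workload_capacity_gb = 500.0
workload_random_mb_per_s = 200.0
workload_density = workload_random_mb_per_s / workload_capacity_gb   # 0.4 MB/s/GB

if workload_density > x25e_random_density:
    print("I/O bound on this device: buy for IOPS and capacity will be left over")
else:
    print("capacity bound on this device: buy for capacity and IOPS will be left over")
```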

 

As it turns out, when looking at random I/O workloads, SSDs are almost always capacity bound and HDDs are almost always IOPS bound. Understanding that, we can use a simple computation to compare HDD cost versus SSD cost for your workload. Take the HDD farm cost, which will be driven by the number of disks needed to support the I/O rate times the cost of the disk. This is the storage budget needed to support your workload on HDDs. Then take the size of the database and divide by the SSD capacity to get the number of SSDs required. Multiply the number of SSDs required by the price of the SSD. This is the budget required to support your workload on SSDs. If the SSD budget is less (and it will be for hot, random workloads), then SSDs are the better choice. Otherwise, keep using HDDs for that workload.
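
A minimal sketch of that comparison follows. The device prices and throughput numbers are rough values derived from the table above; the workload itself is an assumption for illustration.

```python
# Compare the storage budget for a hot random workload on HDDs versus SSDs.
# Device figures are approximations from the table above; the workload is assumed.
import math

# Assumed workload: 1TB of data needing 500 MB/s of random 8K I/O.
data_gb = 1_000.0
random_mb_per_s = 500.0

# Intel X25-E (from the table): 64GB, ~$850, ~71 MB/s average random 8K I/O.
ssd_capacity_gb, ssd_price, ssd_random_mb_per_s = 64.0, 850.0, 71.0

# 15K RPM Cheetah (from the table): 300GB, ~$285, ~3.5 MB/s average random 8K I/O.
hdd_capacity_gb, hdd_price, hdd_random_mb_per_s = 300.0, 285.0, 3.5

# Buy enough devices to satisfy whichever constraint binds: capacity or I/O rate.
ssds_needed = math.ceil(max(data_gb / ssd_capacity_gb, random_mb_per_s / ssd_random_mb_per_s))
hdds_needed = math.ceil(max(data_gb / hdd_capacity_gb, random_mb_per_s / hdd_random_mb_per_s))

print(f"SSD budget: {ssds_needed} drives, ${ssds_needed * ssd_price:,.0f}")
print(f"HDD budget: {hdds_needed} drives, ${hdds_needed * hdd_price:,.0f}")
# Whichever budget is lower wins; for a hot random workload like this assumed
# one, the SSD budget usually comes out well below the HDD budget.
```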

 

In the sequential I/O world, we can use the same technique. Again, we look at the sequential I/O density to understand the cutoff between bandwidth bound and capacity bound for a given workload. Very hot workloads over small data sizes will be a win on SSD, but as soon as the data sizes get interesting, HDDs are the more economic solution for sequential workloads. The detailed calculation is the same. Figure out how many HDDs are required to support your workload on the basis of capacity or sequential I/O rates (depending upon which is in shortest supply for your workload on that storage technology). Figure out the HDD budget. Then do the same for SSDs and compare the numbers. What you'll find is that, for sequential workloads, SSDs are only the best value for very high I/O rates over relatively small data sizes.

 

Using these techniques and data, we can see when SSDs are a win for workloads with a given access pattern. I've tested this line of thinking against many workloads and find that hot, random workloads can make sense on SSDs. Pure sequential workloads almost never do unless the access patterns are very hot or the capacity required is relatively small.

 

For specific workloads that are neither pure random nor pure sequential, we can figure out the storage budget to support the workload on HDDs and on SSDs as described above and do the comparison.  Using these techniques, we can step beyond the hype and let economics drive the decision.

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Saturday, April 18, 2009 10:19:30 AM (Pacific Standard Time, UTC-08:00)  #    Comments [5] - Trackback
Hardware
 Tuesday, April 14, 2009

My notes from an older talk given by Ryan Barrett on the Google App Engine Data Store at Google I/O last year (5/28/2008). Ryan is a co-founder of the App Engine team.

 

·         App Engine Data Store is built on Bigtable.

o   Scalable structured storage

o   Not a sharded database

o   Not an RDBMS (MySQL, Oracle, etc.)

o   Not a Distributed Hash Table (DHT)

o   It IS a sharded sorted array

·         Supported operations:

o   Read

o   Write

o   Delete

o   Single row transactions (optimistic concurrency control).

o   Scans:

1.       Prefix scan

2.       Range scan

·          Primary object: Entity

o   Stored in entity table

o   Each row has a name and the row name is fully qualified /root/parent/entity/child

o   Each entity has a parent or is a root entity and may have child entities

o   Primary key is the fully qualified name and this can’t change

o   An entity can’t be reparented (it can be deleted and created with a different parent)

·         Queries:

o   Queries can be filtered on kind and Ryan says kind “is like a table” (kind can be parent, child, grandparent, …)

o   Queries can be filtered on ancestor

o   Query language is GQL (presumably Google Query Language) which is a small subset of SQL

o   All queries must be expressible as range or prefix scans (no sort, order-by, or other unbounded-size operations are supported; see the sketch after these notes for why this follows from the sorted-array design)

·         Secondary index implementation:

o   Indexes are also implemented as BigTable tables

o   Kind Index:

·         Contents: (kind, key)

o   Single property index:

·         Contents: (kind, name, value)

·         Two copies of this index maintained: 1) ascending, and 2) descending

o   Composite indexes:

·         Contents: (kind, value, value)

·         Supports multi-property indexes

·         Built on programmer request but not on use (a query returns an error if the required index doesn't exist)

·         Programmer can specify what composite indexes are needed in index.yaml

·         SDK creates composite index specs automatically in index.yaml as queries are run

·         Entity group

o   Supports multi-entity update

·         Defined by root entity (all entities under a root are an entity group)

·         All journaling and transactions done at root

·         Text and Blobs:

o   Not indexed. All other properties are
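
To illustrate the "sharded sorted array" point, and why every query must reduce to a prefix or range scan, here is a toy sketch. The keys and layout below are invented for illustration and are not the real Datastore or Bigtable encoding.

```python
# Illustrative only: a toy sorted-key index showing why prefix and range scans
# are the cheap operations. Keys are invented; this is not the real encoding.
import bisect

# Entity rows keyed by fully qualified name, kept sorted by key.
rows = sorted([
    "/root1/parent1/child1",
    "/root1/parent1/child2",
    "/root1/parent2/child1",
    "/root2/parent1/child1",
])

def prefix_scan(keys, prefix):
    # All keys starting with the prefix form one contiguous run in sorted order.
    lo = bisect.bisect_left(keys, prefix)
    hi = bisect.bisect_left(keys, prefix + "\xff")
    return keys[lo:hi]

def range_scan(keys, start, end):
    # Keys in [start, end) are likewise a single contiguous slice.
    return keys[bisect.bisect_left(keys, start):bisect.bisect_left(keys, end)]

print(prefix_scan(rows, "/root1/parent1/"))   # children of one parent
print(range_scan(rows, "/root1", "/root2"))   # everything under /root1
# Both operations read one contiguous slice, which is why queries that can't be
# expressed as a prefix or range scan (arbitrary sorts, unbounded operations)
# aren't supported directly.
```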

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Tuesday, April 14, 2009 5:28:35 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Services
 Sunday, April 12, 2009

All new technologies go through an early phase when everyone is initially convinced the technology can't work. Those that actually do solve interesting problems then get adopted for some workloads and head into the next phase, where people see the technology working well for some workloads and generalize this outcome to a much wider class of workloads, becoming convinced the new technology is the solution for all problems. Solid State Disks (SSDs) are now clearly in this second phase.

 

Well-intentioned people are arguing emphatically that SSDs are great because they are "fast". For the most part, SSDs actually are faster than disks in random reads, random writes, and sequential I/O. I say "for the most part" since some SSDs have been incredibly bad at random writes. I've seen sequential write rates as low as ¼ that of magnetic HDDs, but Gen2 SSD devices are now far better. Good devices are now delivering faster-than-HDD results across random read, write, and sequential I/O. It's no longer the case that SSDs are "only good for read intensive workloads".

 

So, the argument that SSDs are fast is now largely true, but "fast" really is a misleading measure. Performance without cost has no value. What we need to look at is performance per unit cost. For example, SSD sequential access performance is slightly better than most HDDs, but the cost per MB/s is considerably higher. It's cheaper to obtain sequential bandwidth from multiple disks than from a single SSD. We have to look at performance per unit cost rather than just performance. When you hear a reference to performance as a one-dimensional metric, you're not getting a useful engineering data point.

 

When do SSDs win when looking at performance per unit dollar on the server?  Server workloads requiring very high IOPS rates per GB are more cost effective on SSDs.  Online transaction systems such as reservation systems, many ecommerce systems, and anything with small, random reads and writes can run more cost effectively on SSDs. Some time back I posted When SSDs make sense in server applications and the partner post When SSDs make sense in client applications. What I was looking at is where SSDs actually do make economic sense. But, with all the excitement around SSDs, some folks are getting a bit over exuberant and I’ve found myself in several arguments where smart people are arguing that SSDs make good economic sense in applications requiring sequential access to sizable databases. They don’t.

 

It's time to look at where SSDs don't make sense in server applications. I've been intending to post this for months and my sloth has been rewarded. The Microsoft Research Cambridge team recently published Migrating Server Storage to SSDs: Analysis of Tradeoffs and the authors saved me some work by taking this question on. In this paper, the authors look at three large server-side workloads:

1.       5000 user Exchange email server

2.       MSN Storage backend

3.       Small corporate IT workload

 

The authors show that these workloads are far more economically hosted on HDDs and I agree with their argument.  They conclude:

 

…across a range of different server workloads, replacing disks by SSDs is not a cost effective option at today’s price. Depending on the workload, the capacity/dollar of SSDs needs to improve by a factor of 3 – 3000 for SSDs to replace disks. The benefits of SSDs as an intermediate caching tier are also limited, and the cost of provisioning such a tier was justified for fewer than 10% of the examined workloads

 

They have shown that SSDs don't make sense across a variety of server-side workloads; essentially, these workloads are more cost-effectively hosted on HDDs. I don't quite agree with generalizing this argument to say that SSDs don't make sense for any server-side workloads. They remain a win for very high IOPS OLTP databases, but it's fair to say that these workloads are a tiny minority of server-side workloads. The right way to make the decision is to figure out the storage budget for the workload hosted on HDDs, compare that with the budget to support the workload on SSDs, and decide on that basis. This paper argues that the VAST majority of workloads are more economically hosted on HDDs.

 

Thanks to Zach Hill who sent this my way.

 

                                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Sunday, April 12, 2009 8:31:05 AM (Pacific Standard Time, UTC-08:00)  #    Comments [10] - Trackback
Hardware
 Thursday, April 09, 2009

Last week I attended the Data Center Efficiency Summit hosted by Google. You'll find four postings on various aspects of the summit at: http://perspectives.mvdirona.com/2009/04/05/DataCenterEfficiencySummitPosting4.aspx.

 

Two of the most interesting videos:

·         Modular Data Center Tour: http://www.youtube.com/watch?v=zRwPSFpLX8I&feature=channel

·         Data Center Water Treatment Plant: http://www.youtube.com/watch?v=nPjZvFuUKN8&feature=channel

 

A Cnet article with links to all the videos: http://news.cnet.com/8301-1001_3-10215392-92.html?tag=newsEditorsPicksArea.0.

 

The presentation I did on Data Center Efficiency Best Practices is up at: http://www.youtube.com/watch?v=m03vdyCuWS0

 

                                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Thursday, April 09, 2009 7:18:35 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Services
 Tuesday, April 07, 2009

In the talk I gave at the Efficient Data Center Summit, I noted that the hottest temperature on earth over recorded history was at Al Aziziyah, Libya in 1922, where 136F (58C) was recorded (see Data Center Efficiency Summit (Posting #4)). What's important about this observation from a data center perspective is that this most extreme temperature event ever is still less than the specified maximum temperatures for processors, disks, and memory. What that means is that, with sufficient air flow, outside air without chillers could be used to cool all components in the system. Essentially, it's a mechanical design problem. Admittedly this example is extreme, but it forces us to realize that 100% free air cooling is possible. Once we understand that it's a mechanical design problem, we can trade off the huge savings of higher temperatures against the increased power consumption (semiconductor leakage and higher fan rates) and potentially increased server mortality rates.

 

We've known for years that air-side economization (use of free air cooling) is possible and can limit the percentage of time that chillers need to be used. If we raise the set point in the data center, chiller usage falls quickly. For most places on earth, a 95F (35C) set point combined with free air cooling and evaporative cooling is sufficient to eliminate the use of chillers entirely.

 

Mitigating the risk of increased server mortality rates, we now have manufacturers beginning to warrant their equipment to run in more adverse conditions. Rackable Systems recently announced that CloudRack C2 will carry a full warranty at 104F (40C): 40C (104F) in the Data Center. Ty Schmitt of Dell confirms that all Dell servers are warranted at 95F (35C) inlet temperatures.

 

I recently came across a wonderful study done by the Intel IT department (thanks to Data Center Knowledge): Reducing Data Center Cost with an Air Economizer.

 

In this study, Don Atwood and John Miner of Intel IT take a data center module and divide it into two rooms of 8 racks each. One room is run as a control with re-circulated air at their standard temperatures. The other room is run on pure outside air with the temperature allowed to range between 65F and 90F. If the outside temperature falls below 65F, server heat is re-circulated to maintain 65F. If it rises over 90F, the air conditioning system is used to bring it back down to 90F. The servers ran silicon design simulations at an average utilization rate of 90% for 10 months.

 

 

The short summary is that the server mortality rates were marginally higher – it’s not clear if the difference is statistical noise or significant – and the savings were phenomenal. It’s only four pages and worth reading: http://www.intel.com/it/pdf/Reducing_Data_Center_Cost_with_an_Air_Economizer.pdf.

 

We all need to remember that higher temperatures mean less engineering headroom and less margin for error, so care needs to be taken when raising temperatures. However, it's very clear that it's worth investing in the control systems and processes necessary for high temperature operation. Big savings await and it's good for the environment.

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Tuesday, April 07, 2009 2:27:45 PM (Pacific Standard Time, UTC-08:00)  #    Comments [8] - Trackback

 Sunday, April 05, 2009

Last week, Google hosted the Data Center Efficiency Summit.  While there, I posted a couple of short blog entries with my rough notes:

·         Data Center Efficiency Summit

·         Rough Notes: Data Center Efficiency Summit

·         Rough Notes: Data Center Efficiency Summit (posting #3)

 

In what follows, I summarize the session I presented and go into more depth on some of what I saw in sessions over the course of the day.

 

I presented Data Center Efficiency Best Practices at the 1pm session.  My basic point was that PUEs in the 1.35 range are possible and attainable without substantial complexity and without innovation.  Good solid design, using current techniques, with careful execution is sufficient to achieve this level of efficiency.

 

In the talk, I went through power distribution from high voltage at the property line to 1.2V at the CPU and showed cooling from the component level to release into the atmosphere. For electrical systems, the talk covered an ordered list of rules to increase power distribution efficiency:

1.       Avoid conversions (fewer transformer steps & efficient or no UPS)

2.       Increase efficiency of conversions

3.       High voltage as close to load as possible

4.       Size voltage regulators (VRM/VRDs) to load & use efficient parts

5.       DC distribution potentially a small win (regulatory issues)

Looking at mechanical systems, the talk pointed out the gains to be had by carefully moving to higher data center temperatures.  Many server manufacturers including Dell and Rackable will fully stand behind their systems at inlet temperatures as high as 95F. Big gains are possible via elevated data center temperatures. The ordered list of mechanical systems optimizations recommended:

1.       Raise data center temperatures

2.       Tight airflow control, short paths, & large impellers

3.       Cooling towers rather than chillers

4.       Air-side economization & evaporative cooling

 

The slides from the session I presented are posted at: http://mvdirona.com/jrh/TalksAndPapers/JamesHamilton_Google2009.pdf.

 

Workshop Summary:

The overall workshop was excellent. Google showed the details behind 1) the modular data center they did 4 years ago, covering both the container design and that of the building that houses them, 2) the river water cooling system employed in their Belgium data center, and 3) the custom Google-specific server design.

 

Modular DC: The modular data center was a 45-container design where each container was 222KW (roughly 780W/sq ft). The containers were housed in a fairly conventional two-floor facility. Overall, it was nicely executed, but all Google data centers built since this one have been non-modular and each subsequent design has been more efficient than this one. The fact that Google has clearly turned away from modular designs is interesting. My read is that the design we were shown missed many opportunities to remove cost and optimize for the application of containers. The design chosen essentially built a well executed but otherwise conventional data center shell using standard power distribution systems and standard mechanical systems. No part of the building itself was optimized for containers. Even though it was a two-level design, rather than just stacking containers, a two-floor shell was built. A 220-ton gantry crane further drove up costs, but the crane was not fully exploited by packing the containers in tight and stacking them.

 

For a containerized model to work economically, the attributes of the container need to be exploited rather than merely installing containers in a standard data center shell. Rather than building an entire facility with multiple floors, we would need to use a much cheaper shell, if any at all. The ideal would be a design where just enough concrete is poured to mount four container mounting bolts so the containers can be tied down to avoid wind damage. I believe the combination of not building a full shell, the use of free air cooling, and the elimination of the central mechanical system would allow containerized designs to be very cost effective. What we learn from the Google experiment is that the combination of a conventional data center shell and mechanical systems with containers works well (their efficiency data shows it to be very good) but isn't notably better than similar design techniques used with non-containerized designs.

 

River water cooling: The Belgium river-water-cooled data center caught my interest when it was first discussed a year ago. The Google team went through the design in detail. Overall, it's beautiful work, but it includes a full water treatment plant to treat the water before using it. I like the design in that it's far better both economically and environmentally to clean and use river water than to take fresh water from the local utility. But the treatment plant itself represents a substantial capital expense and requires energy for operation. It's clearly an innovative way to reduce fresh water consumption. However, I slightly prefer designs that depend more deeply on free air cooling and avoid the capital and operational expense of the water treatment plant.

 

Custom Server: The server design Google showed was clearly a previous generation. It's a 2005 board and I strongly suspect there are subsequent designs at Google that haven't yet been shown publicly. I fully support this and think showing the previous generation design publicly is a great way to drive innovation inside a company while contributing to the industry as a whole. It's a great approach and the server that was shown last Wednesday was a very nice design.

 

The board is a 12V-only design. This has become more common of late, with IBM, Rackable, Dell, and others all doing it. However, when the board was first designed, this was considerably less common. 12V-only supplies are simpler, distributing the single voltage on-board is simpler and more efficient, and distribution losses are lower at 12V than at either 3.3V or 5V for a given trace size. Nice work.
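
A quick sketch of why the higher distribution voltage wins. The trace resistance and board load below are assumptions for illustration; the physics is just resistive loss = I²R with I = P/V.

```python
# Why 12V-only distribution loses less in the same trace than 3.3V or 5V:
# resistive loss is I^2 * R and current I = P / V.
# Trace resistance and load are illustrative assumptions.

trace_resistance_ohm = 0.01    # assumed resistance of the distribution path
load_watts = 120.0             # assumed board load

for volts in (12.0, 5.0, 3.3):
    current = load_watts / volts
    loss = current ** 2 * trace_resistance_ohm
    print(f"{volts:>4}V rail: {current:5.1f}A, {loss:5.1f}W lost in distribution")
# The 12V rail carries the least current and so wastes the least power,
# which is part of the appeal of single-voltage 12V boards.
```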

 

Perhaps the most innovative aspect of the board design is the use of a distributed UPS. Each board has a 12V VRLA battery that can keep the server running for 2 to 3 minutes during power failures. This is plenty of time to ride through the vast majority of power failures and is long enough to allow the generators to start, come on line, and sync. The most important benefit of this design is that it avoids the expensive central UPS system. And it also avoids the losses of the central UPS (94% to 96% efficient UPSs are very good and most are considerably worse). Google reported their distributed UPS is 99.7% efficient. I like the design.
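
To put the 99.7% versus 94-96% figures in perspective, here is a minimal sketch. The efficiency numbers come from the paragraph above; the IT load and electricity price are assumptions for illustration.

```python
# Rough cost of UPS conversion losses: central double-conversion versus
# Google's reported distributed design. Load and price are assumed.

it_load_kw = 1_000.0       # assumed 1 MW of critical load
price_per_kwh = 0.07       # assumed electricity price ($/kWh)
hours_per_year = 8_760

def annual_ups_loss(efficiency):
    input_kw = it_load_kw / efficiency           # power drawn to deliver the load
    return (input_kw - it_load_kw) * hours_per_year * price_per_kwh

central = annual_ups_loss(0.95)        # a good central UPS per the text above
distributed = annual_ups_loss(0.997)   # Google's reported distributed UPS

print(f"central UPS losses:     ${central:,.0f}/year")
print(f"distributed UPS losses: ${distributed:,.0f}/year")
print(f"difference:             ${central - distributed:,.0f}/year per MW of IT load")
```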

 

The motherboard was otherwise fairly conventional with a small level of depopulation. The second Ethernet port was deleted as was USB and other components. I like the Google approach to server design.

 

The server was designed to be rapidly serviced with the power supply, disk drives, and battery all being Velcro attached and easy to change quickly.  The board itself looks difficult to change but I suspect their newer designs will address that shortcoming.

 

Hats off to Google for organizing this conference to make high-efficiency data center and server design techniques more broadly available across the industry. Both the board and the data center designs shown in detail were not Google's very newest, but all were excellent and well worth seeing. I like the approach of showing the previous generation technology to the industry while pushing ahead with newer work. This technique allows a company to reap the potential competitive advantages of its R&D investment while at the same time being more open with the previous generation.

 

It was a fun event and we saw lots of great work. Well done Google.

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Sunday, April 05, 2009 3:37:12 PM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Services

The HotPower '09 workshop will be held on October 10th at the same venue and right before the Symposium on Operating Systems Principles (SOSP 2009) at Big Sky Resort, Montana. HotPower recognizes that power is becoming a central issue in the design of all systems, from embedded systems to servers for high-scale data centers.

From http://hotpower09.stanford.edu/:

Power is increasingly becoming a central issue in designing systems, from embedded systems to data centers. We do not understand energy and its tradeoff with performance and other metrics very well. This limits our ability to further extend the performance envelope without violating physical constraints related to batteries, power, heat generation, or cooling.

HotPower hopes to provide a forum in which to present the latest research and to debate directions, challenges, and novel ideas about building energy-efficient computing systems. In addition, researchers coming to these issues from fields such as computer architecture, systems and networking, measurement and modeling, language and compiler design, and embedded systems will gain the opportunity to interact with and learn from one another.

If you are interested in submitting a paper to HotPower: http://hotpower09.stanford.edu/cfp.html.

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Sunday, April 05, 2009 7:18:20 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Ramblings
 Wednesday, April 01, 2009

Previous “rough notes” posting: Rough Notes: Data Center Efficiency Summit.

 

Containers Based Data Center

·         Speaker: Jimmy Clidaras

·         45 containers (222KW each/max is 250Kw – 780W/sq ft)

·         Showed pictures of containerized data centers

·         300x250’ of container hanger

·         10MW facility

·         Water side economizer

·         Chiller bypass

o   Limit chiller hours via raised temp inside

·         High efficiency transformers: 99.5%

·         27C (81F) cold aisle

·         Distributed UPS (each server has a lead-acid battery).

Jimmy showed videos of the containerized data center. They showed the layout of the entire facility and the detail behind the container design. PUE is in the 1.25 range. This data center is listed as "Data Center A" in the Google PUE publications.

 

Overall it was a great presentation and it’s great to see this level of detail being contributed to the industry. The day continues to be super interesting.

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Wednesday, April 01, 2009 1:22:44 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback

My rough notes from the first two sessions at the Data Center Efficiency Summit at Google Mountain view earlier today:

 

Data Center Energy Going Forward

·         Speaker: John Tuccillo, APC

·         Green Grid:

o   Data Collection  & Analysis

o   Data Center Technology & Strategy

o   Data Center Operations

o   Data Center Metrics & Measurements

·         Metrics team:

o   PUE & DCiE

o   DCP: Data Center Productivity

 

Insights in Google’s PUE Results

·         Speakers: Chris Malone & Ben Jai, Google

·         Chris started off by reviewing existing data from 6 data centers, averaged quarterly and published for a year (on the web):

o   All less than 1.3

o   Best at 1.16 (Google DC ‘E’)

·         Inclusion in external published data:

o   5MW or bigger and operating for more than 6 months

·         Typical PUE ~1.7

·         Google DC E

o   Mechanical: (didn't get the data point)

o   Power Distribution: 4.9%

·         Achieved by rigorous application of best practices:

o   Air-side economization

o   Water-side economization

o   Close coupled cooling

o   99.9% UPS efficiency

·         99.9% UPS Efficiency (Ben Jai presenting)

o   Distributed on-board UPS

o   Single voltage motherboard (12v)

o   Motherboard provides 5v to disk and all step downs needed by on board requirements

o   Installed a lead-acid distributed UPS to ride through power sags

o   Avoids double conversion of many central UPS

o   Only enough power in UPS to allow generators to start or to switch to other A/C supply

·         Google Measurement of PUE (Chris Malone):

o   Average DC around PUE of 2.0 in 2006

o   State of the art data center around 1.2 using exotic techniques

o   2 of 6 DC report daily, 4 of 6 report continuously

o   Measure at sub-station and extrapolate to utility input at substation

o   Most measurements on the server side taken at PDUs.  On newer servers, it’s measured at PDUs (more precise).

o   Accuracy of PUE measurement at +/-2%

·         Best Google facility on quarterly basis: PUE => 1.19

o   The problem with non-annual numbers is they are skewed by the impacts of changing weather conditions.  Need to annualize to gain full insight.

o   They showed some impacts on PUE from weather factors and DC maintenance

o   Showed utilization at different facilities:

§  Ranged from clusters around 30% to clusters on the high end at 75% (amazingly high by industry standards).

 

Chris and Ben presented great material in this last section. Super interesting, very nice designs, and well presented.  The PUE measurement techniques look credible and the results are excellent.

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Wednesday, April 01, 2009 10:04:28 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Services

Google is hosting the Efficient Data Center Summit today at their Mountain View facility.  It looks like it’s going to be a great event and I fully expect we’ll see more detail than ever on how high-scale operators run their facilities. In addition, one of the goals of the event is to talk about what the industry as a whole can do to increase data center efficiency.

 

·         9:00 am: Registration

·         9:30 am: Welcome (Urs Hoelzle, Google)

·         9:45 am: Standards from The Green Grid (John Tuccillo, The Green Grid)

·         10:30 am: Insights Into Google's PUE (Jimmy Clidaras & Chris Malone, Google)

·         11:15 am: What's Next for the Data Center Industry (Andrew Fanara, EPA)

·         12:00 pm: Lunch

·         1:00 pm: Best Practices (James Hamilton, Amazon Web Services)

·         1:45 pm: Google Data Center Video Tour (Jimmy Clidaras & Chris Malone, Google)

·         2:15 pm: Best Practices Q&A (Luiz Barroso, Moderator, Google; Ken Brill, Uptime Institute; James Hamilton, Amazon Web Services; Olivier Sanche, eBay)

·         3:00 pm: Break

·         3:15 pm: Sustainable Data Centers & Water Management (Joe Kava, Google)

·         4:00 pm: Wrap-Up

I just flew back from China a day ago and spending more than a day in an airplane has left me with an upper respiratory issue. But I’ll work through that and, barring my voice going away entirely, I’ll be presenting at 1pm.  I expect I’ll also blog interesting points over the course of the day.

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Wednesday, April 01, 2009 7:10:52 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Services
 Sunday, March 29, 2009

I participated in the Self Managing Database Systems closing panel titled Grand Challenges in Database Self-Management. Also on the panel were:

Ken Salem of the University of Waterloo (my alma mater) organized the panel and asked each of us to: “identify one substantial open problem related to self-managing databases - something that people interested in this area should be working on.  Feel free to define "database" broadly.”

 

The panel was organized to give us each 10 minutes to present our grand challenge followed by audience Q&A.  As my topic, I chose: RDBMS Losing Workloads in the Cloud. The basic premise is that very high-scale service workloads often do use RDBMS, but they use them as simple ISAMs and full RDBMS functionality is rarely exploited.  And many data management tasks in new domains are done by MapReduce, Memcached, or other solutions.  Basically, RDBMSs are heavily used in the cloud but only a tiny percentage of their features are used, and many new workloads aren’t using an RDBMS at all.

 

The call to action is to focus on cost. Go where the user pain is (services optimize for cost). And, as a test, if the very first thing the largest users do is shut off auto-management, the feature isn’t yet right.  We should be implementing auto-management systems that the very biggest users actually choose to use. These very large customers prioritize stability over the last few percentage points of optimization. They don’t want to get called in the middle of the night when a plan changes.  My recommendation is to adopt a do-no-harm mantra and, failing that, detect and correct harm before it has broad impact.  Be able to revert a failed optimization fast. Focus on the problems where human optimization is not possible. For example, resource allocation is extremely dynamic: the correct amount of buffer pool, sort heap, and hash join space varies with the workload and can’t be effectively set by hand. This type of problem is perfect for auto-management.

 

Focus on optimizations that are 1) stable (do no harm) or 2) dynamic, where you can do better than a static, human-chosen setting.
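To make the second category concrete, here is a minimal sketch of the kind of do-no-harm feedback loop I have in mind for dynamic memory allocation. The metric names, step size, and 2% harm threshold are all hypothetical illustrations, not drawn from any shipping database engine:

# Hypothetical sketch of a do-no-harm dynamic memory tuner. It proposes a
# small shift of memory between the buffer pool and sort/hash work areas
# based on observed pressure, and keeps the change only if throughput
# doesn't regress beyond a small threshold.

def plan_rebalance(metrics, step_mb=64):
    """Return (buffer_pool_delta_mb, sort_heap_delta_mb)."""
    if metrics["sort_spill_rate"] > metrics["buffer_pool_miss_rate"]:
        return (-step_mb, +step_mb)    # sorts are spilling: give them memory
    return (+step_mb, -step_mb)        # otherwise favor the buffer pool

def keep_change(before_qps, after_qps, harm_threshold=0.02):
    """Do no harm: revert any change costing more than harm_threshold of throughput."""
    return after_qps >= (1.0 - harm_threshold) * before_qps

# Example: sorts spilling heavily while the buffer pool is mostly hitting.
print(plan_rebalance({"sort_spill_rate": 0.30, "buffer_pool_miss_rate": 0.05}))
print(keep_change(before_qps=12000, after_qps=11500))   # False -> revert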

 

I would also like to see the community define “database” to be all persistent data management rather than only applications written to relational interfaces.  The problem is far larger.

 

My slides are at: http://mvdirona.com/jrh/talksAndPapers/JamesHamilton_SMDB_Panel.pdf.

 

                                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Sunday, March 29, 2009 11:52:48 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
Services
 Saturday, March 28, 2009

Today, I’m at the Self Managing Database Systems workshop, which is part of the International Conference on Data Engineering in Shanghai.  At last year’s ICDE, I participated in a panel: International Conference on Data Engineering 2008.  Earlier today, I did the SMDB keynote where I presented: Cloud Computing Economies of Scale.

 

The key points I attempted to make were:

·         Utility (Cloud) computing will be a big part of the future of server-side systems. This is a lasting and fast growing economy with clear economic gains. These workloads are already substantial and growing incredibly fast. And, it’s a new frontier where there are many new tough problems to be solved. Reminiscent of the RDBMS world 20 years ago.

·         High-scale service workloads are very different from enterprise workloads. Enterprise workloads typically have people as the number one cost.  Utility computing affords greater scale and a deeper investment in automation and, as a consequence, people costs are actually very low. H/W costs are dominant, and power and costs functionally related to power are soon to take over.  The optimizations affordable in the utility computing world are much different from the enterprise computing world and the cost equations and drivers are very different.

·         The Recovery Oriented Computing model is an incredibly powerful management technique that doesn’t eliminate human administration but reduces it by a factor of 10, leaving only the interesting and tough problems. I argue that administrators working on only the tough problems not amenable to automation are more effective, more valuable, and make fewer mistakes.  Drudgery and repetition drive errors.

·         If workloads are partitioned, synchronously redundant, and well monitored, they can be managed by ROC techniques with a savings of over 10x possible. This is how the best services are managed and it is a technique that will (slowly) spread to the enterprise.

·         I walked through a variety of interesting management & optimization problems in the service world and pointed out that the current solutions are nowhere close to as good as they could be.  Huge improvements will be made over the next decade. It’s a great research area and a great area in which to be working.

 

The slides I presented are up at: http://mvdirona.com/jrh/TalksAndPapers/JamesHamilton_SMDB2009.pdf.

 

                                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Saturday, March 28, 2009 8:41:46 PM (Pacific Standard Time, UTC-08:00)  #    Comments [3] - Trackback
Services
 Friday, March 27, 2009

There has been lots of speculation about the new name for Microsoft Search. The most prevalent speculation is that Live.com will be branded Kumo: Microsoft to Rebrand Search. Will it be Kumo?

 

Confirming that the Kumo brand is definitely the name being tested internally at Microsoft, I’ve noticed over the last week that the search engine referral URL www.kumo.com has been showing up frequently as the source for searches that find this blog.  I suppose the brand could be changed yet again as the Microsoft internal bits are released externally. But, having been through the hassle of a brand change and knowing how much testing it really does require, I suspect we’re looking at the final answer with this one.

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Friday, March 27, 2009 5:18:10 PM (Pacific Standard Time, UTC-08:00)  #    Comments [3] - Trackback
Ramblings
 Thursday, March 26, 2009

Over the last couple of years, I’ve been getting more interested in Erlang, originally designed at Ericsson, as a high-scale services implementation language.  Back in May of last year I posted: Erlang and High-Scale System Software. 

 

The Erlang model of spawning many lightweight processes that communicate via message passing is typically less efficient than the more common shared-memory-and-locks approach, but it is much easier to get a correct implementation using this model.  Erlang also encourages a “fail fast” programming model.  Years ago I became convinced that this design pattern is one of the best ways to get high-scale systems software correct (Designing and Deploying Internet-Scale Services).   
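For readers who haven’t seen the model, here is a rough Python approximation of the pattern: each worker owns its state, communicates only through its mailbox, and fails fast on unexpected input rather than limping along with possibly corrupt state. This is my own toy illustration, not Erlang’s actual API (which also adds supervisors that restart crashed processes):

import queue
import threading

def worker(mailbox, results):
    total = 0                       # private state; never shared directly
    while True:
        msg = mailbox.get()
        if msg is None:             # shutdown message
            results.put(total)
            return
        assert isinstance(msg, int), "fail fast on an unexpected message"
        total += msg

mailbox, results = queue.Queue(), queue.Queue()
threading.Thread(target=worker, args=(mailbox, results), daemon=True).start()
for n in (1, 2, 3):
    mailbox.put(n)
mailbox.put(None)
print(results.get())                # 6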

Chris Newcombe of Amazon recently presented an excellent talk on Erlang at the Berkeley RAD Lab.  The first part of Chris’ Berkeley talk on Erlang is posted here: Erlang: Productivity and Performance (ChrisNewcombe_ErlangProductivityPerformance.pdf (298.21 KB)). The second half of Chris’ talk is posted at: http://ulf.wiger.net/weblog/wp-content/uploads/2009/01/damp09-erlang-multicore.pdf (unfortunately this link is down at the time of this posting). Update: Ulf Wiger offers a live URL for his excellent slides: http://www.cse.unsw.edu.au/~pls/damp09/damp09-wiger-keynote.pdf.

In this talk Chris gives an overview of Erlang, talks about some of the advantages of the language, and then goes through some of the performance strengths and weaknesses of Erlang.

 

                                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

Thursday, March 26, 2009 6:42:25 AM (Pacific Standard Time, UTC-08:00)  #    Comments [3] - Trackback
Software
 Friday, March 20, 2009

From Data Center Knowledge yesterday: Rackable Turns up the Heat, we see the beginnings of the next class of server innovations. This one is going to be important and have lasting impact. The industry will save millions of dollars and megawatts of power, even ignoring the capital expense reductions possible. Hats off to Rackable Systems for being the first to deliver. Yesterday they announced the CloudRack C2.  CloudRack is very similar to the MicroSlice offering I mentioned in the Microslice Servers posting. These are very low-cost, high-efficiency, high-density server offerings targeting high-scale services.

 

What makes the CloudRack C2 particularly notable is that they have raised the standard operating temperature range to a full 40C (104F).  Data center mechanical systems consume roughly 1/3 of all power brought into the data center:

·         Data center power consumption:

o   IT load (servers): 1/1.7 => 59%

o   Distribution losses: 8%

o   Mechanical load (cooling): 33%

From: Where Does the Power Go?
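The split above falls directly out of the PUE arithmetic; a quick sketch, taking the typical PUE of roughly 1.7 and the 8% distribution loss figure from the breakdown above:

# Rough arithmetic behind the breakdown above.
pue = 1.7
it_fraction = 1 / pue                         # ~0.59 of utility power reaches servers
overhead = 1 - it_fraction                    # ~0.41 lost to everything else
distribution_losses = 0.08                    # from the breakdown above
mechanical = overhead - distribution_losses   # ~0.33 goes to cooling

print(round(it_fraction, 2), distribution_losses, round(mechanical, 2))  # 0.59 0.08 0.33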

 

The best way to make cooling more efficient is to stop doing so much of it.  I’ve been asking all server producers, including Rackable, to commit to full warranty coverage for servers operating with 35C (95F) inlet temperatures.  Some think I’m nuts but a few innovators like Rackable and Dell fully understand the savings possible. Higher data center temperatures conserve energy and reduce costs. It’s good for the industry and good for the environment.

 

To fully realize these industry-wide savings we need all data center IT equipment certified for high temperature operation, particularly top-of-rack and aggregation switches.

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Friday, March 20, 2009 6:25:39 AM (Pacific Standard Time, UTC-08:00)  #    Comments [6] - Trackback
Hardware
 Thursday, March 19, 2009

HotCloud ’09 is a workshop that will be held at the same time as USENIX ’09 (June 14 through 19, 2009). The CFP:

 

Join us in San Diego, CA, June 15, 2009, for the Workshop on Hot Topics in Cloud Computing. HotCloud '09 seeks to discuss challenges in the Cloud Computing paradigm including the design, implementation, and deployment of virtualized clouds. The workshop provides a forum for academics as well as practitioners in the field to share their experience, leverage each other's perspectives, and identify new and emerging "hot" trends in this area.

HotCloud '09 will be co-located with the 2009 USENIX Annual Technical Conference (USENIX '09), which will take place June 14–19, 2009. The exact date of the workshop will be set soon.

The call for papers is at: http://www.usenix.org/events/hotcloud09/cfp/.

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Thursday, March 19, 2009 4:22:14 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Ramblings
 Wednesday, March 18, 2009

This is the third posting in the series on heterogeneous computing. The first two were:

1.       Heterogeneous Computing using GPGPUs and FPGAs

2.       Heterogeneous Computing using GPGPUs:  NVidia GT200

 

This post looks more deeply at the AMD/ATI RV770.

 

The latest GPU from AMD/ATI is the RV770 architecture.  The processor contains 10 SIMD cores, each with 16 streaming processor (SP) units.  The SIMD cores are similar to NVidia’s Texture Processor Cluster (TPC) units (the NVidia GT200 also has 10 of these), and the 10*16 = 160 SPs are the “execution thread granularity,” similar to NVidia’s SP units (the GT200 has 240 of these).  Unlike NVidia’s design, which executes 1 instruction per thread, each SP on the RV770 executes packed 5-wide VLIW-style instructions.  For graphics and visualization workloads, floating point intensity is high enough to average about 4.2 useful operations per cycle.  On dense data-parallel operations (e.g., dense matrix multiply), all 5 ALUs can easily be used.

 

The ALUs in each SP are named x, y, z, w and t.  x, y, z and w are symmetric, and capable of retiring a single precision floating point multiply-add per cycle.  The t unit is a Special Function Unit (SFU) capable of everything an xyzw ALU can do, plus transcendental functions like sin, cos, etc.  There is also a branch unit in each SP to deal with shader program branches.

 

From this information, we can see that when people are talking about 800 “shader cores” or “threads” or “streaming processors”, they are actually referring to the 10*16*5 = 800 xyzwt ALUs.  This can be confusing, because there are really only 160 simultaneous instruction pipelines.  Also, both NVidia and AMD use symmetric single issue streaming multiprocessor architectures, so branches are handled very differently from CPUs. 

 

The RV770 is used in the desktop Radeon 4850 and 4870 video cards, and evidently the “workstation” FireStream 9250 and FirePro V8700.  The Radeon 48x0 X2 “enthusiast desktop” cards have two RV770s on the same card. Like NVidia Quadro cards, the typical difference between the “desktop” and “workstation” cards is that the workstation card has anti-aliased (AA) line capability enabled (primarily for the CAD market) and it costs 5-10 times as much.    

 

[The computing cores always have AA line capability, so it’s probably more accurate to say that the desktop cards have this capability disabled.  Theoretically, foundry binning could sort processors with hard faults in the “anti-aliased line hardware” as “desktop” processors.  However, this probably never really happens since this is just a tiny bit of instruction decode logic or microcode that sends “lines” to shared setup logic that triangles are computed on.  Likewise, the NVidia Tesla boards are just GT200 processors with potentially some extra compliance testing and more (non-ECC) board memory.  Arguably, these artificially maintained high margin product lines are what keep these companies profitable; industrial design subsidizes gamers!]

 

Double precision floating point is accomplished by fusing the xyzw ALUs within an SP into two pairs.  These two double units can perform either multiply or add (but not both) each cycle.  The t unit is unaffected by this fused mode, and ALU/transcendental operations can be co-scheduled alongside the doubles just like with single precision-only VLIW issue.

 

Local card memory is 512MB of GDDR3 for the 4850 and 1GB of GDDR5 for the 4870.  Both use a 256 bit wide bus, but GDDR3 is 2 channel while GDDR5 is 4 channel.

 

Let’s look at peak performance numbers for the Radeon 4870, clocked at reference 750MHz.  Keep in mind that all of the ALUs are capable of multiply-add instructions (2 flop/cycle):

= 750 MHz * 10 SIMD cores * 16 SP/core * 5 ALU/SP * 2 flop/cycle per ALU

= 1,200,000 Mflop/s = 1.2 TFlop/s

For double precision:

= 750 MHz * 10 SIMD cores * 16 SP/core * 2 “double FPU”/SP * 1 flop/cycle per “double FPU”

= 240 GFlop/s double precision, plus 240 GFlop/s single precision still available on the 160 t SFUs

 

Reference memory frequency is 900 MHz:

= 900 MHz * 4 transfers/cycle * 256-bit bus / 8 bits per byte ≈ 115 GB/s
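The same arithmetic in a few lines of Python, handy if you want to plug in a different clock or part count (the constants are the reference 4870 numbers used above):

# Peak numbers for the reference Radeon 4870, reproducing the arithmetic above.
clock_hz      = 750e6     # reference core clock
simd_cores    = 10
sps_per_core  = 16
alus_per_sp   = 5         # x, y, z, w, t
flops_per_alu = 2         # multiply-add

single_peak = clock_hz * simd_cores * sps_per_core * alus_per_sp * flops_per_alu
print(single_peak / 1e12)   # 1.2 (TFlop/s)

# Double precision: xyzw fuse into 2 "double" units per SP, 1 flop/cycle each.
double_peak = clock_hz * simd_cores * sps_per_core * 2 * 1
print(double_peak / 1e9)    # 240.0 (GFlop/s)

# Memory: 900 MHz GDDR5, 4 transfers/cycle, 256-bit bus, 8 bits per byte.
mem_bw = 900e6 * 4 * 256 / 8
print(mem_bw / 1e9)         # 115.2 (GB/s)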

 

Here are peak performance numbers for some RV770 cards:

                                                Single (GFlop/s)    Double (GFlop/s)    Bandwidth (GB/s)    TDP (W)    Cost

·         Radeon 4850           1000                        200                          64                              180             $130

·         Radeon 4870           1200                        240                          115                            200             $180

·         4850 X2                     2000                        400                          127                            230             $255

·         4870 X2                     2400                        480                          230                            285             $420

·         FireStream 9250     1000                        200                          64                              180             $790       (same as 4850)

·         FirePro V8700         1200                        240                          115                            200             $1130    (same as 4870)

 

The Radeon 4850 X2 is the cheapest compute capability per retail dollar available outside of DSPs and fixed-function ASICs.  However, its bandwidth is very low compared to its floating point horsepower: if it executes fewer than 63 floating point instructions for every F32 piece of data that must be fetched from memory, then memory bandwidth will be the bottleneck!  The 4870 is better balanced, with a computational intensity breakpoint of 42.  However, NVidia’s cards are applicable to a wider range of workloads; the GTX 285 has a breakpoint of 27 instructions (less compute power, more bandwidth).  For reference, a Core i7 is about 16, and CPU caches are much bigger than GPU “caches” so there is more opportunity to reuse data before fetching off-chip.
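The breakpoint numbers are just peak flops divided by the number of 4-byte F32 operands the memory system can deliver per second; a quick check against the figures above:

# Computational intensity breakpoint: flops executed per F32 operand fetched
# from memory before the workload becomes compute-bound rather than
# bandwidth-bound.
def breakpoint(peak_gflops, bandwidth_gb_per_s, bytes_per_operand=4):
    operands_per_sec = bandwidth_gb_per_s / bytes_per_operand   # G operands/s
    return peak_gflops / operands_per_sec

print(round(breakpoint(2000, 127)))   # 63  (Radeon 4850 X2)
print(round(breakpoint(1200, 115)))   # 42  (Radeon 4870)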

 

Thanks to Mike Marr for the research and the detailed write-up above. Errors or omissions are mine.

 

                                                --jrh

 

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com  

H:mvdirona.com | W:mvdirona.com/jrh/work  | blog:http://perspectives.mvdirona.com

 

Wednesday, March 18, 2009 4:09:07 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
Hardware

Disclaimer: The opinions expressed here are my own and do not necessarily represent those of current or past employers.
