Monday, May 12, 2008

I’ve spent a big part of my life working on structured storage engines, first in DB2 and later in SQL Server.  And yet, even though I fully understand the value of fully schematized data, I love full text search and view it as a vital access method for all content wherever it’s stored.  There are two drivers of this opinion: 1) I believe that, as an industry, we’re about ¼ of the way through a transition from primarily navigational access to personal data toward access based upon full text search, and 2) getting agreement on broad, standardized schemas across diverse user and application populations is very difficult.


On the first point, for most content on the web, full text search is the only practical way to find it.  Navigational access is available but it doesn’t scale to large bodies of information, and since there is no agreement on schema, more structured searches are usually not possible.  Full text search is often the only alternative, and it’s the norm when looking for something on the web.


Let’s look at email.  Small amounts of email can be managed by placing each piece of email you choose to store in a specific folder so it can be found later navigationally.  This works fine, but only if we keep a small portion of the email we get.  If we never bothered to throw out email or other documents we come across, the time required to folderize would be enormous and unaffordable.  Folderization just doesn’t scale.  When you start to store large amounts of email, or just stop wasting time aggressively deleting email, then the only practical way to find most content is full text search.  As soon as 5 to 10GB of un-folderized and un-categorized personal content is accumulated, it’s the web scenario all over again: search is the only practical alternative.  I understand that this scenario is not supported or encouraged by IT or legal organizations at most companies, but that is the way I choose to work.  There is no technical stumbling block to providing unbounded corporate email stores, and the financial ones really don’t stand up to scrutiny.  Ironically, most expensive corporate email systems offer only tiny storage quotas while most free, consumer-based services are effectively unbounded.  Eventually all companies will wake up to the fact that knowledge workers work more efficiently with all available data.  And, when that happens, even corporate email stores will grow beyond the point of practical folderization.


The second issue is the difficulty of standardizing schema across many different stores and many different applications.  The entire industry has wanted to do this over the past couple of decades and many projects have attempted to make progress.  If they had been widely successful, it would be wonderful, but they haven’t been.  If we had standardized schema, we would have quick and accurate access to all data across all participating applications.  But it’s very hard to get all content owners to cooperate or even care.  Search engines aim at the same goal but chose a more practical approach: they use full text search and just chip away at the problem.  They work hard on ranking.  They infer structure in the content where possible and exploit it where it’s found.  Where structure can’t be found, at least there is full text search with reasonably good ranking to fall back upon.


Strong or dominant search engine providers have considerable influence over content owners, and weak forms of schema standardization become more practical.  For example, a dominant search engine provider can offer content owners better search results for their web site if they supply a web site map (a standard schema listing all web pages in the site).  This is already happening, and web administrators are participating because it brings them value.  A web site’s ranking in the important search engines is vital, and a chance to lift your ranking even slightly is worth a fortune.  Folks will work really hard where they have something to gain.  So, if adopting common schema can improve ranking, there is a significant chance something positive actually could happen.
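To show just how lightweight this “weak schema” is, here’s a minimal sketch (the URLs are made up for illustration) that emits a site map in the standard Sitemaps XML format that search engines accept:

```python
from xml.sax.saxutils import escape

def sitemap(page_urls):
    """Build a minimal Sitemaps-protocol (sitemap.xml) document: just a
    flat list of every page on the site, wrapped in the standard schema."""
    entries = "\n".join(
        f"  <url><loc>{escape(u)}</loc></url>" for u in page_urls
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="">\n'
        f"{entries}\n"
        "</urlset>"
    )

print(sitemap(["", ""]))
```

That’s the entire cost of participation for a site owner: enumerate your pages and wrap them in one agreed-upon envelope. The low cost is exactly why this form of schema standardization is succeeding where richer ones have stalled.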


The combination of providing full text search over all content, motivating content providers to participate in full or partial schema standardization, and having the search engine infer schema where it’s not supplied feels like a practical approach to richer search.  I love full text search and view it as the underpinning for finding all information, structured or not.  The most common queries will include both structured and non-structured components, but the common element is that full schema standardization isn’t required, nor is it required that a user understand schema to find what they need.  Over time, I think we will see incremental participation in standardized schemas, but this will happen slowly.  Full text search with good ranking and relevance, assisted by whatever schema can be found or inferred in the data, will be the underpinning for finding most content over the near term.




James Hamilton, Windows Live Platform Services
Bldg RedW-D/2072, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 | |  | blog:



Monday, May 12, 2008 4:42:14 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2] - Trackback
 Wednesday, May 07, 2008

Some time back I got a question from the leader of a 5 to 10 person startup on what I look for when hiring a Program Manager.  I make no promise that what I look for is typical of what others look for – it almost certainly is not.  However, when I’m leading an engineering team and interviewing for a Program Manager role, these are the attributes I look for.  My response to the original query is below:


The good news is that you’re the CEO, not me.  But, were our roles reversed, I would be asking why you think you need a PM at this point.  A PM is responsible for making things work across groups and teams.  Essentially they are the grease that helps a big company ship products that work together and get them delivered through a complicated web of dependencies.  Does a single-product startup in the pre-beta phase actually need a PM?  Given my choice, I would always go with more great developers at this phase of the company’s life and have the developers take more design ownership, spend more time with customers, etc.  I love the "many hats" model and it's one of the advantages of a start-up.  With a bunch of smart engineers wearing as many hats as needed, you can go with less overhead and fewer fixed roles, and operate more efficiently.  The PM role is super important but it’s not the first role I would staff in an early-stage startup.


But, you were asking what I look for in a PM rather than advice on whether you should fill the role at this point in the company’s life.  I don't believe in non-technical PMs, so what I look for in a PM is similar to what I look for in a developer.  The difference is which skills I'll compromise on.  With a developer, I'm more willing to put up with minor skill deficits in certain areas if they are excellent at writing code.  For example, a talented developer who isn’t comfortable public speaking, or may only be barely comfortable in group meetings, can be fine.  I'll never do anything to screw up team chemistry or bring in a prima donna but, with an excellent developer, I'm more willing to look at greatness around systems building and be OK with some other skills simply not being there, as long as their absence doesn't screw up the team chemistry overall.  With a PM, those skills need to be there, and it just won't work without them.  About all I'll put up with in a PM is somewhat rusty code.


It's mandatory that PMs not get "stuck in the weeds".  They need to be able to look at the big picture and yet, at the same time, understand the details, even if they aren't necessarily writing the code that implements those details.  A PM is one of the folks on the team responsible for the product hanging together and having conceptual integrity.  They are one of the folks responsible for staying realistic and not letting the project scope grow and release dates slip.  They are one of the team members who need to think customer first, to really know who the product is targeting, to keep the project focused on that target, and to get the product shipped.


So, in summary: what I look for in a PM is similar to what I look for in a developer, but I'll tolerate their coding possibly being a bit rusty.  I expect they will have development experience.  I'm pretty strongly against hiring a PM straight out of university – a PM needs time in a direct engineering role first to gain the experience to be effective in the PM role.  I'll expect PMs to put the customer first, understand how a project comes together, keep it focused on the right customer set, not let feature creep set in, and have the skill, knowledge, and experience to know when a schedule is based upon reality and when it's more of a dream.  Essentially I have all the expectations of a PM that I have of a senior developer, except that I need them to have a broad view of how the project comes together as a whole, in addition to knowing many of the details.  They must be more customer focused, have a deeper view of the overall project schedule and how the project will come together, and be a good communicator – perhaps a less sharp coder, but with excellent design skills.  Finally, they must be good at getting a team making decisions, moving on to the next problem, and feeling good about it.






Wednesday, May 07, 2008 4:40:50 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
 Monday, May 05, 2008

I forget what brought it up, but some time back Sriram Krishnan forwarded me this article on Mike Burrows and his work through DEC, Microsoft, and Google (The Genius: Mike Burrows' self-effacing journey through Silicon Valley).  I enjoyed the read.  Mike has done a lot over the years, but perhaps his best known works of recent years are AltaVista at DEC and Chubby at Google.


I first met Mike when he was at Microsoft Research.  He and Ted Wobber (also from Digital) came up to Redmond to visit.  Back then I led the SQL Server relational engine development team, which included the full text search index support.  I was convinced then, and still am today, that relational database engines do a good job of managing structured data but a poor job of the other 90 to 95% of the data in the world that is less structured.  It just seems nuts to me that customers industry-wide are spending well over $10B a year on relational database management systems and yet are only able to effectively use these systems to manage a tiny fraction of their data.  Since an increasing fraction of the structured data in the world is already stored in relational database management systems, industry growth will come from helping customers manage their less structured data.


To be fair, most RDBMSs (including SQL Server) do support full text indexing, but what I’m after is deep support for full text where the index is a standard access method rather than a separate indexing engine on the side and, more importantly, where full statistics are tracked on the full text corpus, allowing the optimizer to make high quality decisions on join orders and techniques that include full text indices.


If you haven’t read Mike’s original Chubby paper, do that; there is a second paper worth a look as well.  Chubby is an interesting combination of name server, lease manager, and mini-distributed file system.  It’s not the combination of functionality that I would have thought to bring together in a single system, but it’s heavily used and well regarded at Google.  Unquestionably a success.







Monday, May 05, 2008 4:32:43 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
 Thursday, May 01, 2008

The years of Moore’s law growth without regard to power consumption are now over. On the data center side, power isn’t close to the largest cost of running a large service but it is one of the largest controllable costs and it has been in the press frequently of late.  On the client side, battery power is the limiting factor. 


It is worth understanding which devices consume the most power, since most laptops provide some form of user control.  Most systems allow LCD backlight dimming, CPU power consumption can be lowered (through a combination of factors including reducing clock speed and voltage), wireless radios can be switched off, and disk activity can be curtailed or eliminated.  Where does the power go?


The data below was measured by Mahesri and Vardhan with a ThinkPad R40 as the system under test:

[Table not reproduced: per-component power measurements for the CPU, LCD backlight, wireless (802.11), and HDD (40GB @ 4,200RPM).]

The dominant consumer by a significant factor is the CPU.  This power consumption is, of course, very load dependent, particularly in multi-core systems where the spread between minimum and maximum power dissipation is even higher.  The second largest consumer is the LCD backlight, which isn’t surprising.  Two LCD-related findings did surprise me: 1) the LCD without backlight is a very light consumer of power, and 2) there is a perceptible difference in power consumption between mostly black and mostly white backgrounds.  The hard disk drive power consumption was notably less than I expected, with only 2.8W dissipated during active reading.
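To make the knobs concrete, here’s a toy power-budget sketch.  Only the 2.8W active-read disk figure comes from the measurements discussed above; every other draw is an illustrative placeholder, not data from the study:

```python
# Toy laptop power budget. Only the 2.8 W active-read HDD figure is from the
# measurements discussed above; all other draws are illustrative guesses.
DRAW_W = {
    "cpu_busy": 15.0,
    "cpu_idle": 2.0,
    "backlight_full": 4.0,
    "backlight_dim": 1.5,
    "wifi": 1.0,
    "hdd_active": 2.8,
    "hdd_spun_down": 0.3,
    "rest_of_system": 3.0,
}

def runtime_hours(battery_wh, components):
    """Battery life = capacity (Wh) / total draw (W) of the active components."""
    return battery_wh / sum(DRAW_W[c] for c in components)

busy = ["cpu_busy", "backlight_full", "wifi", "hdd_active", "rest_of_system"]
saver = ["cpu_idle", "backlight_dim", "hdd_spun_down", "rest_of_system"]

# With an (assumed) 56 Wh battery, the user-controllable knobs -- dim the
# backlight, idle the CPU, kill the radio, spin down the disk -- stretch
# runtime several-fold in this toy model.
print(runtime_hours(56, busy), runtime_hours(56, saver))
```

The exact numbers are invented, but the structure of the model explains why CPU throttling and backlight dimming are the controls that matter most: they are the two largest terms in the sum.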


I wrote up more detail in: ClientSidePower6_External.doc (130 KB).






Thursday, May 01, 2008 4:49:55 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
 Tuesday, April 29, 2008

My rough notes from the Web 2.0 Keynote by Yahoo! CTO Ari Balogh:


·         Yahoo! is making three big bets:

1.       Be the starting point for all consumers

2.       Be the must buy for advertisers

3.       Provide an Open Platform

·         Focus of today’s talk is on the latter: the open platform.

·         Yahoo! broad set of assets are well known

·         We lead in 7 areas including: Mail, My Front Page and Messenger (the full list was not provided, nor was it explained how Yahoo! was computed to “lead” in these areas)

·         350M unique users/month and 500M users overall

·         20B page views/month

·         250M user minutes per month

·         10B user relationships across properties and this is the real asset

·         Yahoo! has been open since 2003

·         25+ APIs

·         200K App IDs (hints at the large number of developers)

·         #2 API in the world with Flickr

·         1B UI files served/week

·         Y!OS: (Yahoo! Open Strategy)

·         Announcing today that they are opening all assets at Yahoo! to developers

·         Planning to make all experiences at Yahoo “social”

·         Provide an open developer platform with hooks for third parties to make experiences more social

·         Built into application platform:

·         Security: give users control of their data.  Where they want to share what with who.

·         Application gallery. A common way to <JRH>

·         Unify profiles across all of Yahoo! (this will take a while) and give developers access to the social graph and the notification engine.  Open up developer access to produce events; the platform includes a ranking engine to show users the most relevant events based upon their context (including social graph).

·         Making Yahoo! more social:

·         Not creating another social network

·         Making all of Yahoo! “social”

·         “social” isn’t a destination but rather a dimension of a user experience

·         “social” drives relevance, community, and virality

·         Showed some examples:

·         Email client showing messages most relevant on the basis of social network

·         Same basic idea for a “My Yahoo!” page

·         When?

·         Search Monkey is the first step

·         Later this year they will deliver Y!OS and provide more uniform and consistent developer access

·         Making Yahoo! more social will take longer with property by property steps being taken over next few years

·         Summary:

1.       Rewiring Yahoo! from the ground up

2.       Open Yahoo! to developers like never before

3.       Making Yahoo! more social across Yahoo! properties and to third party developers


The 12 min presentation is at: Ari Balogh Web 2.0 Expo Keynote.





Tuesday, April 29, 2008 3:53:37 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
 Friday, April 25, 2008

Flash SSDs in laptops have generated considerable excitement over the last year and are in use at both extremes of the laptop market.  At the very low end, where only very small storage amounts can be funded, NAND flash is below the disk price floor.  Mechanical disks, with all their complexity, are very difficult to manufacture for less than $30 each.  What this means is that for very small storage quantities, NAND flash storage can actually be cheaper than mechanical disk drives even though the price per GB for flash is higher.  That’s why the One Laptop Per Child project uses NAND flash for persistent storage.  At the high end of the market, NAND flash is considerably more expensive than disk but, for the premium price, offers much higher performance, more resilience to shock and high-G handling, and longer battery life.
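The price-floor economics can be sketched in a few lines.  The ~$30 disk floor is from the argument above; the per-GB rates are illustrative assumptions for the era, not quoted figures:

```python
# The ~$30 manufacturing floor for a mechanical disk is from the discussion
# above; the per-GB rates are illustrative, era-appropriate assumptions.
DISK_FLOOR_USD = 30.0
DISK_PER_GB = 0.30   # assumed
FLASH_PER_GB = 8.00  # assumed

def disk_cost(gb):
    # A disk can't be built for less than the floor, no matter how small.
    return max(DISK_FLOOR_USD, gb * DISK_PER_GB)

def flash_cost(gb):
    # Flash cost scales down smoothly with capacity.
    return gb * FLASH_PER_GB

# Below DISK_FLOOR/FLASH_PER_GB (3.75 GB under these assumptions), flash is
# the cheaper medium even though its price per GB is far higher -- the OLPC case.
crossover_gb = DISK_FLOOR_USD / FLASH_PER_GB
print(crossover_gb, flash_cost(2), disk_cost(2))
```

The crossover point moves with the assumed prices, but the shape of the argument doesn’t: any fixed manufacturing floor on disks guarantees a capacity below which flash wins on absolute cost.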


Recently there have been many reports of high-end SSD laptop performance problems.  Digging deeper, this is driven by two factors: 1) gen 1 SSDs produce very good read performance but aren’t particularly good on random write workloads, and 2) performance degradation over time.  The first factor can be seen clearly in a performance study using SQLIO.  The poor random write performance issue is very solvable using better flash wear leveling algorithms, reserving more space (more on this later), and capacitor-backed DRAM staging areas.  In fact, the STEC ZeusIOPS is producing great performance numbers today, Fusion-io is reporting great numbers, and many others are coming.  These solutions will migrate down to the commodity drives.


The second problem, the performance degradation issue, is more interesting.  There have been many reports of laptop dissatisfaction and very high return rates (Returns, technical problems high with flash-based notebooks).  Dell has refuted these claims (Dell: Flash notebooks are working fine), but there are lingering anecdotal complaints of degrading performance.  I’ve heard it enough myself that I decided to dig deeper.  I chatted off the record with an industry insider on why SSDs appear to degrade over time.  Here’s what I learned (released with their permission):


On a pristine NAND SSD made of quality silicon to ensure write amplification remaining at 1 [jrh: write amplification refers to the additional writes that are caused by a single write due to wear leveling and the Flash erase block sizes being considerably larger than the write page size – the goal is to get this as close to 1 as possible where 1 is no write amplification], given a not-so-primitive controller and reasonable over-provisioning (greater than 25%), a sparsely used volume (less than half full at any time) will not start showing perceptible degraded performance for a long time (perhaps as long as 5 years, the projected warranty period to be given to these SSD products).


If any of the above conditions is changed, the write amplification will quickly degrade ranging from 2 to 5, or even higher.  That contributes to the early start of perceptible degraded write performance.  That is, on a fairly full SSD you’d start having perceptible write performance problems more quickly, and so on.


Inexpensive (cheap?) SSDs made of low-quality silicon will likely have more read errors.  Error correction techniques will still guarantee correct information being returned on reads.  However, each time a read error is detected, the whole “block” of data will have to be relocated elsewhere on the device.  Not-so-well designed controller firmware will worsen the read delay, due to poorly implemented algorithms and an ill-conceived space layout that takes longer to search for available space for the relocated data, away from the read error area.


If the read-error-data-relocation happens to collide with the negative conditions that plague the write performance above, you’d start seeing overall degraded performance very quickly.


Chkdsk may have contributed to the forced relocation of the data away from where read errors occurred, hence improving the SSD performance (for a while) until the above collisions happen.  Perhaps the same when Defrag is used.


In short, performance degradation over time is unavoidable with SSD devices.  It’s a matter of how soon it kicks in and how bad it gets; and it varies across designs.


We expect the enterprise class SSD devices to be as much as 100% over-provisioned (e.g., a 64GB SSD actually holds 128GB of flash silicon). 


Summary: there are two factors in play.  The first is that SSD random write performance is not great on low-end parts, so make sure you understand the random write I/O specification before spending on an SSD.  The second is more insidious in that, in this failure mode, the performance just degrades slowly over time.  The best way to avoid this phenomenon is to 2x over-provision: if you buy N bytes of SSD, don’t use more than ½N, and consider either chkdsk or copying the data off, bulk erasing, and sequentially copying it back on.  We know over-provisioning is effective.  The latter techniques are unproven but seem likely to work.  I’ll report supporting performance studies or vendor reports when either surfaces.
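The over-provisioning arithmetic can be made concrete with a toy steady-state model (my sketch, not the insider’s): if garbage collection reclaims blocks that are still a fraction v valid, each erase cycle spends v of a block’s pages recopying old data and only 1-v accepting new host writes, so write amplification is 1/(1-v).  Assuming, pessimistically, that victim blocks are about as full as the physical device overall:

```python
def physical_fullness(logical_used_fraction, over_provision):
    """Fraction of the *physical* flash holding valid data, given how full
    the user-visible volume is and the spare capacity hidden by the drive
    (over_provision = 1.0 means 100% extra silicon, e.g. 128GB behind 64GB)."""
    return logical_used_fraction / (1.0 + over_provision)

def write_amplification(valid_fraction):
    """Toy steady-state model: a reclaimed block still valid_fraction full
    costs that many copy-writes per erase, so only (1 - valid_fraction) of
    each block cycle goes to new host data."""
    return 1.0 / (1.0 - valid_fraction)

# A half-full volume on a drive with 25% over-provisioning stays near WA 1.7;
# a nearly full volume with no spare heads toward the painful range.
wa_good = write_amplification(physical_fullness(0.5, 0.25))   # ~1.67
wa_full = write_amplification(physical_fullness(0.95, 0.0))   # ~20
wa_2x   = write_amplification(physical_fullness(1.0, 1.0))    # ~2 even when "full"
print(wa_good, wa_full, wa_2x)
```

In this model, 100% over-provisioning (the enterprise configuration mentioned above) or using only half the visible capacity both cap write amplification near 2, which is exactly the 2x advice.  Real controllers pick emptier-than-average victims, so the true numbers should be better, but the shape of the curve is the point.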







Friday, April 25, 2008 4:15:29 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
 Wednesday, April 23, 2008

It’s not often I come across three interesting notes in the same day, but here’s another.  Earlier today the Jim Gray Systems Lab was announced, and it will be led by long-time database pioneer David DeWitt.  This is great to see for a variety of reasons.  First, it’s wonderful to see the contribution of Jim Gray to the entire industry recognized in the naming of this new lab.  Very appropriate.  Second, I’m really looking forward to working more closely with DeWitt.  This is going to be fun.


This is “earned” in that Madison has been contributing great database developers to the industry for what seems like forever – I’ve probably worked with more Madison graduates over the years than graduates of any other single school.  It’s good to see a systems-focused research lab opened up there.


It’s also good to see this project come together. I was involved in earlier discussions on this project some years back and, although we didn’t find a way to make it happen then, I really liked the idea.  I’m glad others were successful in doing the hard work to get this project to reality.


·         University of Wisconsin at Madison News:  

·         DeWitt Interview (from above):

·         Server and Tools Business News Blog:

·         Information Week:







Wednesday, April 23, 2008 11:07:22 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback

Earlier today, Amazon AWS announced a reduction in egress charges.  The new charges:

·         $0.100 per GB - data transfer in

·         $0.170 per GB - first 10 TB / month data transfer out

·         $0.130 per GB - next 40 TB / month data transfer out

·         $0.110 per GB - next 100 TB / month data transfer out

·         $0.100 per GB - data transfer out / month over 150 TB


Compared with the old:

·         $0.100 per GB - data transfer in

·         $0.180 per GB - first 10 TB / month data transfer out

·         $0.160 per GB - next 40 TB / month data transfer out

·         $0.130 per GB - data transfer out / month over 50 TB


Most networking contracts charge symmetrically for ingress and egress – you pay for the max of the two – so, with egress typically dominating, the ingress cost to Amazon is effectively zero.
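To see what the new tiers mean in practice, here’s a small calculator for the graduated egress schedules above (the tier boundaries and rates are from the post; treating 1 TB as 1,000 GB is my simplification):

```python
# Graduated egress pricing from the post (1 TB treated as 1,000 GB here).
# Each entry is (GB in this tier, $ per GB).
NEW_TIERS = [(10_000, 0.170), (40_000, 0.130), (100_000, 0.110), (float("inf"), 0.100)]
OLD_TIERS = [(10_000, 0.180), (40_000, 0.160), (float("inf"), 0.130)]

def egress_cost(gb, tiers):
    """Charge each tier's rate on the GB that fall inside that tier."""
    total = 0.0
    for size, rate in tiers:
        in_tier = min(gb, size)
        total += in_tier * rate
        gb -= in_tier
        if gb <= 0:
            break
    return total

# A 60 TB/month customer: about $9,500 under the old schedule versus about
# $8,000 under the new one -- the cut is steepest for high-volume users.
print(egress_cost(60_000, OLD_TIERS), egress_cost(60_000, NEW_TIERS))
```

A 1 TB/month customer, by contrast, pays $170 instead of $180, a ~6% cut versus ~16% at 60 TB, which is the non-linear, enterprise-favoring shape described below.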


Note that it’s a non-linear reduction favoring higher-volume users.  TechCrunch reported a couple of days back that the Amazon AWS customer base has rapidly swung from a nearly pure start-up community to a mix of startups and very large enterprises, with the enterprise customers now bringing the largest workloads.  Not really all that surprising – I expected this to happen and talked about it in the Next Big Thing.  What is surprising to me is the speed with which the transformation is taking place.  I was predicting this workload mix shift to happen at AWS 3 to 5 years from now.  Things are moving quickly in the services world.







Wednesday, April 23, 2008 7:51:41 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback

Live Mesh has been under development for a couple of years.  Now it’s here in “technology preview” form.  I think the first public mention was probably back in March of last year in a blog entry by Mary Jo Foley that mentioned Windows Live Core.  Last night Amit Mital, General Manager of Windows Live Core, posted a blog entry that covers Live Mesh in more detail than previously seen:


UPDATE: The report above attributing first mention of Windows Live Core to Mary Jo Foley was incorrect.  The sleuths at LiveSide appear to have reported this one first:


Live Mesh is a platform that synchronizes data across devices, supports deploying and managing apps that run on multiple devices, and provides screen remoting, making all your devices and applications available from anywhere.  It strikes an interesting balance, exploiting both cloud-supported features and unique device capabilities.  The initial device support is Windows-only, but Mac and other device clients are coming as well.


Screen shots are up on CrunchBase:


Ray Ozzie did a 36 min Channel 9 interview with Jon Udell:


Abolade Gbadegesin, Live Mesh Architect, did a video on Live Mesh Architecture that is worth checking out:


Demo video:







Wednesday, April 23, 2008 7:15:15 AM (Pacific Standard Time, UTC-08:00)  #    Comments [3] - Trackback
 Tuesday, April 22, 2008

Here’s a statistic I love: Facebook is running 1,800 MySQL servers with only 2 DBAs.  Impressive.  I love seeing services show how far you can go toward admin-free operation.  2:1,800 is respectable, and for database servers it’s downright impressive.  This data is from a short but interesting report at:


The Facebook fleet has grown fairly dramatically of late.  For example, Facebook runs the largest Memcached installation, and the most recent reports I had come across had 200 Memcached servers at Facebook.  At the Scaling MySQL panel, they reported 805 Memcached servers.


1,800 MySQL Servers, insulated by 805 Memcached servers, and driven by 10,000 web servers. Smells like success.




Thanks to Dare Obasanjo for pointing me to this one.





Tuesday, April 22, 2008 7:36:00 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0] - Trackback
 Monday, April 21, 2008

Back in March I speculated that Google was soon to announce a third-party services platform.  Well, on the evening of April 7th, Google App Engine was announced.  It’s been heavily covered over the last couple of weeks, and I’ve been waiting to get a beta account so I can write some code against it.  I haven’t yet got an account, but Sriram Krishnan has been playing with it and sent me the following excellent review.


·         Guest book development video: Developing and deploying an application on Google App Engine (9:29)

·         Techcrunch: Google Jumps Head First Into Web Services With Google App Engine.

·         Google App Engine Limitations: evan_tech.

·         What’s coming up: We're up and Running!

·         High Scalability: Google App Engine – A second Look


Sriram’s review of Application Engine.



-       It’s well designed from end to end, builds on a good ecosystem of tools, and most scenarios for a typical web 2.0 app are covered. If I were ever to get into the Facebook-app-writing business, AppEngine would be my first choice. However, any startup which requires code to execute outside the web request-reply cycle is out of luck and would need to use EC2.

-       The mailing list is overflowing so there is obviously huge community interest and lots of real coders building stuff.

-       The datastore is a bit wonky for my taste. It fits neither SQL/RDBMS nor the clean spreadsheet model of Amazon SimpleDB – it’s an ORM with some querying thrown in, and that leads to some abstraction leakage. The limitations on queries are going to take some getting used to since they’re not intuitive at all (they only support queries where they can scan the index sequentially for results, and the choice of datatype is not straightforward). The datastore was the area where I found myself consulting the docs most frequently.

-       Python-only is probably a big con at the moment. I’m a big Python fan, but it’s pretty apparent that a lot of people want PHP and Ruby.  However, when you poke around the framework, it is pretty apparent that it is built to be language agnostic and that the creators had support for other languages in mind from the beginning.

-       Lack of SSL support and no unique IP per app instance are other problems. The latter really kicks in when you’re calling other Web 2.0 APIs: a lot of them do quota calculations based on IP address, and this won’t work when you’re sharing your IP with a bunch of other apps. Lack of SSL support is not a blocker for getting started (since you can use Google’s built-in authentication system) but will block any serious app.

-       The beta limits are too conservative and too aggressively enforced – they kept nuking my benchmarking apps for relatively short bursts of activity (more on that later). This really makes me hesitate to put anything non-trivial on AppEngine. If I were them, I would loosen up these limits or let customers pay a bit extra for more CPU/network slices.


The Web Framework

-       I’m familiar with Python and Django, so I’m probably not the best person to judge the learning curve. It’s very clean and usable (I like it much better than ASP.NET) and I found myself being reasonably productive within a few minutes.

-        There are also hooks so that you can use almost any Python framework of your choice with a bit of work – you’re not stuck with the one provided. On the mailing list, there’s a lot of activity around porting other frameworks (Pylons, CherryPy, etc.) to AppEngine. If it were up to me, I would be using Aaron Swartz’s framework, but that is more a stylistic personal preference.

-       Python was not originally designed to be sandboxed so Google had to make some major cuts to make it ‘safe’ – they don’t allow opening sockets for example. This has caused a lot of open source Python code to stop working – essential libraries like urllib (the equivalent of .net’s HttpWebRequest) need some porting work.

-       The tools support is a bit sparse – debugging is mostly through printf/exception stack traces. However, what it lacks in tooling is made up for in the speed of its edit cycle – just edit a .py file and refresh the page.

-       Some people are going to have trouble getting used to the lack of sessions but I think the pain will be temporary (some people have started working on using the datastore as a Django session store for session state). From my limited testing, I didn’t see much machine affinity – Google seems happy to spin up processes on different machines and kill them the moment they finish serving a request.


The Datastore

-       You specify your data models in Python and there’s some ORM magic that takes place behind the scenes. They have a few inbuilt data types and you can use expando (dynamic) properties to assign properties at runtime which haven’t been defined in your model. Data schema versioning is a big question mark at the moment – if I were Google, I would look into supporting something like RoR’s migrations.

-       Querying is done through a SQL subset called GQL against specifically defined indexes. For a query to succeed, it must be supported by an index and the scan needs to find its results sequentially, and this puts some restrictions on the kinds of queries you can execute (you can’t have inequality operators on more than one attribute, for example). Several indexes are auto-generated and you can request others to be created.


-       Entities can be grouped together into entity groups through ReferenceProperties. Each group is stored together. Queries within one group can be bunched together into a transaction (everything is optimistic concurrency by default). Bunching lots of entities into one group is bad since Google seems to do some sort of locking on the entity group – the docs say some updates might fail.

-       No join support. Like SimpleDb, they suggest de-normalization.

-       The datastore tools are sparse at the moment. I had to write code to delete stale data from my datastore since the website would only show me 20 items at a time.

-       All the APIs (the datastore, user auth, mail) are offered through Google’s internal RPC mechanism. Google calls the individual RPC messages protocol buffers, and all the AppEngine APIs are implemented using the aforementioned stub generators (this is what you get with the local SDK as well).
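The sequential-index-scan restriction mentioned above is easier to see with a toy model. This is entirely my own sketch, not App Engine code: a composite index is just the entities sorted by a property tuple, and a query is servable only if its matches form one contiguous run of that sorted list.

```python
# Toy model of the datastore's index-scan rule (my sketch, not Google's code):
# a composite index is entities sorted by (rating, views); a query succeeds
# only if its matching entities occupy one contiguous range of that index.

entities = [{"rating": r, "views": v} for r in range(3) for v in range(3)]
index = sorted(entities, key=lambda e: (e["rating"], e["views"]))

def contiguous(pred):
    """True if the entities matching pred occupy one contiguous index range."""
    flags = [pred(e) for e in index]
    first = flags.index(True)
    last = len(flags) - 1 - flags[::-1].index(True)
    return all(flags[first:last + 1])

# Equality on the first property plus an inequality on the second: matches are
# contiguous, so a GQL-style "WHERE rating = 1 AND views > 0" is allowed.
ok = contiguous(lambda e: e["rating"] == 1 and e["views"] > 0)

# Inequalities on two different properties: matches are scattered through the
# index, which is why the datastore rejects such queries.
bad = contiguous(lambda e: e["rating"] > 0 and e["views"] > 0)

print(ok, bad)
```

The property names and values here are invented; the point is only why "inequality on more than one attribute" cannot be served by a single sequential scan.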



Performance

This section is woefully short – it is very hard to run benchmarks since Google keeps killing apps with high activity. Here’s what I got:


-       Gets/puts/deletes are all really fast. I benchmarked a tight loop running a fixed number of iterations, each query operating on or retrieving a single object (which I kept tuning to avoid hitting the Google limits). Each averaged 0.001s – next to nothing, almost noise.

-       Turning up the number of results retrieved meant a linear increase in time. I inserted multiple entities with just a single byte in each to have the least possible serialization/de-serialization overhead.  For 50 results, the query execution time was around 0.15s; for 100, around 0.30s; and so on. I saw a linear increase all the way until I hit Google’s limits on CPU usage.

-       I can’t measure this correctly but a ballpark guesstimate is that Google nukes your app if you use close to 100% CPU (by running in a tight loop like I did) for over 2 seconds on any given request. For every app, they tell you the number of CPU cycles used (a typical benchmark app cost me around 50 megacycles) and I think they do some quota calculations based on megacycles used per second.
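The measurement approach above was roughly the following harness. The loop body here is a stand-in dict operation rather than an actual datastore call (so it runs anywhere); on App Engine the body would be a single put or get instead.

```python
import time

# Sketch of the tight-loop benchmark: run a fixed number of iterations of a
# single-object operation and report the mean per-operation latency.
store = {}

def timed(op, iterations=1000):
    """Average wall-clock seconds per call of op over a fixed iteration count."""
    start = time.time()
    for i in range(iterations):
        op(i)
    return (time.time() - start) / iterations

put_avg = timed(lambda i: store.__setitem__(i, b"x"))  # stand-in for a put
get_avg = timed(lambda i: store.get(i))                # stand-in for a get
print("put: %.6fs  get: %.6fs" % (put_avg, get_avg))
```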


Overall, perf seems excellent but I would worry about hitting quota limits due to a Digg/Slashdot effect. I plan on trying out some more complex queries and I’ll let you know if I see something weird.


The Tools

-       The dashboard is excellent. It gives you nice views of error logs, what’s in the datastore, and usage patterns for all your important counters (requests, CPU, bandwidth, etc.).

-       Good end-to-end flow for the common tasks – registering a domain and assigning it to your application, managing multiple versions of your app, looking at logs, etc.


James Hamilton, Windows Live Platform Services
Bldg RedW-D/2072, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859


Monday, April 21, 2008 4:59:22 AM (Pacific Standard Time, UTC-08:00)
 Friday, April 18, 2008

In the Rules of Thumb post, I argued that many of the standard engineering rules of thumb are changing. On a closely related point, Nishant Dani and Vlad Sadovsky both pointed me towards The Landscape of Parallel Computing Research: A View from Berkeley by David Patterson et al.  Dave Patterson is best known for foundational work on RISC and for co-inventing RAID.  He has an amazing ability to spot a problem where the solution is near and the problem worth solving, and then come up with practical solutions.  This paper has many co-authors but shows some of that same style.  It focuses on parallel systems and on conventional wisdom that has driven systems designs for some time but is no longer correct.  The Berkeley web site with more detail is at:


In the paper they argue that 13 computational kernels can be used to characterize most workloads.  They then observe that over half of these kernels are memory bound today, with more expected to be in the future.  In effect, the problem is getting data up the storage and memory hierarchy to the processors, not the speed of the processors themselves. This has been true for years and the problem worsens each year, and yet it still seems to get less focus than scaling processor speeds even though the latter won’t help without the former.


If you are interested in parallel systems, it’s worth reading the paper.  I’ve included the key changes in conventional wisdom below:


1. Old CW: Power is free, but transistors are expensive.

· New CW is the “Power wall”: Power is expensive, but transistors are “free”. That is, we can put more transistors on a chip than we have the power to turn on.

2. Old CW: If you worry about power, the only concern is dynamic power.

· New CW: For desktops and servers, static power due to leakage can be 40% of total power. (See Section 4.1.)

3. Old CW: Monolithic uniprocessors in silicon are reliable internally, with errors occurring only at the pins.

· New CW: As chips drop below 65 nm feature sizes, they will have high soft and hard error rates. [Borkar 2005] [Mukherjee et al 2005]

4. Old CW: By building upon prior successes, we can continue to raise the level of abstraction and hence the size of hardware designs.

· New CW: Wire delay, noise, cross coupling (capacitive and inductive), manufacturing variability, reliability (see above), clock jitter, design validation, and so on conspire to stretch the development time and cost of large designs at 65 nm or smaller feature sizes. (See Section 4.1.)

5. Old CW: Researchers demonstrate new architecture ideas by building chips.

· New CW: The cost of masks at 65 nm feature size, the cost of Electronic Computer Aided Design software to design such chips, and the cost of design for GHz clock rates means researchers can no longer build believable prototypes. Thus, an alternative approach to evaluating architectures must be developed. (See Section 7.3.)

6. Old CW: Performance improvements yield both lower latency and higher bandwidth.

· New CW: Across many technologies, bandwidth improves by at least the square of the improvement in latency. [Patterson 2004]

7. Old CW: Multiply is slow, but load and store is fast.

· New CW is the “Memory wall” [Wulf and McKee 1995]: Load and store is slow, but multiply is fast. Modern microprocessors can take 200 clocks to access Dynamic Random Access Memory (DRAM), but even floating-point multiplies may take only four clock cycles.

8. Old CW: We can reveal more instruction-level parallelism (ILP) via compilers and architecture innovation. Examples from the past include branch prediction, out-of-order execution, speculation, and Very Long Instruction Word systems.

· New CW is the “ILP wall”: There are diminishing returns on finding more ILP. [Hennessy and Patterson 2007]

9. Old CW: Uniprocessor performance doubles every 18 months.

· New CW is Power Wall + Memory Wall + ILP Wall = Brick Wall. Figure 2 plots processor performance for almost 30 years. In 2006, performance is a factor of three below the traditional doubling every 18 months that we enjoyed between 1986 and 2002. The doubling of uniprocessor performance may now take 5 years.

10. Old CW: Don’t bother parallelizing your application, as you can just wait a little while and run it on a much faster sequential computer.

· New CW: It will be a very long wait for a faster sequential computer (see above).

11. Old CW: Increasing clock frequency is the primary method of improving processor performance.

· New CW: Increasing parallelism is the primary method of improving processor performance. (See Section 4.1.)

12. Old CW: Less than linear scaling for a multiprocessor application is failure.

· New CW: Given the switch to parallel computing, any speedup via parallelism is a success.




Friday, April 18, 2008 4:42:25 AM (Pacific Standard Time, UTC-08:00)
 Wednesday, April 16, 2008

How do you ensure that data written to disk is REALLY on disk?  Yeah, I know, this shouldn’t be hard, but the I/O stack is deep, everyone is looking for performance, and everyone is caching along the way, so it’s more interesting than you might like.  If you’re writing code that needs reliable write-through semantics, like Write Ahead Logging, then you need to ensure you are writing through to the media. If you are writing to a SAN or a SCSI disk, it’s pretty straightforward, but if you are using EIDE or SATA, then things get a bit more interesting. What follows is Windows-specific but you need to be aware of these issues on non-Windows systems as well.


If it’s a SCSI disk (not SATA or EIDE), then setting FILE_FLAG_WRITE_THROUGH and FILE_FLAG_NO_BUFFERING is sufficient.  FILE_FLAG_WRITE_THROUGH forces all data written to the file to be written through the cache directly to disk – all writes go to the media.  FILE_FLAG_NO_BUFFERING ensures that all reads come directly from the media as well by preventing any read-ahead and disk caching. What’s happening behind the scenes when these parameters are specified on CreateFile() is that the filesystem and memory manager are not caching, and Force Unit Access (FUA) is being sent to the device on writes to ensure they go directly to the media rather than being cached in the device cache.
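As a rough illustration, here is what requesting synchronous writes looks like from Python on a POSIX system, where O_SYNC plays a role analogous to FILE_FLAG_WRITE_THROUGH in the CreateFile() call (the path and record below are made up for the example, and the same SATA/EIDE drive-cache caveats apply):

```python
import os
import tempfile

# Minimal sketch: open for writing with O_SYNC so each write() returns only
# after the data has been handed to the device (subject to the drive-cache
# caveats for SATA/EIDE). On Windows, the equivalent request is made by
# passing FILE_FLAG_WRITE_THROUGH | FILE_FLAG_NO_BUFFERING to CreateFile().

path = os.path.join(tempfile.mkdtemp(), "log.bin")  # hypothetical file
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
try:
    os.write(fd, b"log record 1")  # synchronous write request
finally:
    os.close(fd)

with open(path, "rb") as f:
    print(f.read())
```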


The reason the above is not typically sufficient with EIDE and SATA drives is that FUA is dropped by the standard SATA and EIDE miniport driver.  The filesystem and memory manager will respect the parameters but the device will likely still cache writes without FUA.


FUA is dropped for performance reasons: SATA and EIDE can only process one command at a time, and the full flush required by FUA is slow. SCSI can process multiple commands in parallel and the flush is less expensive. Is Native Command Queuing (NCQ) the solution to the performance problem? Unfortunately, no.  NCQ allows multiple commands to be sent to the drive and gives the drive flexibility in what order to execute them, but the restriction of only one command executing at a time remains.


What’s the solution to getting reliable writes when using commodity disks and needing guaranteed writes? The simple answer is to set the registry flag that turns off the discarding of FUA. This solves the correctness problem but at considerable performance expense. Essentially this will be semantically correct but slow due to the SATA single-command limitation and the length of time it takes to go directly to the media.  Shutting off Write Cache Enable (WCE) on a per-drive basis is another option.


Another option is FlushFileBuffers(), which is a call fully honored by all device types. FlushFileBuffers takes a file handle argument, flushes the filesystem/memory manager cache for that handle, and flushes the entire volume that holds that file.  This again works but is broader than required in that the entire device cache gets flushed.  I’m told that you can also use FLUSH_CACHE on the device as an alternative to FlushFileBuffers() on a handle. A paper that shows the use of FLUSH_CACHE to achieve correct write-ahead logging semantics is up at: Enforcing Database Recoverability on Disks that Lack Write-Through.  In this paper, using SQL Server running a mini-TPC-C as a test case, they measure performance degradation of as little as 2% using FLUSH_CACHE calls to the device as needed. A small price to pay for correctness.
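A sketch of the write-ahead-logging discipline this enables, using Python’s portable flush primitive (os.fsync() maps to FlushFileBuffers() on Windows and fsync() elsewhere). The log path and records are invented for illustration:

```python
import os
import tempfile

# Sketch of WAL discipline: a commit is not acknowledged until the log record
# has been pushed through the filesystem cache (and, where FUA/flush is
# honored, the drive cache).

log_path = os.path.join(tempfile.mkdtemp(), "wal.log")  # hypothetical log

def append_commit(record):
    with open(log_path, "ab") as log:
        log.write(record + b"\n")
        log.flush()              # drain Python's user-space buffer
        os.fsync(log.fileno())   # flush OS cache; FlushFileBuffers on Windows
    # only after the fsync returns is it safe to report the commit

append_commit(b"txn 1: debit A, credit B")
append_commit(b"txn 2: credit A, debit B")
print(open(log_path, "rb").read())
```

Note this flushes more than strictly required, which is exactly the breadth-vs-correctness trade-off discussed above.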






Wednesday, April 16, 2008 5:59:43 PM (Pacific Standard Time, UTC-08:00)
 Monday, April 14, 2008

Wow, the pace is starting to pick up in the service platform world. Google announced their long-awaited entrant with Google App Engine last Monday, April 7th. Amazon announced SimpleDB to answer the largest requirement they were hearing from AWS customers: persistent, structured storage. Yesterday, another major step was made with Werner Vogels announcing availability of persistent storage for EC2.

Persistence for EC2 is a big one.  I’ve been amazed at how hard customers were willing to work to get persistent storage in EC2.  The most common trick is to periodically snapshot the up-to-160GB of ephemeral state allocated to each Amazon EC2 instance to S3. This does work but is very clunky, and losing everything between the last snapshot and a non-orderly shutdown is a bit nasty.  A solution I like is a replicated block storage layer like DRBD.  One innovative solution to all EC2 state being transient is to use DRBD to maintain a replicated file system between two EC2 instances.  Not bad – in fact I really like it – but it’s hard to set up and, last time I checked, only supported 2-way redundancy when 3-way is where you want to be when using commodity hardware.


It appears the solution is (nearly) here with EC2 persistence.  The model they have chosen uses the storage volume as the abstraction.  Any number of storage volumes can be created in sizes of up to 1TB. Each storage volume is created in a developer-specified availability zone and each volume supports snapshots to S3. A volume can be created from a snapshot.  The supported redundancy and recovery models were not specified but I would expect that they are using redundant, commodity storage. Werner did say it has file system semantics, which I interpret as cached, asynchronous writes with optional application-controlled write-through/flush.  It is not clear whether shared volumes are supported (multiple EC2 instances accessing the same volume).


Another blog entry from Amazon demos usage: “I spent some time experimenting with this new feature on Saturday. In a matter of minutes I was able to create a pair of 512 GB volumes, attach them to an EC2 instance, create file systems on them with mkfs, and then mount them. When I was done I simply unmounted, detached, and then finally deleted them.”


Unfortunately, persistent storage for EC2 won’t be available until “later this year” but it looks like a good feature that will be well received by the development community.


Update: This may be closer to beta than I thought.  I just (5:52am 4/14) received a limited beta invitation.




Thanks to David Golds and Dare Obasanjo for sending pointers my way.





Monday, April 14, 2008 4:37:39 AM (Pacific Standard Time, UTC-08:00)
 Saturday, April 12, 2008

The only thing worse than no backups is restoring bad backups. A database guy should get these things right.  But, I didn’t, and earlier today I made some major site-wide changes and, as a side effect, this blog was restored to December 4th, 2007.  I’m working on recovering the content and will come up with something over the next 24 hours. However it’s very likely that comments between Dec 4th and earlier today will be lost.  My apologies.


Update 2008.04.13: I was able to restore all content other than comments between 12/4/2007 and yesterday morning.  All else is fine.  I'm sorry about the RSS noise during the restore and for the lost comments.  The backup/restore procedure problem is resolved.  Please report any broken links or lingering issues. Thanks,








Saturday, April 12, 2008 11:16:29 AM (Pacific Standard Time, UTC-08:00)

I was on a panel at the International Conference on Data Engineering yesterday morning in Cancun, Mexico, but I was only there for Friday. You’re probably asking “why would someone fly all the way to Cancun for one lousy day?”  Not a great excuse, but it goes like this: the session was originally scheduled for Wednesday and I was planning to attend the entire conference since I hadn’t been to a pure database conference in a couple of years.  But it was later moved to Friday mid-morning, and work is piling up at the day job, so I ended up deciding to just fly in for the day. 


It was such a short trip that I ended up flying both in and out of Cancun with the same flight crew.  They offered me a job, so I’ve now got a back-up plan at Alaska Air in case the distributed systems market goes soft.    Actually, I had some company.  Hector Garcia-Molina and I arrived at the airport at the same time Thursday and went out at the same time Friday. Hector was flying in for his Friday morning keynote PhotoSpread: A Spreadsheet for Managing Photos.


The panel I participated in on Friday was “Cloud Computing – Was Thomas Watson Right After All?”, organized by Raghu Ramakrishnan of Yahoo! Research.  The basic premise of the panel is that much of the current server-side workload is migrating to the cloud, and this trend is predicted by many to accelerate. I partly agreed and partly disagreed.  From my perspective the broad move to a services-based model is inescapable. The economics are simply too compelling.  But, at the same time that I see a massive migration to a services-based model, the capabilities of the edge are growing faster than ever.  One billion cell phones will sell this year.  Personal computer sales remain robust.  The edge will always have more compute, more storage, and less latency.  I argue that we will continue to see more conventional enterprise workloads move to a services-based model each year. And, at the same time, we’ll see increased reliance on the capabilities of edge devices. More service-based applications will depend upon large local caches supporting low-latency access and disconnected operation, and upon deep, highly engaging user interfaces.  Basically, service-based applications exploiting local device capabilities and interfaces (Browser-Hosted Software with a "Real" UX). 


My summary: the edge pulls computation close to the user for the best possible user experience. The core pulls computation close to data.  Basically, I’m arguing both will happen.


Looking more closely at the mass migration of many of the current enterprise workloads to a services-based delivery model, the driving factor is lower cost and freeing up IQ to work on the core business. When there is an order of magnitude in cost savings possible, big changes happen. In many ways the predicted mass move to services reminds me of the move to packaged Enterprise Resource Planning software 10 to 20 years back.  Before then, most enterprises wrote all their own internal systems, which were incredibly expensive but 100% tailored to their unique needs.  It was widely speculated that no large company would ever be willing to change their business sufficiently to use commercial ERP software. And, they probably wouldn’t have if it wasn’t for the several-factor difference in price.  The entire industry moved to packaged ERP software at an incredible pace.  Common applications like HR and accounting are now typically sourced commercially by even the largest enterprises and they invest in internal development where they need to innovate or add significant value (generally, ignoring Enron, you don’t really want to innovate too much in accounting). 


The same thing is happening with services.  Just as before, I frequently hear that no big enterprise will move to a services-based model due to security and privacy concerns and a need to tailor their internal applications for their own use.  And, again, the cost difference is huge and I fully expect the results will be the same: common applications where the company is not doing unique innovation will move to a services-based model.  In fact, it’s already happening. Even as early as a couple of years back when I led Exchange Hosted Services, I was amazed to find that many of the largest household-name enterprises were moving some of their applications to a services model.  It’s happening.


The slides I presented at ICDE: JamesRH_ICDE2008x.ppt (749.5 KB).





Friday, April 11, 2008 11:12:32 PM (Pacific Standard Time, UTC-08:00)
 Wednesday, April 09, 2008

What’s commonly referred to as the Great Firewall of China isn’t really a firewall at all.  I recently came across an Atlantic Monthly article investigating how the Great Firewall works and what it does (see The Connection has been Reset).


The official name of what is often called the Great Firewall of China is the Golden Shield project. Rather than acting as a firewall, it’s actually mirroring content and manipulating DNS, connection management, and URL redirection to implement its goal of restricting what internet users inside China can access.


This project has been widely criticized on political and social fronts – I won’t repeat them here.  It’s also been widely criticized on technical grounds as ineffective, weak, and easy to thwart.  Again, not my focus.  This article simply caught my interest technically as content filtering at this scale is an incredibly difficult task. What techniques are employed?


Like many software security problems, no single solution solves the problem fully, and the main goal of the Golden Shield project is to add friction.  If it’s painful enough to get to the content they are trying to prevent from being accessed, few will bother to access it.  It’s friction rather than prevention that ensures that few Chinese internet users see restricted content in any quantity.  The four levels of protection/restriction are:


1.       DNS Block: sites on the current blacklist get DNS resolution failure or get redirected to other content.  This was the technique employed against Google to force them to add filtering to their web index; for some time, all access to Google was redirected to their larger Chinese competitor Baidu.  The other application of this technique is to return DNS lookup failure so that, for example, lookups for a blocked site simply return “not found”.

2.       Connect: In parallel with connection requests leaving China, the requests are inspected.  If the IP address is on the current IP blacklist, a connection reset is sent, which causes the connection to fail.

3.       URL Block: If the URL contains words on the illegal-word blacklist, the connection is redirected repeatedly, without end.  I’m not sure whether they are only sniffing the URL or also doing reverse DNS to get the site name, but if unacceptable words are found in the URL, the endless redirection kicks in – some browsers hang while others return an error message.

4.       Content Block: At this level the DNS lookup has been successful and the connection has been made and content is being returned to the user. As the content is returned to the requesting user inside China, it’s being scanned in parallel for unapproved keywords and phrases. If any are found, the connection is broken immediately. As well as breaking the connection mid-way, subsequent requests from that client IP to that destination IP are blocked. The first block is short, but consecutive attempts drive up the length of the IP-to-IP connect block period and may eventually draw official scrutiny.
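The escalating block in level 4 can be mimicked with a toy block table. This is entirely my own sketch – the base period and doubling policy are invented for illustration, not documented behavior of the actual system:

```python
# Toy simulation of an escalating IP-to-IP block: the first content-filter hit
# starts a short block, and each retry while blocked lengthens the next one.

BASE_BLOCK = 60  # hypothetical initial block period, in seconds

class BlockTable:
    def __init__(self):
        self.blocks = {}  # (src, dst) -> (blocked_until, strikes)

    def flag(self, src, dst, now):
        """Content filter tripped: start the first block."""
        self.blocks[(src, dst)] = (now + BASE_BLOCK, 1)

    def allowed(self, src, dst, now):
        """Check a new connection attempt; retrying while blocked doubles the penalty."""
        until, strikes = self.blocks.get((src, dst), (0, 0))
        if now < until:
            strikes += 1
            self.blocks[(src, dst)] = (now + BASE_BLOCK * 2 ** strikes, strikes)
            return False
        return True

t = BlockTable()
t.flag("client", "site", 0)                 # filter trips at t=0
blocked = t.allowed("client", "site", 30)   # retry during the block: denied, penalty grows
later = t.allowed("client", "site", 1000)   # long after the block expires: allowed
print(blocked, later)
```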


In addition to these techniques to block access to content hosted outside of China, an estimated 30,000 censors scan for unapproved content posted within China and get it removed (see


The Golden Shield project is reportedly also being used in the opposite direction to prevent access to some content inside of China from outside the country.


There are many means of subverting the Golden Shield, including using a proxy server outside of China or setting up a VPN connection to a server outside the country.  Encrypted connections will also get through, as will encrypted email.  However, all these techniques are non-default and require some work on behalf of the user.  Most users don’t bother so, for the most part, the goals of the Golden Shield are attained even though it’s technically not that strong.


The Atlantic Monthly article:

Wired Article:

Wikipedia article:




Thanks to Jennifer Hamilton and Mitch Wyle for pointing out the Atlantic Monthly article.



Tuesday, April 08, 2008 11:13:57 PM (Pacific Standard Time, UTC-08:00)
 Saturday, April 05, 2008

The services world is one built upon economies of scale.  For example, networking costs for small and medium-sized services can run nearly an order of magnitude more than what large bandwidth consumers such as Google, Amazon, Microsoft and Yahoo pay. These economies of scale make it possible for services such as Amazon S3 to pass on some of the savings they get on networking, for example, to those writing against their service platform while still profiting (S3 is currently pricing storage under their cost, but that’s a business decision rather than a business model problem). The economies of scale enjoyed by large service providers extend beyond networking to server purchases, power costs, networking equipment, etc.


Ironically, even with these large economies of scale, it’s cheaper to compute at home than in the cloud. Let’s look at the details.


Infrastructure costs are incredibly high in the services world, with a new 13.5 megawatt data center costing over $200M before the upwards of 50,000 servers that fill it are purchased.  Data centers are about the furthest thing from commodity parts, and I have been arguing that we should be moving to modular data centers for years (there has been progress on that front as well: First Containerized Data Center Announcement).  Modular designs take some of the power and mechanical system design from an upfront investment with a 15-year life to a design that comes with each module and is on a three-year or shorter amortization cycle, and this helps increase the speed of innovation. 


Modular data centers help, but they still require central power, mechanical, and networking systems, and these remain expensive, non-commodity components. How do we move the entire datacenter to commodity components?  Ken Church makes a radical suggestion: rather than design and develop massive data centers with 15-year lives, let’s incrementally purchase condominiums (just-in-time) and place a small number of systems in each.  Radical to be sure, but condos are a commodity and, if this mechanism really is cheaper, it should be a wake-up call to all of us to start looking much more closely at current industry-wide costs and what’s driving them. That’s our point here.


Ken and I did a quick back-of-the-envelope comparison of this approach below.   Both configurations are designed for 54k servers and roughly 13.5 MW.  Condos appear notably cheaper, particularly in terms of capital.   




                      Large Tier II+ Data Center                    Condo Farm (1125 Condos)

Servers               54k                                           54k (= 48 servers/condo * 1125 condos)

Power (peak)          13.5 MW (= 250 Watts/server * 54k servers)    13.5 MW (= 12 kW/condo * 1125 condos)

Capital cost          over $200M                                    $112.5M (= $100k/condo * 1125 condos)

Annual power expense  $3.5M/year (= $0.03 per kWh                   $10.6M/year (= $0.09 per kWh
                      * 24*365 hours/year * 13.5 MW)                * 24*365 hours/year * 13.5 MW)

Annual rental income  n/a                                           $8.1M/year (= $1000/condo per month
                                                                    * 12 months/year * 1125 condos, less
                                                                    $200/condo per month condo fees;
                                                                    we conservatively assume 80% occupancy)



In the quick calculation above, we have the condos at $100k each, all 1,125 of them at $112.5M, whereas the purpose-built data center would price in at over $200M.  We have assumed an unusually low cost of power for the purpose-built center, a 66% reduction over standard power rates. Deals this good are getting harder to negotiate but they still do exist.  The condos must pay full residential power costs without discount, which is far higher at $10.6M/year.  However, offsetting this increased power cost, we rent the condos out at a low $1,000/month and conservatively account for only 80% occupancy.


Looking at the totals, the condos are at 56% of the capital cost, and annually they run $2.5M in net operational costs whereas the data center power costs are higher at $3.5M.  The condos’ operational costs are 71% of those of the purpose-built design.  Summarizing, the condos run at just about half the cost of the purpose-built data center, both in capital and in annual operating costs.
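The arithmetic above is simple enough to check mechanically. All inputs below are the assumptions stated in the table, not measured data:

```python
# Reproducing the condo-vs-data-center back-of-the-envelope numbers.

condos = 1125
servers_per_condo = 48
watts_per_server = 250

servers = condos * servers_per_condo          # 54,000 servers
peak_mw = servers * watts_per_server / 1e6    # 13.5 MW peak

condo_capital = condos * 100_000              # $112.5M vs over $200M purpose-built

hours_per_year = 24 * 365
dc_power_cost = 0.03 * hours_per_year * peak_mw * 1000     # ~$3.5M/year at $0.03/kWh
condo_power_cost = 0.09 * hours_per_year * peak_mw * 1000  # ~$10.6M/year at $0.09/kWh

# Rent collected on 80% occupancy; condo fees paid on every unit.
rental_income = 0.8 * 1000 * 12 * condos - 200 * 12 * condos  # $8.1M/year

condo_net_opex = condo_power_cost - rental_income             # ~$2.5M/year
print(servers, peak_mw, condo_capital, round(rental_income), round(condo_net_opex))
```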


Condos offer the option to buy/sell just-in-time.  The power bill depends more on average usage than worst-case peak forecast.  These options are valuable under a number of not-implausible scenarios:

·         Long-Term demand is far from flat and certain; demand will probably increase, but anything could happen over the next 15 years

·         Short-Term demand is far from flat and certain; power usage depends on many factors including time of day, day of week, seasonality, economic booms and busts.  In all data centers we’ve looked at average power consumption is well below worst-case peak forecast.


How could condos compete with, or even approach, the cost of a purpose-built facility built where land is cheap and power is cheaper?  One factor is that condos are built in large numbers and are effectively "commodity parts".  Another factor is that most data centers are over-engineered: they include redundancy, such as uninterruptible power supplies, that the condo solution doesn't.  The condo solution gets its redundancy from many micro-data centers and the ability to endure failures across the fabric; when some of the non-redundantly powered micro-centers are down, the others carry the load.  (Clearly, achieving this application-level redundancy requires additional application investment.)


One particularly interesting factor: when you buy large quantities of power for a data center, the utility delivers it in high-voltage form.  These high-voltage feeds (usually in the 10 to 20 thousand volt range) must be stepped down to lower working voltages, which brings efficiency losses; distributed throughout the data center, which brings further losses; and finally delivered to the critical load at the working voltage (240VAC is common in North America, with some devices using 120VAC).  The power distribution system represents approximately 40% of the total cost of the data center, including the backup generators, step-down transformers, power distribution units, and uninterruptible power supplies.  Ignore the UPS and generators, since we're comparing non-redundant power, and two interesting numbers jump out: 1) the cost of the power distribution system, ignoring power redundancy, is 10 to 20% of the cost of the data center, and 2) the power losses through distribution run 10 to 12% of the power brought into the center.
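That 10 to 12% distribution loss has a simple consequence: every kWh actually reaching the critical load costs more than the metered rate.  A minimal sketch, using the loss range above and the assumed $0.03/kWh negotiated rate from this comparison:

```python
# Effective price of power at the critical load, once in-building
# distribution losses are accounted for: to deliver 1 kWh to the servers,
# you must buy 1 / (1 - loss) kWh at the meter.
def effective_rate(utility_rate: float, loss_fraction: float) -> float:
    """$/kWh actually delivered to the critical load, given losses en route."""
    return utility_rate / (1.0 - loss_fraction)

for loss in (0.10, 0.12):
    print(f"{loss:.0%} loss: $0.03/kWh at the meter -> "
          f"${effective_rate(0.03, loss):.4f}/kWh at the load")
```

At a 12% loss, the effective rate climbs from $0.030 to about $0.034 per kWh, before even counting the capital cost of the step-down and distribution gear.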


This is somewhat ironic in that a single-family dwelling gets split-phase power delivered directly to the home (240VAC between the two phases, or 120VAC between either phase and neutral).  All the power lost through step-down transformers (usually 92 to 96% efficient) and through distribution (which depends on the size and length of the conductors) is absorbed by the power company.  But if you buy huge quantities of power, as we do in large data centers, the power company delivers high-voltage lines to the property, and you must pay the substantial capital cost of the step-down transformers and, in addition, pay for the distribution losses.  Ironically, if you don't buy much power, the infrastructure is free; if you buy huge amounts, you pay for the infrastructure.  In the case of condos, the owners pay for the inside-the-building distribution, so they sit somewhere between single-family dwellings and data centers: paying for part of the infrastructure, but not as much as a data center.


Perhaps the power companies have found a way to segment the market into consumer vs. business.  Businesses pay more because they are willing to pay more; just as businesses pay more for telephone service and air travel, they also pay more for power.  Despite the great deals we've been reading about, data centers actually pay more for power than consumers once the capital costs are factored in.  Thus, it is a mistake to move computation from the home to the cloud, because doing so moves the cost structure from consumer rates to business rates.


The condo solution might be pushing the limit a bit, but whenever a crazy idea comes within a factor of two of what we're doing today, something is wrong.  Let's go pick some low-hanging fruit.


Ken Church & James Hamilton

{Church, JamesRH}


Saturday, April 05, 2008 11:15:44 PM (Pacific Standard Time, UTC-08:00)
 Thursday, April 03, 2008

A couple of interesting directions brought together: 1) Oracle compatible DB startup, and 2) a cloud-based implementation.


The Oracle-compatible offering is EnterpriseDB.  They use the PostgreSQL code base and implement Oracle compatibility to make it easy for the huge Oracle install base to move to them.  An interesting approach.  I used to lead the SQL Server Migration Assistant team, so I know that true Oracle compatibility is tough, but even partial compatibility makes it easier for Oracle apps to port over.  The pricing model is a free developer license and $6k/socket for their Advanced Server edition.


The second interesting offering is from Elastra.  It's a management and administration system that automates deploying and managing dynamically scalable services.  Part of the Elastra offering is support for Amazon AWS EC2 deployments.


Bring together EnterpriseDB and Elastra and you have an Oracle-compatible database, hosted in EC2, with deployment and management support: ELASTRA Propels EnterpriseDB into the Cloud.  I couldn't find any customer usage examples, so this may be more press release than a fully exercised, ready-for-prime-time solution, but it's a promising general direction and I expect to see more offerings along these lines over the coming months.  Good to see.




James Hamilton, Windows Live Platform Services
Bldg RedW-D/2072, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859

Thursday, April 03, 2008 11:17:15 PM (Pacific Standard Time, UTC-08:00)
 Wednesday, April 02, 2008

I'm a big believer in auto-installable client software, but I also want a quality user experience.  For data-intensive applications, I want a caching client.  I use and love many browser-hosted clients but, for development work, email, and photo editing, I still use installable software.  I want a snappy user experience, I need to be able to run disconnected or weakly connected, and I want to fully use my local resources.  Speed and richness are king for these apps; it's the casual apps that are being replaced well by browser-based software in my world.


However, I've been blown away by how fast the set of applications I'm willing to run in the browser has been expanding.  For example, Yahoo Mail impressed me when it came out.  Both Google and Live maps are impressive (how can anyone understand and maintain that much JavaScript?).  In fact, in the ultimate compliment, these mapping services are good enough that, even though I have local mapping software installed, I seldom bother to start it.


Here's another one, announced last week, that is truly impressive: the Adobe online implementation of Photoshop is an eye-opener.  Predictably, it's Flash and Flex based and, wow, it's amazing for a within-the-browser experience.  I'm personally still editing my pictures locally, but Photoshop Express shows a bit of what's possible.





Wednesday, April 02, 2008 11:18:16 PM (Pacific Standard Time, UTC-08:00)
Services | Software

Disclaimer: The opinions expressed here are my own and do not necessarily represent those of current or past employers.

All Content © 2015, James Hamilton
Theme created by Christoph De Baene / Modified 2007.10.28 by James Hamilton