In Where SSDs Don’t Make Sense in Server Applications, we looked at the results of an HDD to SSD comparison test done by the Microsoft Cambridge Research team. Vijay Rao of AMD recently sent me a pointer to an excellent comparison test done by AnandTech. In SSD versus Enterprise SAS and SATA disks, AnandTech compares one of my favorite SSDs, the Intel X25-E SLC 64GB, with a couple of good HDDs. The Intel SSD can deliver 7,000 random IOPS, and the 64GB component is priced in the $800 range.

The full AnandTech comparison is worth reading, but I found the pricing and the sequential and random I/O performance data particularly interesting. I’ve brought these data together into the table below:





| Drive | $/Seq Read ($/MB/s) | $/Seq Write ($/MB/s) | Seq I/O Density | $/Rdm Read ($/MB/s) | $/Rdm Write ($/MB/s) | Rdm I/O Density |
|---|---|---|---|---|---|---|
| Intel X25-E SLC | | | | 17.66 | | 1.109 |
| Cheetah 15k | | | | | | |

All I/O measurements obtained using SQLIO.

Random I/O measurements using 8k pages.

Sequential measurements using 64kB I/Os.

I/O density is the average of read and write performance divided by capacity.

Price calculations based upon the average of the selling price range listed.

Source: AnandTech.

Looking at this data in detail, we see the Intel SSD produces extremely good random I/O rates, but we should all know that raw performance is the wrong measure. We should be looking at dollars per unit of performance. By this more useful metric, the Intel SSD continues to look very good at $17.66/MB/s on 8K read I/Os, whereas the HDDs are at $142/MB/s and $195/MB/s respectively. For hot random workloads, SSDs are a clear win.
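The dollars-per-unit-performance metric is just price divided by measured throughput. A minimal sketch; the ~$800 price and the ~45.3 MB/s random read rate in the example are back-of-envelope assumptions consistent with the $17.66/MB/s figure, not AnandTech's published measurements:

```python
def dollars_per_mb_s(price_usd: float, throughput_mb_s: float) -> float:
    """Price-normalized performance in $/MB/s: lower is better."""
    return price_usd / throughput_mb_s

# Hypothetical back-fit: an ~$800 drive delivering ~45.3 MB/s of
# random 8K reads works out to roughly $17.66/MB/s.
cost = dollars_per_mb_s(800, 45.3)
```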

What do I mean by “hot random workloads”? By hot, I mean a high number of random IOPS per GB. But, for a given storage technology, what constitutes hot? I like to look at I/O density, which is the cutoff between a given disk being capacity bound or I/O rate bound on a given workload. For example, looking at the table above, we see the random I/O density for a 64GB Intel disk is 1.109 MB/s/GB. If you are storing data where you need 1.109 MB/s of 8k I/Os per GB of capacity or better, then the Intel device will be I/O bound and you won’t be able to use all the capacity. If the workload requires less than this number, then the device is capacity bound and you won’t be able to use all the IOPS on the device. For very low access rate data, HDDs are a win. For very high access rate data, SSDs will be the better price performer.
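The cutoff test above can be sketched in a few lines of Python. The 1.109 MB/s/GB density is from the table; the workload densities in the example are hypothetical:

```python
def io_density(avg_throughput_mb_s: float, capacity_gb: float) -> float:
    """I/O density: average throughput per GB of capacity (MB/s/GB)."""
    return avg_throughput_mb_s / capacity_gb

def binding_resource(workload_density: float, device_density: float) -> str:
    """A workload hotter than the device density strands capacity;
    a cooler one strands IOPS."""
    return "I/O bound" if workload_density > device_density else "capacity bound"

x25e_density = 1.109  # MB/s/GB, random 8k I/O for the 64GB X25-E (from the table)

hot = binding_resource(2.0, x25e_density)    # hot workload: "I/O bound"
cool = binding_resource(0.1, x25e_density)   # cool workload: "capacity bound"
```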

As it turns out, when looking at random I/O workloads, SSDs are almost always capacity bound and HDDs are almost always IOPS bound. Understanding that, we can use a simple computation to compare HDD cost vs. SSD cost on your workload. Take the HDD farm cost, which will be driven by the number of disks needed to support the I/O rate times the cost of each disk. This is the storage budget needed to support your workload on HDDs. Then take the size of the database and divide by the SSD capacity to get the number of SSDs required. Multiply the number of SSDs required by the price of the SSD. This is the budget required to support your workload on SSDs. If the SSD budget is less (and it will be for hot, random workloads), then SSDs are the better choice. Otherwise, keep using HDDs for that workload.
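The random-I/O comparison above can be sketched as follows; the drive prices and rates in the example are hypothetical round numbers, not the AnandTech figures:

```python
from math import ceil

def storage_budgets_random(workload_iops: int, db_size_gb: float,
                           hdd_iops: int, hdd_price: float,
                           ssd_gb: float, ssd_price: float):
    """Budget to host a random workload on HDDs (IOPS bound)
    vs. on SSDs (capacity bound)."""
    hdd_count = ceil(workload_iops / hdd_iops)  # disks to cover the I/O rate
    ssd_count = ceil(db_size_gb / ssd_gb)       # SSDs to cover the capacity
    return hdd_count * hdd_price, ssd_count * ssd_price

# Hypothetical example: 50,000 IOPS against a 500GB database, with
# 300-IOPS HDDs at $300 each and 64GB SSDs at $800 each.
hdd_budget, ssd_budget = storage_budgets_random(50_000, 500, 300, 300, 64, 800)
# hdd_budget = 167 * $300 = $50,100; ssd_budget = 8 * $800 = $6,400
```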

In the sequential I/O world, we can use the same technique. Again, we look at the sequential I/O density to understand the cutoff between bandwidth bound and capacity bound for a given workload. Very hot workloads over small data sizes will be a win on SSD, but as soon as the data sizes get interesting, HDDs are the more economic solution for sequential workloads. The detailed calculation is the same. Figure out how many HDDs are required to support your workload on the basis of capacity or sequential I/O rates (depending upon which is in shortest supply for your workload on that storage technology). Figure out the HDD budget. Then do the same for SSDs and compare the numbers. What you’ll find is that, for sequential workloads, SSDs are only the best value for very high I/O rates over relatively small data sizes.
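The sizing step generalizes to one rule: the drive count is set by whichever resource is in shortest supply. A sketch with hypothetical drive specs:

```python
from math import ceil

def drives_required(db_size_gb: float, required_mb_s: float,
                    drive_gb: float, drive_mb_s: float) -> int:
    """Drive count driven by capacity or by sequential bandwidth,
    whichever needs more drives."""
    return max(ceil(db_size_gb / drive_gb), ceil(required_mb_s / drive_mb_s))

# Hypothetical: scan a 10TB (10,000GB) data set at 2,000 MB/s.
# 1TB HDDs at 100 MB/s:  max(10, 20)  = 20 drives (bandwidth bound)
# 64GB SSDs at 250 MB/s: max(157, 8) = 157 drives (capacity bound)
hdds = drives_required(10_000, 2_000, 1_000, 100)
ssds = drives_required(10_000, 2_000, 64, 250)
```

Multiply each count by the per-drive price to get the two budgets to compare.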

Using these techniques and data, we can see when SSDs are a win for workloads with a given access pattern. I’ve tested this line of thinking against many workloads and find that hot, random workloads can make sense on SSDs. Pure sequential workloads almost never do unless the access patterns are very hot or the capacity required is relatively small.

For specific workloads that are neither pure random nor pure sequential, we can figure out the storage budget to support the workload on HDDs and on SSDs as described above and do the comparison. Using these techniques, we can step beyond the hype and let economics drive the decision.

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859