Intel Fellow and Director of Storage Architecture Knut Grimsrud presented at WinHEC 2008 last week, and his talk caught my interest for several reasons: 1) he talked about Intel’s findings with their new SSD, which looks like an extremely interesting price/performer, 2) they have found interesting power savings in their SSD experiments beyond the easy-to-predict reduction in power consumption of SSDs over HDDs, and 3) Knut presented a list of useful SSD usage do’s and don’ts.
Starting with the best practices:
· DO queue requests to SSD as deeply as possible
- SSD has massive internal parallelism and generally is underutilized. Parallelism will further increase over time.
- Performance scales well with queue depth (see the sketch after this list)
· DON’T withhold requests in order to “optimize” or “aggregate” them
- Traditional schemes geared towards reducing HDD latencies do not apply. Time lost withholding requests is difficult to make up.
· DO worry about software/driver overheads & latencies
- At 100K IOPS how does your SW stack measure up?
· DON’T use storage “backpressure” to pace activity
- IO completion time (or rate) is not a useful pacing mechanism, and attempting to use it as a throttle can result in tasks generating more activity than desired
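To make the queue depth point concrete, here’s a minimal sketch of the kind of experiment that shows the scaling: it keeps N random reads in flight using a thread pool of blocking pread() calls. Everything in it is my illustration rather than anything from Knut’s slides; the file path, block size, request count, and depths tested are all assumptions, and on a real run you would want the test file to be much larger than RAM (or use unbuffered I/O) so the OS cache doesn’t hide the device.

    # Sketch only: measure how random-read throughput scales with queue depth.
    # Assumes a POSIX system with os.pread() and a large pre-existing test file.
    import os
    import random
    import time
    from concurrent.futures import ThreadPoolExecutor

    PATH = "testfile.bin"      # hypothetical test file on the SSD under test
    BLOCK = 4096               # 4KB random reads
    READS_PER_RUN = 20000      # requests issued at each queue depth

    def read_random(fd, file_size):
        # One block-aligned 4KB read at a random offset; pread is thread-safe
        # because it does not touch the shared file position.
        offset = random.randrange(0, file_size - BLOCK, BLOCK)
        return len(os.pread(fd, BLOCK, offset))

    def iops_at_depth(depth):
        # Keep "depth" requests outstanding by running that many blocking
        # readers concurrently.
        fd = os.open(PATH, os.O_RDONLY)
        size = os.fstat(fd).st_size
        start = time.time()
        with ThreadPoolExecutor(max_workers=depth) as pool:
            for f in [pool.submit(read_random, fd, size) for _ in range(READS_PER_RUN)]:
                f.result()
        elapsed = time.time() - start
        os.close(fd)
        return READS_PER_RUN / elapsed

    if __name__ == "__main__":
        for depth in (1, 2, 4, 8, 16, 32):
            print(f"queue depth {depth:2d}: {iops_at_depth(depth):,.0f} IOPS")

An HDD, limited by seek and rotational latency, flattens out almost immediately on a test like this; the claim in the slides is that the SSD keeps scaling as depth grows, which is exactly why withholding or aggregating requests throws away time the drive could have used.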
Common HDD optimizations you should avoid:
· Block/page sizes, alignments and boundaries
- Intel® SSD is insensitive to whether host writes have any relationship to internal NAND boundaries or granularities
- Expect other high-performing SSDs to also handle this
- Internal NAND structures constantly changing anyway, so chasing this will be a losing proposition
· Write transfer sizes & write “globbing”
- No need to accumulate writes in order to create large writes
- Temporarily logging writes sequentially and later relocating them to their final destination is unhelpful to the Intel SSD (and is detrimental to longevity); see the sketch after this list
· Software “helping” by making near-term assumptions about SSD internals will become a long-term hindrance
- Any SW assistance must have longevity
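As an illustration of the write globbing point, the contrast below shows the two patterns side by side. This is my sketch, not Intel’s code: the (offset, data) record layout and the use of pwrite() are assumptions, and the only point is the shape of the two code paths, with the second one writing every byte twice.

    # Sketch only: issuing writes directly versus the HDD-era log-and-relocate
    # ("globbing") pattern the slides advise against for SSDs.
    import os

    def write_direct(fd, records):
        # Recommended for SSDs: issue each write at its final location as soon
        # as it is ready and let the drive's internal parallelism absorb them.
        # "records" is an iterable of (offset, data) pairs.
        for offset, data in records:
            os.pwrite(fd, data, offset)

    def write_globbed(fd, log_fd, records):
        # HDD-era anti-pattern: accumulate writes into one large sequential
        # transfer in a log, then relocate them to their final destinations
        # later. Every byte is written twice, which hurts NAND longevity
        # without helping the SSD's performance.
        records = list(records)
        os.write(log_fd, b"".join(data for _, data in records))   # the "optimization"
        for offset, data in records:                              # the relocation pass
            os.pwrite(fd, data, offset)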
On the power savings point, Knut laid out an interesting argument for increased power savings from SSDs over HDDs beyond the standard device power difference. These standard power differences are real of course but, on a laptop, where an HDD typically draws around 2.5W active, these often-cited savings are relatively small. However, Knut reported an additional measurable savings. Because SSDs are considerably faster than HDDs, the speculative page fetching done by Windows Superfetch is not needed. And, because Superfetch is sometimes incorrect, the additional I/Os and processing done by Superfetch consume more power. Essentially, with the very high random I/O rates offered by SSDs, Superfetch isn’t needed and, if it is disabled, there will be additional power savings due to reduced I/O and page processing activity.
Another potential factor I’ve discussed with Knut is that, in the standard laptop operating mode, the common usage model is one of periods of inactivity punctuated by short periods of peak workload, typically accompanied by high random I/O rates. More often than not, laptop performance is bounded by random I/O performance. If SSD usage allows these periods of work to be completed more quickly, the system can return to an idle, low-power state sooner. We’ve not measured this gain, but it seems intuitive that getting the work done more quickly will leave the system active for shorter periods and idle for longer. Assuming a faster system spends more time in idle states (rather than simply doing more work), we should be able to measure additional power savings indirectly attributable to SSD usage.
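A back-of-envelope calculation makes the race-to-idle argument concrete. All of the numbers below are assumptions I picked for illustration (none are measurements from Knut or from us); the only point is that finishing the same burst of work sooner trades seconds of active power for seconds of idle power.

    # Sketch only: race-to-idle energy arithmetic with assumed (not measured) numbers.
    INTERVAL_S = 60.0     # wall-clock interval being considered (seconds)
    ACTIVE_W = 15.0       # assumed whole-system power while busy (watts)
    IDLE_W = 5.0          # assumed whole-system power while idle (watts)

    def energy_joules(busy_s):
        # Energy over the interval: busy seconds at active power, the rest idle.
        return ACTIVE_W * busy_s + IDLE_W * (INTERVAL_S - busy_s)

    hdd_burst_s = 20.0    # assumed time to complete a random-I/O-bound burst on an HDD
    ssd_burst_s = 5.0     # assumed time for the same burst on an SSD

    print(f"HDD: {energy_joules(hdd_burst_s):.0f} J")   # 500 J with these numbers
    print(f"SSD: {energy_joules(ssd_burst_s):.0f} J")   # 350 J: (15-5)W * 15s = 150 J saved

With these made-up numbers the savings come entirely from spending 15 more seconds of the interval at idle power rather than active power, on top of whatever the device-level power difference contributes.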
Knut’s slides: Intel’s Solid State Drives. Thanks to Vlad Sadovsky for sending this one my way.
–jrh
James Hamilton, Data Center Futures
Bldg 99/2428, One Microsoft Way, Redmond, Washington, 98052
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 | JamesRH@microsoft.com
H:mvdirona.com | W:research.microsoft.com/~jamesrh | blog:http://perspectives.mvdirona.com