Facebook Flashcache

Facebook released Flashcache yesterday: Releasing Flashcache. The authors of Flashcache, Paul Saab and Mohan Srinivasan, describe it as “a simple write back persistent block cache designed to accelerate reads and writes from slower rotational media by caching data in SSD’s.”

There are commercial variants of flash-based write caches available as well. For example, LSI has a caching controller that operates at the logical-volume layer. See LSI and Seagate take on Fusion-IO with Flash. These systems track page access rates for a given logical volume: hot pages are stored on SSD while cold pages remain on spinning media. The cache is write-back, and dirty pages are written back to their disk-resident locations in the background.
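The tiering mechanism described above can be sketched in a few lines. This is purely illustrative and not LSI's actual algorithm: it tracks per-page access counts, promotes hot pages into a fixed-size SSD tier, holds dirty writes in that tier (write-back), and destages them to disk in a background flush. The class and method names are my own invention.

```python
class TieredCache:
    """Toy model of a hot/cold write-back flash cache (illustrative only)."""

    def __init__(self, ssd_capacity):
        self.ssd_capacity = ssd_capacity
        self.access_counts = {}   # page -> access count on the volume
        self.ssd = {}             # SSD tier: page -> (data, dirty flag)
        self.disk = {}            # backing store: page -> data

    def _maybe_promote(self, page):
        # Promote if the SSD tier has room, or if this page is now
        # hotter than the coldest page currently cached.
        if page in self.ssd:
            return
        if len(self.ssd) < self.ssd_capacity:
            self.ssd[page] = (self.disk.get(page), False)
            return
        coldest = min(self.ssd, key=lambda p: self.access_counts[p])
        if self.access_counts[page] > self.access_counts[coldest]:
            self._writeback(coldest)          # destage before eviction
            del self.ssd[coldest]
            self.ssd[page] = (self.disk.get(page), False)

    def _writeback(self, page):
        data, dirty = self.ssd[page]
        if dirty:
            self.disk[page] = data
            self.ssd[page] = (data, False)

    def read(self, page):
        self.access_counts[page] = self.access_counts.get(page, 0) + 1
        self._maybe_promote(page)
        if page in self.ssd:
            return self.ssd[page][0]          # cache hit: no disk I/O
        return self.disk.get(page)

    def write(self, page, data):
        self.access_counts[page] = self.access_counts.get(page, 0) + 1
        self._maybe_promote(page)
        if page in self.ssd:
            self.ssd[page] = (data, True)     # write-back: dirty in cache
        else:
            self.disk[page] = data            # cold page: goes to disk

    def flush(self):
        # Background writeback: destage all dirty pages to disk.
        for page in list(self.ssd):
            self._writeback(page)
```

With skewed access, hot pages absorb most reads and writes in the SSD tier, and the disk sees only the cold traffic plus periodic destaging.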

For benchmark workloads with evenly distributed, 100% random access patterns, these solutions don’t contribute all that much. Fortunately, the world is full of access pattern skew: some portions of the data are typically very cold while others are red hot, and perfectly even distributions really only show up in benchmarks. For workloads with skew, a flash cache can substantially reduce disk I/O rates at lower cost than adding more memory.

What’s interesting about the Facebook contribution is that it’s open source and supports Linux. From: http://github.com/facebook/flashcache/blob/master/doc/flashcache-doc.txt:

Flashcache is a write back block cache Linux kernel module. [..] Flashcache is built using the Linux Device Mapper (DM), part of the Linux Storage Stack infrastructure that facilitates building SW-RAID and other components. LVM, for example, is built using the DM.

The cache is structured as a set associative hash, where the cache is divided up into a number of fixed size sets (buckets) with linear probing within a set to find blocks. The set associative hash has a number of advantages (called out in sections below) and works very well in practice.

The block size, set size and cache size are configurable parameters, specified at cache creation. The default set size is 512 (blocks) and there is little reason to change this.
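The lookup scheme quoted above is easy to model. The sketch below is not Flashcache's kernel code, just an illustration under the stated design: the cache is an array of fixed-size sets, a disk block number hashes to exactly one set, and a linear scan within that set finds the block or a free slot. The class name and the `NUM_SETS` value are my own; only the 512-block default set size comes from the Flashcache documentation.

```python
SET_SIZE = 512        # default set size in blocks, per the Flashcache docs
NUM_SETS = 64         # illustrative only; real cache size is configurable

EMPTY = None

class SetAssociativeCache:
    """Toy set-associative hash with linear probing within a set."""

    def __init__(self, num_sets=NUM_SETS, set_size=SET_SIZE):
        self.num_sets = num_sets
        self.set_size = set_size
        # slots[s] holds the disk block numbers cached in set s
        self.slots = [[EMPTY] * set_size for _ in range(num_sets)]

    def _set_index(self, block):
        # Hash the disk block number to a set (bucket).
        return hash(block) % self.num_sets

    def lookup(self, block):
        """Linear probe within the block's set; return slot index or -1."""
        s = self._set_index(block)
        for i, occupant in enumerate(self.slots[s]):
            if occupant == block:
                return i
        return -1

    def insert(self, block):
        """Place a block in the first free slot of its set (no eviction)."""
        s = self._set_index(block)
        for i, occupant in enumerate(self.slots[s]):
            if occupant is EMPTY:
                self.slots[s][i] = block
                return i
        raise RuntimeError("set full; a real cache would evict here")
```

One advantage of this structure is locality: a lookup probes at most one set's worth of slots rather than the whole cache, and replacement decisions only need to consider blocks within a single set.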

More information on usage: http://github.com/facebook/flashcache/blob/master/doc/flashcache-sa-guide.txt. Thanks to Grant McAlister for pointing me to the Facebook release of Flashcache. Nice work Paul and Mohan.


James Hamilton

e: jrh@mvdirona.com

w: http://www.mvdirona.com

b: http://blog.mvdirona.com / http://perspectives.mvdirona.com

4 comments on “Facebook Flashcache”
  1. Yes, CacheCade is the product I was referring to.

    I love the Reg but don’t typically use them as the final resolution point on technical subjects. They are good for a laugh though. Remember “white trash data centers”? (http://www.theregister.co.uk/2007/04/11/ms_white_trash/)


  2. Is CacheCade the same as the product in the linked Register article? That article says the product isn’t actually available, which is a nice advantage of the Facebook code.

  3. The LSI part (with extra cost software from LSI) absolutely does support tiered storage with SSDs acting as a cache in front of HDD. From: http://www.lsi.com/storage_home/products_home/internal_raid/megaraid_advanced_services/index.html

    LSI MegaRAID® CacheCade

    LSI MegaRAID CacheCade tiered cache allows users to leverage SSDs in front of hard disk drives (HDDs) to create up to 512GB of controller cache. Using SSDs as controller cache allows for very large data sets to be present in cache to deliver up to a 50X performance improvement in read-intensive applications, such as file, Web, OLTP and database server. The solution is designed to provide a dramatic performance upgrade while only requiring a small investment in SSD technology.


  4. Wes Felter says:

    The LSI card doesn’t perform any caching, although you could use it with Facebook’s Flashcache.

    I’m glad to see caching catching on, since it’s the easiest way to use flash to accelerate I/O.
