Flickr DB Architecture

I’ve been collecting scaling stories for some time now, and last week I came across the following rundown on Flickr scaling: Federation at Flickr: Doing Billions of Queries Per Day by Dathan Vance Pattishall, the Flickr database guy.

The Flickr DB architecture is sharded, with a PHP access layer to maintain consistency. Flickr users are randomly assigned to a shard. Each shard is duplicated on another database that is also serving active shards, and each DB needs to run at less than 50% load to be able to handle failover.
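The assignment and failover rule described above can be sketched in a few lines. This is a minimal illustration, not Flickr's actual PHP code: the shard count, the pairing scheme (each shard's replica living half the ring away), and the load threshold check are all my assumptions for the example.

```python
import random

SHARD_COUNT = 8  # hypothetical; the talk doesn't give the real number


def replica_of(shard_id: int) -> int:
    # Hypothetical pairing: each shard's data is duplicated on a database
    # half the ring away, which is itself serving an active shard.
    return (shard_id + SHARD_COUNT // 2) % SHARD_COUNT


def assign_shard(user_id: int) -> int:
    # New users are assigned to a shard at random.
    return random.randrange(SHARD_COUNT)


def can_fail_over(load_a: float, load_b: float) -> bool:
    # Each DB must stay under 50% load so either member of a pair
    # can absorb its partner's full traffic on failover.
    return load_a < 0.5 and load_b < 0.5
```

The key invariant is the last function: random placement spreads users, but it's the 50% load ceiling that makes the paired-replica scheme survivable.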

Shards are found via a lookup ring that maps userID or groupID to shardID, and photoID to userID. The DBs are protected by a memcached layer with a 30-minute cache lifetime. Slide 16 says they are maintaining consistency using distributed transactions, but I strongly suspect they are actually just running two parallel transactions under application management rather than two-phase commit (2PC).
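The lookup-ring-behind-a-cache pattern is simple to sketch. Assuming a plain dict as a stand-in for memcached and made-up mapping data (the real lookup tables live in their own databases), the 30-minute TTL from the talk would look roughly like this:

```python
import time

CACHE_TTL_SECONDS = 30 * 60  # the 30-minute memcached lifetime from the talk

# In-process stand-in for the memcached layer: key -> (value, stored_at).
_cache = {}

# Hypothetical lookup-ring tables; values here are illustrative only.
user_to_shard = {1001: 3, 1002: 7}
photo_to_user = {555: 1001}


def cached_get(key, loader):
    # Serve from cache while the entry is fresh; otherwise hit the
    # lookup ring and repopulate the cache.
    entry = _cache.get(key)
    if entry is not None:
        value, stored_at = entry
        if time.time() - stored_at < CACHE_TTL_SECONDS:
            return value
    value = loader(key)
    _cache[key] = (value, time.time())
    return value


def shard_for_user(user_id: int) -> int:
    return cached_get(("user", user_id), lambda k: user_to_shard[k[1]])


def shard_for_photo(photo_id: int) -> int:
    # Photos resolve to their owner first, then to the owner's shard.
    return shard_for_user(photo_to_user[photo_id])
```

Note the two-step photo resolution: photoID maps to userID, and only the user-to-shard mapping is cached, so a user's photos all land on the same shard.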

Maintenance is done by bringing down half the DBs, with the remaining DBs handling the load, but it appears they have no redundancy (failure protection) during maintenance periods.
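The arithmetic tying the maintenance procedure back to the 50% rule is worth making explicit. A minimal sketch (the load figures are illustrative):

```python
def survivor_load(own_load: float, partner_load: float) -> float:
    # When a DB's partner is down for maintenance (or has failed),
    # the survivor carries both workloads.
    return own_load + partner_load


def within_capacity(own_load: float, partner_load: float) -> bool:
    # The under-50% rule is exactly what keeps the survivor under 100%.
    return survivor_load(own_load, partner_load) < 1.0
```

So as long as both members of a pair honor the 50% ceiling, half the fleet can be taken down safely; what's lost during that window is the redundancy itself.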

They have 12TB of user data in aggregate, and they appear to be using MySQL (slide 25 complains about an InnoDB bug).

Other web site scaling stories:

· Scaling Linkedin:
· Scaling Amazon:
· Scaling Second Life:
· Scaling Technorati:
· Scaling Flickr:
· Scaling Craigslist:
· Scaling Findory:
· MySpace 2006:
· MySpace 2007:
· Twitter, Flickr, Live Journal, Six Apart, Bloglines, SlideShare, and eBay:


Thanks to Kevin Merritt (Blist) and Dave Quick (Microsoft) for sending this my way.

James Hamilton, Data Center Futures
Bldg 99/2428, One Microsoft Way, Redmond, Washington, 98052

2 comments on “Flickr DB Architecture”
  1. Ryan G says:

    Nothing wrong with bringing it to the fore again. I only got this because of Stumble.

  2. David says:

    You realize, don’t you, that this list was already posted in your blog in November: //
