I’ve long argued that tough constraints often make for a better service, and few services are more constrained than Wikipedia, where the only source of revenue is user donations. I came across this talk by Domas Mituzas of Wikipedia while reading old posts on Data Center Knowledge. The posting A Look Inside Wikipedia’s Infrastructure includes a summary of the talk Domas gave at Velocity last summer.
Interesting points from the Data Center Knowledge posting and from the longer document referenced below, presented at the 2007 MySQL Conference:
· Wikipedia serves the world from roughly 300 servers
o 200 application servers
o 70 Squid servers
o 30 Memcached servers (2GB each)
o 20 MySQL servers using InnoDB, each with 16GB of memory and 200 to 300GB of data
o They also use Squid, Nagios, dsh, NFS, Ganglia, Linux Virtual Server, Lucene (on .NET via Mono), PowerDNS, lighttpd, Apache, PHP, and MediaWiki (originated at Wikipedia)
· 50,000 http requests per second
· 80,000 MySQL requests per second
· 7 million registered users
· 18 million objects in the English version
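The memcached tier above sits between the application servers and MySQL, absorbing reads so the databases see far fewer of those 80,000 requests per second than they otherwise would. The usual pattern is cache-aside: check the cache first, and only on a miss query the database and populate the cache. A minimal sketch, using a plain dict in place of a real memcached client and a made-up `fetch_article_from_db` stand-in for a MySQL query (both are illustrative assumptions, not Wikipedia’s actual code):

```python
cache = {}                                   # stands in for the memcached tier
db = {"Main_Page": "Welcome to Wikipedia"}   # stands in for MySQL
db_queries = 0                               # counts trips to the "database"

def fetch_article_from_db(title):
    # Hypothetical stand-in for a MySQL query against the article store.
    global db_queries
    db_queries += 1
    return db.get(title)

def get_article(title):
    # 1. Try the cache first (a memcached GET in the real system).
    article = cache.get(title)
    if article is None:
        # 2. On a miss, fall through to the database and populate the
        #    cache (a memcached SET) so later requests stay in memory.
        article = fetch_article_from_db(title)
        cache[title] = article
    return article

get_article("Main_Page")   # miss: one database query, cache populated
get_article("Main_Page")   # hit: served from the cache, no database query
```

With 30 memcached servers at 2GB each, that is roughly 60GB of hot objects served without touching the 20 database servers at all.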
For the 2007 MySQL Users Conference, Domas posted great details on the Wikipedia architecture: Wikipedia: Site internals, configuration, code examples and management issues (30 pages). I’ve posted other big service scaling and architecture talks at: http://perspectives.mvdirona.com/2008/12/27/MySpaceArchitectureAndNet.aspx.
Amazon Web Services
Updated: Corrected formatting issue.