I am excited by very low-power, very low-cost servers and the impact they will have on our industry. There are many workloads that don't fully exploit the capabilities of the underlying server; for these, the server is out of balance, with excess CPU capability driving up both power and cost, and lower-power, lower-cost servers are the right answer. These are workloads where less is more. But with any technology shift, it's easy to get excited and try to apply the new solution too broadly.
We can see parallels in the Flash memory world. At first there was skepticism that Flash had a role to play in supporting server workloads. More recently, there has been huge excitement around Flash, and I keep coming across applications of the technology that really don't make economic sense. Not all good ideas apply to all problems. In going after this issue I wrote When SSDs Make Sense in Server Applications and later When SSDs Don't Make Sense in Server Applications. Sometimes knowing where not to apply a technology is more important than knowing where to apply it.
Returning to very low-cost, low-power servers, I’ve written a bit about where they make sense and why:
· Very Low-Power Server Progress
· The Case for Low-Cost, Low-Power Servers
· 2010 the Year of the Microslice Computer
· Linux/Apache on ARM Servers
· ARM Cortex-A9 SMP Design Announced
But I haven't looked much at where very low-power, low-cost servers do not make sense. When aren't they a win when measured by work done per dollar and work done per joule? Last week Dave DeWitt sent me a paper that looks at the application of wimpy servers (from the excellent FAWN, Fast Array of Wimpy Nodes, project at CMU) to database workloads. In Wimpy Node Clusters: What About Non-Wimpy Workloads, Willis Lang, Jignesh Patel, and Srinath Shankar find that the Intel Xeon E5410 is slightly better than the Intel Atom when running parallel clustered database workloads including TPC-E and TPC-H. The database engine in this experiment is IBM DB2 DB-X (yet another new name for the product originally called DB2 Parallel Edition; see IBM DB2 for information on DB2, though the Wikipedia page has not yet caught up with the latest IBM name change). These results show us that on complex, clustered database workloads, server processors can win over low-power parts. For those interested in probing the very low-cost, low-power processor space, the paper is worth a read: Wimpy Node Clusters: What About Non-Wimpy Workloads.
The generalization of their finding that I've been using is this: CPU-intensive workloads and workloads with poor scaling characteristics are poor choices to host on very low-power, low-cost servers. CPU-intensive workloads are a loss because they are CPU-bound and so run best where there is maximum CPU per server in the cluster. Worded differently, the multi-server cluster overhead is minimized by having fewer, more-powerful nodes. Workloads with poor scaling characteristics are another category not well supported by wimpy nodes, and the explanation is similar: although these workloads may not be CPU-bound, they don't run well over clusters with large server counts. Generally, more resources per node is the best answer when the workload can't be scaled over large server counts.
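As a rough sketch of why poor scaling hurts wimpy clusters, the following back-of-envelope model (all prices and rates invented purely for illustration) compares an equal-budget brawny cluster against a wimpy one when throughput scales sublinearly with node count:

```python
# Back-of-envelope model comparing brawny vs. wimpy clusters at equal
# budget. All numbers are invented for illustration only.

def throughput(per_node_rate, node_count, alpha):
    """Cluster throughput with sublinear scaling: doubling the node
    count yields only 2**alpha times the throughput (alpha <= 1)."""
    return per_node_rate * node_count ** alpha

BUDGET = 16000.0  # hypothetical dollars

# 8 brawny nodes at $2000 each vs. 64 wimpy nodes at $250 each.
brawny = throughput(per_node_rate=100.0, node_count=BUDGET / 2000, alpha=0.8)
wimpy = throughput(per_node_rate=15.0, node_count=BUDGET / 250, alpha=0.8)

# With poor scaling (alpha = 0.8) the brawny cluster does more work per
# dollar even though the wimpy cluster has more aggregate raw capacity
# (64 * 15 = 960 vs. 8 * 100 = 800).
print(f"brawny: {brawny:.0f}, wimpy: {wimpy:.0f}")
```

With perfect scaling (alpha = 1.0) the comparison flips and the wimpy cluster's 960 units of aggregate capacity beat the brawny cluster's 800, which is exactly why the scaling characteristics of the workload decide the question.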
Where very low-power, low-cost servers win is:
1. Very cold storage workloads. I last posted on these workloads last year in Successfully Challenging the Server Tax. The core challenge with cold storage applications is that overall system cost is dominated by disk, but the disk needs to be attached to a server, so we have to amortize the cost of the server over the attached disk storage. The more disk we attach to a single server, the lower the cost. But the more disk we attach to a single server, the larger the failure zone; nobody wants to have to move 64 to 128TB every time a server fails. The tension is that a higher disk-to-server ratio drives down cost but magnifies the negative impact of server failures. So, if we have the choice of attaching more disks to a given server or, instead, using a smaller, cheaper server, the conclusion is clear: smaller wins. This is a wonderful example of where low-power servers are a win.
2. Workloads with good scaling characteristics and insignificant local resource requirements. Web workloads that just accept connections and dispatch them can run well on these processors. However, we still need to consider the "and insignificant local resource requirements" clause. If the workload scales perfectly but each interaction needs access to very large memories, for example, it may be a poor choice for wimpy nodes. If the workload scales with CPU and local resource needs are small, wimpy nodes are a win.
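The amortization tension in the cold storage case can be put in rough numbers. A sketch, with all prices and capacities invented for illustration:

```python
# Cold storage sketch: amortizing server cost over attached disks.
# All prices and capacities are invented for illustration.

def cost_per_tb(server_cost, disk_cost, disk_tb, disks_per_server):
    """Total cost of a server plus its disks, per TB of raw storage."""
    total = server_cost + disk_cost * disks_per_server
    return total / (disk_tb * disks_per_server)

def failure_zone_tb(disk_tb, disks_per_server):
    """Storage that must be re-replicated when one server fails."""
    return disk_tb * disks_per_server

# Option A: $3000 server, 48 x 2TB disks -> lower $/TB, 96TB failure zone.
# Option B: $3000 server, 24 x 2TB disks -> higher $/TB, 48TB failure zone.
# Option C: $500 wimpy server, 24 x 2TB disks -> low $/TB AND a small
#           failure zone, which is why smaller wins here.
a = cost_per_tb(3000, 100, 2, 48)
b = cost_per_tb(3000, 100, 2, 24)
c = cost_per_tb(500, 100, 2, 24)
print(f"A: ${a:.2f}/TB over {failure_zone_tb(2, 48)}TB")
print(f"B: ${b:.2f}/TB over {failure_zone_tb(2, 24)}TB")
print(f"C: ${c:.2f}/TB over {failure_zone_tb(2, 24)}TB")
```

With these made-up prices, the wimpy server (option C) beats even the big server with double the disks (option A) on dollars per TB, while keeping the failure zone half the size.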
The first example above is a clear win. The second is more complex: some workloads will be a win and others will not. The better the workload scales and the fewer fixed resources (disk or memory) it requires, the bigger the win.
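The fixed local resource caveat in the second example can also be sketched. Assuming, purely for illustration, a workload that scales perfectly across nodes but needs a full 32GB working set resident on every node:

```python
# Sketch of a perfectly scaling workload with a fixed per-node resource.
# All numbers are invented for illustration.

def cluster_cost(node_base_cost, ram_cost_per_gb, working_set_gb, node_count):
    """Cost of a cluster where every node must hold the full working
    set in local memory (a fixed resource replicated on each node)."""
    return node_count * (node_base_cost + ram_cost_per_gb * working_set_gb)

# 8 brawny nodes vs. 64 wimpy nodes with the same aggregate CPU
# (hypothetical): the replicated 32GB working set is paid for 8x as
# many times on the wimpy cluster, which is why it loses here even
# though the workload itself scales perfectly.
brawny = cluster_cost(node_base_cost=2000, ram_cost_per_gb=5,
                      working_set_gb=32, node_count=8)
wimpy = cluster_cost(node_base_cost=250, ram_cost_per_gb=5,
                     working_set_gb=32, node_count=64)
print(f"brawny: ${brawny}, wimpy: ${wimpy}")
```

The design point is general: any resource that must be duplicated per node, whether memory, local disk, or per-node software licensing, taxes the high-node-count wimpy configuration in proportion to its node count.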
Good job by Willis Lang, Jignesh Patel, and Srinath Shankar in showing us, with detailed analysis, where wimpy nodes lose.
b: http://blog.mvdirona.com / http://perspectives.mvdirona.com
Disclaimer: The opinions expressed here are my own and do not
necessarily represent those of current or past employers.