There is a growing gap between memory bandwidth and CPU performance, and this gap makes low-power servers both more practical and more efficient than current designs. Per-socket processor performance continues to increase much more rapidly than memory bandwidth, and this trend holds across the application spectrum, from mobile devices through clients to servers. Essentially, we are getting more compute than we have memory bandwidth to feed.
We can attempt to address this problem in two ways: 1) add more memory bandwidth, or 2) use slower, lower-power processors. The first approach will certainly be used, and Intel Nehalem is a good example, but its costs increase non-linearly, so its effectiveness will be bounded. The second approach holds great promise to reduce both cost and power consumption.
For more detail on this trend:
· The Case for Low-Cost, Low-Power Servers
· 2010 the Year of the MicroSlice Servers
· Linux/Apache on ARM Processors
· ARM Cortex-A9 SMP Design Announced
This morning GigaOm reported that SeaMicro has just obtained a $9.3M Department of Energy grant to improve data center efficiency (SeaMicro’s Secret Server Changes Computing Economics). SeaMicro is a Santa Clara-based start-up building a 512-processor server based upon the Intel Atom. Also mentioned was Smooth Stone, which is designing a high-scale server based upon ARM processors. ARM processors are incredibly power efficient, commonly used in embedded devices, and by far the most common processors used in cell phones.
Over the past year I’ve met with both Smooth Stone and SeaMicro frequently, and it’s great to see more information about both broadly available. The very low-power server trend is real and advancing quickly. When purchasing servers, it needs to be all about work done per dollar and work done per joule.
Congratulations to SeaMicro on the DoE grant.
b: http://blog.mvdirona.com / http://perspectives.mvdirona.com
The trick is going to be low-power, PHY-less networking at the chassis level. Uplinks will be Gigabit, but the backplane might be 100Mbit to reduce power. The chassis networking will have to be smarter than a simple managed switch.
If you look at the staff both companies are trying to hire, you can see that they are working toward this: embedded network engineers, SoC hardware engineers, etc.
Low-power servers sound great. The only concern I have with them is the increased networking overhead. Any thoughts about this, beyond your earlier post about networking being the last bastion of the mainframe model (which I completely agree with)?
Among the recently announced 32nm Core i7 parts, I’ve noticed a 25W TDP 2-core/2-thread 2GHz chip with ECC support. Don’t you think this one (or a similar Xeon, if it follows) might be quite attractive for micro-slice servers?