Server under 30W

Two years ago I met with the leaders of the newly formed Dell Data Center Solutions team and they explained they were going to invest deeply in R&D to meet the needs of very high scale data center solutions. Essentially Dell was going to invest in R&D for a fairly narrow market segment. “Yeah, right” was my first thought but I’ve been increasingly impressed since then. Dell is doing very good work and the announcement of Fortuna this week is worthy of mention.

Fortuna, the Dell XS11-VX8, is an innovative server design. I actually like the name as proof that the DCS team is an engineering group rather than a marketing team. What marketing team would choose XS11-VX8 as a name unless they just didn’t like the product?

The name aside, this server is excellent work. It is based on the Via Nano, and the entire server draws just over 15W at idle and just under 30W at full load. It’s a real server with 1GigE ports and full remote management via IPMI 2.0 (stick with the DCMI subset). A fully configured rack can house 252 servers requiring only 7.3kW. Nice work, DCS!
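
A quick back-of-envelope check on those rack numbers (the per-server draw below is my assumption from the “just under 30W at full load” figure):

    # Rough rack-level power check; the 29W per-server figure is an assumption
    # based on "just under 30W at full load" for the XS11-VX8.
    servers_per_rack = 252
    watts_per_server = 29

    rack_kw = servers_per_rack * watts_per_server / 1000.0
    print(f"Rack power: {rack_kw:.1f} kW")                            # ~7.3 kW
    print(f"Per server at 7.3 kW: {7300 / servers_per_rack:.1f} W")   # ~29 W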

6 min video with more data: http://www.youtube.com/watch?v=QT8wEgjwr7k.

–jrh

James Hamilton, Amazon Web Services

1200, 12th Ave. S., Seattle, WA, 98144
W:+1(425)703-9972 | C:+1(206)910-4692 | H:+1(206)201-1859 |
james@amazon.com

H:mvdirona.com | W:mvdirona.com/jrh/work | blog:http://perspectives.mvdirona.com

7 comments on “Server under 30W”
  1. When looking at component power requirements, be they processors, PHYs, or memory, it’s unusual to include the losses in upstream conversion. I wasn’t including those losses, but it’s perfectly reasonable to do so; it will make PHY power measurements sensitive to VFD efficiency at whatever draw is currently on the device. As you know, VFD efficiency can be pathetically bad at low power draws.

    I’ll buy 5W for both ports including down conversion. I misread what you wrote as 5 to 8 watts per NIC. Sorry about that. I suspect that this server can be configured not to power the second port, avoiding those losses. We would need to check for sure, but some past Dell DCS designs have supported this.

    No debate that putting a switch on the backplane would both improve power efficiency and reduce cabling costs. Perhaps I’m just easy to please but under 30W isn’t bad even if some optimizations were left on the table.

    Thanks for the additional data, Tinkthank.

    –jrh
    jrh@mvdirona.com

  2. tinkthank says:

    James – my numbers for the PHYs come from my own measurements of a server board. A popular Intel PHY consumes 2.8W at a 1Gbps link from the +5V standby rail. Intel’s datasheet says the part should only be 1.3W at 3.3V, but that does not account for conversion losses from 5V to 3.3V. Worse, the system board in question does not switch the PHYs over to the 3.3V rail after the system is powered on, meaning the PHYs always draw power from the 5V standby rail, which in my experience is always less than 70% efficient. I find all of this odd because the PHY negotiates to 100Mbps when the system board is powered off, so there’s no reason not to switch rails when it renegotiates after powering on the system.

    From that experience I conjectured that two PHYs on a typical system board consume around 5W DC, more at the wall plug. Sure, Intel claims only 1.3W at 3.3V for a single PHY, and maybe in some cases that’s achievable. It does not seem to be the case on their own board designs.
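
    As a rough sketch of how those conversion losses stack up (the 80% AC-DC supply efficiency in the sketch is an assumed placeholder, not a measurement):

        # Rough sketch: how conversion losses inflate the datasheet number.
        # The 70% and 80% efficiencies are illustrative assumptions.
        phy_datasheet_w  = 1.3      # Intel datasheet figure at 3.3V
        standby_rail_eff = 0.70     # 5V standby regulator ("less than 70%")
        ac_dc_eff        = 0.80     # assumed main AC-DC supply efficiency

        draw_at_5v_rail = phy_datasheet_w / standby_rail_eff    # ~1.9W
        draw_at_wall    = draw_at_5v_rail / ac_dc_eff           # ~2.3W
        print(f"{draw_at_5v_rail:.2f} W at the 5V rail, {draw_at_wall:.2f} W at the wall")

    Even with a sub-70% rail efficiency, the datasheet figure only reaches roughly 1.9W at the 5V rail, still well short of the 2.8W I measured, which is part of why I don’t think the 1.3W number holds up in practice.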

    My experience with Ethernet switches shows that an individual PHY on a quad or octal PHY chip draws a little over 0.75W with a 1Gbps link. This is inclusive of DC-DC conversion losses but not losses through the main AC-DC power supply. Unfortunately, Broadcom and Marvell have strict NDAs, so I can’t repeat datasheet numbers.

    I still believe my main point stands – Dell could have reduced power consumption of the XS11-VX8 at the rack level by at least 20% by including an Ethernet switch on the backplane.

    I wasn’t specifically referring to the Rackable CloudRack when speaking about the cost effectiveness of a mainstream AMD desktop board. I chose AMD because Intel is not at AMD’s price point just yet. Intel might get closer with Westmere, when the system board becomes just a processor and ICH with no MCH or IOH, but the rumor mill says Intel is charging just as much for the 5x chipsets as it did for the 4x series.

  3. James, thanks for the comments – I’m glad you like it.

    Tinkthank and Wes, we designed Fortuna (the internal codename for the XS11-VX8) for a specific web hosting environment, and although we’ve seen some good opportunities for it in a few other use cases, it is definitely not a general-purpose machine. Power and density are at a premium in the target use case, with performance needing to fall into a window the Via chip can easily hit. It was also important to have physically distinct servers rather than stacking a number of VMs on one larger machine. 64-bit support and, interestingly and perhaps non-intuitively given my last comment, VT support were also important, which excluded a few alternatives. The customer also wanted a homogeneous network, which made an integrated switch problematic or at the least would have greatly lengthened development.

    The pictures don’t show it, but each server module supports one 2.5" HDD or SSD (underneath the board) as well as an SD card slot on the back. The NICs do support iSCSI.

    We’re seeing a lot of interest in this design which is why we went public with it, but again it is not for everyone. We will be evolving the general concept for a few other use cases with different power/performance/storage requirements.

    Forrest Norrod
    VP&GM, Dell DCS

  4. Tinkthank, your 5W to 8W per NIC numbers are roughly correct for 10GigE, but the 1GigE supported by this board is not even close to 5W. Under 1W is easy on 1GigE. Networking power is noticeable but not significant in this design.

    I hear you on the cabling hassle (and cost) of hundreds of servers in a rack. There are different approaches, but the one I like is per-module mini-switches connected to the top of rack. This keeps the top-of-rack switch requirement down in the 24 to 48 port range and is often cheaper. Essentially that is the approach you were recommending as well (per-module switching).

    Direct-attached storage is supported by this board, but the design point is cheap web servers with light storage requirements, if any.

    Tinkthank, you were recommending a low-cost AMD solution (e.g., Rackable CloudRack). I agree that approach looks good and can produce very nice price/performance numbers (//perspectives.mvdirona.com/2009/01/23/MicrosliceServers.aspx). I don’t know the price of the Fortuna at this point (not publicly announced), but a large customer ordered it, so I’m presuming it competes at least reasonably on price/performance. You may be right that this one could be a step backward, but I like the design and love the overall direction of heading to very low-power, low-cost server modules.

    –jrh
    jrh@mvdirona.com

  5. tinkthank says:

    The Dell DCS team would not have put effort into this project unless a customer requested it. That said, I’m at a loss as to how this is more cost-effective or power-efficient than a cluster of more powerful machines.

    If Dell had stuck a Marvell or Broadcom 24-port gigabit switch on the backplane, this might have made a little more sense. As it is now, you’ll need a 48-port switch for every two chassis. That’s a mess of cabling and a mess of switches. The power for those switches and gigabit links isn’t free. Just eliminating the gigabit PHYs from the Via boards would probably shave 5-8 watts. They could have done that had they integrated the switch on the backplane.
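
    Rough port math behind that claim (12 servers per chassis and both 1GigE ports cabled are my assumptions, consistent with 252 servers per rack):

        # Rough cabling math; 12 servers per chassis and both 1GigE ports
        # cabled are assumptions consistent with 252 servers per rack.
        servers_per_chassis = 12
        ports_per_server    = 2
        servers_per_rack    = 252

        ports_per_two_chassis = 2 * servers_per_chassis * ports_per_server   # 48
        rack_ports            = servers_per_rack * ports_per_server          # 504
        switches_needed       = (rack_ports + 47) // 48                      # 11
        print(ports_per_two_chassis, rack_ports, switches_needed)

    That’s on the order of ten-plus 48-port switches per rack, plus all the cabling, which is the mess I’m referring to.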

    A mini-ITX or micro-ATX AMD-based solution will run circles around the Via in performance. It’s bound to cost $300+ to manufacture each Via system, based on the cost of similar 3.5" SBCs for the embedded market (Commell, Kontron, etc.). A two-core AMD on a micro-ATX board is a hair over $100, and a mini-ITX variant is around $200 with processor. I haven’t done a performance comparison against recent Via offerings, but I’d guess the dual-core AMD system will deliver 4-6x the performance at 3-4x the power of the Via solution. So the AMD solution would be half the cost (counting extra sheet metal and power supplies/distribution) with similar performance density if you pack several systems on a 1U tray.
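
    Taking midpoints of those guessed ranges just to make the comparison concrete (these are rough estimates, not benchmarks):

        # Midpoints of the ranges guessed above; illustrative, not measured.
        amd_vs_via_perf  = 5.0    # midpoint of 4-6x performance
        amd_vs_via_power = 3.5    # midpoint of 3-4x power

        perf_per_watt_gain = amd_vs_via_perf / amd_vs_via_power
        print(f"AMD perf/W relative to Via: {perf_per_watt_gain:.1f}x")   # ~1.4x

    So by these rough guesses the bigger AMD box comes out ahead on perf/W as well, not just on cost.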

    The pictures floating around on the net don’t show disk storage in the chassis, so I’m assuming the customer is going to run iSCSI or another network storage protocol off the second NIC. My guess is this will be used for hosting web content for customers: some application that doesn’t require high-performance disk I/O, or that can get by serving mostly out of RAM without hitting disk often.

    I personally think this design is a step backwards but what do I know.

  6. Wes Felter says:

    It looks like Dell overshot the mark here. Compared to a 2S Opteron/Xeon, a Fortuna 12-pack costs more, is slower, has less storage, has less RAM, lacks ECC, and is higher power. It’s not even innovative compared to RLX.

  7. Aaron deMello says:

    Awesome! Any idea what the storage is? I’m hoping there’s room for at least 1 SATA or 2 SAS.
