Low-End Server Building Blocks

Micro and low-end servers come in all shapes and sizes. Ideally, we would gather them all in our labs for a performance-per-watt comparison, taking into account the features that make management easier. The reality is that while lots of servers enter our lab, many vendors cannot easily be convinced to ship heavy and expensive server chassis containing a few tens of nodes.

In AnandTech tradition, we decided to look at the component level instead. By testing simple motherboard/CPU/RAM setups and then combining those measurements with the ones we get from a full-blown server, we can build a more complete picture. A bare motherboard/CPU/RAM setup gives us the lowest possible power numbers, while a complete server measurement tells us how much the more robust cooling of a full server chassis adds.
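As a minimal sketch of that arithmetic (the numbers below are placeholders, not measurements from this article), the chassis overhead is simply the difference between the two readings:

```python
# Placeholder numbers, purely to illustrate the subtraction; these are not
# measured values from the article.
bare_board_idle_w = 20.0    # motherboard/CPU/RAM on an open test bench
full_server_idle_w = 28.0   # the same node mounted in a complete chassis

chassis_overhead_w = full_server_idle_w - bare_board_idle_w
print(f"Chassis (fans, PSU, backplane) overhead: {chassis_overhead_w:.1f} W")
```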

ASRock's C2750D4I

The mini-ITX ASRock C2750D4I has the Atom "Avoton" C2750 SoC (2.4GHz, eight Silvermont cores) on board. If you are interested in using this board at home, Ian reviewed it in great detail. I'll focus on the server side of this board and use it to find out how well the C2750 stacks up as a server SoC.

Unlike the Xeon E3, the C2750 supports 16GB DIMMs; the dual-channel, four-DIMM-slot configuration lets you install up to 64GB. This board is clearly targeted at the NAS market, as ASRock not only made use of the six built-in SATA ports (2x SATA 6G, 4x SATA 3G) of the Atom SoC but also added a Marvell SE9172 (2x SATA 6G) and a Marvell SE9230 (4x SATA 6G) controller. Furthermore, ASRock soldered two Intel i210 gigabit chips and an AST2300 BMC onto the board. However, the 16 PCIe lanes integrated in the Atom "Avoton" only support four PCIe devices. The PCIe x8 slot already needs eight of them and the Marvell SE9230 takes another two lanes, so the ASRock board needs a PLX 8608 PCIe switch.
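A rough way to see why the switch is necessary: the 16 lanes are exposed through only four root ports, so at most four devices can attach directly to the SoC. In the sketch below the x1 widths for the smaller controllers are assumptions (typical for the i210 and the BMC), not figures from ASRock's documentation.

```python
# Rough PCIe budget for the C2750D4I. Widths marked "assumed" are typical
# x1 links, not figures taken from ASRock's block diagram.
SOC_ROOT_PORTS = 4   # Avoton exposes its 16 PCIe 2.0 lanes via four controllers
SOC_LANES = 16

devices = {
    "PCIe x8 slot":       8,  # stated above
    "Marvell SE9230":     2,  # stated above
    "Marvell SE9172":     1,  # assumed x1
    "Intel i210 (LAN1)":  1,  # assumed x1
    "Intel i210 (LAN2)":  1,  # assumed x1
    "AST2300 BMC":        1,  # assumed x1
}

print(f"devices attached: {len(devices)} vs. {SOC_ROOT_PORTS} root ports")
print(f"lanes requested:  {sum(devices.values())} vs. {SOC_LANES} lanes")
# Six devices but only four root ports: the PLX 8608 switch fans one of the
# SoC's ports out to the remaining peripherals.
```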

The end result is that the ASRock C2750 board consumes more power at idle than a simpler micro server board would. We could not get under 26W, and with four DIMMs installed 31W was needed. That is quite high, as Supermicro and several independent reviews report that the Supermicro A1SAM-2750F needs about 17W in a similar configuration.

ASUS P9D-MH

The micro-ATX ASUS P9D-MH is a feature-rich Xeon E3-based board. ASUS targets cloud computing, big data, and other network-intensive applications. The main distinguishing feature is of course the pair of 10 Gigabit SFP+ connectors driven by the Broadcom 57840S controller.

The C224 chipset provides two SATA 3G ports and four SATA 6G ports. ASUS added the LSI 2308 controller to offer eight SAS ports. The SAS drives can be configured to run in a RAID 0/1/10 setup.

The Xeon E3-1200v3 has 16 integrated PCIe 3.0 lanes. Eight of them are used by the LSI 2308 SAS controller, and the 10 Gigabit Ethernet controller gets four fast lanes to the CPU. That leaves four PCIe 3.0 lanes for a mechanical x8 PCIe slot.

The second x8 PCIe slot gets four PCIe 2.0 lanes and connects to the C224 chipset. The remaining PCIe 2.0 lanes are used by the BMC, the PCIe x1 slot, and the dual gigabit Ethernet controller.
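Tallied up, the allocation described above fills the CPU's lanes exactly. On the chipset side, the widths for the BMC and the gigabit NICs in this sketch are assumptions (x1 links are typical), not values from the ASUS manual.

```python
# Lane budget for the P9D-MH as described above.
cpu_gen3 = {
    "LSI 2308 SAS controller":        8,
    "Broadcom 57840S dual 10GbE":     4,
    "first x8 slot (x4 electrical)":  4,
}
assert sum(cpu_gen3.values()) == 16   # every Xeon E3-1200 v3 lane is spoken for

chipset_gen2 = {
    "second x8 slot (x4 electrical)": 4,   # stated above
    "BMC":                            1,   # assumed x1
    "PCIe x1 slot":                   1,
    "dual gigabit Ethernet":          2,   # assumed x1 per port
}
print(f"C224 PCIe 2.0 lanes used: {sum(chipset_gen2.values())} of 8")
```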

Of course, all these features come at a price. With the efficient Xeon E3-1230L (25W TDP) and four 8GB DIMMs, the board consumes 41W at idle.

Comments

  • JohanAnandtech - Tuesday, March 10, 2015 - link

    Thanks! It has been a long journey to get all the necessary tests done on different pieces of hardware, and it is definitely not complete, but at least we were able to quantify a lot of paper specs (25W TDP of the Xeon E3, 20W Atom, X-Gene performance, etc.).
  • enzotiger - Tuesday, March 10, 2015 - link

    SeaMicro focused on density, capacity, and bandwidth.

    How did you come to that statement? Have you ever benchmarked (or even played with) any SeaMicro server? What capacity or bandwidth are you referring to? Are you aware of their plans down the road? Did you read AMD's Q4 earnings report?

    BTW, AMD doesn't call their servers micro-servers anymore; they use the term dense server.
  • Peculiar - Tuesday, March 10, 2015 - link

    Johan, I would also like to congratulate you on a well written and thorough examination of subject matter that is not widely evaluated.

    That being said, I do have some questions concerning the performance/watt calculations. Mainly, I'm concerned as to why you are adding the idle power of the CPUs in order to obtain the "Power SoC" value. The Power Delta should take into account the difference between the load power and the idle power, and therefore you should end up with the power consumed by the CPU in isolation. I can see why you would add in the chipset power, since some of the devices are SoCs that do not require a chipset and some are not. However, I do not understand the methodology of adding the idle power back into the Delta value. It seems that you are adding the load power of the CPU to the idle power of the CPU, and that is partially why you have the conclusion that they are exceeding their TDPs (not to mention the fact that the chipset should have its own TDP separate from the CPU).

    Also, if one were to get nitpicky about the power measurements, it is unclear whether the load power measurement is peak, average, or both. I would assume that the power consumed by the CPUs may not be constant, since you state that "the website load is a very bumpy curve with very short peaks of high CPU load and lots of lows." If possible, it may be more beneficial to measure the energy consumed over the duration of the test.
  • JohanAnandtech - Wednesday, March 11, 2015 - link

    Thanks for the encouragement. Regarding your concerns about the perf/watt calculations: power delta = average power (high web load, 95th percentile, 1 s samples averaged over about 2 minutes) - idle power. Since the idle power is the total idle power of the node, it also contains the idle power of the SoC, so you must add it back to get the power of the SoC. If you still have doubts, feel free to mail me.
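A minimal sketch of that calculation follows; the function name, the throughput figure, and all numbers are placeholders, not values from the article.

```python
# Minimal sketch of the perf/watt methodology described above; all numbers
# and the soc_idle_power_w estimate are placeholders.

def soc_power_w(avg_load_power_w, node_idle_power_w, soc_idle_power_w):
    """Power attributed to the SoC under load.

    The delta (load minus node idle) strips out the whole node's idle draw,
    including the SoC's own idle share, so that share is added back.
    """
    return (avg_load_power_w - node_idle_power_w) + soc_idle_power_w

throughput_rps = 400          # sustained requests/s at this load (placeholder)
p_soc = soc_power_w(avg_load_power_w=45.0,   # 95th-percentile average under web load
                    node_idle_power_w=30.0,  # full node at idle
                    soc_idle_power_w=6.0)    # estimated SoC share of the idle power
print(f"SoC power: {p_soc:.1f} W, perf/watt: {throughput_rps / p_soc:.1f} req/s per W")
```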
  • jdvorak - Friday, March 13, 2015 - link

    The approach looks absolutely sound to me. The idle power will be drawn in any case, so it makes sense to include it in the calculation. Perhaps it would also be interesting to compare the power consumed by the different systems at the same load levels, such as 100 req/s, 200 req/s, ... (clearly, some of the higher loads will not be achievable by all of them).

    Johan, thanks a lot for this excellent, very informative article! I can imagine how much work has gone into it.
  • nafhan - Wednesday, March 11, 2015 - link

    If these had 10gbit - instead of gbit - NICs, these things could do some interesting stuff with virtual SANs. I'd feel hesitant shuttling storage data over my primary network connection without some additional speed, though.

    Looking at that Moonshot machine, for instance: 45 x 480GB SSDs is a decent sized little SAN in a box if you could share most of that storage amongst the whole Moonshot cluster.

    Anyway, with all the stuff happening in the virtual SAN space, I'm sure someone is working on that.
  • Casper42 - Wednesday, April 15, 2015 - link

    Johan, do you have a full Moonshot 1500 chassis for your testing? Or are you using a PONK?
