Conclusion: the SoCs

The proponents of high core and thread counts are quick to dismiss brawny cores for scale-out applications. On paper, wide-issue cores are indeed a bad match for such low-ILP applications. In reality, however, the high clock speeds and multi-threading of the Xeon E3s proved very powerful across a wide range of server applications.

The X-Gene 1 is the most potent ARM server SoC we have ever seen. It is unfortunate, however, that AppliedMicro's presentations created inflated expectations.

AppliedMicro insisted that the X-Gene 1 is a competitor for the powerful Haswell and Ivy Bridge cores. So how do we explain the large difference between our benchmarks and theirs? The benchmark they used is "wrk", which is very similar to ApacheBench. It hits the same page over and over again, and unless you do some serious OS and network tweaking, the server will quickly run out of ports, connections, and other OS resources. So the most likely explanation is that the Xeon measurements were taken at lower CPU load, bottlenecked by the exhaustion of network and OS resources rather than by the processor.
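To illustrate the kind of tweaking involved: the commands below are a generic sketch of a single-URL "wrk" run and the usual ephemeral-port tuning, not the exact settings used by AppliedMicro or in our lab.

```shell
# Widen the ephemeral port range and release TIME_WAIT sockets faster;
# without this, a single-URL load test exhausts local ports long before
# the CPU is saturated. (Values are illustrative.)
sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"
sudo sysctl -w net.ipv4.tcp_tw_reuse=1
sudo sysctl -w net.ipv4.tcp_fin_timeout=15

# wrk hammers one URL with many concurrent connections:
# 8 threads, 400 connections, for 60 seconds.
wrk -t8 -c400 -d60s http://testserver.example/index.html
```

Without the sysctl tweaks, the benchmark numbers mostly reflect how quickly the OS cycles through connections, not how fast the CPU serves pages.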

This is in sharp contrast with our Drupal test, where we test with several different user patterns and thus different requests. Each request is much heavier, and as a result available connections/ports are not the bottleneck. Also, all CPUs were in the 90-98% CPU load range when we compared the maximum throughput numbers.

The 40nm X-Gene can compete with the 22nm Atom C2000 performance-wise, and that is definitely an accomplishment on its own. But the 40nm process technology and the current "untuned" state of ARMv8 software do not allow it to compete in performance/watt. The biggest advantage of the first 64-bit ARM SoCs is that an ARM processor can now drive eight DIMM slots and address far more RAM. Better software support (compilers, etc.) and the 28nm X-Gene 2 SoC will be necessary for AppliedMicro to compete with the Intel Xeon in performance/watt.

The Atom C2750's raw performance fails to impress us in most of our server applications. Then again, we were pleasantly surprised that its power consumption stays below the official TDP. Still, in most server applications, a low voltage Xeon E3 outperforms it by a large margin.

And then there's the real star, the Xeon E3-1230L v3. It does not live up to the promise of staying below 25W, but the performance surpassed our expectations. The result is – even if you take into account the extra power for the chipset – an amazing performance/watt ratio. The end conclusion is that the introduction of the Xeon-D, which is basically an improved Xeon E3 with integrated PCH, will make it very hard for any competitor to beat the higher-end (20-40W TDP) Intel SoCs in performance/watt in a wide range of "scale-out" applications.

Conclusion: the Servers

As our tests with motherboards have shown, building an excellent micro or scale-out server requires much more thought than placing a low power SoC in a rack server. It is very easy to completely negate the power savings of such an SoC if the rest of the server (motherboard, fans, etc.) is not built for efficiency.

The Supermicro MicroCloud server is about low acquisition costs and simplicity. In our experience, it is less efficient with low power Xeons, as the cooling tends to consume proportionally more power (7-12W per node with eight nodes installed). The cooling system and power supplies are built to work with high performance Xeon E3 processors.

HP limits the most power efficient SoCs (such as the Atom C2730) to cartridges that are very energy efficient but also come with hardware limitations (16GB max. RAM, etc.). HP made the right choice, as it is the only way to turn the advantages of low power SoCs into real-world energy efficiency, but it means the low power SoC cartridges may not be ideal for many situations. You will have to monitor your application carefully and think hard about what you need and what you don't in order to build an efficient datacenter.

Of the products tested so far, the HP Moonshot impresses the most. Its cleverly designed cartridges use very little power, and the chassis allows you to choose the right server nodes to host your application. A few application tests were missing in this review, namely the web caching (memcached) and web front-end tests, but based on our experience we are willing to believe that the m300/m350 cartridges are well suited to those use cases.

Still, we would like to see a Xeon E3 low voltage cartridge for a "full web infrastructure" (front- and back-end) solution. That is probably going to be solved once HP introduces a Xeon-D based cartridge. Once that is a reality, you can really "right-size" the Moonshot nodes to your needs. But even now, the HP Moonshot chassis offers great flexibility and efficiency. The flexibility does tend to cost more than other potential solutions – we have yet to find out the exact pricing details – but never before was it so easy to adapt your tier one OEM server hardware so well to your software.

The War of the SoCs: Performance/Watt
Comments

  • JohanAnandtech - Tuesday, March 10, 2015 - link

Thanks! It has been a long journey to get all the necessary tests done on different pieces of hardware and it is definitely not complete, but at least we were able to quantify a lot of paper specs. (25W TDP of Xeon E3, 20W Atom, X-Gene performance, etc.)
  • enzotiger - Tuesday, March 10, 2015 - link

    SeaMicro focused on density, capacity, and bandwidth.

How did you come to that statement? Have you ever benchmarked (or even played with) any SeaMicro server? What capacity or bandwidth are you referring to? Are you aware of their plans down the road? Did you read AMD's Q4 earnings report?

BTW, AMD doesn't call their servers micro-servers anymore; they use the term "dense server".
  • Peculiar - Tuesday, March 10, 2015 - link

    Johan, I would also like to congratulate you on a well written and thorough examination of subject matter that is not widely evaluated.

That being said, I do have some questions concerning the performance/watt calculations. Mainly, I'm concerned as to why you are adding the idle power of the CPUs in order to obtain the "Power SoC" value. The Power Delta should take into account the difference between the load power and the idle power, and therefore you should end up with the power consumed by the CPU in isolation. I can see why you would add in the chipset power, since some of the devices are SoCs that do not require a chipset and some are not. However, I do not understand the methodology of adding the idle power back into the Delta value. It seems that you are adding the load power of the CPU to the idle power of the CPU, and that is partially why you have the conclusion that they are exceeding their TDPs (not to mention the fact that the chipset should have its own TDP separate from the CPU).

    Also, if one were to get nit picky on the power measurements, it is unclear if the load power measurement is peak, average, or both. I would assume that the power consumed by the CPUs may not be constant since you state that "the website load is a very bumpy curve with very short peaks of high CPU load and lots of lows." If possible, it may be more beneficial to measure the energy consumed over the duration of the test.
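The energy-over-time measurement suggested in this comment could be sketched as a numerical integration of sampled power; the sample data below is made up purely for illustration.

```python
# Integrate sampled power (watts) over time to get energy (joules)
# using the trapezoidal rule. Samples are (seconds, watts) pairs and
# are illustrative, not measurements from the article.
samples = [(0.0, 40.0), (1.0, 95.0), (2.0, 60.0), (3.0, 90.0)]

energy_j = sum(
    (t1 - t0) * (p0 + p1) / 2.0  # area of one trapezoid
    for (t0, p0), (t1, p1) in zip(samples, samples[1:])
)
# energy_j is in joules; dividing by total requests served during the
# run would give joules per request, a load-shape-independent metric.
```

This captures "bumpy" load curves better than a single average power figure.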
  • JohanAnandtech - Wednesday, March 11, 2015 - link

Thanks for the encouragement. About your concerns regarding the perf/watt calculations: Power Delta = average power (high web load, measured at the 95th percentile with 1 s samples, averaged over about 2 minutes) - idle power. Since the idle power is the total idle power of the node, it also contains the idle power of the SoC, so you must add it back to get the power of the SoC. If you still have doubts, feel free to mail me.
  • jdvorak - Friday, March 13, 2015 - link

The approach looks absolutely sound to me. The idle power will be drawn in any case, so it makes sense to add it in the calculation. Perhaps it would also be interesting to compare the power consumed by the different systems at the same load levels, such as 100 req/s, 200 req/s, ... (clearly, some higher loads will not be achievable by all of them).

    Johan, thanks a lot for this excellent, very informative article! I can imagine how much work has gone into it.
  • nafhan - Wednesday, March 11, 2015 - link

    If these had 10gbit - instead of gbit - NICs, these things could do some interesting stuff with virtual SANs. I'd feel hesitant shuttling storage data over my primary network connection without some additional speed, though.

Looking at that Moonshot machine, for instance: 45 x 480 SSDs is a decent sized little SAN in a box if you could share most of that storage amongst the whole Moonshot cluster.

    Anyway, with all the stuff happening in the virtual SAN space, I'm sure someone is working on that.
  • Casper42 - Wednesday, April 15, 2015 - link

    Johan, do you have a full Moonshot 1500 chassis for your testing? Or are you using a PONK?
