Limitations of This Review

Before we get to our closing thoughts, I want to talk a bit about what we were and weren't able to do in our testing, and how this limits our review. The still somewhat rough-around-the-edges software ecosystem gave us a lot of headaches. Our setup was Ubuntu 14.04 with a lot of customized software (Cavium's beta GCC compiler, a ThunderX-specific build of MySQL 5.6) and newer components (Linux kernel 4.2). The result was that we were slowed down by a large number of solvable, but time-consuming, software and configuration problems.

Don't get me wrong: Linux on ARM has come a long way. Most software works; the exception was Spark 1.5 (Java based), which crashed with a very low-level error message. Still, it took quite a bit of time to get software configured correctly, while on x86 it is often simply a matter of running "apt-get install".

Cavium told us that they expect these problems to be solved in Ubuntu 16.04, and a firmware upgrade to support the new Ubuntu is on its way. That is also why we were not able to run our most relevant real-world benchmarks, such as our webserver tests. Web caching, light web serving, and Content Delivery Networking might fit the ThunderX well, as it has access to more memory and networking bandwidth than the Xeon D.

But webservers require a lot of software components (caching layer, webserver, interpreter, etc.) that have to work together well. So we will wait for 16.04 and the new firmware to arrive before building these complex tests.

Closing Thoughts

Exploring an ambitious new server SoC is always an interesting journey, but also a very time-intensive one. We had to find out what the real performance profile of the ThunderX is, not the perceived or advertised one. There have been both good and bad surprises along the way.

Let's start with the bad: the blocking L1 cache, high-latency L2 cache, and high-latency DRAM access limit how well the 48 cores scale across a whole range of software. The ThunderX could still be a good candidate as a front-end webserver, but Cavium has to solve the high power usage at low loads. Power management seems to be more problematic than single-threaded performance. Cavium promises that the ThunderX2 will vastly improve power management by letting hardware handle much more of the management duties, but the current ThunderX power management capabilities seem to be rather basic, and Cavium's information is sketchy at best.
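To make the latency point more concrete, below is a minimal pointer-chasing sketch of the kind typically used to measure dependent-load latency. This is purely illustrative (the buffer size and step count are arbitrary assumptions, and it is not the tool we used in this review): because every load depends on the previous one, a core spends most of its time waiting on the L2 or DRAM round trip, which is exactly why high memory latency erodes the scalability of pointer-heavy server code on the ThunderX's simple cores.

```c
/*
 * Illustrative pointer-chasing latency sketch (not the benchmark used in
 * this review). Buffer size and step count are arbitrary assumptions.
 */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ELEMS (64UL * 1024 * 1024 / sizeof(void *)) /* ~64 MB: well past L2, into DRAM */
#define STEPS (50 * 1000 * 1000L)

int main(void)
{
    void **buf = malloc(ELEMS * sizeof(void *));
    size_t *idx = malloc(ELEMS * sizeof(size_t));
    size_t i;

    /* Build a random cyclic permutation so the prefetchers cannot guess
     * the next address: buf[idx[i]] points at buf[idx[i+1]]. */
    for (i = 0; i < ELEMS; i++)
        idx[i] = i;
    srand(42);
    for (i = ELEMS - 1; i > 0; i--) {            /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (i = 0; i < ELEMS; i++)
        buf[idx[i]] = &buf[idx[(i + 1) % ELEMS]];

    /* Chase the chain: each load's address comes from the previous load,
     * so the loop is limited purely by load-to-use latency. */
    struct timespec t0, t1;
    void **p = (void **)buf[idx[0]];
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long n = 0; n < STEPS; n++)
        p = (void **)*p;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("average dependent-load latency: %.1f ns (sink: %p)\n", ns / STEPS, (void *)p);
    free(idx);
    free(buf);
    return 0;
}
```

Shrinking the buffer so it fits in L1 or L2 isolates each cache level; on a design with a blocking L1 and a slow L2, the numbers climb quickly, and extra cores do not hide that.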

The Xeon D, by comparison, offers superior performance per watt: twice as good as the ThunderX. It is clear that the ThunderX is not a good match for heavy database servers, nor for enterprise workloads where energy consumption at low load is a high priority.

The good: the 12- and 16-core Xeon Ds ($1300-$1800) and midrange E5s ($900-$2000, not including 10 GbE NICs) cost quite a bit more than the ThunderX ($800). Gigabyte could not tell us how much the R120-T30 would cost, only that the pricing would be "competitive with similar midrange Xeon E5 servers".

Although single-threaded performance is low, it is definitely not as terrible as commonly assumed. We found that the raw integer computing power of the ThunderX is about one-third that of the best Xeon Ds, not the one-fifth claimed in advertising materials (a difference of roughly 65%). The ThunderX core is almost as good as the A57, while it consumes quite a bit less power and thus offers better performance per watt than the latter. On the condition that you give it enough DRAM, the 48 cores are able to offer the Java performance of a midrange Xeon E5, even when we require the system to stay below a certain response time.
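To spell out that arithmetic (derived purely from the two ratios above): measuring one-third instead of one-fifth means

\[ \frac{1/3}{1/5} = \frac{5}{3} \approx 1.67, \]

i.e. the ThunderX delivers roughly 65-67% more per-core integer performance than the one-fifth figure would imply.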

Those facts – better-than-expected single-threaded performance and midrange Xeon E5 throughput – make a world of difference. And we have yet to test the ThunderX in the situations it was really built for: network-intensive ones. The XML processing and h264 subtests (SPEC CPU) indicate that there is definitely some truth to Cavium's claims that the ThunderX shines as the compute engine of a load balancer, web cache, or CDN server.

As single-threaded performance (SPEC CPU) is not as dramatically low as commonly assumed and Java performance is already very close to what was promised, the forthcoming ThunderX2 appears in a different light.

Using GlobalFoundries' new 14 nm FinFET technology, Cavium claims the ThunderX2 will reach 2.6-3.0 GHz and gain a lot of microarchitectural improvements. This should result in twice as much performance per core. If the ThunderX2 can indeed offer single-threaded performance at the level of the low-end Xeon D (1.3-1.5 GHz), that would place it among the "SoCs with acceptable single-threaded performance" and open up new markets. Time will tell, but Cavium has proven it can deliver SoCs in a timely manner. And the fact that Cavium allows independent testing instead of just showing numbers in PowerPoint presentations tells us that they have confidence in the future of their ARMv8 endeavors.
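As a rough sanity check on that "twice the per-core performance" claim (our own back-of-the-envelope estimate, assuming per-core performance scales with clock frequency times IPC and taking the 2.0 GHz of the chip we tested as the baseline):

\[ \frac{2.6\ \text{GHz}}{2.0\ \text{GHz}} \approx 1.3\times, \qquad \frac{3.0\ \text{GHz}}{2.0\ \text{GHz}} = 1.5\times, \qquad \frac{2\times}{1.3\text{--}1.5\times} \approx 1.33\text{--}1.54\times \]

In other words, the clock bump covers a 1.3-1.5x gain, and the microarchitectural improvements would have to contribute roughly as much again on their own. Ambitious, but not outlandish.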

Comments (82)

  • silverblue - Thursday, June 16, 2016 - link

    I'm not sure how this is relevant. Johan doesn't review graphics cards, other people at Anandtech do. I bet Guru3D has a much bigger team for that, and I imagine that they have a much narrower scope (i.e. no server stuff).

    I don't think I've looked at a review recently that hasn't had the comments section polluted with "where is the review for x".
  • UrQuan3 - Wednesday, June 15, 2016 - link

    Intel allows their Xeons to sometimes pull double their TDP? No wonder our new machines trip breakers long before I thought they would. I need to test instead of assuming accurate documentation.

    I can see why you chose C-Ray; I'm just sorry a more general ray tracer was not chosen. Still, this is not its intended market, though I am suddenly very interested. Ray tracing and video encoding are my top two tasks.
  • Meteor2 - Thursday, June 16, 2016 - link

    The 'T' in 'TDP' is for thermal. It's a measure of the maximum waste heat which needs to be removed over a certain period of time.
  • UrQuan3 - Wednesday, June 22, 2016 - link

    Yes, it stands for thermal, but the power consumed doesn't just disappear. Convert it to light, convert it to motion, convert it to heat, etc. In this case there is a small amount of motion (electrons) and the rest has to be heat. I would expect much higher instantaneous pulls, but this was sustained power. Anyway, I will track down the AVX documentation mentioned below.

    I saw the h264ref. I'll be curious about x264 (Handbrake), as the authors seem to have taken an interest in ARM over the last few years. Unsurprisingly, it is far less optimized there than on x64. I benchmarked Handbrake on the Pi 2, Pandaboard, and CI-20 last year, just to see what it would do.
  • JohanAnandtech - Thursday, June 16, 2016 - link

    C-Ray was just a placeholder to measure FPU energy consumption. I'll look into bringing a more potent raytracer (povray) into our benchmark suite.

    Video encoding was in the review though, somewhat (h264ref).
  • patrickjp93 - Friday, June 17, 2016 - link

    ARM chips with vector extensions allow it as well. Intel provides separate documentation for AVX-workload TDPs.
  • Antony Newman - Wednesday, June 15, 2016 - link

    Fascinating article.

    Why would Cavium not try and use 54 x A73s in their next chip?

    If ARM are not in the business of making silicon, and ARM think the '1.2W Ares' will help them break into the server market, then why do we think ARM isn't working with the likes of Cavium to get a server SoC that rocks the Intel boat?

    Typos from memory: send -> sent. Through -> thought. There were a few others.

    AJ
  • name99 - Thursday, June 16, 2016 - link

    How do you know ARM aren't working with such a vendor?
    ARM has always said that they expect ARM server CPUs to only be marginally competitive (for very limited situations) in 2017, and to only be really competitive in 2020.

    That suggests, among other things, that if they are working with partners, they have a target launch between those two dates, and that they regard all launches before 2017 as nice for PR and for building up the ecosystem, but essentially irrelevant for commercial purposes.
  • rahvin - Thursday, June 16, 2016 - link

    The problem, as pointed out early in this article, is that ARM vendors keep targeting Intel's current products, not the ones that will be out when they get their own products to market. We've had almost a dozen vendors get to the point of releasing a chip and then drop it because it is simply not competitive with Intel. Most of these ARM products were undertaken when Intel was targeting performance without regard to performance/watt. Now that Intel targets the latter metric, ARM server chips haven't been competitive with them.

    Fact is, Intel could decimate and totally take over all the markets ARM chips occupy, but to do it they'd have to cannibalize their existing high-profit sales. This is why they keep canceling Atom chips: the chips turned out so good they were worried they'd cannibalize much more expensive products. This is the reason Avoton is highly restricted in the products and price segments it's allowed into. If Intel opened the floodgates on Avoton, they would risk cannibalizing their own server profits.
  • junky77 - Wednesday, June 15, 2016 - link

    So, they did what AMD couldn't do for years? I'm trying to figure it out... their offering seems to be a lot more interesting than AMD's current stuff.
