Limitations of This Review

Before we get to our closing thoughts, I want to talk a bit about what we were and weren't able to do in our testing, and the ways this limits our review. The still somewhat "rough around the edges" software ecosystem gave us a lot of headaches. Our setup was Ubuntu 14.04 with a lot of customized software (Cavium's beta GCC compiler, a ThunderX-specific build of MySQL 5.6) and newer software (Linux kernel 4.2). The result was that we were slowed down by a large number of solvable, but time-consuming, software and configuration problems.

Don't get me wrong: Linux on ARM has come a long way. Most software works; the exception was Spark 1.5 (Java-based), which crashed with a very low-level error message. Still, it took quite a bit of time to get software configured correctly, while on x86 it is simply a matter of running "apt-get install", as sketched below.
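To make that concrete, here is roughly what the x86 path looks like; the package names below are illustrative Ubuntu 14.04 packages, not our exact benchmark configuration:

    # On x86 Ubuntu 14.04, the stack comes straight from the standard repositories:
    sudo apt-get update
    sudo apt-get install mysql-server-5.6 openjdk-7-jdk

    # On the ThunderX we had to do the equivalent by hand: install Cavium's
    # beta gcc, deploy the ThunderX-specific MySQL 5.6 build, and so on.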

Cavium told us that they expect these problems to be solved in Ubuntu 16.04, and a firmware upgrade to support the new Ubuntu is on its way. That is also why we were not able to run our most relevant real-world benchmarks, such as our webserver tests. Web caching, light web serving, and Content Delivery Networking might fit the ThunderX well, as it has access to more memory and networking bandwidth than the Xeon D.

But webservers require a lot of software components (caching layer, webserver, interpreter, etc.) that have to work together well; a typical stack is sketched below. So we will wait for 16.04 and the new firmware to arrive before building these complex tests.
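As a rough sketch of why this is harder than a single benchmark binary, consider a typical stack; the components named here (nginx, PHP-FPM, memcached) are illustrative assumptions, not our final test configuration:

    # Every layer below has to build, install, and cooperate on ARMv8
    # before a webserver benchmark produces meaningful numbers:
    sudo apt-get install nginx        # webserver
    sudo apt-get install php5-fpm     # interpreter (PHP over FastCGI)
    sudo apt-get install memcached    # caching layer
    sudo service nginx start          # then wire the three together in nginx.conf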

Closing Thoughts

Exploring an ambitious new server SoC is always an interesting journey, but also a very time-intensive one. We had to find out what the real performance profile of the ThunderX is, not the perceived or advertised one. There have been both good and bad surprises along the way.

Let's start with the bad: the blocking L1 cache, high-latency L2 cache, and high-latency DRAM access lower the scalability of the 48 cores in a whole range of software. The ThunderX could still be a good candidate as a front-end webserver, but Cavium has to solve the high power usage at low loads. Power management seems to be more problematic than single-threaded performance. Cavium promises that the ThunderX2 will vastly improve power management by letting hardware handle much more of the management duties, but the current ThunderX power management capabilities seem rather basic, and Cavium's information is sketchy at best.

The Xeon D, by comparison, offers superior performance per watt: twice as good as the ThunderX. It is clear that the ThunderX is not a good match for heavy database servers, nor for enterprise workloads where energy consumption at low load is a high priority.

Now for the good: the 12- and 16-core Xeon Ds ($1300-$1800) and midrange E5s ($900-$2000, not including 10 GbE NICs) cost quite a bit more than the ThunderX ($800). Gigabyte could not tell us how much the R120-T30 would cost, only that the pricing would be "competitive with similar midrange Xeon E5 servers".

Although single-threaded performance is low, it is definitely not as terrible as commonly assumed. We found that the raw integer computing power of the ThunderX is about one-third that of the best Xeon Ds, not one-fifth as claimed in some marketing materials (one-third is roughly 65% more than one-fifth). The ThunderX core is almost as good as the A57, while it consumes quite a bit less power and thus offers better performance per watt than the latter. On the condition that you give it enough DRAM, the 48 cores are able to offer the Java performance of a midrange Xeon E5, even when we demand that the system stay below a certain response-time limit.

Those facts – better-than-expected single-threaded performance and midrange Xeon E5 throughput – make a world of difference. And we have yet to test the ThunderX in the situations it was really built for: network-intensive ones. The XML processing and h264 subtests (SPEC CPU) indicate that there is definitely some truth in Cavium's claims that the ThunderX shines as the compute engine of a load balancer, web cache, or CDN server.

As single-threaded performance (SPEC CPU) is not as poor as commonly assumed, and Java performance is already very close to what was promised, this puts the forthcoming ThunderX2 in a different light.

Built on GlobalFoundries' new 14 nm FinFET technology, the ThunderX2 is claimed to reach 2.6-3.0 GHz and to get a lot of microarchitectural improvements, which should result in twice as much performance per core. If the ThunderX2 can indeed offer single-threaded performance at the level of the low-end Xeon D (1.3-1.5 GHz), that would place it among the "SoCs with acceptable single-threaded performance" and open up new markets. Time will tell, but Cavium has proven it can deliver SoCs in a timely manner. And the fact that Cavium allows independent testing, instead of just showing numbers in PowerPoint presentations, tells us that they have confidence in the future of their ARMv8 endeavors.

Comments

  • TheinsanegamerN - Thursday, June 16, 2016 - link

    While you are right on the actual age of the chip, if you don't compare efficiency on different nodes, how on earth would you know if you made any progress?

    Unless you are suggesting that one should never compare one generation of chips to another, which is simply ludicrous. Where is this "you can't compare two different nodes" mindset coming from? I've seen it in the GPU forums as well, and it makes no sense.
  • shelbystripes - Wednesday, June 15, 2016 - link

    The E5-2600 v3 is a Haswell part, meaning it's Intel's second ("tock") core design on 22nm. So not only is this a smaller process, it's a second-gen optimization on a smaller process.

    For a first-gen 28nm part that includes power-hungry features like multiple 10GbE, these are some very promising initial results. A 14nm die shrink should create some real improvements off the bat in terms of performance per watt, and further optimizations from there should make this thing really shine.

    Given that Intel hasn't cracked 10nm at all yet, and it'll take a while for 10nm Xeons to show up once they do, Cavium has room to play catch-up. I mean, hell, they're keeping up with or surpassing the Xeon D in some use cases NOW, and that's a 14nm part. What Cavium needs most is power optimization at this point, and I'm sure they'll get there in time.
  • Michael Bay - Thursday, June 16, 2016 - link

    Good to know Intel is keeping you up to date with what's happening in their UV labs.
  • rahvin - Thursday, June 16, 2016 - link

    Last I saw, Intel is already running their test fabs at 10nm. Once they perfect it in the test fabs, it only takes them about 6 months to roll it into a full-scale fab. Maybe you can point to this source that indicates Intel has failed at 10nm.
  • kgardas - Wednesday, June 15, 2016 - link

    Nice article, but really looking forward to seeing tests of the ThunderX2 and X-Gene 3. Will be interesting, as Intel seems to be kind of struggling with single-threaded performance recently...
  • Drazick - Wednesday, June 15, 2016 - link

    Just a question.
    You emphasized that the performance gap is 3x instead of 5x, but I bet Intel used Intel ICC for those tests.

    Intel works hard on their compilers, and anyone who wants to extract the best from Intel CPUs uses them as well.

    Since CPU performance depends on the compiler, if Intel has an advantage in that department you should show that as well.

    Namely give us some results using Intel ICC.

    Thank You.
  • UrQuan3 - Wednesday, June 15, 2016 - link

    Of course, if Anandtech uses ICC, they should use better flags in gcc for ARM/ThunderX as well (core-specific flags, NEON, etc.) – see the sketch below. Both ICC and targeted flags give improvements, often large ones. This was a generic test.
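    For instance, a sketch of what targeted flags could look like with a sufficiently recent aarch64 gcc (the source file name is a placeholder, and -mcpu=thunderx availability depends on the gcc version):

        # Generic build, the portability-first baseline:
        gcc -O2 bench.c -o bench_generic

        # Tuned build: optimize and schedule for the ThunderX core. On aarch64,
        # NEON (Advanced SIMD) is part of the base ISA, so the vectorizer can
        # use it at -O3 without a separate -mfpu switch.
        gcc -O3 -mcpu=thunderx bench.c -o bench_tuned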
  • JohanAnandtech - Thursday, June 16, 2016 - link

    For integer workloads, ICC is not that much faster than gcc (see Andreas Stiller's work). And there is the fact that ICC requires licensing and other time-consuming stuff. From a Linux developer/administrator perspective, it is much easier to just use gcc: you simply install it from the repositories, with no licensing headaches and very decent performance (about 90% of ICC). So the vast majority of **NON-HPC** software is compiled with gcc. Our added value is that we show how the processors compare with the most popular compiler on Linux. That is the big difference between benchmarking to put a CPU in the best light and benchmarking to show what most people will probably experience.

    Until Intel makes ICC part of the typical linux ecosystem, it is not an advantage at all in most non-HPC software.
  • patrickjp93 - Friday, June 17, 2016 - link

    His work is woefully incomplete, lacking any analysis of vectorized integer workloads, where Intel destroys GCC to the tune of a 40% lead.
  • phoenix_rizzen - Wednesday, June 15, 2016 - link

    "The one disadvantage of all Supermicro boards remains their Java-based remote management system. It is a hassle to get it working securely (Java security is a user unfriendly mess), and it lacks some features like booting into the BIOS configuration system, which saves time."

    It's IPMI; you can use any IPMI client to connect to it. Once you give it an IP and password in the BIOS, you can connect to it using your IPMI client of choice. There's also a web interface that provides most of the features of their Java client (I think that uses Java as well, but just for the console).

    For our SuperMicro servers, I just use ipmitool from my Linux station and have full access to the console over the network, including booting into the BIOS, managing the power states, and even connecting to the serial console over the network (see the example below).

    Not sure why you'd consider a full IPMI 2.0 implementation a downside just because the default client sucks.
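    To illustrate the workflow described above, here is a minimal sketch using stock ipmitool commands (the host and credentials are placeholders; exact feature support depends on the BMC firmware):

        # Query and control the power state over the network (lanplus = IPMI 2.0):
        ipmitool -I lanplus -H bmc.example.com -U admin -P secret chassis power status
        ipmitool -I lanplus -H bmc.example.com -U admin -P secret chassis power cycle

        # Make the next boot drop into the BIOS setup screen:
        ipmitool -I lanplus -H bmc.example.com -U admin -P secret chassis bootdev bios

        # Attach to the serial console via Serial-over-LAN:
        ipmitool -I lanplus -H bmc.example.com -U admin -P secret sol activate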
