Assessing Cavium's ThunderX2: The Arm Server Dream Realized At Last
by Johan De Gelas on May 23, 2018 9:00 AM EST
SPEC CPU2006 Cont: Per-Core Performance w/SMT
Moving beyond single-threaded performance, multi-threaded performance within the confines of a single core is of course also important. The Vulcan CPU architecture was designed from the start to leverage SMT4 to keep its cores occupied and boost their overall throughput, so this is where we'll look next.
SPEC CPU2006: Single Core w/SMT (Integer Subtests)

| Subtest | Application Type | ThunderX @2 GHz (gcc 5.2, 1 thread) | ThunderX2 @2.5 GHz (gcc 7.2, 4 threads) | Xeon 8176 @3.8 GHz (gcc 7.2, 2 threads) | ThunderX2 vs Xeon 8176 | ThunderX2 vs ThunderX |
|---|---|---|---|---|---|---|
| 400.perlbench | Spam filter | 8.3 | 24.1 | 50.6 | 48% | 290% |
| 401.bzip2 | Compression | 6.5 | 22.9 | 31.9 | 72% | 350% |
| 403.gcc | Compiling | 10.8 | 35.0 | 38.1 | 92% | 330% |
| 429.mcf | Vehicle scheduling | 10.2 | 52.4 | 50.6 | 104% | 510% |
| 445.gobmk | Game AI | 9.2 | 25.1 | 35.6 | 71% | 270% |
| 456.hmmer | Protein seq. analyses | 4.8 | 26.7 | 41.0 | 65% | 560% |
| 458.sjeng | Chess | 8.8 | 22.4 | 37.1 | 60% | 250% |
| 462.libquantum | Quantum sim | 5.8 | 83.6 | 83.2 | 100% | 1440% |
| 464.h264ref | Video encoding | 11.9 | 34.0 | 66.8 | 51% | 290% |
| 471.omnetpp | Network sim | 7.3 | 31.1 | 41.1 | 76% | 440% |
| 473.astar | Pathfinding | 7.9 | 27.2 | 33.8 | 80% | 340% |
| 483.xalancbmk | XML processing | 8.4 | 33.8 | 75.3 | 45% | 400% |
First of all, the ThunderX2 core is a massive improvement over the simple ThunderX core. Even excluding libquantum – that benchmark could easily run 3 times faster on the older ThunderX core after some optimization and compiler improvements – the new ThunderX2 is no less than 3.7 times faster than its older brother. This kind of IPC advantage makes the original ThunderX's 50% core count advantage all but irrelevant.
Looking at the impact of SMT, on average we see that 4-way SMT improves the ThunderX2's performance by 32%. This ranges from 8% for video encoding to 74% for pathfinding. Intel meanwhile gets an 18% boost from their 2-way SMT, ranging from 4% to 37% in the same respective scenarios.
Overall, a boost of 32% for the ThunderX2 is decent. But it does invite an obvious comparison: how does it fare relative to another SMT4 architecture? Looking at IBM's POWER8, which also supports SMT4, at first glance there seems to be some room for improvement, as the POWER8 sees a 76% boost in the same scenario.
However this isn't entirely an apples-to-apples comparison, as the IBM chip had a much wider back-end: it could issue 10 instructions while the ThunderX2 core is limited to 6 instructions per cycle. The POWER8 core was also much more power hungry: it could fit only 10 of those ultra-wide cores inside a 190W power budget on a 22 nm process. In other words, further increasing the performance gains from using SMT4 would likely require even wider cores, and in turn seriously impact the total number of cores available inside the ThunderX2. Still, it is interesting to put that 32% number into perspective.
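As a sanity check, the "ThunderX2 vs Xeon 8176" column in the table above is simply the ratio of the two SPEC scores, rounded to the nearest percent. A minimal sketch reproducing that column from the published per-benchmark scores (the numbers are taken from the table; the dict layout is our own):

```python
# Recompute the "ThunderX2 vs Xeon 8176" column from the per-benchmark
# SPEC CPU2006 scores in the table above (higher is better).
scores = {
    # benchmark: (ThunderX2 @2.5 GHz, 4 threads; Xeon 8176 @3.8 GHz, 2 threads)
    "400.perlbench":  (24.1, 50.6),
    "401.bzip2":      (22.9, 31.9),
    "403.gcc":        (35.0, 38.1),
    "429.mcf":        (52.4, 50.6),
    "445.gobmk":      (25.1, 35.6),
    "456.hmmer":      (26.7, 41.0),
    "458.sjeng":      (22.4, 37.1),
    "462.libquantum": (83.6, 83.2),
    "464.h264ref":    (34.0, 66.8),
    "471.omnetpp":    (31.1, 41.1),
    "473.astar":      (27.2, 33.8),
    "483.xalancbmk":  (33.8, 75.3),
}

for name, (tx2, xeon) in scores.items():
    # e.g. 400.perlbench: 24.1 / 50.6 -> 48%
    print(f"{name:16s} {round(tx2 / xeon * 100)}%")
```

The same division against the ThunderX column yields the "ThunderX2 vs ThunderX" figures (e.g. 24.1 / 8.3 rounds to 290%).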
Comments
name99 - Thursday, May 24, 2018 - link
For crying out loud! At the very least, if you want to pursue this obsession regarding vectors, look at ARM's SVE (Scalable Vector Extension). THAT is where ARM is headed in the vector space.
Fujitsu is implementing these for the cores of its next HPC machines, and they will likely roll out into other ARM cores (maybe Apple first? but who can be sure?) over the next few years.
To the extent that Cavium has any interest in competing in HPC, if/when they choose to do so it will be on the basis of an SVE implementation, not on the basis of NEON.
Meanwhile ARMv8 NEON is very much the equivalent of SSE. Not AVX, no, but SSE (in all its versions) yes.
tuxRoller - Thursday, May 24, 2018 - link
Nice comment. BTW, Centriq (RIP) only supports(ed) AArch64. I've no idea how much die space that saved, though.
Wilco1 - Thursday, May 24, 2018 - link
There is the Cortex-A35, the smallest AArch64 core so far with FP and Neon. However, there are still big differences between RISC and CISC. For example, it's not feasible for CISC to get anywhere near the same size/perf/power. The mobile Atom debacle has clearly shown it's not feasible to match small and efficient RISCs even with a better process and many billions of dollars...
peevee - Thursday, May 24, 2018 - link
It is not 8.2.
lmcd - Wednesday, January 23, 2019 - link
Necro but worth it for historic reasons: the A35 is AArch32 but ARMv8.
ZolaIII - Thursday, May 24, 2018 - link
It would take them the same effort. AVX is a SIMD FP extension to the base architectural instruction set, the same as NEON et cetera. The strict difference between CISC and RISC architectures is long gone; today's designs are combined, and furthermore implement SIMD units and more and more DSP components such as MACs. The pipeline only starts on the base integer instruction set (where, by the way, ARM is stellar) and then hands work off to FP extensions and accelerated blocks of different kinds. The same way Intel grew AVX to 512 bits in current use, NEON can be scaled up and beyond. Fujitsu worked with ARM on 1024- and 2048-bit SIMD blocks a couple of years ago. Still, if you think FP is the best way to do it, you are wrong: DSPs do it differently and are much more efficient power- and performance-wise, but less scalable. On what would you like servers to be compared? Almost 90% of enterprise servers run on Linux; even Microsoft these days earns more money on Linux than from selling Windows desktop & server combined.
You are a very ignorant person. Why do you comment about things you don't know anything about?
Ryan Smith - Thursday, May 24, 2018 - link
"I really think Anandtech needs to branch into different websites. It's very strange and unappealing to certain users to have business/consumer/random reviews/phone info all bunched together."
Although I appreciate the feedback, I must admit that we enjoy doing a variety of things. There are a lot of cool things happening in the technology world, not all of which are in the consumer space. So rare articles like these - and we only publish a few a year - let us keep tabs on what's going on in some of those other markets.
HStewart - Wednesday, May 23, 2018 - link
I would think that a lot of this depends on what type of applications are running on the server. Highly mathematical ones, and especially any with vectors, will likely be different. Also, there is no support for Windows-based servers, which limits which applications can be run - so my guess is this will be useless if you want a VMware server. But it is interesting that it takes 4-way SMT to compete with x86-based servers from Intel and AMD, and with more cores: 32 vs 22/28 depending on the version.
Wilco1 - Wednesday, May 23, 2018 - link
You're right, on floating point and vectors the results are different. To be precise: even more impressive. See the last page, for example, where it soundly beats Skylake on OpenFOAM and a few other HPC benchmarks. Hence the huge interest from all the HPC companies. Note that Windows has been running on Arm for quite some time. Microsoft runs Windows Server on both Centriq and ThunderX2. See e.g. https://www.youtube.com/watch?v=uF1B5FfFLSA for more info.
HStewart - Wednesday, May 23, 2018 - link
Windows on ARM is DOA.