Investigating Cavium's ThunderX: The First ARM Server SoC With Ambition
by Johan De Gelas on June 15, 2016 8:00 AM EST
ThunderX SKUs: What is Cavium Offering Today?
Cavium has been promising SKUs with 16 or 48 cores, clockspeeds ranging from 1.8 GHz to 2.5 GHz, and TDPs of up to 95W. As of this writing, Cavium has not published a full spec list for the different SKUs, and the real silicon differs from the paper specs. So we will simply jot down what we do know.
The SKU we tested was the ThunderX-CP CN88xx at 2 GHz. It is hard to identify the CPU, as the usual Linux CPU identification tools do not tell us anything; only the BIOS gave us some information.
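For readers who want to see what the OS itself reports, below is a minimal sketch, assuming a standard ARM Linux environment and the generic field names ARM kernels put in /proc/cpuinfo. On ThunderX-class hardware these fields are typically just numeric implementer/part codes rather than a marketing name, which is why the BIOS ends up being the more useful source.

```python
# Minimal sketch: print the identification fields an ARM Linux kernel exposes
# in /proc/cpuinfo. These are typically numeric implementer/part codes
# (Cavium's ARM implementer code is 0x43), not a human-readable model name.
def read_cpuinfo(path="/proc/cpuinfo"):
    fields = {}
    with open(path) as f:
        for line in f:
            key, sep, value = line.partition(":")
            if sep:
                fields.setdefault(key.strip(), value.strip())
    return fields

if __name__ == "__main__":
    info = read_cpuinfo()
    for key in ("CPU implementer", "CPU architecture", "CPU variant",
                "CPU part", "CPU revision"):
        print(f"{key}: {info.get(key, 'not reported')}")
```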
The SKU we tested (CN8890) has 48 cores at 2 GHz, fits inside a TDP of 120W, and costs around $800. This is the SKU that is being mass produced.
What we know so far:
- SKUs available with clockspeeds of 1.6 GHz and 1.8 GHz, with lower TDPs than the 120W of the 2 GHz part
- Highest clock is 2 GHz
- SKUs with 24, 32, and 48 cores
- Available in all families (Cloud, Storage, Security, Networking) and currently in production
- TDP ranges from 65W (low end, probably 24 cores at 1.8 GHz) to 135W (probably a 48-core SKU at 2 GHz with most features turned on)
But it is safe to say that Cavium missed its target of 48 cores at 2.5 GHz inside a 95W power envelope. That was probably too optimistic, given that the chip is baked on a relatively old 28 nm high-k metal gate process at GlobalFoundries.
82 Comments
vivs26 - Wednesday, June 15, 2016 - link
Not necessarily - (read Amdahl's law on diminishing returns). Performance actually depends on the workload. Having a million cores guarantees nothing in terms of performance unless the workload is parallelizable, which in the real world it is not as often as we think it could be. I'm curious to see how a Xeon merged with Altera programmable fabric performs compared to ARM on a server.
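For reference, the bound this comment is pointing at is easy to sketch in a few lines; this is a toy calculation, not a model of any workload in the review. With parallel fraction p and n cores, the best possible speedup is 1 / ((1 - p) + p / n).

```python
# Toy Amdahl's law calculation: even with 48 cores, a workload that is only
# partly parallelizable tops out far below a 48x speedup.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for p in (0.50, 0.90, 0.99):
    print(f"p={p:.2f}: 48 cores -> {amdahl_speedup(p, 48):.1f}x speedup")
# p=0.50 -> ~2.0x, p=0.90 -> ~8.4x, p=0.99 -> ~32.7x
```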
maxxbot - Wednesday, June 22, 2016 - link
Technically true, but every generation that millstone gets a little smaller; the die area and power needed to translate x86 into uops isn't huge and shrinks every generation.
jardows2 - Wednesday, June 15, 2016 - link
Interesting. Faster in a few workloads where heavy use of multi-threading is important, but significantly slower in more single-threaded workloads. For server use, you don't always want parallelized tasks. The results are pretty consistent across all the processors tested: if the ThunderX was slower, it was slower than all the Intel chips; if it was faster, it was faster than all but the highest-end Intel chips. With the price only slightly lower than the cheapest Intel chip being sold, I don't think this is going to be a Xeon competitor at all, but it will take a few niche applications where it can do better. With no significant energy savings, we should be looking forward to the ThunderX2 to see if it will turn this into a better alternative.
ddriver - Wednesday, June 15, 2016 - link
There is hardly a server workload where you don't get better throughput by throwing more cores and servers at it. Servers are NOT about parallelized tasks, but about concurrent tasks. That's why, while desktops are still stuck at 8 cores, server chips come with 20 and more... Server workloads are usually very simple; it's just that there are a lot of them. They are so simple and take so little time that it literally makes no sense to parallelize them.
jardows2 - Wednesday, June 15, 2016 - link
In the scenario you described, single-threaded performance takes on even more importance, which highlights the advantage the Xeons currently have in most server configurations.
niva - Wednesday, June 15, 2016 - link
Not if the Xeon doesn't have enough cores to actually process 40+ single-threaded tasks concurrently.
hechacker1 - Wednesday, June 15, 2016 - link
But kernels and VMware know how to schedule multiple threads on one core if it's not being fully utilized. Single-threaded IPC can make up for not having as many cores. See the iPhone SoCs for another example.
ddriver - Wednesday, June 15, 2016 - link
Not if you have thousands of concurrent workloads and only like 8 cores. As fast as each core might be, the overhead from workload context switching will eat it up.
willis936 - Thursday, June 16, 2016 - link
Yeah, but only if each task is not significantly longer than a context switch. Context switches are very fast, especially on processors with many sets of SMT registers per core.
ddriver - Thursday, June 16, 2016 - link
If what you suggest were correct, then Intel would not be investing chip TDP in more cores but in higher clocks and better single-threaded performance. Clearly that is not the case, as they are pushing 20 cores at a fairly modest 2.4 GHz.
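To frame the disagreement in this thread, here is a toy throughput model with entirely made-up numbers (it is not based on any measurement of ThunderX or Xeon): a pile of short, independent requests served by many slower cores versus fewer faster cores, with a small per-task scheduling cost charged whenever runnable tasks outnumber cores.

```python
# Toy model of the argument above: throughput of short, independent requests
# on "many slow cores" vs "few fast cores". A small per-task switch cost is
# charged when tasks outnumber cores. All numbers are made up for
# illustration; none are measurements from this review.
def throughput(cores, per_core_perf, tasks, task_work=1.0, switch_cost=0.005):
    time_per_task = task_work / per_core_perf
    if tasks > cores:
        time_per_task += switch_cost  # extra scheduling overhead per task
    total_time = (tasks / cores) * time_per_task
    return tasks / total_time  # tasks completed per unit time

many_slow = throughput(cores=48, per_core_perf=1.0, tasks=10_000)
few_fast = throughput(cores=8, per_core_perf=2.5, tasks=10_000)
print(f"48 slow cores: {many_slow:6.1f} tasks per unit time")
print(f" 8 fast cores: {few_fast:6.1f} tasks per unit time")
```

Under these assumptions the wide, slow chip wins on raw concurrency, and the switch cost only starts to matter once the per-task work shrinks toward the cost of a context switch, which is the point both sides of the thread are circling.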