The AMD Ryzen Threadripper 1950X and 1920X Review: CPUs on Steroids
by Ian Cutress on August 10, 2017 9:00 AM EST

CPU Web Tests
One of the issues when running web-based tests is the nature of modern browsers to automatically install updates. This means any sustained period of benchmarking will invariably fall foul of the 'it's updated beyond the state of comparison' rule, especially when browsers will update if you give them half a second to think about it. Despite this, we were able to find a series of commands to create an un-updatable version of Chrome 56 for our 2017 test suite. While this means we might not be on the bleeding edge of the latest browser, it makes the scores between CPUs comparable.
All of our benchmark results can also be found in our benchmark engine, Bench.
SunSpider 1.0.2: link
The oldest web-based benchmark in this portion of our test is SunSpider. This is a very basic JavaScript algorithm tool, and ends up being more a measure of IPC and latency than anything else, with most high-performance CPUs scoring roughly the same. The basic test is looped 10 times and the average taken. We run the basic test four times.
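The loop-and-average methodology above can be sketched as follows. This is an illustrative sketch, not the actual harness: `runOnePass` is a hypothetical stand-in for a single SunSpider pass, and the timing figures it returns are made up.

```javascript
// Hypothetical stand-in for one pass of the benchmark; a real pass would
// return a completion time in milliseconds.
function runOnePass() {
  return 100 + Math.random() * 10; // fake result in the range [100, 110)
}

// Loop the basic test `loopsPerRun` times and take the average, then
// repeat the whole run `runs` times, as described in the methodology.
function averagedScores(runs = 4, loopsPerRun = 10) {
  const runScores = [];
  for (let r = 0; r < runs; r++) {
    let total = 0;
    for (let i = 0; i < loopsPerRun; i++) {
      total += runOnePass();
    }
    runScores.push(total / loopsPerRun); // average of this run's loops
  }
  return runScores; // one averaged score per run
}
```

Averaging within a run smooths out per-iteration jitter, while repeating the run catches drift between runs (thermals, background tasks).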
Mozilla Kraken 1.1: link
Kraken is another JavaScript-based benchmark, using the same test harness as SunSpider, but focusing on more stringent real-world use cases and libraries, such as audio processing and image filters. Again, the basic test is looped ten times, and we run the basic test four times.
Google Octane 2.0: link
Along with Mozilla, as Google is a major browser developer, having peak JS performance is typically a critical asset when comparing against the other OS developers. In the same way that SunSpider is a very early JS benchmark, and Kraken is a bit newer, Octane aims to be more relevant to real workloads, especially in power constrained devices such as smartphones and tablets.
WebXPRT 2015: link
While the previous three benchmarks run calculations in the background and report a score, WebXPRT is designed to be a better representation of the visual workloads a professional user might have, such as browser-based applications, graphing, image editing, sort/analysis, scientific analysis, and financial tools.
Overall, all of our web benchmarks show a similar trend. Very few web frameworks offer multi-threading – the browsers themselves are barely multi-threaded at times – so Threadripper's vast thread count is underutilized. What wins the day on the web are a handful of fast cores with high single-threaded performance.
347 Comments
mapesdhs - Friday, August 11, 2017 - link
And consoles are on the verge of moving to many-core main CPUs. The inevitable dev change will spill over into PC gaming.
RoboJ1M - Friday, August 11, 2017 - link
On the verge? All major consoles have had a greater core count than consumer CPUs, not to mention complex memory architectures, since, what, 2005?
One suspects the PC market has been benefiting from this for quite some time.
RoboJ1M - Friday, August 11, 2017 - link
Specifically, the 360 had 3 general-purpose CPU cores.
And the PS3 had one general-purpose CPU core and 7 short-pipeline coprocessors that could only read and write to their caches. They had to be fed by the CPU core.
The 360 had unified program and graphics ram (still not common on PC!)
As well as its large high-speed cache.
The PS3 had separate program and video RAM.
The Xbox One and PS4 were super boring PCs in boxes. But they did have 8-core CPUs. The X1X is interesting: it's got unified RAM that runs at ludicrous speed. Sadly it will only be used for running games at 1800p to 2160p at 30 to 60 FPS :(
mlambert890 - Saturday, August 12, 2017 - link
Why do people constantly assume this is purely time/market economics? Not everything can *be* parallelized. Do people really not get that? It isn't just developers targeting a market. There are tasks that *can't be parallelized* because of the practical reality of dependencies. Executing ahead and out of order can only go so far before you have an inverse effect. Everyone could have 40-core CPUs... it doesn't mean that *gaming workloads* will be able to scale out that well.
The work that lends itself best to parallelization is the rendering pipeline, and that's already entirely on the GPU (which is already massively parallel).
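The ceiling this commenter describes is captured by Amdahl's law. A quick sketch, with an assumed (not measured) parallelizable fraction just for illustration:

```javascript
// Amdahl's law: with a parallelizable fraction p of the work and n cores,
// overall speedup = 1 / ((1 - p) + p / n).
function amdahlSpeedup(p, n) {
  return 1 / ((1 - p) + p / n);
}

// Suppose only 40% of a frame's work can run in parallel (illustrative):
const p = 0.4;
for (const n of [4, 16, 64]) {
  console.log(`${n} cores: ${amdahlSpeedup(p, n).toFixed(2)}x`);
}

// No matter how many cores you add, the speedup is capped at 1 / (1 - p),
// here about 1.67x -- which is why extra cores stop helping.
console.log(`cap: ${(1 / (1 - p)).toFixed(2)}x`);
```

With p = 0.4, even infinite cores can never deliver more than a 1.67x speedup, because the serial dependency chain still has to execute in order.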
Magichands8 - Thursday, August 10, 2017 - link
I think what AMD did here though is fantastic. In my mind, creating a switch to change modes vastly adds to the value of the chip. I can now maximize performance based upon workload and software profile, and that brings me closer to having the best of both worlds from one CPU.
Notmyusualid - Sunday, August 13, 2017 - link
@rtho782
I agree it is a mess, and also, it is not AMD's fault.
I've had a 14c/28t Broadwell chip for over a year now, and I cannot launch Tomb Raider with HT on, nor GTA5. But most software seems indifferent to the number of cores presented to it.
BrokenCrayons - Thursday, August 10, 2017 - link
Great review, but the word "traditional" is used heavily. Given the short lifespan of computer parts and the nature of consumer electronics, I'd suggest that there isn't enough time or emotional attachment to establish a tradition of any sort. Motherboard sockets and market segments, for instance, might be better described in other ways, unless it's becoming traditional in the review business to call older product designs traditional. :)
mkozakewich - Monday, August 14, 2017 - link
Oh man, but we'll still gnash our teeth at our broken tech traditions!
lefty2 - Thursday, August 10, 2017 - link
It's pretty useless measuring power alone. You need to measure efficiency (performance/watt). So yeah, a 16-core CPU draws more power than a 10-core, but it's also probably doing a lot more work.
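The efficiency comparison this commenter is asking for is simple division. A sketch with entirely hypothetical numbers (not measurements from the review):

```javascript
// Hypothetical scores and power draws, for illustration only.
const chips = [
  { name: '16-core', score: 3200, watts: 180 },
  { name: '10-core', score: 2200, watts: 140 },
];

// Efficiency is work done per watt, not raw power draw.
for (const chip of chips) {
  chip.perfPerWatt = chip.score / chip.watts;
  console.log(`${chip.name}: ${chip.perfPerWatt.toFixed(2)} points/watt`);
}
```

With these made-up numbers, the 16-core part draws more power in absolute terms yet delivers more work per watt, which is the commenter's point.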
Diji1 - Thursday, August 10, 2017 - link
Er, why don't you just do it yourself? They've already given you the numbers.