CPU Web Tests

One of the issues when running web-based tests is that modern browsers automatically install updates. This means any sustained period of benchmarking will invariably fall foul of the 'it has updated beyond the state of comparison' rule, especially when browsers will update if you give them half a second to think about it. Despite this, we were able to find a series of commands to create an un-updatable installation of Chrome 56 for our 2017 test suite. While this means we might not be on the bleeding edge of the latest browser, it keeps the scores between CPUs comparable.

All of our benchmark results can also be found in our benchmark engine, Bench.

SunSpider 1.0.2

The oldest web-based benchmark in this portion of our testing is SunSpider. It is a very basic JavaScript algorithm tool, and ends up being more a measure of IPC and latency than anything else, with most high-performance CPUs scoring around the same. The basic test is looped 10 times and the average taken; we run the basic test 4 times.
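
As a rough illustration of that loop-and-average methodology, here is a minimal TypeScript sketch. The workload and function names are placeholders rather than SunSpider's actual harness, and averaging the four runs at the end is an assumption made purely for the example.

    // Minimal sketch of a loop-and-average timing harness; the workload and
    // names are placeholders, not SunSpider's actual code.
    function workload(): number {
      // Stand-in for a SunSpider-style JavaScript kernel (math/string heavy).
      let acc = 0;
      for (let i = 0; i < 1_000_000; i++) acc += Math.sqrt(i) % 7;
      return acc;
    }

    function basicTest(loops: number): number {
      // One "basic test": loop the workload and return the mean time per loop in ms.
      const start = Date.now();
      for (let i = 0; i < loops; i++) workload();
      return (Date.now() - start) / loops;
    }

    // Loop the basic test 10 times per run, repeat the run 4 times, then average.
    const runs = Array.from({ length: 4 }, () => basicTest(10));
    const averageMs = runs.reduce((a, b) => a + b, 0) / runs.length;
    console.log(`average per loop: ${averageMs.toFixed(2)} ms`);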

Web: SunSpider on Chrome 56

Mozilla Kraken 1.1

Kraken is another JavaScript-based benchmark, using the same test harness as SunSpider, but focusing on more stringent real-world use cases and libraries, such as audio processing and image filters. Again, the basic test is looped ten times, and we run the basic test four times.
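
For a sense of the kind of work Kraken's image-filter tests exercise, below is a short TypeScript sketch of a typed-array box blur. It is illustrative only, with invented names and image sizes, and is not Kraken's actual code.

    // Illustrative typed-array image filter: a 3x3 box blur over a
    // single-channel image, similar in spirit to Kraken's filter kernels.
    function boxBlur(src: Float32Array, width: number, height: number): Float32Array {
      const dst = new Float32Array(src.length);
      for (let y = 1; y < height - 1; y++) {
        for (let x = 1; x < width - 1; x++) {
          let sum = 0;
          // Average the 3x3 neighbourhood around (x, y).
          for (let dy = -1; dy <= 1; dy++)
            for (let dx = -1; dx <= 1; dx++)
              sum += src[(y + dy) * width + (x + dx)];
          dst[y * width + x] = sum / 9;
        }
      }
      return dst;
    }

    // Hypothetical 512x512 single-channel image filled with noise.
    const width = 512, height = 512;
    const image = Float32Array.from({ length: width * height }, () => Math.random());
    console.time("boxBlur");
    boxBlur(image, width, height);
    console.timeEnd("boxBlur");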

Web: Mozilla Kraken 1.1 on Chrome 56

Google Octane 2.0

Along with Mozilla, Google is a major browser developer, and having peak JS performance is typically a critical asset when competing against the other browser developers. In the same way that SunSpider is a very early JS benchmark and Kraken is a bit newer, Octane aims to be more relevant to real workloads, especially on power-constrained devices such as smartphones and tablets.

Web: Google Octane 2.0 on Chrome 56

WebXPRT 2015

While the previous three benchmarks run calculations in the background and report a score, WebXPRT is designed to be a better representation of the visual workloads a professional user might have, such as browser-based applications, graphing, image editing, sort/analysis, scientific analysis and financial tools.

Web: WebXPRT 15 on Chrome 56

Overall, all of our web benchmarks show a similar trend. Very few web frameworks offer multi-threading – the browsers themselves are barely multi-threaded at times – so Threadripper's vast thread count is underutilized. What wins the day on the web are a handful of fast cores with high single-threaded performance.
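
To make that single-thread point concrete, here is a minimal browser-side TypeScript sketch with an invented workload: heavy JavaScript occupies the page's one main thread unless the developer explicitly hands it to a Web Worker, and each worker adds only one extra thread rather than automatically spreading across a 16-core chip.

    // Heavy JavaScript runs on the page's single main thread unless it is
    // explicitly offloaded to a Web Worker; the worker below is one extra
    // thread, not an automatic pool across all of a CPU's cores.
    const workerSource = `
      onmessage = (e) => {
        let acc = 0;
        for (let i = 0; i < e.data; i++) acc += Math.sqrt(i);
        postMessage(acc);
      };`;
    const blob = new Blob([workerSource], { type: "text/javascript" });
    const worker = new Worker(URL.createObjectURL(blob));
    worker.onmessage = (e: MessageEvent<number>) =>
      console.log("computed off the main thread:", e.data);
    worker.postMessage(50_000_000); // iteration count, chosen arbitrarily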

Comments

  • lefty2 - Thursday, August 10, 2017

    except that they haven't
  • Dr. Swag - Thursday, August 10, 2017

    How so? You have the performance numbers, and they gave you power draw numbers...
  • bongey - Thursday, August 10, 2017

    Just do an AVX-512 benchmark and Intel will jump over 300 watts, 400 watts (overclocked), from the CPU alone (Prime95 AVX-512 benchmark). See der8auer's video "The X299 VRM Disaster (en)".
  • DanNeely - Thursday, August 10, 2017

    The Chromium build time results are interesting. Anandtech's results have the 1950X only getting 3/4 of the 7900X's performance. Arstechnica's getting almost equal results on both CPUs, but at 16 compiles per day vs 24 or 32 it is seeing significantly worse numbers all around.

    I'm wondering what's different between the two compile benchmarks to see such a large spread.
  • cknobman - Thursday, August 10, 2017

    I think it has a lot to do with the RAM used by Anandtech vs Arstechnica.
    For all the regular benchmarking Anand used DDR4-2400; DDR4-3200 was only used in some overclocking.
    Arstechnica used DDR4-3200 for all benchmarking.
    Everyone already knows how faster DDR4 memory helps the Zen architecture.
  • DanNeely - Thursday, August 10, 2017

    If RAM was the determining factor, though, Ars should be seeing faster build times, not slower ones.
  • carewolf - Thursday, August 10, 2017

    Anandtech must have misconfigured something. Building Chromium scales practically linearly: you can move jobs all the way across a slow network and compile on another machine, and you still get linear speed-ups with more added cores.
  • Ian Cutress - Thursday, August 10, 2017

    We're using a late March v56 code base with MSVC.
    Ars is using a newer v62 code base with clang-cl and VC++ linking.

    We locked in our versions when we started testing Windows 10 a few months ago.
  • supdawgwtfd - Friday, August 11, 2017

    Maybe drop it then, as it is not at all useful info.
  • Johan Steyn - Thursday, August 10, 2017

    I refrained from posting on the previous article, but now I'm quite sure Anand is being paid by Intel. It is not that I argue against the benchmarks, but against how they are presented. I was even under the impression that this was an Intel review.

    The previous article was titled "Introducing Intel's Desktop Processor". Huge marketing research is done on how to market products. Just by stating one thing first, or in a different way, quite different messages can be conveyed without lying outright.

    By making "Most Powerful, Most Scalable" bold, that is what the readers read first; then they read "Desktop Processor" without even registering that it is Intel's. This is how marketing works, so Anand used slanted journalism to favour Intel, yet most people will just not realise it and will eat it up.

    In this review there are so many slanted-journalism problems, it is just sad. If you want, just compare it to other sites' reviews. They just omit certain tests and list others at which Intel excels.

    I have lost my respect for Anandtech with these last two articles, and I have followed Anandtech since its inception. Sad to see that you are also now bought by Intel, even though I suspected this before. Congratulations for making this so clear!!!
