CPU Tests: Legacy and Web

In order to gather data to compare with older benchmarks, we are still keeping a number of tests under our ‘legacy’ section. This includes all the former major versions of CineBench (R15, R11.5, R10) as well as x264 HD 3.0 and the first, very naïve, version of 3DPM v2.1. We won’t be transferring the data over from the old testing into Bench, as it would be populated with 200 CPUs that each have only one data point; instead, the legacy section will fill up as we test more CPUs, just like the others.

The other section here is our web tests.

Web Tests: Kraken, Octane, and Speedometer

Benchmarking using web tools is always a bit difficult. Browsers change almost daily, and the way the web is used changes even more quickly. While there is some scope for advanced computation-based benchmarks, most users care about responsiveness, which requires a strong back-end working quickly to feed the front-end. The benchmarks we chose for our web tests are essentially industry standards, or at least they were at one point.

It should be noted that for each test, the browser is closed and re-opened anew with a fresh cache. We use a fixed Chromium version for our tests, with the update capabilities removed, to ensure consistency.
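As a rough illustration, launching a pinned browser build with an empty cache can be scripted in a handful of lines. The sketch below uses Python with Selenium; the binary path and flags shown are illustrative assumptions, not our exact harness.

    import tempfile
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    def launch_clean_browser(chromium_path="C:/bench/chromium/chrome.exe"):  # assumed path
        opts = Options()
        opts.binary_location = chromium_path                          # pinned Chromium build
        opts.add_argument(f"--user-data-dir={tempfile.mkdtemp()}")    # throwaway profile = empty cache
        opts.add_argument("--disable-component-update")               # keep the browser from updating itself
        return webdriver.Chrome(options=opts)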

Mozilla Kraken 1.1

Kraken is a 2010 benchmark from Mozilla that runs a series of JavaScript tests. These tests are a little more involved than earlier suites, covering artificial intelligence, audio manipulation, image manipulation, JSON parsing, and cryptographic functions. The benchmark starts with an initial download of data for the audio and imaging tests, and then runs through ten times, giving a timed result.

Automation involves loading the webpage where the test runs directly and letting it complete. All CPUs finish the test in under a couple of minutes, so we use that as the end point, copy the page contents into the clipboard, and parse the result. Each individual run of the test on most CPUs takes from half a second to a few seconds.

(7-1) Kraken 1.1 Web Test

We loop through the 10-run test four times (a total of 40 runs) and average the four end results. The result is given as time to complete the test, and we are approaching a slow asymptotic limit with regard to the highest-IPC processors.
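For anyone wanting to replicate the flow, a minimal sketch of the Kraken loop might look like the following in Python with Selenium. The hosted URL, the fixed wait, and the results-text parsing are assumptions for illustration rather than our production script.

    import re, statistics, time
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    KRAKEN_URL = "https://krakenbenchmark.mozilla.org/kraken-1.1/driver.html"  # assumed hosted copy

    def run_kraken_once(wait_s=120):
        driver = webdriver.Chrome()            # fresh browser (and cache) for every pass
        try:
            driver.get(KRAKEN_URL)             # loading the driver page starts the 10 iterations
            time.sleep(wait_s)                 # every CPU we test finishes inside a couple of minutes
            body = driver.find_element(By.TAG_NAME, "body").text
            total = re.search(r"Total:\s*([\d.]+)\s*ms", body)   # results-text format is an assumption
            return float(total.group(1))
        finally:
            driver.quit()

    # Four passes of the 10-run test, averaged into the figure we report
    scores = [run_kraken_once() for _ in range(4)]
    print(f"Kraken 1.1: {statistics.mean(scores):.1f} ms")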

Google Octane 2.0

Our second test is also JavaScript based, but uses a much wider variety of newer JS techniques, such as object-oriented programming, kernel simulation, object creation/destruction, garbage collection, array manipulation, compiler latency, and code execution.

Octane was developed after the discontinuation of other tests, with the goal of being more web-like than previous tests. It has been a popular benchmark, making it an obvious target for optimizations in the JavaScript engines. Ultimately it was retired in early 2017 due to this, although it is still widely used as a tool to determine general CPU performance in a number of web tasks.

Octane’s automation is a little different from the others: there is no direct URL that runs the benchmark on load. The benchmark page is opened, but the user has to click the ‘start’ button, or open the console and initiate the JavaScript required to run the test. The test also does not show an obvious end point, but luckily it does aim for a roughly fixed run time on each processor. This is similar to some of our other tests, which loop for a fixed time before ending. Unfortunately this doesn’t help if the first loop goes beyond that fixed time, as the loop still has to finish. For Octane, we have set it to 75 seconds per run, and we loop the whole test four times.
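The sketch below shows how that flow could be scripted: open the page, start the run from script, wait out the fixed 75-second window, then read back the headline score. The URL, the element IDs, and the score format are illustrative assumptions, not the benchmark’s documented interface.

    import statistics, time
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    OCTANE_URL = "https://chromium.github.io/octane/"    # assumed hosted copy

    def run_octane_once(window_s=75):
        driver = webdriver.Chrome()                      # fresh browser for every pass
        try:
            driver.get(OCTANE_URL)
            # No auto-run URL exists, so the test is started from script, the same
            # as pressing the start button by hand (element id is an assumption)
            driver.execute_script("document.getElementById('run-octane').click();")
            time.sleep(window_s)                         # fixed 75-second window per pass
            banner = driver.find_element(By.ID, "main-banner").text  # assumed, e.g. "Octane Score: 34512"
            return float(banner.split(":")[-1])
        finally:
            driver.quit()

    scores = [run_octane_once() for _ in range(4)]
    print(f"Octane 2.0: {statistics.mean(scores):.0f}")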

(7-2) Google Octane 2.0 Web Test

It is worth noting that over the last couple of generations, Intel saw a significant uptick in Octane performance, likely due to one of the optimizations from the code base filtering through into the microarchitecture. Octane is still an interesting comparison point for systems within a similar microarchitecture scope.

Speedometer 2: JavaScript Frameworks

Our newest web test is Speedometer 2, which runs a series of JavaScript frameworks through three simple tasks: build a list, enable each item in the list, and remove the list. All the frameworks implement the same visual cues, but obviously apply them from different coding angles.

Our test goes through the list of frameworks and produces a final score indicative of ‘rpm’ (runs per minute), one of the benchmark’s internal metrics. Rather than use the main interface, we go to the admin interface through the about page and manage the results there; this involves saving the webpage when the test is complete and parsing the final result.

We repeat the benchmark for a dozen loops, taking the average of the last five.
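Put together, the Speedometer loop could be sketched as below. The hosted URL, the start call, the fixed wait, and the ‘runs/min’ text parsing are assumptions for illustration rather than a copy of our parser.

    import re, statistics, time
    from selenium import webdriver

    SPEEDOMETER_URL = "https://browserbench.org/Speedometer2.0/"   # assumed hosted copy

    def run_speedometer_once(window_s=300):
        driver = webdriver.Chrome()                     # fresh browser for every loop
        try:
            driver.get(SPEEDOMETER_URL)
            driver.execute_script("startTest();")       # assumed entry point for the run
            time.sleep(window_s)                        # generous fixed window for slow CPUs
            page = driver.page_source                   # equivalent of saving the webpage
            score = re.search(r"([\d.]+)\s*runs?/min", page, re.I)  # assumed result markup
            return float(score.group(1))
        finally:
            driver.quit()

    # A dozen loops, with the reported figure being the average of the last five
    runs = [run_speedometer_once() for _ in range(12)]
    print(f"Speedometer 2: {statistics.mean(runs[-5:]):.1f} rpm")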

(7-3) Speedometer 2.0 Web Test

Comments

  • ballsystemlord - Tuesday, July 21, 2020 - link

    @Ian, I love your 30,000 datapoints per article. Thanks for benching all these things.
    The AMD Phenom II 1090T (the original consumer 6 core!!!) is the CPU I'd like to see in the new suite.
  • Samus - Tuesday, July 21, 2020 - link

    Can you build an automated (filtering/categorizing) submission form for donations? I have many Xeons, especially the v3s you have a shortage of, that I would be willing to donate for the cause.
  • ballsystemlord - Tuesday, July 21, 2020 - link

    @ian @Samus Use email to contact each other.
  • Dragonsteel - Tuesday, July 21, 2020 - link

    I'd like to see comparisons for the mainstream $300 to $400 CPUs, starting with the i7 series.

    I'd really like to see the i7-2600K on those benchmarks, both stock and OC performance. I do run this CPU, but am looking at upgrading soon to a comparable model. It just hasn't made sense until now with the new platform updates and more powerful GPUs.
  • Slaps - Tuesday, July 21, 2020 - link

    Would it be possible to add Counter-Strike Global Offensive? You can use the in-game console to load a demo (replay) of a professional match and let it run to get very real and consistent results.
  • ET - Tuesday, July 21, 2020 - link

    What an amazing project. Great and detailed article, too. I'm looking forward to seeing the results. I appreciate Bench, and often when I see someone on Reddit ask about an upgrade from, say, a Phenom II 1055T to FX 6120, I go to Bench to make a comparison (though of course can't often find the exact models).

    Hopefully the UI for Bench will be improved. Search and auto-completion, comparing more than 2 CPUs, these are things I'd expect.
  • DanNeely - Tuesday, July 21, 2020 - link

    y-cruncher sprint graphs are missing.
  • 137ben - Tuesday, July 21, 2020 - link

    This is an ambitious project, and it is the reason I enjoy coming to Anandtech.
  • ozzuneoj86 - Tuesday, July 21, 2020 - link

    This is amazing work! Thank you for doing this!

    One suggestion though... and I've mentioned this in past comments... please please please rename the lowest of the four quality settings for gaming benchmarks. The "IGP" setting is unnecessarily confusing to those looking at CPU benchmarks being run on a top of the line GPU. No IGP is involved. Just call it " VERY LOW" or something.
  • Meteor2 - Monday, August 3, 2020 - link

    Yes x1000!

    (What does IGP even stand for in this context?!)
