CPU Performance: Office and Web

Our previous set of ‘office’ benchmarks has often been a mix of science and synthetics, so this time we wanted to keep our office section focused purely on real-world performance.

Agisoft Photoscan 1.3.3: link

Photoscan carries over from our previous benchmark scripts, but is updated to the 1.3.3 Pro version. Because the workload contains many variably threaded segments, features on the latest processors such as Speed Shift and XFR come into play as the benchmark runs.

The concept of Photoscan is to translate many 2D images into a 3D model: the more detailed the images, and the more of them you have, the better the final 3D model is in both spatial and texturing accuracy. The algorithm has four stages, with some parts of the stages being single-threaded and others multi-threaded, along with some cache/memory dependency as well. In the more variably threaded parts of the workload, features such as Speed Shift and XFR are able to take advantage of CPU stalls or downtime, giving sizeable speedups on newer microarchitectures.

For the update to version 1.3.3, the Agisoft software now supports command line operation. Agisoft provided us with a set of new images for this version of the test, and a Python script to run it. We’ve modified the script slightly, changing some quality settings for the sake of benchmark suite length, as well as adjusting how the final timing data is recorded. The Python script dumps the results to a file in the format of our choosing. For our test we obtain the time for each stage of the benchmark, as well as the overall time.
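The per-stage timing logic can be sketched roughly as below. This is a minimal, hypothetical harness: the lambda stage functions stand in for Photoscan’s actual processing calls (photo alignment, dense cloud, mesh, texture), which require the Agisoft Python API and the test image set.

```python
import json
import time


def benchmark(stages):
    """Run each (name, fn) stage in order, returning per-stage and overall times."""
    results = {}
    overall_start = time.perf_counter()
    for name, fn in stages:
        start = time.perf_counter()
        fn()
        results[name] = time.perf_counter() - start
    results["overall"] = time.perf_counter() - overall_start
    return results


if __name__ == "__main__":
    # Placeholder stage functions; the real script invokes the Agisoft API here.
    stages = [
        ("align", lambda: time.sleep(0.1)),
        ("dense_cloud", lambda: time.sleep(0.1)),
        ("mesh", lambda: time.sleep(0.1)),
        ("texture", lambda: time.sleep(0.1)),
    ]
    print(json.dumps(benchmark(stages), indent=2))
```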

The final result is a table that looks like this:

(1-1) Agisoft Photoscan 1.3, Complex Test

As explained in the power tests, the 4800U with double the cores wins out here, and due to the vector pressure it also wins on power efficiency. There’s still a sizeable uplift from Ice Lake to Tiger Lake at 15 W, although moving to 28 W is needed for a more substantial gain.

Mozilla Kraken 1.1

Kraken is a 2010 benchmark from Mozilla that runs a series of JavaScript tests. These tests are a little more involved than previous suites, covering artificial intelligence, audio manipulation, image manipulation, JSON parsing, and cryptographic functions. The benchmark starts with an initial download of the audio and image data, then runs through the suite 10 times, giving a timed result.

Automation involves loading the webpage where the test runs and letting it complete. All CPUs finish the test in under a couple of minutes, so we use that as the end point, then copy the page contents to the clipboard and parse the result. Each run of the test on most CPUs takes from half a second to a few seconds.
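The parsing step can be sketched as follows. The results text here is a hypothetical example of what the copied page contents look like (sub-test names and a total, all in milliseconds); the real Kraken page formats its output similarly but not identically.

```python
import re

# Hypothetical example of the copied results text; the real page
# lists each sub-test plus a total, all reported in milliseconds.
page_text = """
ai-astar: 120.3ms
audio-beat-detection: 98.7ms
Total: 219.0ms
"""


def parse_results(text):
    """Pull (name, milliseconds) pairs out of the copied page text."""
    return {
        m.group(1): float(m.group(2))
        for m in re.finditer(r"([\w-]+):\s*([\d.]+)ms", text)
    }


scores = parse_results(page_text)
```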

(7-1) Kraken 1.1 Web Test

Both Tiger Lake results are very fast, showing little difference between the two power modes. Intel pushes ahead of AMD here, and ultimately shows a sizeable jump over Ice Lake.

Google Octane 2.0

Our second test is also JavaScript based, but exercises a much wider variety of newer JS techniques, such as object-oriented programming, kernel simulation, object creation/destruction, garbage collection, array manipulation, compiler latency, and code execution.

Octane was developed after the discontinuation of other tests, with the goal of being more representative of real web workloads. Its popularity made it an obvious target for optimizations in JavaScript engines, and ultimately it was retired in early 2017 for that reason, although it is still widely used as a tool to gauge general CPU performance in a number of web tasks.

Octane’s automation is a little different from the others: there is no direct URL that immediately runs the benchmark. The benchmark page is opened, but the user then has to click the ‘start’ button or open the console and initiate the JavaScript required to run the test. The test also does not show an obvious end point, but it does aim for a fixed run time on each processor, similar to some of our other tests that loop for a fixed time before ending. Unfortunately this doesn’t work if the first loop goes beyond that fixed time, as the loop still has to finish. For Octane, we have set it to 75 seconds per run, and we loop the whole test four times.
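The fixed-time looping described above can be sketched like this (a simplified illustration, not the actual harness), including the caveat that an overlong first loop still runs to completion:

```python
import time


def run_for_budget(run_once, budget_s=75.0):
    """Repeat run_once until the time budget is spent.

    Note: if the first iteration alone exceeds the budget, it still
    runs to completion, so slow CPUs can overshoot the target time.
    """
    start = time.perf_counter()
    scores = []
    while True:
        scores.append(run_once())
        if time.perf_counter() - start >= budget_s:
            break
    return scores
```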

(7-2) Google Octane 2.0 Web Test

The Tiger Lake system reaches new records in Octane. If there’s anything this system is fast at, it is web workloads.

Speedometer 2: JavaScript Frameworks

Our newest web test is Speedometer 2, which runs through a series of JavaScript frameworks to do three simple things: build a list, enable each item in the list, and remove the list. All the frameworks implement the same visual cues, but obviously approach them from different coding angles.

Our test goes through the list of frameworks, and produces a final score indicative of ‘rpm’, one of the benchmark’s internal metrics. Rather than use the main interface, we go to the admin interface through the about page and manage the results there. This involves saving the webpage when the test is complete and parsing the final result.

We repeat the benchmark for a dozen loops, taking the average of the last five.
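The averaging step is straightforward; a minimal sketch, assuming the twelve per-run scores are collected in order:

```python
from statistics import mean


def speedometer_score(run_scores, keep=5):
    """Average the last `keep` runs, discarding earlier loops as warm-up."""
    if len(run_scores) < keep:
        raise ValueError("not enough runs to average")
    return mean(run_scores[-keep:])
```

Discarding the early loops reduces the influence of JIT warm-up and caching effects on the reported score.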

(7-3) Speedometer 2.0 Web Test

Again, another good win for Tiger Lake.
