Scientific and Synthetic Benchmarks

2D to 3D Rendering – Agisoft PhotoScan v1.0: link

Agisoft PhotoScan creates 3D models from 2D images, a process that is very computationally expensive. The algorithm is split into four distinct phases, and different phases of the model reconstruction require fast memory, fast IPC, more cores, or even OpenCL compute devices on hand. Agisoft supplied us with a special version of the software to script the process, where we take 50 images of a stately home and convert them into a medium quality model. This benchmark typically takes around 15-20 minutes on a high-end PC on the CPU alone, with GPUs reducing that time.
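
For illustration, a scripted run of this kind might look something like the sketch below. It follows the general shape of PhotoScan's Python scripting interface, but the method names, settings and file names here are assumptions rather than the script Agisoft supplied.

```python
# Hypothetical sketch of a scripted PhotoScan reconstruction; the API calls and
# image names are assumptions, not Agisoft's actual benchmark script.
import time
import PhotoScan  # only available inside PhotoScan's bundled Python environment

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(["house_%02d.jpg" % i for i in range(50)])  # placeholder names for the 50 source images

stages = [
    ("Align photos",  lambda: (chunk.matchPhotos(), chunk.alignCameras())),
    ("Dense cloud",   lambda: chunk.buildDenseCloud()),
    ("Build mesh",    lambda: chunk.buildModel()),
    ("Build texture", lambda: (chunk.buildUV(), chunk.buildTexture())),
]

# Time each of the four reconstruction phases separately, as in the review.
for name, stage in stages:
    start = time.time()
    stage()
    print("%s: %.1f s" % (name, time.time() - start))
```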

Agisoft PhotoScan Benchmark - Total Time

For PhotoScan, the extra cores and MHz of the Xeons matter most in the first stage of the computation. The second stage shows an increase in CPU mapping speed, although this is the stage a GPU can accelerate when one is in use. Stage 3 benefits more from the MHz of the 8-core model, and the final stage is about even.

Console Emulation – Dolphin Benchmark: link

At the start of 2014 I was emailed a link to a new emulation benchmark based on the Dolphin Emulator. The issue with emulators tends to be two-fold: game licensing and the raw CPU power required for the emulation. As a result, many emulators are often bound by single-thread CPU performance, and general reports tended to suggest that Haswell provided a significant boost to emulator performance. This benchmark runs a Wii program that ray traces a complex 3D scene inside the Dolphin Wii emulator. Performance on this benchmark is a good proxy for the speed of Dolphin CPU emulation, which is an intensive single-core task using most aspects of a CPU. Results are given in minutes, where the Wii itself scores 17.53; anything below this is faster than an actual Wii at processing Wii code, albeit emulated.
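
For reference, the run can be wrapped in a simple timing script. The sketch below assumes Dolphin's -b (batch) and -e (execute) command-line switches and uses placeholder paths; timing the whole process from outside also includes emulator start-up, so it only approximates the in-emulator result.

```python
# Minimal external timing wrapper around a Dolphin benchmark run.
# The Dolphin switches and both paths below are assumptions/placeholders.
import subprocess
import time

DOLPHIN = r"C:\Program Files\Dolphin\Dolphin.exe"   # placeholder install path
BENCHMARK = r"C:\benchmarks\dolphin-raytrace.elf"   # placeholder benchmark binary

start = time.time()
subprocess.run([DOLPHIN, "-b", "-e", BENCHMARK], check=True)
elapsed_minutes = (time.time() - start) / 60.0
print("Dolphin benchmark: %.2f minutes (Wii hardware: 17.53)" % elapsed_minutes)
```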

Dolphin Benchmark

Emulation is a purely single-threaded affair, and the IPC improvements of Haswell stand out clearly against the Ivy Bridge-E based Xeons.

Point Calculations – 3D Movement Algorithm Test: link

3DPM is a self-penned benchmark, taking basic 3D movement algorithms used in Brownian motion simulations and testing them for speed. High floating-point performance, MHz and IPC win in the single-threaded version, whereas the multithreaded version has to manage the threads and loves more cores.
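
3DPM itself is not public, but the kind of work it represents is easy to picture: a random-walk step applied to many particles, run on one thread or split across several. The sketch below is purely illustrative and is not the actual 3DPM code.

```python
# Illustrative only: a Brownian-motion style particle step, not the real 3DPM code.
import math
import random
from concurrent.futures import ProcessPoolExecutor

def move_particles(n_particles, n_steps, seed):
    """Random-walk each particle in 3D and return the summed displacement."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_particles):
        x = y = z = 0.0
        for _ in range(n_steps):
            theta = rng.uniform(0.0, math.pi)        # random polar angle
            phi = rng.uniform(0.0, 2.0 * math.pi)    # random azimuthal angle
            x += math.sin(theta) * math.cos(phi)
            y += math.sin(theta) * math.sin(phi)
            z += math.cos(theta)
        total += math.sqrt(x * x + y * y + z * z)
    return total

if __name__ == "__main__":
    # Single-threaded: one big batch. Multithreaded: the same work split across workers.
    print("1T:", move_particles(10000, 100, 1))
    with ProcessPoolExecutor(max_workers=8) as pool:
        chunks = pool.map(move_particles, [1250] * 8, [100] * 8, range(8))
        print("8T:", sum(chunks))
```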

3D Particle Movement: Single Threaded

The low core frequency of the 12-core Xeon puts it behind in our floating-point single-threaded benchmark.

3D Particle Movement: MultiThreaded

In our multithreaded scenario we see a situation similar to POV-Ray, where cores and frequency take the top spots.

Encryption – TrueCrypt v7.1a: link

TrueCrypt is an off-the-shelf open-source encryption tool for files and folders. For our test we run its benchmark mode using a 1GB buffer and take the mean result for AES encryption.
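
TrueCrypt's built-in benchmark cannot be driven from outside the application, but a rough analogue of the measurement looks like the sketch below, which times AES over a 1GB buffer with PyCryptodome. It is single-threaded and uses CTR mode rather than TrueCrypt's XTS implementation, so it illustrates the shape of the test rather than reproducing its numbers.

```python
# Rough analogue of an AES throughput test over a 1GB buffer.
# Requires PyCryptodome (pip install pycryptodome); not TrueCrypt's own code path.
import os
import time
from Crypto.Cipher import AES

BUFFER_SIZE = 1024 ** 3            # 1GB working set, matching the benchmark setting
key = os.urandom(32)               # AES-256 key
data = os.urandom(BUFFER_SIZE)     # random plaintext to encrypt

cipher = AES.new(key, AES.MODE_CTR, nonce=os.urandom(8))
start = time.time()
cipher.encrypt(data)
elapsed = time.time() - start
print("AES-CTR encrypt: %.2f GB/s" % ((BUFFER_SIZE / elapsed) / 1024 ** 3))
```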

TrueCrypt 7.1a AES

Synthetic – 7-Zip 9.2: link

7-Zip is a popular open-source compression tool for making sets of files easier to handle and transfer. The software offers its own built-in benchmark, from which we report the result.
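
The benchmark in question is the one exposed by the 7z b command, which reports MIPS ratings for compression and decompression. A minimal way to capture it from a script (the executable path is a placeholder) might be:

```python
# Runs 7-Zip's built-in benchmark ("7z b") and prints its output.
import subprocess

SEVENZIP = r"C:\Program Files\7-Zip\7z.exe"  # placeholder path to the 7z executable
result = subprocess.run([SEVENZIP, "b"], capture_output=True, text=True, check=True)
print(result.stdout)   # the summary lines near the end carry the MIPS ratings
```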

7-Zip MIPS


71 Comments


  • Ian Cutress - Tuesday, March 18, 2014 - link

    I need to spend some time to organise this with my new 2014 benchmark setup. That and I've never used bench to add data before. But I will be putting some data in there for everyone :)
  • Maxal - Tuesday, March 18, 2014 - link

    There is one sad thing - the disappearance of 2C/4T high clock speed CPUs, as Oracle Enterprise Edition charges by core... and sometimes you need just a small installation, but with EE features...
  • Rick83 - Tuesday, March 18, 2014 - link

    Wouldn't L3/thread be a more useful metric than L3/core in the big table?
    HT will only really work after all, if both threads are in cache, and if you can get a CPU with HT and one without, as is the case with the Xeons, you'd get the one without because you are running more concurrent threads. That means that under optimum conditions, you have 2 threads per core that are active, and thus 2x#cores threads that need to be in the data caches.
  • HalloweenJack - Tuesday, March 18, 2014 - link

    holy shit anandtech you really have gone to the dogs - comparing a £2000 cpu against a £100 apu and saying its better..... and really? wheres the AMD AM3+ cpu`s? 8350 or 9590? seriously
  • Ian Cutress - Tuesday, March 18, 2014 - link

    Let's see. I'm not comparing it against a £100 APU, I'm comparing it against the $1000 Core i7-4960X to see the difference. We're using a new set of benchmarks for 2014, which I have already run on the APU so I include them here as a point of reference for AMD's new highest performance line. It is interesting to see where the APU and Xeon line up in the benchmarks to show the difference (if any). AMD's old high end line has stagnated - I have not tested those CPUs in our new 2014 set of benchmarks. There have been no new AM3+ platforms or CPUs this year, or almost all of last year. Testing these two CPUs properly took the best part of three weeks, including all the other work such as news, motherboard reviews, Mobile World Congress coverage, meetings, extra testing, bug fixing, conversing with engineers on how to solve issues. Sure, let's just stop all that and pull out an old system to test. If I had the time I really would, but I was able to get these processors from GIGABYTE, not Intel, for a limited time. I have many other projects (memory scaling, Gaming CPU) that would take priority if I had time.

    AKA I think you missed the point of the article. If you have a magical portal to Narnia, I'd happily test until I was blue in the face and go as far back to old Athlon s939 CPUs. But the world moves faster than that.
  • deadrats - Tuesday, March 18, 2014 - link

    any chance of updating this article with some x265 and/or Divx265 benchmarks? hevc is much more processor intensive and threading friendly, so these encoders may be perfect for showing a greater separation between the various core configurations.
  • Ian Cutress - Tuesday, March 18, 2014 - link

    If you have an encoder in mind drop me an email. Click my name at the top of the article.
  • bobbozzo - Tuesday, March 18, 2014 - link

    Hi,

    1. please change the charts' headings on the first page to say 'Cores/Threads' instead of 'Cores'.

    2. it wasn't clear on the first page that this is talking about workstation CPUs.

    3. "Intel can push core counts, frequency and thus price much higher than in the consumer space"
    I would have said core counts and cache...
    Don't the consumer parts have the highest clocks (before overclocking)?

    Thanks!
  • bobbozzo - Tuesday, March 18, 2014 - link

    "it wasn't clear on the first page that this is talking about workstation CPUs."

    As opposed to servers.
  • Ian Cutress - Tuesday, March 18, 2014 - link

    1) I had it that way originally but it broke the table layout due to being too wide. I made a compromise and hoped people would follow the table in good faith.
    2) Generally Xeon in the name means anything Workstation and above. People use Xeons for a wide variety of uses - high end for workstations, or low end for servers, or vice versa.
    3) Individual core counts maybe, but when looking at 8c or 12c chips in the same power bracket, the frequency is still being pushed to more stringent requirements (thus lower yields/bin counts) vs. voltages. Then again, the E3-1290 does go to 4.0 GHz anyway, so in terms of absolute frequencies you can say (some) Xeons at least match the consumer parts.
