Analyzing Performance Per Dollar

While measuring the performance per watt is an interesting metric, workstation processors sit at the top of the stack when it comes to power consumption: the point of these processors (typically) is getting work done fast, and their users don't mind using a bit more power to get there. When it comes to specifying a workstation-level system for an office, the person who signs off on the project is not going to look at the performance per watt - they are going to look at the performance per dollar. Does this purchase represent the best value for the environment it will be used in, and is it likely to give us the best return? The following graphs attempt to answer those questions, and we've chosen a varied selection of tests including variable-threaded loads.

For this analysis, we've taken all of the processors we have tested from Intel's 2017 launches. This covers all of the consumer-level Kaby Lake-S Core i7 and Core i5 parts (except the T models), the high-end desktop Kaby Lake-X processors, and all of the high-end Skylake-X desktop parts. These are shown in the graphs in blue under the 'Intel 2017' title. From AMD, we have all eleven AMD Ryzen processors tested, shown in red.

Compile Performance Per Dollar

One of our popular benchmarks is our compile test. This takes a fixed version of Chromium v56 and runs it through the MSVC compiler with linking, as per the directions given by the Chromium developers. A typical run can take anywhere from 40 minutes to 3 hours depending on the processor, and it taxes the single-thread, multi-thread, and memory performance of the system. We've seen that it does not take particularly kindly to processors with victim caches, such as Skylake-X or AMD Ryzen, showing limited scaling on this workload. Despite the per-core performance dip from Skylake-S to Skylake-X, the top processor still has the best absolute performance. Converting our data to the number of compiles per day per dollar gives the following graph:

The cheaper end of the graph is surprisingly high, dominated by Intel's dual-core Pentium (with hyperthreading) being offered for super low prices. In the standard 'consumer' price range below $300, the mid-range Ryzen processors have a slight advantage, but, beyond the Ryzen 7 1700, Intel has the performance per dollar advantage all the way out to $2000.
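
For readers who want to apply the same conversion to their own numbers, a minimal sketch of the compiles-per-day-per-dollar calculation is below. The compile times and list prices in it are illustrative placeholders, not our measured data.

```python
# Sketch: convert a measured Chromium compile time into compiles per day per dollar.
# The times and prices below are illustrative placeholders, not measured results.

SECONDS_PER_DAY = 24 * 60 * 60

def compiles_per_day_per_dollar(compile_seconds: float, price_usd: float) -> float:
    """Higher is better: how many full compiles per day each dollar buys."""
    compiles_per_day = SECONDS_PER_DAY / compile_seconds
    return compiles_per_day / price_usd

# Hypothetical example: a $999 part finishing in 50 minutes
# versus a $329 part finishing in 95 minutes.
print(compiles_per_day_per_dollar(50 * 60, 999))   # ~0.029
print(compiles_per_day_per_dollar(95 * 60, 329))   # ~0.046
```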

Agisoft Performance Per Dollar

The Agisoft Photoscan software has been a key part of our performance testing for several years, demonstrating a real office workflow: taking 2D photographs and converting them into 3D models for archival purposes. This is a computationally interesting algorithm, involving multiple single-threaded and multi-threaded stages.

Due to the single-threaded elements of the algorithm, linear scaling is not observed as we ramp up through the core counts. At mid-range consumer processor pricing, the twelve-thread Ryzen 5 processors sit above the quad-thread Core i5 parts, but beyond $330 or so, as we move into Intel's quad-core offerings and above, the performance per dollar is solely on Intel's side.
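
The shape of that scaling is what Amdahl's law predicts: once part of the workflow is stuck on a single thread, extra cores only speed up the remainder. A rough sketch is below; the 30% serial fraction is an illustrative assumption, not a measured figure for Photoscan.

```python
# Amdahl's law sketch: speedup from N cores when a fraction of the work is serial.
# The 30% serial fraction is an assumption for illustration only.

def amdahl_speedup(cores: int, serial_fraction: float) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (4, 8, 16):
    print(cores, round(amdahl_speedup(cores, 0.30), 2))
# 4 -> 2.11x, 8 -> 2.58x, 16 -> 2.91x: each doubling of cores buys less,
# which is why performance per dollar flattens out at the high end.
```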

Blender Performance Per Dollar

The Blender benchmark has been an interesting debate over the last few months, with the new architectures from Intel and AMD pushing updates into the code for faster rendering. Our test takes a well-used Blender release and one of the standard benchmark scenes (rather than anything vendor-specified). Results are given as renders of this benchmark per day per dollar.

Interestingly, AMD takes the peak PPD across the full range. At $999, where the competition is expected to be fiercest, AMD has over a 10% advantage. In the four-digit range, even though the PPD of Intel's processors is lower, the absolute performance is still better. For our Blender test, this translates to a few seconds over a 2-3 minute run.

Cinebench R15 nT Performance Per Dollar

Next we move into the pure multithreaded benchmarks, which represent a significant number of workstation workloads. Here Intel might be at a disadvantage, with AMD offering more cores and more threads at each price point - Intel's IPC advantage will have to offset this in order to move ahead.

To lay some background here: AMD has been promoting Cinebench R15 nT benchmark numbers since the launch of Zen, citing better PPD. Intel's rebuttal is that in absolute performance, when you need the best possible results, its hardware still wins.
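
Both claims can be true at the same time: a cheaper part can win on PPD while the more expensive part still finishes first. The sketch below uses invented scores and prices purely to illustrate the trade-off.

```python
# PPD versus absolute performance: both vendors can be "right".
# Scores and prices are invented for illustration only.

def score_per_dollar(score: float, price_usd: float) -> float:
    return score / price_usd

cheaper = {"name": "16-core part", "score": 3000, "price": 999}
faster = {"name": "18-core part", "score": 3300, "price": 1999}

for cpu in (cheaper, faster):
    print(cpu["name"], cpu["score"], round(score_per_dollar(cpu["score"], cpu["price"]), 2))
# The cheaper part wins on score per dollar (~3.00 vs ~1.65),
# but the pricier part still posts the higher absolute score.
```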

Corona Rendering Performance Per Dollar

Ray tracing is another example of a highly threaded workload that takes advantage of more cores, higher frequencies, higher IPC, and accelerated instructions with fast FP throughput. Referring back to Johan's EPYC review, which showed AMD's strong base FP performance, this should be an interesting comparison.

Similar to some of the previous graphs, the best PPD is held by Intel's low-end Pentium processors - however, these do not give the best overall throughput. In the mainstream price range, the Ryzen 5 1600 and 1600X post surprising peak results. In the $500-$800 range, Intel and AMD are about equal, however at $999 the Threadripper is ahead of the Core i9. Again, in the $1500+ range, Intel offers the better overall throughput, despite the lower PPD.

Comments

  • mapesdhs - Monday, September 25, 2017 - link

    Ian, thanks for the great review! Very much appreciate the initial focus on productivity tasks, encoding, rendering, etc., instead of games. One thing though, something that's almost always missing from reviews like this (ditto here): how do these CPUs behave for platform stability with max RAM, especially when oc'd?

    When I started building oc'd X79 systems for prosumers on a budget, they often wanted the max 64GB. This turned out to be more complicated than I'd expected, as reviews and certainly most oc forum "clubs" achieved their wonderful results with only modest amounts of RAM, in the case of X79 typically 16GB. Mbd vendors told me published expectations were never with max RAM in mind, and it was "normal" for a mbd to launch without stable BIOS support for a max RAM config at all (blimey). With 64GB installed (I used two GSkill TridentX/2400 4x8GB kits), it was much harder to achieve what was normally considered a typical oc for a 3930K (mbd was the ASUS P9X79 WS, basically an R4E but with PLEX chips and some pro features), especially if one wanted the RAM running at 2133 or 2400. Talking to ASUS, they were very helpful and advised on some BIOS tweaks not mentioned in their usual oc guides to specifically help in cases where all RAM slots were occupied and the density was high, especially a max RAM config. Eventually I was able to get 4.8GHz with 64GB @ 2133.

    However, with the help of an AE expert (this relates to the lack of ECC I reckon), I was also able to determine that although the system could pass every benchmark I could throw at it (all of toms' CPU tests for that era, all 3DMark, CB, etc.), a large AE render (gobbles 40GB RAM) would result in pixel artefacts in the final render which someone like myself (not an AE user) would never notice, but the AE guy spotted them instantly. This was very interesting to me and not something I've ever seen mentioned in any article, i.e. an oc'd consumer PC can be "stable" (benchmarks, Prime95 and all the rest of it), but not correct, i.e. the memory is sending back incorrect data, but not in a manner that causes a crash. Dropping the clock to 4.7 resolved the issue. Tests like P95 and 3DMark only test parts of a system; a large AE render hammered the whole lot (storage, CPU, RAM and three GTX 580s).

    Thus, could you or will you be able at some point to test how these CPUs/mbds behave with the max 128GB fitted? I suspect you'd find it a very different experience compared to just having 32GB installed, especially under oc'd conditions. It stresses the IMCs so much more.

    I note the Gigabyte specs page says the mbd supports up to 512GB with Registered DIMMs; any chance a memory corp could help you test that? Mind you, I suspect that without ECC, the kind of user who would want that much RAM would probably not be interested in such a system anyway (XEON or EPYC much more sensible).

    Ian.
  • peevee - Monday, September 25, 2017 - link

    "256 KB per core to 1 MB per core. To compensate for the increase in die area, Intel reduced the size of the size of the L3 from 2.5 MB per core to 1.375 MB per core, keeping the overall L2+L3 constant"

    You might want to check your calculator.
  • tygrus - Monday, September 25, 2017 - link

    Maybe Intel saw the AMD TR numbers and had to add 10-15% to their expected freqs. Sure, there is some power that goes to the CPU which ends up in RAM et al., but these are expensive room heaters. Intel marketing bunnies thought 165W looked better than 180W to fool the customers.
  • eddieobscurant - Monday, September 25, 2017 - link

    Wow! Another pro-Intel review. I was expecting this, but having graphs displaying Intel's perf/$ advantage, just wow, you've really outdone yourselves this time.

    Of course I wanted to see how long you are gonna keep delaying the gaming benchmarks of Intel's Core i9 due to the mesh arrangement's horrid performance. I guess you're expecting game developers to fix what can be fixed. It's already been several months, but with Ryzen you were displaying its few issues since day 1.

    You tested amd with 2400mhz ram , when you know that performance is affected with anything below 3200mhz.

    Several different intel cpus come and go into your graphs only to show that a different intel cpu is better when core i9 lacks in performance and an amd cpu is better.

    Didn't even mention the negligible performance difference between the 7960X and 7980XE. Just take a look at the Phoronix review.

    Can this site even get any lower? Anand's name is the only thing keeping it afloat.
  • mkaibear - Tuesday, September 26, 2017 - link

    Erm, there are five graphs on the performance/$ page, and three of them show AMD with a clear perf/$ advantage in everything except the very top end and the very bottom end (and one of the other two is pretty much a tie).

    ...how can you possibly call that a pro-Intel review?
  • wolfemane - Tuesday, September 26, 2017 - link

    And why the heck would you want game reviews on these CPUs anyway? By now we KNOW what the results are gonna be, and they won't be astonishing. More than likely they will be under a 7700K. Game benchmarks are utterly worthless for these CPUs, and any surprise on the reader's part at their lack of overall performance in games is the reader's fault for not paying attention to previous reviews.
  • Notmyusualid - Tuesday, September 26, 2017 - link

    Sorry to distract gents (and ladies?), and even though I am not a fan of liquid nitrogen, here:

    http://www.pcgamer.com/overclocked-core-i9-7980xe-...
  • gagegfg - Tuesday, September 26, 2017 - link

    EPYC 7551P vs core i9 7980XE

    That is the true comparison, or not?
    $2000 vs $2000
  • IGTrading - Tuesday, September 26, 2017 - link

    That's a perfectly valid comparison with the exception of the fact that Intel's X299 platform will look completely handicapped next to AMD's EPYC based solution and it will have just half of the computational power.
