Analyzing Performance Per Dollar

While measuring performance per watt is an interesting metric, workstation processors are at the top of the stack when it comes to power consumption: the point of these processors (typically) is getting work done fast, and their users don't mind using a bit more power to get there. When it comes to designing a workstation-level system for an office, the person who signs off on the project is not going to look at the performance per watt - they are going to look at the performance per dollar. Does this purchase represent the best value for our environment, and is it likely to give us the best return? The following graphs attempt to answer that question, and we've chosen a varied selection of tests, including variable-threaded loads.

For this analysis, we've taken all of the processors we have tested from Intel's 2017 launches. This covers all of the consumer-level Kaby Lake-S i7 and i5 parts (except the T models), the high-end desktop Kaby Lake-X processors, and all the high-end Skylake-X desktop parts. These are represented in the graphs in blue under the 'Intel 2017' title. From AMD, we have all eleven Ryzen processors tested, shown in red.

Compile Performance Per Dollar

One of our popular benchmarks is our compile test. This takes a fixed version of Chromium v56 and runs it through the MSVC compiler with linking, as per the directions given by the Chromium developers. A typical test can run from 40 minutes to 3 hours depending on the processor, and taxes the single-thread, multi-thread, and memory performance of the system. We've seen that it does not take particularly kindly to processors with victim caches, such as Skylake-X or AMD Ryzen, with limited scaling on this code workflow. Despite the per-core performance dip from Skylake-S to Skylake-X, the top processor still has the best absolute performance. Converting our data to the number of compiles per day per dollar gives the following graph:

The cheaper end of the graph is surprisingly high, dominated by Intel's dual-core Pentiums (with Hyper-Threading) offered at very low prices. In the standard 'consumer' price range below $300, the mid-range Ryzen processors have a slight advantage, but beyond the Ryzen 7 1700, Intel has the performance-per-dollar advantage all the way out to $2000.
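The conversion behind these per-dollar numbers can be sketched in a few lines; the compile times and prices below are illustrative placeholders, not our measured data:

```python
# Hedged sketch of the compiles-per-day-per-dollar conversion.
# The compile times and prices here are illustrative, not measured results.

def compiles_per_day_per_dollar(compile_minutes: float, price_usd: float) -> float:
    """Daily compile throughput divided by the processor's list price."""
    compiles_per_day = (24 * 60) / compile_minutes
    return compiles_per_day / price_usd

# Hypothetical example: a $999 part compiling in 60 minutes
# vs a $350 part taking 90 minutes.
print(compiles_per_day_per_dollar(60, 999))   # higher absolute throughput
print(compiles_per_day_per_dollar(90, 350))   # but better value per dollar
```

A slower but much cheaper part can therefore top the per-dollar chart while losing on absolute throughput, which is exactly the pattern the low-cost Pentiums show here.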

Agisoft Performance Per Dollar

The Agisoft Photoscan software has been a key part of our performance testing for several years, demonstrating a true office workflow: archival work that takes sets of 2D photographs and converts them into 3D models. This is a computationally interesting algorithm, involving multiple single-threaded and multi-threaded stages.

Due to the single threaded elements of the algorithm, linear scaling is not observed as we ramp up through the core counts. At the mid-range consumer processor pricing, the twelve-thread Ryzen 5 processors sit above the quad-thread Core i5 parts, but beyond $330 or so, as we move into Intel's quad-core offerings and above, the performance per dollar is solely on Intel's side.
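The limited scaling can be illustrated with Amdahl's law; the parallel fraction used below is a hypothetical value chosen for illustration, not one we have measured for Photoscan:

```python
# Amdahl's-law sketch: serial stages cap the speedup from extra cores.
# The parallel fraction p = 0.9 is a hypothetical value for illustration.

def amdahl_speedup(p: float, n_cores: int) -> float:
    """Upper bound on speedup when a fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n_cores)

# Even with 90% of the workload parallel, doubling cores shows diminishing returns:
for n in (4, 8, 16):
    print(n, round(amdahl_speedup(0.9, n), 2))
```

With a 10% serial portion, sixteen cores deliver well under half of a linear 16x speedup, which is why per-dollar value erodes as core counts (and prices) climb.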

Blender Performance Per Dollar

The Blender benchmark has been an interesting debate in the last few months, with the new architectures from Intel and AMD pushing updates into the code for faster rendering. Our test takes a well-used Blender release and one of the standard benchmarks (rather than anything vendor-specified). Results are given in renders of this benchmark per day per dollar.

Interestingly, AMD takes the peak PPD across the full range. At $999, where the competition is expected to be fiercest, AMD has over a 10% advantage. In the four-digit range, even though the PPD of Intel's processors is lower, the absolute performance is still better. For our Blender test, this translates to a few seconds over a 2-3 minute test.

Cinebench R15 nT Performance Per Dollar

Next we move into the pure multithreaded benchmarks, which represent a significant number of workstation workloads. Here Intel might be at a disadvantage, with AMD offering more cores and more threads at each price point - Intel's IPC advantage will have to offset this in order to move ahead.

To lay some background here: AMD has been plugging Cinebench R15 nT benchmark numbers since the launch of Zen, citing better PPD. Intel's rebuttal is that in absolute performance, when you need the absolute best results, their hardware still wins.

Corona Rendering Performance Per Dollar

Ray tracing is another example of lightweight threads taking advantage of more cores, more frequency, higher IPC, and accelerated instructions with fast FP throughput. Referring back to Johan's EPYC review, which showed AMD's strong base FP performance, this should be an interesting comparison.

Similar to some of the previous graphs, the best PPD is held by Intel's low-end Pentium processors - however, these do not give the best overall throughput. In the mainstream price range, the Ryzen 5 1600 and 1600X post surprising peak results. In the $500-$800 range, Intel and AMD are about equal; however, at $999 the Threadripper is ahead of the Core i9. Again, at the $1500+ range, Intel offers the better overall throughput, despite the lower PPD.


152 Comments


  • mapesdhs - Tuesday, September 26, 2017 - link

    In that case, using Intel's MO, TR would have 68. What Intel is doing here is very misleading.
  • iwod - Monday, September 25, 2017 - link

    If we factor in the price of the whole system rather than just the CPU (AMD's motherboards tend to be cheaper), then AMD is doing pretty well here. I am looking forward to next year's 12nm Zen+.
  • peevee - Monday, September 25, 2017 - link

    From the whole line, only 7820X makes sense from price/performance standpoint.
  • boogerlad - Monday, September 25, 2017 - link

    Can an IPC comparison be done between this and Skylake-S? Skylake-X LCC lost in some cases to Skylake-S, but is it due to the lack of L3 cache, or is it because the L3 cache is slower?
  • IGTrading - Monday, September 25, 2017 - link

    There will never be an IPC comparison of Intel's new processors, because all it would do is showcase how Intel's IPC actually went down from Broadwell, and further down with Kaby Lake.

    Intel's IPC is on a downtrend, and this is not really good for clicks and internet traffic.

    Even worse: it would probably upset Intel's PR, and that website will surely not be receiving any early review samples.
  • rocky12345 - Monday, September 25, 2017 - link

    Great review, thank you. This is how a proper review is done. Those benchmarks we saw of the 18-core i9 last week were a complete joke, since the guy had the chip overclocked to 4.2GHz on all cores, which really inflated the scores vs a stock Threadripper 16/32 CPU. That was very unrealistic from a cooling standpoint for end users.

    This review had stock for stock, and we got to see how both CPU camps performed in their out-of-the-box states. I was a bit surprised the mighty 18-core CPU did not win more of the benches, and when it did, it was not by very much most of the time. So a $1K CPU vs a $2K CPU, and the mighty 18-core did not perform like it was worth $1K more than the AMD 1950X, or the 1920X for that matter. Yes, the mighty i9 was a bit faster, but not $1000 faster, that is for sure.
  • Notmyusualid - Thursday, September 28, 2017 - link

    I too am interested to see 'out of the box' performance.

    But if you think ANYONE would buy this and not overclock - you'd have to be out of your mind.

    There are people out there running 4.5GHz on all cores, if you look for it.

    And what is with all this 'unrealistic cooling' I keep hearing about? You fit the cooling that fits your CPU. My 14C/28T CPU runs 162W 24/7 running BOINC, and is attached to a 480mm 4-fan all-copper radiator, and hand on my heart, I don't think it has ever exceeded 42C; it sits at 38C mostly.

    If I had this 7980XE, all I'd have to do is increase pump speed I expect.
  • wiyosaya - Monday, September 25, 2017 - link

    Personally, I think the comments about people that spend $10K on licenses having the money to go for the $2K part are not necessarily correct. Companies will spend that much on a license because they really do not have any other options. The high-end Intel part gets 30 to maybe 50 percent more performance on a select few benchmarks. I am not going to debate that that kind of performance improvement is significant, even though it is limited to a few benchmarks; however, to me that kind of increased performance comes at an extreme price premium, and companies that do their research on the capabilities of each platform vs price are not, IMO, likely to throw away money on a part just for bragging rights. IMO, a better place to spend that extra money would be on RAM.
  • HStewart - Monday, September 25, 2017 - link

    In my last job, they spent over $100k on a software versioning system.

    In workstation/server world they are looking for reliability, this typically means Xeon.

    Gaming computers are different: usually kids want them and have less money, and they always need the latest and greatest without caring about reliability - when a new graphics card comes out, they replace it. AMD is focusing on that market - which includes the Xbox One and PS4.

    For me, I am looking for something I can depend on and know will be around for a while. Not something that slaps multiple dies together to claim bragging rights for more cores.

    Competition is good, because it keeps Intel on its feet. I think if AMD did not purchase ATI, there would be no competition for Intel at all in the x86 market. But it was not all smart either - would anybody be serious about placing an AMD graphics card on an Intel CPU?
  • wolfemane - Tuesday, September 26, 2017 - link

    Hate to burst your foreign bubble, but companies are cheap in terms of staying within budgets, especially up-and-coming corporations. I'll use the company I work for as an example: a fairly large print shop with 5 locations along the US West coast that has been in existence since the early 70's, with about 400 employees in total. Servers, PCs, and general hardware only see an upgrade cycle once every 8 years (not all at once, it's spread out). Computer hardware is a big deal in this industry, and the head of IT for my company has done pretty well with this kind of hardware life cycle. First off, Macs rule here for preprocessing; we will never see a Windows-based PC for anything more than accessing the Internet. But when it comes to our servers, they are running some very old Xeons.

    As soon as the new fiscal year starts, we are moving to an epyc based server farm. They've already set up and established their offsite client side servers with epyc servers and IT absolutely loves them.

    But why did I bring up Macs? The company has a set budget for IT, and this and the next fiscal year had budget for company-wide upgrades. By saving money on the back end, we were able to purchase top-end graphics stations for all 5 locations (something like 30 new machines) - something we wouldn't have been able to do with the same layout from Intel. We are very much looking forward to our new servers next year.

    I'd say AMD is doing more than keeping Intel on their feet, Intel got a swift kick in the a$$ this year and are scrambling.
