Translating to IPC: All This for 3%?

Contrary to popular belief, increasing IPC is difficult. Attempting to ensure that each execution port is fed every cycle requires wide decoders, large out-of-order queues, fast caches, and the right execution port configuration. It might sound easy to pile it all on, but both physics and economics get in the way: the chip still has to be thermally efficient, and it has to make money for the company. Every generational design update goes for what is called the ‘low-hanging fruit’: the identified changes that give the most gain for the smallest effort. Reducing cache latency, for example, is rarely the easiest task, and to non-semiconductor engineers (myself included) it sounds like a lot of work for a small gain.

For our IPC testing, we use the following rules. Each CPU is allocated four cores with no extra threading, and power modes are disabled so that the cores run at one specific frequency only. The DRAM is set to what the processor officially supports, so in the case of the new CPUs that is DDR4-2933, and DDR4-2666 for the previous generation. I have recently seen threads disputing whether this is fair: this is an IPC test, not an instruction efficiency test. Official DRAM support is part of the hardware specification, just as much as the size of the caches or the number of execution ports. Running the two CPUs at the same DRAM frequency would give one of them an unfair advantage (in effect a memory overclock or underclock) and deviates from the intended design.
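As a quick illustration of why the test is set up this way, here is a minimal sketch in Python (with hypothetical scores, not our benchmark harness): with core count and frequency pinned, the ratio of two chips' scores becomes a reasonable proxy for their IPC ratio.

```python
# Illustrative only: with cores and clocks pinned, the score ratio
# approximates the IPC ratio between two CPUs.

def relative_ipc(score_new: float, score_old: float) -> float:
    """IPC of the new chip relative to the old one (1.0 = parity),
    assuming both scores come from the same core count and frequency."""
    return score_new / score_old

# Hypothetical scores at 4C/4T, 3.0 GHz: 103 vs 100 implies roughly +3% IPC.
print(f"{(relative_ipc(103.0, 100.0) - 1) * 100:+.1f}% IPC")
```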

So in our test, we take the new Ryzen 7 2700X, the first generation Ryzen 7 1800X, and the pre-Zen Bristol Ridge-based A12-9800, which also sits on the AM4 platform and uses DDR4. We set each processor to four cores, no multi-threading, and 3.0 GHz, then ran through some of our tests.

For this graph we have set the first generation Ryzen 7 1800X as our 100% marker, with the blue columns showing the Ryzen 7 2700X. The problem with trying to identify a 3% IPC increase is that 3% can easily fall within the noise of a benchmark run: if the caches are not warmed up before the run, performance can vary. As shown above, a good number of tests fall within that +/- 2% range.
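To make the noise argument concrete, the sketch below (hypothetical scores, not the review data) normalises each result to the 1800X baseline and flags anything inside an assumed +/-2% noise band.

```python
# Hypothetical scores: normalise the 2700X against the 1800X (100%) and
# flag results that sit inside an assumed +/-2% run-to-run noise band.

NOISE_BAND = 2.0  # percent, assumed run-to-run variance

def normalise(score_2700x: float, score_1800x: float) -> float:
    """Score of the 2700X as a percentage of the 1800X baseline."""
    return score_2700x / score_1800x * 100.0

results = {            # (2700X score, 1800X score), higher is better
    "Test A": (101.2, 100.0),
    "Test B": (104.1, 100.0),
    "Test C": (99.3, 100.0),
}

for name, (new, old) in results.items():
    pct = normalise(new, old)
    verdict = "within noise" if abs(pct - 100.0) <= NOISE_BAND else "real delta"
    print(f"{name}: {pct:.1f}% ({verdict})")
```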

However, for compute-heavy tasks there are 3-4% gains: Corona, LuxMark, Cinebench, and GeekBench are the ones here. We haven’t included the GeekBench sub-test results in the graph above, but most of those fall into the 2-5% range for gains.

If we take out the Cinebench R15 nT result and the GeekBench memory tests, the average across all of the tests comes out to a +3.1% gain for the new Ryzen 7 2700X. That is bang on the money for what AMD stated it would do.
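For reference, the averaging step amounts to something like the sketch below; the per-test numbers are placeholders rather than the review data, and the exclusions follow the text above.

```python
# Placeholder per-test gains (%), not the actual results from the review.
gains = {
    "Corona": 3.8,
    "LuxMark": 3.5,
    "Cinebench R15 nT": 22.0,   # excluded: likely an SMT effect, not pure IPC
    "GeekBench memory": 6.0,    # excluded: tracks DRAM speed, not core IPC
    "Other tests (averaged)": 2.0,
}

excluded = {"Cinebench R15 nT", "GeekBench memory"}
kept = [gain for test, gain in gains.items() if test not in excluded]
print(f"Average IPC gain: +{sum(kept) / len(kept):.1f}%")
```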

Cycling back to that Cinebench R15 nT result, which showed a 22% gain: we also did some IPC testing at 3.0 GHz with 8C/16T (which we could not compare to Bristol Ridge), and a few other tests there also showed 20%+ gains. This suggests that AMD may have also adjusted how it manages simultaneous multi-threading. This requires further testing.

AMD’s Overall 10% Increase

Given the benefits of the 12LP manufacturing process, a few editors internally have questioned exactly why AMD hasn’t redesigned certain elements of the microarchitecture to take advantage of it. Ultimately it would appear that the ‘free’ frequency boost made it worth reusing the same design: as mentioned previously, 12LP is based on 14LPP with a performance bump, and in the past it might not even have been marketed as a separate product line. So pushing through the same design is an easy win, allowing the teams to focus on the next major core redesign.

That all being said, AMD has already stated its intentions for the Zen+ core design: rolling back to CES at the beginning of the year, AMD said it wanted Zen+ and future products to go above and beyond the ‘industry standard’ of a 7-8% performance gain each year.

Clearly 3% IPC is not enough on its own, so AMD is combining that gain with a +250 MHz increase in peak frequency, worth about another 6%, plus better turbo behavior through Precision Boost 2 / XFR2. That adds up to roughly 10%, on paper at least. Benchmarks to follow.
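As a rough sanity check on that headline figure, compounding the two gains works out as below; the ~4.1 GHz previous-generation peak used for the frequency ratio is an assumption for illustration.

```python
# Back-of-the-envelope: compound a ~3% IPC gain with a ~6% peak frequency bump.
ipc_gain = 0.031          # +3.1% average IPC from the tests above
freq_gain = 250 / 4100    # +250 MHz over an assumed ~4.1 GHz previous peak

total = (1 + ipc_gain) * (1 + freq_gain) - 1
print(f"Combined gain: +{total * 100:.1f}%")   # roughly +9-10%, before turbo behaviour
```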

Comments

  • DisoRDeR4 - Thursday, April 19, 2018 - link

    Thanks for the review, but I noticed a minor error -- your AMD Ryzen Cache Clocks graph on the 3rd page shows data for the 2700X, but in the preceding text it is referred to as the 2800X.
  • IGTrading - Thursday, April 19, 2018 - link

    AMD wins all gaming benchmarks, hands down and does this at a real 105W TDP.

    In my opinion, it is not fair to say that Intel "wins" the single threaded scenarios as long as we see clearly that the 8700 and the 8700K have the "multi-core enhancement" activated and the motherboard allows them to draw 120W on a regular basis, like your own graphs show.

    Allow AMD's Ryzen to draw 120W max and auto-overclock, and only then would we have a fair comparison.

    In the end, I guess that all those that bought the 7700K and the 8700K "for gaming" are now very pissed off.

    The former have a 100% dead/un-upgradeable platform, while the latter spent a ton of money on a platform that was more expensive, consumes more power and will surely be rendered un-upgradeable soon by Intel :) AMD has already rendered it obsolete (from the "best of the best" POV), or at least the X370+8700K is now the clear second-best in 99% of the tests @ the same power consumption, while losing all price/performance comparisons.

    IMHO ... allowing the 8700 & 8700K to draw 120W instead of 65W / 95W and allowing auto-overclocking while the AMD Ryzen is not tested with equivalent settings is maybe the only thing that needs to be improved with regards to the fairness of this review.

    Thank you for your work Ian!
  • Luckz - Monday, April 23, 2018 - link

    The 2700X draws so much more than its fake on-paper TDP it's not funny. With XFR2 and PB2 of course.

    PBO can add even more.
  • Ninjawithagun - Thursday, April 19, 2018 - link

    Incorrect comparison. Why does every review keep making the same mistake?? It has nothing to do with price. Comparing like CPU architectures is the only logical course of action. 6 core/12 thread vs 8 core/16 thread makes no sense. Comparing the Intel 8700K 6 core/12 thread @ $347 to the AMD 2600X 6 core/12 thread @ $229.99 makes the most sense here. Once the proper math is done, AMD destroys Intel in performance vs. cost, especially when you game at any resolution higher than 1080P. The GPU becomes the bottleneck at that point, negating any IPC benefits of the Intel CPUs. I know this how? Simple. I also own an 8700K gaming PC ;-)
  • SmCaudata - Thursday, April 19, 2018 - link

    I'd like to see more scatterplots with performance versus cost. Also, total cost (MB+CPU+cooler if needed) would be ideal. Even an overall average of 99th percentile 4k scores in gaming (one chart) would be interesting.... hmmm maybe a project for the afternoon.
  • Luckz - Monday, April 23, 2018 - link

    The English-language version of the Tomshardware review has a million plots on the last page (14). 4K is completely irrelevant for plotting, though, since you're GPU-limited there.
  • Krysto - Thursday, April 19, 2018 - link

    Wrong. Performance at a given price level is absolutely a metric chip buyers care about - if not the MOST important metric.

    People usually think "Okay, I have this $300 budget for a CPU, which is the best CPU I can get for that money?" - It's irrelevant whether one has 4 cores or 8 cores or 16 cores. They will get the best CPU for the money, regardless of cores and threads.

    Compared core vs core or thread vs thread is just a synthetic and academic comparison. People don't actually buy based on that kind of thinking. If X chip has 15% better gaming performance than the Y chip for the same amount of money, they'll get the X chip, regardless of cores, threads, caches, and whatnot.
  • Ninjawithagun - Thursday, April 19, 2018 - link

    Incorrect. Cost vs. cost is only one of many factors to consider, and not the main one, especially if the competition has a processor of equal quality for much less. Comparing an Intel 6 core/12 thread CPU to an AMD 8 core/16 thread CPU makes absolutely no sense if you are measuring cost vs. performance. Your argument makes no sense, sorry.
  • fallaha56 - Thursday, April 19, 2018 - link

    Ok by your rationale we should compare Threadripper to 8700k too
  • Ninjawithagun - Thursday, April 19, 2018 - link

    Now you are just being stupid.
