Performance Claims: +18% IPC vs. Skylake, +47% Performance vs. Broadwell

With every new product generation, the company releasing it has to set some level of expectation for performance. Depending on the company, you’ll either get a high-level number summarizing performance, or you’ll get reams and reams of benchmark data. Intel did both, especially with a headline ‘+18%’ value, but in recent months the company has also been on a charge about what sort of benchmarking is worth doing. I want to take a quick diversion down that road and give my thoughts on the matter.

First, I want to define some terms, just so we’re all on the same page.

  • A synthetic test is a benchmark engineered to probe a feature of the processor, often to find its peak capability in one or several specific tasks. A synthetic test does not often reflect a real-world scenario, and likely doesn’t use real-world software. Synthetic benchmarks are designed to be stable and repeatable, and the analysis often describes how a processor performs in an ideal scenario.
  • A real-world test uses software that the user actually ends up using, along with a representative workload for that software. These tests are usually the most applicable to end-users looking to purchase a product, as they show actual use-case results. Real-world tests have obvious pitfalls: it can be hard to test across multiple machines with only a single license, and performance in one piece of software is no guarantee of performance in another.

A typical analysis of a processor does two things: it shows what the chip can do (synthetic) and how it performs (real-world). Users interested in the development of a platform and how it will expand and grow, engineers peering over the fence, or investors looking at the direction the company is going will look at what the products can do. People deciding what to buy and what to work with are more interested in how it performs. Reviewers should get this concept, and companies like Intel should get it too; with Intel hiring a number of ex-reviewers of late, that understanding is coming through.

A couple of months ago, Intel approached a subset of reviewers to discuss best benchmarking practices. On the table were real-world benchmarks, and which benchmarks represent the widest slice of the market. Under fire was Cinebench, a semi-synthetic test (it uses a real-world engine on example data) that Intel believed didn’t represent the performance of a processor.

Intel provided data from one of its commissioned surveys on the software that people use. The data covered all types of consumers, from entry-level users up to prosumers, casual gamers, and enthusiasts, and also took in commercial use cases. At the top of the list were the obvious examples, such as the OS and browsers: Explorer.exe, Edge, Chrome. In the top set were important, widely distributed software packages, such as Photoshop (all versions), Steam, WinRAR, Office programs, and popular games like Overwatch. The point Intel was trying to make with this list is that a lot of reviewers run software that isn’t popular, and should aim to cover as wide a market as possible.

The key point Intel was trying to make was that Cinebench, while based on Cinema4D and a rendering tool used by a portion of the community, isn’t the be-all and end-all of performance. This is where Intel’s explanation became bifurcated: despite this being a discussion on what benchmarks reviewers should consider using, Intel’s perspective was that citing a single number, as Intel’s competitors have done, doesn’t represent true performance in all use cases. There was a general feeling that users were taking single numbers like this and jumping to conclusions. So despite the fact that the media in the room all test multiple software angles, Intel was clear that it didn’t want a single number to dominate the headlines, especially when it comes from software that is ranked (according to Intel’s survey) somewhere in the 1400s.

Needless to say, Intel got a bit of backlash from the press in the room at the time. Key criticisms were that those present, when they get hardware, test a variety of software, not just Cinebench, in order to give a more rounded view. Another issue was that the survey covered all users, from consumer to commercial to workstation: a number of the press in the room have audiences that are enthusiasts, so they tailor their benchmark suites accordingly. There was also a discussion that a number of the software packages listed in the top 100 are actually difficult to benchmark, due to licensing arrangements designed to stop repeated installs across multiple systems. Typically most software vendors aren’t interested in working with the benchmarking community to help evaluate performance, in case it exposes deficiencies in their code base. There was also the way in which readers are adapting over time: most focused readers want their specific software tested, and because it is impossible to test 50 different packages, a few that can be streamlined into a benchmark suite are used as a representative sample; Cinebench is typically one of those in the rendering arena, alongside POV-Ray, Corona, and others.

Intel, at this stage in the discussion, still went on to show how the new hardware performs in a variety of tests. We’ve covered these images before on previous pages, but Intel claimed a significant uplift in graphics performance compared to the current 14nm offerings, from 40% up to 108%:

As well as comparisons to the competition:

Aside from 3DMark, these are all ‘real-world’ tests.

Move forward a few weeks to Intel’s Tech Day, where Ice Lake is discussed in depth, and Intel brings up IPC.

Intel’s big statement is that Sunny Cove, a 2019 product, offers 18% more instructions per clock than Skylake, a 2015 product. In order to come to that conclusion, as expected, Intel has to turn to synthetic testing: SPEC2006, SPEC2017, SYSmark 2014 SE, WebXPRT, and Cinebench R15. Wait, what was that last one? Cinebench?

So there are two topics to discuss here.

First is the 18% increase over four years; that’s equivalent to a 4.2% compound annual growth rate. Some users will say that we should have had more, and that Intel’s issues with its 10nm manufacturing process mean this should have been a 2017 product (which would have made it an 8.6% CAGR). Ultimately Intel built up enough of an IPC lead over the last decade to afford something like this, and it shows that there isn’t an IPC wall just yet.
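
As a quick check on that arithmetic, here is a minimal sketch of my own (Python, not from Intel’s material) computing the compound annual growth rate implied by an 18% gain over a given number of years:

```python
def cagr(total_gain: float, years: int) -> float:
    """Compound annual growth rate implied by a total gain over a number of years."""
    return (1.0 + total_gain) ** (1.0 / years) - 1.0

# +18% spread over four years (Skylake in 2015 to Sunny Cove in 2019)
print(f"{cagr(0.18, 4):.1%}")  # -> 4.2%

# The same +18% over two years (had 10nm landed in 2017)
print(f"{cagr(0.18, 2):.1%}")  # -> 8.6%
```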

Second is the use of Cinebench, and the previous version at that. Given what was discussed above, various conclusions could be drawn. I’ll leave those up to you. Personally, I wouldn’t have included it.

Aside from IPC, Intel also spoke about the actual single-threaded performance of Sunny Cove in its 15W mode.

At a brief glance, I would have expected this graph to be from real-world analysis. But the blurb at the bottom shows that these results are derived from SPEC2006, specifically 1-thread int_rate_base, which means these are synthetic results, so we’ll analyze them with that in mind. This test also benefits heavily from turbo, with each sub-test likely to fit inside the turbo window of an adequately cooled system.

The baseline here is Broadwell, Intel’s 5th Generation processor, which if you remember had an integrated FIVR on the mobile parts for power efficiency. In this case Intel puts Skylake at +9% above Broadwell, and then moving through Kaby Lake and Whiskey Lake we see the effect of increasing the peak turbo frequency and power budget: when we moved from dual-core to quad-core 15W mobile processors, the peak turbo power budget increased from 19W to 44W, allowing longer turbo. Overall we hit +42% for 8th Gen Whiskey Lake over Broadwell.

Ice Lake, by comparison, is +47% over Broadwell. For anyone moving from Broadwell to Ice Lake, which Intel expects most of its users to do, that’s a sizable single-threaded performance jump; I won’t dispute that, although I will wait until we see real-world data before coming to a firmer conclusion.

However, if we compare Ice Lake to Whiskey Lake, we see only a +3.5% increase in single-threaded performance. For a generation-on-generation increase, that’s even lower than the four-year CAGR from Skylake. Some of you might be questioning why this is happening, and it all comes down to frequency.
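
For a quick sanity check, the +3.5% falls straight out of the Broadwell-relative figures quoted above; a short worked example (my own, in Python) makes the arithmetic explicit:

```python
# Single-threaded SPEC2006 uplifts relative to Broadwell, as read off Intel's slide
skylake      = 1.09   # +9%
whiskey_lake = 1.42   # +42%
ice_lake     = 1.47   # +47%

# Portion of the gain delivered after Skylake by frequency and power budget alone
print(f"{whiskey_lake / skylake - 1.0:.1%}")   # -> ~30.3%

# Generation-on-generation gain of Ice Lake over Whiskey Lake
print(f"{ice_lake / whiskey_lake - 1.0:.1%}")  # -> ~3.5%
```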

Intel’s current 8th Gen Whiskey Lake tops out at a peak turbo frequency of 4.8 GHz (on the Core i7-8665U). In 15W mode, we understand that the peak frequency of Ice Lake is under 4.0 GHz, essentially handing Whiskey Lake a ~20% frequency advantage.
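
To put rough numbers on that trade-off, here is a back-of-the-envelope sketch (my own, not Intel’s math), assuming both parts sit at their peak turbo for the whole test and that Whiskey Lake’s per-clock performance is Skylake-class:

```python
ipc_gain = 1.18           # Sunny Cove vs. a Skylake-class core, per Intel's claim
whiskey_lake_turbo = 4.8  # GHz, peak turbo quoted above
ice_lake_turbo = 4.0      # GHz, upper bound of "under 4.0 GHz" in 15W mode

# Whiskey Lake's raw frequency advantage
print(f"{whiskey_lake_turbo / ice_lake_turbo - 1.0:.0%}")             # -> 20%

# Naive Ice Lake vs. Whiskey Lake estimate: higher IPC times lower clock
print(f"{ipc_gain * ice_lake_turbo / whiskey_lake_turbo - 1.0:.1%}")  # -> ~-1.7%
```

Taken at face value, that naive product lands around parity rather than at the +3.5% on the slide, a reminder that these tests don’t spend their whole run at peak turbo; either way, the frequency deficit is eating most of the IPC gain.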

If this sounds odd, turn over to the next page. Intel is going to start tripping over itself with its new product lines, and we’ll do the math.

Comments

  • s.yu - Thursday, August 1, 2019 - link

    "Charge 4+hrs in 30 mins"
    ...Ok, I think "4+hrs battery life under 30 min. charging" sounds better, or just Intel's version.
  • 29a - Thursday, August 1, 2019 - link

    "Should Intel go ahead with the naming scheme, it is going to offer a cluster of mixed messages."

    I believe the word you are looking for there is clusterfuck.
  • ifThenError - Friday, August 2, 2019 - link

    Too bad the article doesn't state any further details about the HEVC encoders. It would be interesting to hear whether Intel only improved the speed or if they also worked on compression and quality.

    I bought a Gemini Lake system last year to try the encoding in hardware and have had very mixed feelings about Intel's Quick Sync since. The encoding speed is impressive with the last generation already, all while the CPU and GPU sit practically idle. On the downside, the image quality and compression ratio are highly underwhelming and nowhere near usable for “content creation“ or even mere transcoding. It suffices for video calls at best. Even encoding H.264 in software achieves far better compression efficiency while being not much slower on a low-end CPU.

    IIRC Intel promised some “quality mode” for their upcoming encoders, but I can't remember if that was for the gen11 graphics.
  • intel_gene - Friday, August 2, 2019 - link

    There is some information on GNA available. It is accessed through Intel's OpenVINO.
    https://docs.openvinotoolkit.org/latest/_docs_IE_D...
    https://github.com/opencv/dldt/tree/2019/inference...
    There is some background information here:
    https://sigport.org/sites/default/files/docs/Poste...
  • urbanman2004 - Friday, August 2, 2019 - link

    I wonder what happens to Project Athena if none of the products released by the vendor partners/OEMs meet the criteria that Intel's established.
  • GreenReaper - Saturday, August 3, 2019 - link

    Plagues of snakes, owls, eagles, Asari, etc.
  • gambita - Monday, August 5, 2019 - link

    Nice of you to do Intel's bidding and help promote their PR.
  • HikariWS - Sunday, August 11, 2019 - link

    These improvements to serial performance are great; it's awesome to have bigger buffers and more execution units. But clock speed looks like a big drawback.

    I'm sure clock issues are the reason we won't have any Ice Lake on desktop, and why Comet Lake sits alongside it on laptops in the same generation. But why no 6-core Ice Lake? That raised a big warning sign for me.

    What also caught my attention is its iGPU performance. Most mid-range and above laptops are using an Nvidia GPU. That's sad for those of us who want performance but won't game on it, because mid-range laptops are already all coming with an Nvidia GPU, which makes them more expensive.

    Now I hope to see these segments using the Intel iGPU and not an Nvidia GPU anymore. Good for us, with less money wasted on hardware we don't need; bad for Nvidia.
  • nils_ - Wednesday, August 14, 2019 - link

    Can you please stop eating the chips? Yield must be bad enough as it is!
