Competing Against Itself: 3.9 GHz Ice Lake-U on 10nm vs 4.9 GHz Comet Lake-U on 14nm

At the same time that Intel is releasing Ice Lake, we have confirmed from multiple sources that the company intends to release another generation of mobile products based on 14nm as well. This line of hardware, also branded as Intel 10th Gen Core, falls under the internal codename ‘Comet Lake’, and targets a similar power envelope to Ice Lake. There are a few differences in the design worth noting, and one big difference that Intel will have a hard time organizing its marketing materials around.

The differences between Ice Lake-U and Comet Lake-U are set to be quite confusing. Leaks from various OEMs about upcoming products give us the following:

Ice Lake: The Core i7-1065G7

Ice Lake-U hardware, based on 10nm, will be given a ‘G’ in the product name, such as the Core i7-1065G7. This breaks down as follows:

  • i7 = Core i7
  • 1065 = the ‘10’ denotes the 10th Gen Core family,
  • 1065 = the ‘65’ denotes its position relative to the rest of the Ice Lake processors,
  • G7 = ‘Graphics Level 7’, which we believe to be the highest.

Intel has stated that the Ice Lake-U hardware will come in at 9W, 15W, and 28W, as described in the previous pages, offering a peak turbo clock of 4.1 GHz and 64 EUs of Gen11 graphics, good for up to 1.1 TF of FP32 calculations. We suspect that the 4.1 GHz turbo frequency will be reserved for the 28W model, in line with previous Intel launches, which means that the 15W part is likely to turbo a few hundred MHz lower. Based on the Ice Lake plans we know of, it seems that Intel is only targeting up to quad-core designs, but Ice Lake does support LPDDR4. Due to the 10nm process and additional power refinements, Ice Lake hardware is expected to have a longer battery life than Comet Lake, although we will see this play out in product reviews through the year.
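As a sanity check on that throughput figure, peak FP32 rate falls out of the EU count, the ALUs per EU, and the graphics clock. A minimal sketch, assuming Gen11's commonly cited 8 FP32 ALUs per EU with FMA, and a roughly 1.1 GHz graphics clock (the clock is our assumption, not an Intel-stated spec):

```python
# Back-of-the-envelope peak FP32 throughput for a Gen11 iGPU.
# Assumptions (ours, not Intel-confirmed): 8 FP32 ALUs per EU,
# FMA counted as 2 FLOPs per ALU per cycle, ~1.1 GHz graphics clock.
def peak_fp32_tflops(eus: int, clock_ghz: float, alus_per_eu: int = 8) -> float:
    flops_per_cycle = eus * alus_per_eu * 2    # FMA = multiply + add
    return flops_per_cycle * clock_ghz / 1000  # GFLOPS -> TFLOPS

print(peak_fp32_tflops(64, 1.1))  # ~1.13 TF, in line with the ~1.1 TF figure
```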

Comet Lake: The Core i7-10510U

Contrast this with Comet Lake-U, which is another round of processors based on 14nm. OEMs have shed some light on these processors, which should offer up to six cores. The naming of the processors follows on from the 8th Gen and 9th Gen parts, but now falls under 10th Gen. This means that the Core i7-10510U breaks down as:

  • i7 = Core i7
  • 10510 = the ‘10’ denotes the 10th Gen Core family,
  • 10510 = the ‘51’ denotes its position relative to the rest of Comet Lake,
  • U = U-series processor, 15-28W

OEM listings have shown Comet Lake-U turboing up to 4.9 GHz on the best quad-core processor, while we have seen 9th Gen hardware hit 5.0 GHz in the larger H-series designs.

For a full side-by-side comparison:

Ice Lake-U vs Comet Lake-U

Ice Lake-U*                    | AnandTech        | Comet Lake-U*
10+ (10nm)                     | Lithography      | '14nm class'
i7-1065G7                      | Example CPU Name | i7-10510U
9W / 15W / 28W                 | TDP Options      | 15W / 28W? / Same as 9th Gen?
Up to 4C                       | Core Counts      | Up to 6C (expected)
Sunny Cove                     | CPU Core         | Skylake+++
Up to 64 EUs, Gen11            | GPU              | GT2 Core, up to 24 EUs, Gen9.5
3.9 GHz (15W) / 4.1 GHz (28W)  | Highest Turbo    | 4.9 GHz? (15W) / 5.0 GHz+ ?
DDR4-3200 / LPDDR4-3733        | DDR              | DDR4-2667 / LPDDR3-2133
AVX-512                        | AVX              | AVX2

*All details are not yet confirmed by Intel, but shown on partner websites/trusted sources

Should Intel go ahead with this naming scheme, it is going to send a cluster of mixed messages, even to end-users who understand the naming. For those who don't, there might not be an obvious way to tell a 10th Gen Ice Lake system apart from a 10th Gen Comet Lake system just by reading the specification sheet, especially if the vendor lists it simply as ‘10th Gen Core i7’.

Intel is trying to mitigate some of this with Project Athena, which is a specification for premium 10th Gen designs. Meeting the Athena specification technically doesn't require an Ice Lake processor, but Ice Lake definitely helps with the graphics and battery life targets. We're unsure at this point whether Intel will add distinct labeling to Athena-approved devices, but that might be one way to discern between the two. The other is to look for the letter: G means Ice Lake, U means Comet Lake.
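To make the ‘look for the letter’ rule concrete, here is a quick sketch of how a spec-sheet string could be sorted into a family. The function and the patterns are our own inference from the leaked part names, not anything Intel publishes:

```python
import re

# Illustrative only: classify a 10th Gen Core part by its suffix,
# based on the naming breakdowns above.
def classify_10th_gen(name: str) -> str:
    if re.search(r"-10\d{2}G\d$", name):   # e.g. i7-1065G7
        return "Ice Lake (10nm)"
    if re.search(r"-10\d{3}U$", name):     # e.g. i7-10510U
        return "Comet Lake (14nm)"
    return "unknown"

print(classify_10th_gen("i7-1065G7"))   # Ice Lake (10nm)
print(classify_10th_gen("i7-10510U"))   # Comet Lake (14nm)
```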

So the question becomes: what matters most to users?

If you want raw CPU frequency and cores, then Comet Lake still has the edge, even if we factor in Intel's claimed ‘+18%’ IPC uplift. It will all come down to how the turbo plays out in each device, and Intel states that it is working more closely than ever with its OEM partners to optimize for performance.
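A quick back-of-the-envelope illustration of why, treating peak turbo clock multiplied by claimed IPC as a very crude single-thread proxy (real results will depend on sustained turbo in each chassis):

```python
# Crude single-thread proxy: peak turbo clock x relative IPC.
# Assumes the leaked turbo figures hold and takes Intel's '+18%'
# IPC claim at face value.
comet_lake_proxy = 4.9 * 1.00   # Skylake-class IPC as the baseline
ice_lake_proxy   = 4.1 * 1.18   # Sunny Cove with the claimed uplift

print(f"Comet Lake: {comet_lake_proxy:.2f}")  # 4.90
print(f"Ice Lake:   {ice_lake_proxy:.2f}")    # ~4.84, still slightly behind
```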

Ice Lake systems on the other hand are going to offer better graphics, are initially all likely to be under the Project Athena heading, and provide good connectivity (Wi-Fi 6), good displays, and really nice battery life for the class of device. Ice Lake is going to play more in the premium space too, at least initially, which might indicate that Comet Lake could be angled down the price bracket.

To be honest, we should have been expecting this. When Dr. Murthy Renduchintala joined Intel a couple of years ago, he was quoted as saying that he wanted to disaggregate the ‘generation’ from the lithography, and offer a range of products within each generation. That campaign bore its first fruit with the last round of mobile platforms, and it will ripen fully through the Ice Lake/Comet Lake kerfuffle*. It's going to be agonizing to explain the difference to users, and even more so if OEMs do not list exact CPU specifications in their online listings. Intel has leaned so heavily on its two distinct brands, the ‘X’ Gen Core and the Core ‘i7/i5/i3’ naming, that both are now ultimately meaningless for differentiating between two very different types of products.

What should the solution be here? On first thought, I would have pushed Ice Lake as an 11th Gen Core. It's a new and exciting product, with an updated microarchitecture, better graphics, and leading-edge lithography; along with Project Athena, it needs to be categorically separated from any other processors it might be competing with. It's either that, or come up with an alternative naming scheme altogether. As it stands, Intel is heading into a sticky mess, where it is competing against itself, and the casual user who hasn't done meticulous research might not end up with the optimal product.

*To be clear, in the past, Intel used to separate product line microarchitecture based on the nth Gen Core designation. This does not happen anymore – a single ‘nth Gen Core’ brand might have 3+ different microarchitectures depending on what product you are looking at. It is ultimately confusing for any end-customers that have a passing knowledge of Intel’s product lines, and highly annoying to anyone with technical prowess discussing Intel’s products. I hate it. I understand it, but I hate it.

Comments

  • name99 - Wednesday, July 31, 2019 - link

    That’s an idiotic chain of reasoning.
    ARM Macs will ship with macOS, not iOS. To believe otherwise only reveals that you know absolutely nothing of how Apple thinks.

    As for comparison, the rough number is A12X gets ~5200 on GB4, Intel best (non-OC’d) gets ~5800. That’s collapsing lots of numbers down to one, but comparing benchmark by benchmark you see Apple does very well (almost matching Intel) across an awful lot.

    If Apple can maintain its past pace (and there is no reason why not...) we can expect A13X to be anywhere from 20% to 35% faster, which puts it well into “fastest [non-OC’d] CPU on earth” territory for most single-threaded use cases. Can they achieve this? Absolutely.
    Just process improvement can get them 10% frequency. I expect A13X to clock around 2.8GHz.
    Then there is LPDDR5 which I expect they will be using, so substantially improved memory bandwidth. Then I expect they'll have SVE (2x256) and accompanying that basically double the bandwidth all the way out from L1 to DRAM.
    These are just the obvious basics. There are a bunch of things they can still do that represent “fairly easy” improvements to get to that 25% or so. (These include more aggressive fusion, a double-pumped ALU, attached ALUs to load/store to allow load-ok and op-store fusion, a micro-op cache, long-term-parking, criticality prediction, ...)

    So, if it’s so easy, why doesn’t Intel also do it? Why indeed? That’s why I occasionally post my alternative rant about how INTC is no longer an engineering company, it is now pretty much purely a finance company...
  • ifThenError - Friday, August 2, 2019 - link

Sorry, but both these comments seem mighty uninformed. The MacBook Air and Pro currently, and for the foreseeable future, run on Intel CPUs. The Apple A12/A13 chips are used in the iPhone, iPad, and the like.

    And regarding your prediction, your enthusiasm seems way over the top. What are you even talking about? Micro-op cache on a RISC processor? Think again. Aren't RISC commands all micro ops already?
  • name99 - Sunday, August 4, 2019 - link

    Strong the Dunning-Kruger is with this one...
    Dude, seriously, learn something about MODERN CPU design, more than just buzz-words from the 80s.
    To get you started, how about you read
    https://www.anandtech.com/show/14384/arm-announces...
    and concentrate on understanding EVERY aspect of what's being added to the CPU and why.
    Note in particular that 1.5K Mop cache...

    More questions to ask yourself:
    - Why was 80s RISC obsessed with REDUCED instructions?
    - Why was ARM (especially ARMv8) NOT obsessed with that? Look at the difference between ARMv8 and, say, RISC-V.
    - Why is op-fusion so important a part of modern high performance CPUs (both x86 and ARM [and presumably RISC-V if they EVER ship a high-performance part, ha...])?
    - which are the fast (shallow logic, even if it's wide) and which are the slow (deep logic) parts of a MODERN pipeline?
  • ifThenError - Monday, August 5, 2019 - link

    Oh my, this is so entertaining you should charge for the reading.

You demand that we go beyond mere buzzwords (which would be good), while your posts look like entries in a contest for how many marketing phrases can fit into a paragraph.
    Then you even manage to combine this with highly rude idiom. Plus you name a psychological effect but fail to apply it to self-reflection. And as the cherry on top, you claim to understand „EVERY aspect“ of a CPU (an unimaginably complex bit of engineering) yet manage to confuse micro- and macro-op caches and the conceptual differences between them.

    I'm really impressed by your courage. Publicly posting so boldly on such a thin basis is brave.
Your comments add near zero information but are definitely worth the read. Pure comedy gold!

Please see this as an invitation to reply. I'm looking forward to more of your attempts at insult.
  • Techgeek43 - Tuesday, July 30, 2019 - link

    Fantastic article Ian, I for one, cannot wait for ice lake laptops
    Wonderful in-depth analysis, with an interesting insight into the Intel brand
  • repoman27 - Tuesday, July 30, 2019 - link

    "The high-end design with 64 execution units will be called Iris Plus, but there will be a ‘UHD’ version for mid-range and low-end parts, however Intel has not stated how many execution units these parts will have."

    Ah, but they have: Ice Lake-U Iris Plus (48EU, 64EU) 15 W, Ice Lake-U UHD (32EU) 15 W. So their performance comparisons may even be to the 15 W Iris Plus with 64 EUs, rather than the full fat 28 W version.

    I know you have access to the media slide decks, but Intel has also posted product briefs for the general public that contain a lot of this info: https://www.intel.com/content/www/us/en/products/d...

    "On display pipes, Gen11 has access to three 4K pipes split between DP1.4 HBR3 and HDMI 2.0b. There is also support for 2x 5K60 or 1x 4K120 with a 10-bit color depth."

    The three display pipes are not limited to 4K, and are agnostic of transport protocol—each of them can be output via the eDP 1.4b port, one of the 3 DDI interfaces which can support either DisplayPort 1.4 or HDMI 2.0b, or one of the up to 4 Thunderbolt 3 ports. Both HDMI and DP support HDCP 2.2, and DisplayPort also supports DSC 1.1. The maximum single pipe, single port resolution for HDMI is 4K60 10bpc (4:2:2), and for DisplayPort it's 4K120/5K60 10bpc (with DSC).

    Thunderbolt 3 integration for Ice Lake-Y is only up to 3 ports.
  • abufrejoval - Tuesday, July 30, 2019 - link

What I personally liked most about the GT3e (48 EU) and GT4e (72 EU) Skylake variant SoCs was that they didn't cost the extra money they should have, especially when you consider that the iGPU part completely dwarfs the CPU cores (which Intel makes you bleed for) and everything else put together (have a look at the WikiChip die layouts:
    https://en.wikichip.org/wiki/intel/microarchitectu...

Of course, significantly better graphics performance is never a bad thing, especially when it doesn't cost extra electrical power: the bigger iGPUs might actually have been more energy efficient than their GT2 brethren at a graphics load that pushed the GT2 toward its frequency limits. And in any case, if you aren't crunching graphics, the idle consumption is near perfect: one of the reasons most laptop dGPU designs won't even bother to run 2D on the dGPU any more, but leave that to Intel.

    The biggest downside was that you couldn't buy them outside an Apple laptop or Intel NUC.

But however much Intel goes into Apple mode (the major customer for these beefier iGPUs) in terms of "x times faster than previous", the results aren't going to turn ultrabooks with this configuration into "THD gaming machines".

To get a good feel for where these could go and whether they are worth the wait, just have a look at the Skull Canyon NUC6i7KYK review on this site: that SoC uses 72 EUs and 128MB of eDRAM and should put a pretty firm upper limit on what a 64 EU Ice Lake can do. Most of the games in that review are somewhat dated, yet fail to reach 20 FPS at THD.

So if you want to game on the device, you'd be much better off with a dGPU, however small, and choosing the smallest iGPU variant available. No reason to wait: Whiskey Lake + Nvidia will do better.

If you want real gaming performance, you need to put real triple-digit Watts and the bandwidth only GDDR5/6 or HBM can deliver to work, even at THD; but with remote gaming, perhaps it doesn't have to be on your elegant slim ultrabook. There again, anything but the GT2 configuration is wasted, because you only need the VPU part for decoding Google Stadia (or Steam Remote Play) streams, which is the same across all configurations.

For some strange reason, Intel has been selling GT3/4 NUCs at little or no premium over the GT2 variants, and in those cases I have been seriously tempted. And once I even managed to find a GT3e laptop for a GT2 price (while the SoC is literally twice as big and the die carrier even adds eDRAM at zero markup), which I still cherish.

But if prices are at all related to the surface area of the chip (as they are for the server parts), these high-powered GTs are something that only Apple users would buy.

That's another reason I (sadly) don't expect them to be sold in anything but Macs and some NUCs: no ChuWi notebooks or Mini-ITX boards.
  • abufrejoval - Tuesday, July 30, 2019 - link

    ...(need edit)

Judging from the first 10nm generation, GPUs were the part where obtaining economically feasible yields didn't work out. Unless they have really, really fixed 10nm, it's not hard to imagine that Intel could be selling high-EU-count SoCs to Apple below cost, to keep them as a flagship customer for another generation, and perhaps due to long-term contractual obligations.

But maintaining GT2/3/4 price parity for the rest of the market seems suicidal, even if you have a fab lead.

Not that I expect we'll ever be told: in near-monopoly situations, the so-called market economy becomes surprisingly complex.
  • willis936 - Wednesday, July 31, 2019 - link

    What the hell is a THD in this context?
  • jospoortvliet - Monday, August 5, 2019 - link

    Probably full HD (True HD)?
