Conclusion: Is Intel Smothering AMD in Sardine Oil?

Whenever a new processor family is reviewed, it is easy to get caught up in the metrics. More performance! Better power consumption! Increased efficiency! Better clock-for-clock gains! Amazing price! Any review through a singular lens can fall into the trap of only focusing on that specific metric. So which metrics matter more than others? That depends on who you are and what the product is for. 

Tiger Lake is a mobile processor, featuring Intel's fastest cores and new integrated graphics built on an updated manufacturing process. This processor will be cast into the ultra-premium notebook market, as it carries the weight of the best Intel has to offer across a number of its engineering groups. Intel is actively working with its partners to build products that offer the best performance in this segment, right up to the point where a discrete GPU becomes absolutely necessary.

As a road warrior, pairing the right performance with power efficiency is a must. In our benchmarks, thanks to the new process node technology as well as the updated voltage/frequency scaling, Tiger Lake not only offers better performance at the same power as Ice Lake, but also extends the range of performance beyond it, assisted by the much higher turbo frequency of 4.8 GHz. When Tiger Lake gets into retail systems, particularly at the 15 W level, it will be fun to see what sort of battery life improvements are observed in real-world workflows.

As an engineer, genuine clock-for-clock performance gains get me excited. Unfortunately, Tiger Lake doesn't deliver much on this front, and in some cases we see regressions due to the rearranged cache, depending on the workload. This metric ignores power, but power is the metric on which Tiger Lake wins. Intel hasn't been keen to talk about raw clock-for-clock performance, and perhaps understandably so (from a pure end-user product point of view, at any rate).

Tiger Lake also brings security updates, such as Control-Flow Enforcement Technology, which is a good thing; however, these are held behind the vPro versions, creating additional segmentation in the product stack on the basis of security features. I’m not sure I approve of this: it potentially leaves the non-vPro parts less secure in order to upsell business customers on the benefit.

The new Tiger Lake still falls down against the competition when we start discussing raw throughput tests. Intel was keen to promote professional workflows with Tiger Lake, or gaming workflows such as streaming, particularly at 28 W rather than at 15 W. Despite this, we can easily see that the 15 W Renoir options with eight cores can blow past Tiger Lake in a like-for-like scenario in our rendering tests and our scalable workloads. The only times Intel scores a win are down to accelerator support (AVX-512, DP4a, DL Boost). On top of that, Renoir laptops in the market are likely to sit in a cheaper price bracket than what Intel seems to be targeting.

If Intel can convince software developers to jump on board with its accelerators, then customers and Intel’s metrics will both benefit. The holy grail may be OneAPI, enabling programmers to target different aspects of Intel’s ecosystem under the same toolset. However, OneAPI is only just reaching v1.0, and building a software base around it will take a few years to get off the ground.

For end-user performance, Tiger Lake is going to offer a good performance improvement over Ice Lake, or the same performance at less power. It’s hard to ignore. If Intel’s partners can fit 28 W versions of the silicon into the 15 W chassis they were using for Ice Lake, then it should provide for a good product.

We didn’t have too much time to go into the performance of the new Xe-LP graphics, although it was clear that the 28 W mode gets a good performance lift over the 15 W mode, perhaps indicating that DG1 (the discrete graphics coming later) is worth looking out for. Against AMD’s best 15 W mobile processor and integrated graphics, our results at lower resolutions were skewed towards AMD, while the higher resolutions were mostly wins for Intel; it seemed to vary a lot depending on the game engine.

As a concept, Tiger Lake’s marketing frustrates me. Not offering apples-to-apples data points, and claiming that TDP isn’t worth defining as a single point, demonstrates the lengths Intel believes it has to go to in order to redefine its market and obfuscate direct comparisons. There was a time and a place where Intel felt the need to share everything, as much as possible, with us. It let us sculpt the story of where we envisaged the market was going, and OEMs/customers were on hand to add their comments about the viewpoints of the customer base from their perspective. It let us as the press filter back with comments, critiques, and suggestions. The new twist from Intel’s client division, one that has been progressing along this quagmire path for a while, will only serve to confuse its passionate customer base, its enthusiasts, and perhaps even the financial analysts.

However, if we’re just talking about the product, I’m in two minds about Tiger Lake. It doesn’t give those raw clock-for-clock performance gains that I’d like, mostly because the CPU cores are almost the same design as Ice Lake’s, but the expansion of the range of performance coupled with the energy efficiency improvements makes it a better product overall. I didn’t believe the efficiency numbers at first, but successive tests showed good gains from Intel's manufacturing side as well as the silicon design and the power flow management. On top of that, the new Xe-LP graphics seem exciting, and warrant a closer inspection.

Tiger Lake isn’t sardine oil basting AMD just yet, but it stands to compete well in a number of key markets.


  • blppt - Saturday, September 26, 2020 - link

    Sure, the box sitting right next to my desk doesn't exist. Nor the 10 or so AMD cards I've bought over the past 20 years.

    1 5970
    2 7970s (for CFX)
    1 Sapphire 290x (BF4 edition, ridiculously loud under load)
    2 XFX 290s (much better cooler than the BF4 290x; mistakenly bought when I thought they would accept a flash to 290x, but got the wrong builds; for CFX)
    2 290x 8gb sapphire custom edition (for CFX, much, much quieter than the 290x)
    1 Vega 64 watercooled (actually turned out to be useful for a Hackintosh build)
    1 5700xt stock edition

    Yeah, I just made this stuff up off the top of my head. I guarantee I've had more experience with AMD videocards than the average gamer. Remember the separate CFX CAP profiles? I sure do.

    So please, tell me again how I'm only a Nvidia owner.
  • Santoval - Sunday, September 20, 2020 - link

    If the top-end Big Navi is going to be 30-40% faster than the 2080 Ti then the 3080 (and later on the 3080 Ti, which will fit between the 3080 and the 3090) will be *way* beyond it in performance, in a continuation of the status quo of the last several graphics card generations. In fact it will be even worse this generation, since Big Navi needs to be 52% faster than the 2080 Ti to even match the 3070 in FP32 performance.

    Sure, it might have double the memory of the 3070, but how much will that matter if it's going to be 15 - 20% slower than a supposed "lower grade" Nvidia card? In other words "30-40% faster than the 2080 Ti" is not enough to compete with Ampere.

    By the way, we have no idea how well Big Navi and the rest of the RDNA2 cards will perform in ray-tracing, but I am not sure how that matters to most people. *If* the top-end Big Navi has 16 GB of RAM, it costs just as much as the 3070 and is slightly (up to 5-10%) slower than it in FP32 performance but handily outperforms it in ray-tracing performance then it might be an attractive buy. But I doubt any margins will be left for AMD if they sell a 16 GB card for $500.

    If it is 15-20% slower and costs $100 more, no one but those who absolutely want 16 GB of graphics RAM will buy it; and if the top-end card only has 12 GB of RAM, there goes the large-memory incentive as well.
  • Spunjji - Sunday, September 20, 2020 - link

    @Santoval, why are you speaking as if the 3080's performance characteristics are not already known? We have the benchmarks in now.

    More importantly, why are you making the assumption that AMD need to beat Nvidia's theoretical FP32 performance when it was always obvious (and now extremely clear) that it has very little bearing on the product's actual performance in games?

    The rest of your speculation is knocked out of whack by that. The likelihood of an 80 CU RDNA 2 card underperforming the 3070 is nil. The likelihood of it underperforming the 3080 (which performs like twice a 5700, non-XT) is also low.
  • Byte - Monday, September 21, 2020 - link

    Nvidia probably had a good idea how it performs, with access to PS5/Xbox, so they knew they had to be aggressive this round with clock speeds and pricing. As we can see, the 3080 is almost maxed out, with o/c headroom like that of AMD chips, and the price is reasonably decent, in line with 1080 launch prices before the minepocalypse.
  • TimSyd - Saturday, September 19, 2020 - link

    Ahh don't ya just love the fresh smell of TROLL
  • evernessince - Sunday, September 20, 2020 - link

    The 5700XT is RDNA1 and it's 1/3rd the size of the 2080 Ti. 1/3rd the size and only 30% less performance. Now imagine a GPU twice the size of the 5700XT, thus having twice the performance. Now add in the node shrink and new architecture.

    I wouldn't be surprised if the 6700XT beat the 2080 Ti, let alone AMD's bigger Navi 2 GPUs.
  • Cooe - Friday, December 25, 2020 - link

    Hahahaha. "Only matching a 2080 Ti". How's it feel to be an idiot?
  • tipoo - Friday, September 18, 2020 - link

    I'd again ask you why a laptop SoC would have an answer for a big GPU. That's not what this product is.
  • dotjaz - Friday, September 18, 2020 - link

    "This Intel Tiger" doesn't need an answer for Big Navi, no laptop chip needs one at all. Big Navi is 300W+, no way it's going in a laptop.

    RDNA2+ will trickle down to mobile APUs eventually, but we don't know if Van Gogh can beat TGL yet. I'm betting not, because it's likely a 7-15 W part with a weaker quad-core Zen 2.

    Proper RDNA2+ APU won't be out until 2022/Zen4. By then Intel will have the next gen Xe.
  • Santoval - Sunday, September 20, 2020 - link

    Intel's next gen Xe (in Alder Lake) is going to be a minor upgrade to the original Xe. Not a redesign, just an optimization to target higher clocks. The optimization will largely (or only) happen at the node level, since it will be fabbed with second gen SuperFin (formerly 10nm+++), which is supposed to be (assuming no further 7nm delays) Intel's last 10nm node variant.
    How well that works, and thus how well 2nd gen Xe performs, will depend on how high Intel's 2nd gen SuperFin can clock. At best, 150 - 200 MHz higher clocks can probably be expected.
