Performance Expectations & First Thoughts

Wrapping up this GPU architecture deep dive: while Intel didn’t use this year’s architecture day to discuss specific products and SKUs, the company did take a moment to discuss performance expectations for Xe-LP and to show some quick videos of Xe-LP in action. Unfortunately we weren’t allowed to record these demos (lest someone leak them), but we’ll post them here as soon as Intel releases copies to the public.

At any rate, as previously discussed, Intel’s goal was to double Ice Lake’s (Gen11) graphics performance, which Xe-LP will accomplish via a combination of a wider GPU (more hardware), a more power-efficient GPU (allowing higher clocks), and a more throughput-efficient GPU (higher IPC). This is a lofty goal given that they don’t get the benefit of a wholly new process node, but Intel does seem rather confident about the performance potential of its new 10nm SuperFin process, as well as the payoff from the tried-and-true method of brute-forcing things by throwing more hardware at the problem.
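As a back-of-the-envelope illustration (using the EU counts Intel has published for the two architectures, not a claim about any specific SKU): Gen11 tops out at 64 EUs, while Xe-LP scales up to 96, a 1.5× increase in raw width. Hitting a 2× overall target would then leave the remaining 2.0 ÷ 1.5 ≈ 1.33× to come from the combination of higher clockspeeds and improved per-clock throughput.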

Looking at our own performance data from reviews of Ice Lake and Ryzen 4000 “Renoir” laptops, if Intel can meet their performance goals then Tiger Lake should be able to pull ahead of AMD’s comparable U-series Ryzen APUs. As always, this is going to be game-dependent, but high-end Ice Lake laptops were never behind by more than 30% or so in GPU-limited scenarios. Since we’re talking about mobile devices, however, power and cooling will always be a potential wildcard that can hold a laptop back. So for ultraportable gaming laptops in particular, Intel will undoubtedly want its partners to build laptops with cooling capabilities to match, to give Tiger Lake every possible chance to succeed.

Framerates aside, Intel also expects Xe-LP to significantly raise the bar on image quality. Since integrated graphics generally have to sacrifice image quality in order to deliver the necessary framerates, doubling iGPU performance would allow a lot of games to be run at higher image quality settings. This again would vary from game to game, but at least for promotional purposes, Intel is eyeballing Tiger Lake/Xe-LP being able to run at high image quality settings in games where Ice Lake could only manage low.

But Xe-LP isn’t just an integrated graphics solution: it’s for discrete graphics too. And while we eagerly anticipate more information on DG1, given Intel’s focus today on architecture over products, we’re left with more questions than answers. Intel has a very interesting and OEM-friendly plan in place with Xe-LP: by leveraging the same architecture for both the iGPU and an optional discrete GPU, OEMs don’t have to validate and load separate GPU drivers for the integrated and discrete GPUs, something they are going to love.

Most importantly, however, Intel is also refusing to answer the 10 million pixel question: will Tiger Lake’s iGPU be able to work in concert with the DG1? Intel has certainly not made any efforts to shoot down that idea, but they aren’t confirming it, either. And even then, if they utilize multi-GPU rendering, will they get it right? Multi-GPU rendering on the desktop is all but dead, and for good reason: it tends not to play nicely with certain modern rendering techniques, and it can add a fair bit of input lag. The answers to these questions – and whether Intel has been able to conquer the traditional drawbacks of multi-GPU rendering – will have a huge impact on the commercial viability of the DG1 GPU, so we’ll be eagerly awaiting them.
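To make the multi-GPU question concrete, here is a minimal sketch (our illustration, not anything Intel has demonstrated) of how a Windows application would even see both GPUs, by enumerating adapters through DXGI. With a shared driver stack, both Xe-LP parts would report Intel’s vendor ID; whether they can then split rendering work between them is exactly the open question.

```cpp
// Minimal sketch: enumerate every GPU in the system via DXGI, preferring the
// highest-performance adapter first. Build with: cl /EHsc demo.cpp dxgi.lib
#include <dxgi1_6.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory6> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapterByGpuPreference(
             i, DXGI_GPU_PREFERENCE_HIGH_PERFORMANCE,
             IID_PPV_ARGS(&adapter)) != DXGI_ERROR_NOT_FOUND;
         ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        // On a hypothetical Tiger Lake + DG1 system, both adapters would
        // report Intel's vendor ID (0x8086), courtesy of the common driver.
        std::wprintf(L"Adapter %u: %s (vendor 0x%04X, %llu MB dedicated)\n",
                     i, desc.Description, desc.VendorId,
                     (unsigned long long)(desc.DedicatedVideoMemory >> 20));
    }
    return 0;
}
```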

Otherwise, Xe-LP marks an important step in the evolution of Intel’s GPU architectures, not to mention a huge stepping stone in their plans to become a top-to-bottom GPU supplier. Though only destined for laptops, Xe-LP is the basis of something much bigger for Intel: it will be the foundation of an entire generation of GPUs to come. So what Intel does here with regards to features, architecture, and above all else power efficiency will have enormous repercussions for everything from gaming hardware to supercomputers. In many ways it’s the dawn of a new era for Intel, and one they are hoping will be better than the era they leave behind.

Comments

  • mode_13h - Thursday, August 13, 2020 - link

    I can't speak to Direct 3D, but OpenGL talks about work group invocations. I don't believe "threads" is mentioned anywhere in the API. [Ed: see the first sketch after the comments.]
  • Dolda2000 - Thursday, August 13, 2020 - link

    Admittedly I haven't read the whole article yet, but it strikes me how the presentations seem to be comparing the new GPU to the previous GPU, rather than presenting it as a new architecture. Does this confirm that using the "Xe" moniker for this product is just marketing, and that it is in fact an evolution of previous Gen architectures?

    I mean, I don't mind if that's the case, I just wish they wouldn't overmarket it.
  • Ryan Smith - Thursday, August 13, 2020 - link

    " is an evolution of previous Gen architectures?"

    It is an evolution of the previous Gen architectures. A major evolution, but an evolution nonetheless. Not even Intel is going to do a clean-sheet design when they have bits and pieces that already work fine.
  • Dolda2000 - Thursday, August 13, 2020 - link

    Certainly, they're not going to create a new clean-slate ALU design just for the sake of it, but it has always been my impression that Xe (at least Xe-HPC) was going to be a more-or-less new architecture. Maybe that has just been my misunderstanding the whole time, and Xe-HPC too is going to be fundamentally Gen-based (though I seem to recall that being explicitly denied at some point). But what I was getting at here was that Xe-HPC is going to be the new architecture, while this is "merely" an evolution of Gen for which they're just borrowing the product name of their higher-end offering to make it seem like more than what it is.
  • mode_13h - Thursday, August 13, 2020 - link

    You should distinguish between the ISA and uArch of the shader cores (EUs) vs. the macro-architecture of the GPU (e.g. buses, memories, caches, fixed-function units, etc.).

    So, you can have a macro-architecture that's *very* different, even while the ISA is a small evolution and the uArch of the EUs is somewhere in between.
  • tipoo - Thursday, August 13, 2020 - link

    RDNA 1 still has significant GCN bits in it, and I'm sure Nvidia does the same for a few generations in a row. There's no necessary contradiction between it being an evolution and it being marketed as something substantially new.
  • abufrejoval - Thursday, August 13, 2020 - link

    IMHO the overhead of multi-GPU rendering with an iGPU and dGPU can't really be offset by the small contribution the iGPU is likely to make alongside a beefy dGPU.

    More likely is dGPU via Thunderbolt 4 with very seamless transitions on docking/undocking, and that's good enough.

    Too bad that won't work nearly as well with Ryzen notebooks, so there again consumer choice goes down the drain somewhat. Not that I believe TB dGPUs are really an attractive market unless prices change dramatically.
  • mode_13h - Thursday, August 13, 2020 - link

    Agreed. I think it would work much better to give the iGPU other compute tasks that involve less communication bandwidth with the dGPU. Things like physics, AI, audio processing, etc. [Ed: a sketch of this idea follows the comments.]
  • brucethemoose - Thursday, August 13, 2020 - link

    Maybe post-processing? Like an Intel version of ReShade? IIRC the frames have to come back to the iGPU's display block anyway.
  • tipoo - Thursday, August 13, 2020 - link

    In this case the IGP would be nearly equivalent to DG1
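For reference on mode_13h's terminology point above: OpenGL's compute API really is written in terms of work groups and invocations rather than threads. A minimal sketch, assuming a current OpenGL 4.3+ context and an initialized loader such as GLAD (the function name below is our own):

```cpp
// Minimal sketch: query OpenGL's compute limits, which the API expresses in
// terms of "work group invocations" rather than "threads".
#include <cstdio>
#include <glad/glad.h>

void print_compute_limits() {
    GLint max_invocations = 0;
    // Upper bound on the product of a compute shader's local_size_x/y/z.
    glGetIntegerv(GL_MAX_COMPUTE_WORK_GROUP_INVOCATIONS, &max_invocations);

    GLint max_groups[3] = {};
    for (GLuint axis = 0; axis < 3; ++axis)
        // Per-axis limit on how many work groups one glDispatchCompute can launch.
        glGetIntegeri_v(GL_MAX_COMPUTE_WORK_GROUP_COUNT, axis, &max_groups[axis]);

    std::printf("Max invocations per work group: %d\n", max_invocations);
    std::printf("Max work group count: %d x %d x %d\n",
                max_groups[0], max_groups[1], max_groups[2]);
}
```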
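And on the iGPU-for-compute idea raised above, a purely illustrative sketch (not a shipping technique): given a second adapter found by an enumeration pass like the one earlier on this page, a D3D12 application could stand up a compute-only queue on the iGPU for low-bandwidth side work while the dGPU renders.

```cpp
// Illustrative sketch: create a D3D12 device on the iGPU adapter and give it
// a dedicated compute queue for side work (physics, audio DSP, etc.).
// Build with d3d12.lib; the adapter is assumed to come from DXGI enumeration.
#include <d3d12.h>
#include <dxgi1_6.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<ID3D12CommandQueue> make_igpu_compute_queue(IDXGIAdapter1* igpu_adapter) {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(igpu_adapter, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device))))
        return nullptr;

    // A compute-only queue: no rasterization work ever touches the iGPU here.
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;

    ComPtr<ID3D12CommandQueue> queue;
    if (FAILED(device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue))))
        return nullptr;
    return queue;
}
```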
