Xe-LP Feature Set: DirectX FL 12_1 with Variable Rate Shading

Kicking off the proper part of our architectural deep dive, let's start with a quick summary of Xe-LP's graphics feature set. It can be a quick summary because, unfortunately, there is not a whole lot new to talk about here.

From an API-level perspective, Xe-LP’s feature set is going to be virtually identical to that of Intel’s Gen11 graphics. Not unlike AMD with their RDNA1 architecture, Intel has decided to concentrate their efforts on updating the low-level aspects of their GPU architecture, making numerous changes downstairs. As a result, relatively little has changed upstairs with regards to graphics features.

The net result is that Xe-LP is a DirectX feature level 12_1 accelerator, with a couple of added features. In particular, tier 1 variable rate shading, which Intel first introduced in its Gen11 hardware, is back again in Xe-LP. Though not as capable as the newer tier 2 implementation, it allows for basic VRS support, with games able to set a shading rate on a per-draw-call basis. Notably, Intel remains the only vendor to support tier 1; AMD and NVIDIA have gone (or are going) straight to tier 2.

DirectX 12 Feature Levels
                                 12_2 (DX12 Ult.)          12_1
GPU Architectures                Intel: Xe-HPG?            Intel: Gen9, Gen11, Xe-LP
                                 NVIDIA: Turing            NVIDIA: Maxwell 2, Pascal
                                 AMD: RDNA2                AMD: Vega, RDNA (1)
Ray Tracing (DXR 1.1)            Yes                       No
Variable Rate Shading (Tier 2)   Yes                       No (Gen11/Xe-LP: Tier 1)
Mesh Shaders                     Yes                       No
Sampler Feedback                 Yes                       No
Conservative Rasterization       Yes                       Yes
Raster Order Views               Yes                       Yes
Tiled Resources (Tier 2)         Yes                       Yes
Bindless Resources (Tier 2)      Yes                       Yes
Typed UAV Load                   Yes                       Yes

The good news for Intel, at least, is that they were already somewhat ahead of the game with Gen11, shipping 12_1 support for even their slowest integrated GPUs before AMD had phased it into all of their products. So at this point, Intel is still at parity with other integrated graphics solutions, if not slightly ahead.

The downside is that it also means Intel is the only hardware vendor launching a new GPU/architecture in 2020 without support for the next generation of features, which Microsoft & co. are codifying as DirectX 12 Ultimate. The consumer-facing trade name for feature level 12_2, DirectX 12 Ultimate incorporates support for variable rate shading tier 2, along with ray tracing, mesh shaders, and sampler feedback. To be fair to Intel, expecting ray tracing in an integrated part in 2020 was always a bit too much of an ask. But some additional progress would have been nice to see. It also puts DG1 in a bit of an odd spot, since it's a discrete GPU without 12_2 functionality.


33 Comments


  • regsEx - Thursday, August 13, 2020

    HPG will use EM cores for ray tracing?
  • Mr Perfect - Thursday, August 13, 2020

    "On the capacity front, the L3 cache can now be as large as 16MB"

    I apologize for being off topic, but I just had a surreal moment realizing that this piddly little iGPU can have the same amount of L3 cache as my Voodoo 3 had video ram. How far we've come.
  • Brane2 - Thursday, August 13, 2020

    As usual, no useful info.
    They'll make a GPU that looks every bit like... GPU.
    What a shocker.
    Who knew?
  • GreenReaper - Thursday, August 13, 2020

    "As a result, integer throughput has also doubled: Xe-LP can put away 8 INT32 ops or 32 INT16 ops per clock cycle, up from 4 and 16 respectively on Gen11." -- but the graph says 4 and 8 respectively on Gen11. (The following line also appears odd as a result.)
  • Ryan Smith - Thursday, August 13, 2020

    Thanks! That was meant to be 16 ops for Gen11 in the table.
  • neogodless - Thursday, August 13, 2020

    > from reviews of Ice Lake and Ryzen 3000 “Renoir” laptops,

    It is my understanding that the Renoir codename refers to what are commercially Ryzen 4000 mobile APUs, like the 4700U, 4800H and 4900HS.
  • FullmetalTitan - Thursday, August 13, 2020

    In addition to groaning at the joke at the end of page 1, I find the timing to be perfect as I just last night got my partner to start watching the Stargate series.
  • Valantar - Friday, August 14, 2020

    As always here on AT, an absolutely excellent article, distilling a pile of complex information down to something both understandable and interesting. I'm definitely looking forward to seeing how Tiger Lake's Xe iGPU performs, and the DG1 too. I doubt their drivers will be up to par for a few years, but a third contender should be good for the GPU market (though with a clear incumbent leader there's always a chance of the small fish eating each other rather than taking chunks out of the bigger one). Looking forward to the next few years of GPUs, this is bound to be interesting!
  • onewingedangel - Friday, August 14, 2020

    The approach taken with DG1 seems a little odd. It's too similar to the iGPU by itself, just with more power/thermal headroom and less memory contention.

    Unless it works in concert with the IGP, you'd think it better to either remove the iGPU from the CPU entirely (significantly reducing die size) and package DG1 with the CPU die when a more powerful GPU is not going to be used, or to add an HBM controller to the CPU and make the addition of an HBM die the graphics upgrade option when the base iGPU is not quite enough.
  • Digidi - Friday, August 14, 2020

    Nice article! The front end looks huge. Two rasterizers for only ~700 shaders is a massive change.
