Xe-LP Media & Display Controllers

The final stop on our deep dive through the Xe-LP architecture is the non-rendering side of the GPU: its media and display controllers. Changes here aren’t quite as eye-catching as those to the core architecture, but the improvements made in these blocks help to keep the overall GPU current by supporting new media formats as well as new display connectivity protocols.

First off, let’s talk about the media engine. There are no major overhauls to speak of, but for Xe-LP Intel has made some sensible additions. The marquee feature is easily support for AV1 decode acceleration, making Intel the first of the Big 3 GPU vendors to add hardware decode support for the new codec.

The up-and-coming, royalty-free codec is widely expected to become the de facto successor to H.264/AVC; while HEVC has been on the market for a number of years (and is already supported in all recent GPUs), the madcap royalty situation around that codec has discouraged its adoption. By contrast, AV1 should deliver similar or slightly better quality than HEVC without royalties for its use in distribution, which makes it a lot more palatable to content vendors. The one downside to AV1 thus far is that it’s fairly CPU-heavy to decode, which makes hardware decode support all the more important, not only for battery life but also for ensuring smooth, glitch-free playback.
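To put that in practical terms, on Linux the simplest way to see whether a given driver actually exposes AV1 decode is to query VA-API. Below is a minimal Python sketch that shells out to the vainfo utility (from libva-utils) and looks for an AV1 decode profile; the profile and entrypoint strings are what current VA-API drivers generally report, and this is an illustrative check on our part rather than anything Intel documents for Xe-LP specifically.

```python
import subprocess

def has_av1_decode() -> bool:
    """Return True if the VA-API driver reports an AV1 decode profile."""
    try:
        result = subprocess.run(
            ["vainfo"], capture_output=True, text=True, check=True
        )
    except (OSError, subprocess.CalledProcessError):
        return False  # vainfo missing, or no usable VA-API driver

    # A driver with hardware AV1 decode should list VAProfileAV1Profile0
    # paired with the VLD (decode) entrypoint.
    return any(
        "VAProfileAV1Profile0" in line and "VAEntrypointVLD" in line
        for line in result.stdout.splitlines()
    )

if __name__ == "__main__":
    print("AV1 hardware decode:", "yes" if has_av1_decode() else "no")
```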

Meanwhile, similar to Intel’s rendering performance goals, the company has worked to improve the encoding and decoding throughput of the media engine. According to the company, the updated block is capable of up to twice the encode and decode throughput of its predecessor. For consumer machines this is unlikely to matter too much, but it’s something that will be particularly important for the SG1 server product, which will focus on bulk encoding.

Finally, among the smaller changes made to the media engine, Intel has added official support for HDR and Dolby Vision playback. HDR support continues to roll out to PCs, if only slowly, so this is a notable step in ensuring that newer PCs can handle HDR content encoded in those formats. Also notable is an improvement to Intel’s HEVC encoding block, which now supports the HEVC Screen Content Coding (SCC) extension, designed to improve compression rates on content with static or nearly-static images.

Xe-LP Display Controller: DisplayPort 1.4, HDMI 2.0, & 8K Displays

Last but certainly not least, we have Xe-LP’s display engine. As with the media block, there are no radical changes here, but there are welcome improvements throughout.

Perhaps the biggest change here is that, after several years, Intel has finally added a fourth display pipeline, meaning the GPU can now drive four independent displays. Gen11 and earlier designs could only handle three displays, and though even that is more than most people will use, four has become the magic number for other GPU designs. Complicating matters is the recent push for dual-screen laptops and other mobile devices with multiple displays, which would eat up two of those three outputs on their own.

On that note, Intel has also added a second embedded DisplayPort output, which would be extremely useful for those dual-screen devices.

Otherwise, the basic display output choices are unchanged from Gen11. Xe-LP supports DisplayPort 1.4 as well as HDMI 2.0. The latter is a bit disappointing since HDMI 2.1 televisions are now shipping, but it’s admittedly not unusual for Intel to take an extra generation to adopt newer HDMI standards. These display outputs can also be fed into a USB4/Thunderbolt 4 port, where DisplayPort data is a first-class citizen: it can be muxed into the signal, or the port can be reconfigured entirely via alt modes.

Though not evident on Intel’s block diagrams, the company has made some plumbing changes to better feed the display controllers. Specifically, the company has increased the bandwidth available to the display engine so that it can handle the kind of extreme, high-resolution displays that DisplayPort 1.4 was designed to feed. As a result, the controller now has enough bandwidth and internal processing power to drive 8K UHD displays, as well as the recent generation of 360Hz displays.
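To give a sense of why that extra internal bandwidth matters, a quick back-of-envelope calculation shows that an 8K60 stream doesn’t come close to fitting in DisplayPort 1.4’s payload uncompressed, which is why Display Stream Compression (DSC) is part of the picture for displays like these. The Python sketch below uses public DP 1.4 figures and a rough blanking allowance of our own choosing, not anything Intel has disclosed about Xe-LP internals.

```python
# Back-of-envelope: why 8K60 over DisplayPort 1.4 depends on DSC.
# Numbers are public DP 1.4 link-rate figures; the blanking overhead
# is our own rough allowance, not an Intel-provided value.

H_ACTIVE, V_ACTIVE = 7680, 4320    # 8K UHD
REFRESH_HZ = 60
BITS_PER_PIXEL = 30                # 10 bpc RGB
BLANKING_OVERHEAD = 1.10           # rough allowance for blanking intervals

# 4 lanes of HBR3 (8.1 Gbps each) after 8b/10b line coding = 25.92 Gbps
DP14_EFFECTIVE_GBPS = 4 * 8.1 * (8 / 10)

uncompressed_gbps = (H_ACTIVE * V_ACTIVE * REFRESH_HZ * BITS_PER_PIXEL
                     * BLANKING_OVERHEAD) / 1e9

print(f"8K60 10bpc, uncompressed: ~{uncompressed_gbps:.1f} Gbps")
print(f"DP 1.4 payload budget:    ~{DP14_EFFECTIVE_GBPS:.2f} Gbps")
print(f"Minimum DSC ratio needed: ~{uncompressed_gbps / DP14_EFFECTIVE_GBPS:.1f}:1")
```

Running the numbers gives roughly 66 Gbps uncompressed against a 25.92 Gbps link budget, or about a 2.5:1 compression requirement, comfortably within DSC’s visually lossless range.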

Comments

  • regsEx - Thursday, August 13, 2020 - link

    HPG will use EM cores for ray tracing?
  • Mr Perfect - Thursday, August 13, 2020 - link

    "On the capacity front, the L3 cache can now be as large as 16MB"

    I apologize for being off topic, but I just had a surreal moment realizing that this piddly little iGPU can have the same amount of L3 cache as my Voodoo 3 had video ram. How far we've come.
  • Brane2 - Thursday, August 13, 2020 - link

    As usual, no useful info.
    They'll make a GPU that looks every bit like... GPU.
    What a shocker.
    Who knew ?
  • GreenReaper - Thursday, August 13, 2020 - link

    "As a result, integer throughput has also doubled: Xe-LP can put away 8 INT32 ops or 32 INT16 ops per clock cycle, up from 4 and 16 respectively on Gen11." -- but the graph says 4 and 8 respectively on Gen11. (The following line also appears odd as a result.)
  • Ryan Smith - Thursday, August 13, 2020 - link

    Thanks! That was meant to be 16 ops for Gen11 in the table.
  • neogodless - Thursday, August 13, 2020 - link

    > from reviews of Ice Lake and Ryzen 3000 “Renoir” laptops,

    It is my understanding that the Renoir codename refers to what are commercially Ryzen 4000 mobile APUs, like the 4700U, 4800H and 4900HS.
  • FullmetalTitan - Thursday, August 13, 2020 - link

    In addition to groaning at the joke at the end of page 1, I find the timing to be perfect, as I just last night got my partner to start watching the Stargate series.
  • Valantar - Friday, August 14, 2020 - link

    As always here on AT, an absolutely excellent article, distilling a pile of complex information down to something both understandable and interesting. I'm definitely looking forward to seeing how Tiger Lake's Xe iGPU performs, and the DG1 too. I doubt their drivers will be up to par for a few years, but a third contender should be good for the GPU market (though with a clear incumbent leader there's always a chance of the small fish eating each other rather than taking chunks out of the bigger one). Looking forward to the next few years of GPUs, this is bound to be interesting!
  • onewingedangel - Friday, August 14, 2020 - link

    The approach taken with DG1 seems a little odd. It's too similar to the iGPU by itself, just with more power/thermal headroom and less memory contention.

    Unless it works in concert with the IGP, you'd think it better to either remove the iGPU from the CPU entirely (significantly reducing die size) and package DG1 with the CPU die when a more powerful GPU is not going to be used, or to add an HBM controller to the CPU and make the addition of an HBM die the graphics upgrade option when the base iGPU is not quite enough.
  • Digidi - Friday, August 14, 2020 - link

    Nice article! The front end looks huge. Two rasterizers for only 700 shaders is a massive change.
