The Next Generation Gen11 Graphics: Playable Games and Adaptive Sync!

Some of the first words out of Raja Koduri's mouth about graphics were that Intel has a duty to its one billion customers with integrated graphics to give them something useful, and that it is time for Intel to provide graphics that people can actually play games on. Given his expertise on the matter, this shouldn't sound too far-fetched: more people play games than ever before, and those users want to play no matter what hardware they have. To that end, Raja stated that Gen11 graphics is the first step in a new graphics policy: providing the performance and features that let gamers play the most popular games, no matter what the implementation.

Gen11: Intel's First TFLOPS-Class GT2 Graphics

In 2015, Intel launched the Skylake processor with Gen9 integrated graphics. Rather than moving straight to Gen10 the next time around, we were given Gen9.5 in both Kaby Lake and Coffee Lake, which supposedly drew features from what would have been Gen10. The graphics for Intel's failed 10nm Cannon Lake chip were meant to be called Gen10, however Intel never released a Cannon Lake processor with working integrated graphics, and because Gen11 goes above and beyond what Gen10 would have been, Intel has jumped straight to Gen11. Make sense? Intel didn't even bother to acknowledge Gen10 in its history graph:

According to the roadmaps, we will see Gen11 graphics paired with Sunny Cove cores on 10nm sometime in 2019. However, rather than a detailed architectural layout of the new product, we were instead given a rather high-level diagram.

From here we can deduce a few things. We were told that this configuration is the GT2 config, which will have 64 execution units (EUs), up from 24 in Gen9.5. These 64 EUs are split into four slices, with each slice made up of two sub-slices of 8 EUs apiece. Each sub-slice has an instruction cache and a 3D sampler, while the bigger slice gets two media samplers, a PixelFE, and additional load/store hardware. Intel lists Gen11 as targeting efficiency, performance, advanced 3D and media capabilities, and a better gaming experience.
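As a sanity check on those numbers, here is a minimal sketch of the GT2 layout as just described; the slice and sub-slice counts come from Intel's diagram, and the code is purely illustrative:

```python
# Minimal sketch of the Gen11 GT2 layout described above.
# Counts are taken from Intel's diagram; illustrative only.
SLICES = 4
SUBSLICES_PER_SLICE = 2
EUS_PER_SUBSLICE = 8

total_subslices = SLICES * SUBSLICES_PER_SLICE   # 8 sub-slices
total_eus = total_subslices * EUS_PER_SUBSLICE   # 64 EUs, up from 24 in Gen9.5

print(f"{total_subslices} sub-slices, {total_eus} EUs")
```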

Intel didn't go into too much detail regarding how the EUs achieve their higher performance, however the company did say that the FPU interfaces inside the EU have been redesigned, and that the EU still supports fast (2x rate) FP16 math as seen in Gen9.5. Each EU will support seven threads as before, and with each EU carrying a pair of 4-wide FPUs, the entire GT2 design essentially has 512 concurrent pipelines. In order to help feed these pipes, Intel states that it has redesigned the memory interface and increased the GPU's L3 cache to 3 MB, a 4x increase over Gen9.5; the L3 is now a separate block in the unslice section of the GPU.
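Those figures are enough to ballpark the TFLOPS-class claim in the section title. A minimal sketch, assuming an illustrative ~1.1 GHz GPU clock (Intel quoted no clock speed at the event):

```python
# Back-of-the-envelope throughput for the Gen11 GT2 configuration.
# The 1.1 GHz clock is an assumption for illustration only.
EUS = 64
FPUS_PER_EU = 2           # each EU carries a pair of 4-wide FPUs
LANES_PER_FPU = 4
CLOCK_GHZ = 1.1           # assumed, not an Intel figure

lanes = EUS * FPUS_PER_EU * LANES_PER_FPU   # 512 concurrent pipelines
fp32_gflops = lanes * 2 * CLOCK_GHZ         # x2 for fused multiply-add
fp16_gflops = fp32_gflops * 2               # 2x rate FP16

print(f"FP32: {fp32_gflops:.0f} GFLOPS, FP16: {fp16_gflops:.0f} GFLOPS")
# ~1126 GFLOPS FP32 at this clock, which is where 'TFLOPS-class' comes from
```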

Other features include tile-based rendering, which Intel stated the graphics hardware will be able to enable or disable on a per-render-pass basis. This makes Intel the last of the three PC GPU vendors to implement the technique, following NVIDIA in 2014 and AMD in 2017. While not a panacea for all performance woes, a good tile-based rendering setup plays well to the bandwidth limitations of an integrated GPU. Meanwhile, Intel's lossless memory compression has also improved, with Intel listing a best-case performance boost of 10% and a geometric mean boost of 4%. The GTI interface now supports 64 bytes per clock for both reads and writes to increase throughput, which works hand in hand with the improved memory interface.
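To put the GTI change in perspective, here is a rough sketch of the bandwidth that 64 bytes per clock in each direction implies; the clock is again an assumption for illustration, not an Intel figure:

```python
# Rough GTI throughput implied by 64 B/clock read + 64 B/clock write.
# The clock is assumed for illustration; Intel quoted no GTI clock here.
BYTES_PER_CLOCK = 64
CLOCK_GHZ = 1.1                           # assumed

read_gbps = BYTES_PER_CLOCK * CLOCK_GHZ   # ~70 GB/s read
write_gbps = BYTES_PER_CLOCK * CLOCK_GHZ  # ~70 GB/s write

print(f"read: {read_gbps:.0f} GB/s, write: {write_gbps:.0f} GB/s")
```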

Coarse Pixel Shading (CPS), Intel's implementation of multi-rate shading and similar in scope to NVIDIA's Variable Rate Shading, is also supported. This allows the GPU to reduce the total amount of shading work required by shading some pixels on a less-than-1:1 basis. Intel showed two demos for CPS, where pixel shading was reduced either as a function of an object's distance from the camera (so less work is done on things that are further away), or as a function of an object's distance from the center of the screen, which is designed to help features like foveated rendering for VR. With a 2x2 pixel stencil applied – meaning only one pixel shading operation was done per block of 4 pixels – Intel stated a ~30% increase in frame rates in supported games. Unfortunately this needs to be applied on a game-by-game basis in order to prevent significant image quality losses, so the performance gains won't be immediate or universal.
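As a rough illustration of where a ~30% gain could come from, here is a minimal model; the fraction of the frame shaded coarsely is an assumption, since Intel published no such breakdown:

```python
# Illustrative model of coarse pixel shading savings.
# With a 2x2 stencil, coarsely shaded regions need 1/4 the shading work.
# The coverage fraction below is assumed; Intel published no such figure.
COARSE_FRACTION = 0.40        # assumed share of the frame shaded at 2x2

shading_work = (1 - COARSE_FRACTION) + COARSE_FRACTION / 4
speedup = 1 / shading_work    # only holds if fully shading-bound

print(f"relative shading work: {shading_work:.2f}, "
      f"ideal speedup: {speedup:.2f}x")
# ~0.70x the baseline work -> up to ~1.43x if purely shading-bound; real
# games are not purely shading-bound, in line with Intel's quoted ~30%.
```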

For the media block, Intel says that the Gen11 design includes a ground-up HEVC encoder design with support for high-quality encode and decode. Intel cited the fact that its media fixed-function units are already used in the datacenter for video processing, and home users can take advantage of the same hardware. Intel also stated that by using parallel decoders it can either support multiple concurrent video streams, or gang the decoders together to handle a single large stream, and this scalable design will allow future hardware to push peak resolutions up to 8K and beyond.
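A conceptual sketch of that two-mode arrangement follows; this is purely illustrative, as Intel described no programming interface for it:

```python
# Conceptual sketch of the scalable media block described above: parallel
# decoders either each take an independent stream, or are ganged together
# on portions of one large stream. Purely illustrative, not Intel's API.
def assign_streams(decoders, streams):
    if len(streams) >= len(decoders):
        # Concurrent mode: one stream per decoder.
        return {dec: stream for dec, stream in zip(decoders, streams)}
    # Ganged mode: split the single large stream across all decoders.
    big = streams[0]
    return {dec: f"{big} (portion {i})" for i, dec in enumerate(decoders)}

print(assign_streams(["dec0", "dec1"], ["4K stream A", "4K stream B"]))
print(assign_streams(["dec0", "dec1"], ["8K stream"]))
```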

The highlight of the display engine is support for Adaptive Sync technologies. We were told that this was announced back at the launch of Skylake, but only now is it finally ready to go into Intel's integrated graphics. This goes hand in hand with HDR support, thanks to the display engine's high-precision data path.

One thing in this presentation that Intel didn’t mention directly is that Gen11 graphics would appear to have Type-C video output support, potentially indicating that Intel has integrated the necessary mux into the chipset itself, removing another IC from the motherboard design.

Comments

  • zodiacfml - Thursday, December 13, 2018 - link

    YES
  • Raqia - Thursday, December 13, 2018 - link

    For ultra-mobile, battery/power/heat aren't the only issues; supply is one as well, due to Intel being locked into their own manufacturing division. On top of that, they have a lock on x86 by not licensing it to any competitor but AMD, who despite competitive stretches inevitably stumbles (either due to themselves or to Intel's non-engineering financial efforts) and leaves the industry with dry spells of performance improvements. Intel's gross margins on their chips remain >60% as a result, whereas margins on ARM SoCs, even after licensing, are closer to 20-30%.
  • Raqia - Thursday, December 13, 2018 - link

    Keller declared that the technology is in its infancy, and feature-wise the 2019 version of the Atom simply won't be competitive with leading ARM SoCs like the 8cx. The slowness you refer to only occurs when running native 32-bit x86 code on the WOW emulation layer, but the value of this feature is mostly in the compatibility being there at all. If performance and compatibility of legacy code matter to you, then certainly Windows on ARM isn't suitable. However, it will matter even less now with the new native compilation tools and ports of important sub-platforms like Chromium.
  • 29a - Thursday, December 13, 2018 - link

    "Windows on ARM is horribly slow and therefore shitty."

    Sounds a lot like Windows on Atom.
  • MonkeyPaw - Saturday, December 15, 2018 - link

    I'm betting Apple wanted one for the MacBook Air, or maybe MS for the Surface Go. It would be the right amount of performance for both devices, and both companies would have the clout to get it done. I'd lean toward Apple because the GPU is pretty big.
  • Kevin G - Wednesday, December 12, 2018 - link

    Typo:
    "a physical address space up to 52 bits. This means, according to Intel, that the server processors could theoretically support 4 TB of memory per socket."

    That should be petabytes instead of terabytes. The limit is for an entire system, not per socket as additional sockets will not grant any additional capacity.
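For reference, a minimal sketch of the arithmetic behind this correction (the 52-bit figure comes from the comment above):

```python
# A 52-bit physical address space covers 2**52 bytes,
# which is 4 PiB (petabytes), not 4 TB.
addr_bits = 52
capacity_pib = 2**addr_bits / 2**50   # 2**50 bytes per PiB

print(f"{capacity_pib:.0f} PiB")      # -> 4 PiB
```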
  • gamerk2 - Thursday, December 13, 2018 - link

    NUMA systems could potentially be per-socket rather than OS-wide.
  • HStewart - Wednesday, December 12, 2018 - link

    It sounds like Intel has been working on increasing performance in two ways:
    1. The 7nm change for the future - because of limitations found with 10nm
    2. 10nm enhancements to correct the performance issues seen with Cannon Lake

    But most importantly, architecture improvements like faster single-thread execution, new instructions, and multi-core improvements will in the long term significantly improve performance.
  • ishould - Wednesday, December 12, 2018 - link

    Forgive me if I take 2 metric tons of salt with any roadmaps Intel provides these days. They haven't exactly had the most accurate timelines as of late (past four years)
  • HStewart - Wednesday, December 12, 2018 - link

    It appears they realize that and are coming out with documents to indicate they have made corrections - this is better than not knowing what they are planning - or, as some AMD fans would like to believe, that they have lost the battle.
