Going Beyond Gen11: Announcing the Xe Discrete Graphics Brand

Not content with merely talking about what 2019 will bring, Intel also gave us a glimpse into how it will approach its graphics business in 2020. It was at this point that Raja announced the new product branding for Intel’s discrete graphics business:

Intel will use the Xe branding for its range of graphics that were unofficially called ‘Gen12’ in previous discussions. Xe will start from 2020 onwards, and cover the range from client graphics all the way to datacenter graphics solutions.

Intel actually divides this market up, showing that Xe also covers the future integrated graphics solutions as well. If this slide is anything to go by, it would appear that Intel wants Xe to go from entry to mid-range to enthusiast and up to AI, competing with the best the competition has to offer.

Intel stated that Xe will start on Intel’s 10nm technology and that it will fall under Intel’s single stack software philosophy, such that Intel wants software developers to be able to take advantage of CPU, GPU, FPGA, and AI, all with one set of APIs. This Xe design will form the foundation of several generations of graphics, and shows that Intel is now ready to rally around a brand name moving forward.

There was some confusion with one of the slides, as it would appear that Intel might be using the new brand name to also refer to some of its FPGA and AI solutions. We're going to see if we can get an answer on that in due course.

149 Comments

  • CajunArson - Wednesday, December 12, 2018 - link

    There's nothing whatsoever revolutionary about "chiplets". A 10-year-old Core 2 Quad used exactly the same technology that AMD calls "chiplets" in 2019, and AMD fantards like you even insulted the Core 2 Quad for doing it.

    Maybe you should actually read the article about what an active interposer can do vs. wiring standard hunks of silicon to a PCB in literally the same way it was done in the 1970s, before you run around acting like AMD is the only company to ever innovate anything.
    Reply
  • sgeocla - Wednesday, December 12, 2018 - link

    I've been reading articles about Intel's 10nm promises for years now. And then we got failed laptop chips and low-power PC boxes to appease 'mass production' status and not get sued by investors for false claims.
    Forgive me if I abstain from drooling until Intel actually delivers something that does not require industrial chillers. BTW, where are those 28-core HEDT chips anyway?
    Reply
  • Targon - Wednesday, December 12, 2018 - link

    There is always a point where the WHEN of something being used sets a new direction. Multi-CPU in a workstation/server moving to a single processor with multiple cores was a big shift. Moving from two cores linked together, when cache coherency was a big problem, to a single dual-core without an interposer was a better way to go. It all comes down to whether there is a performance boost or degradation as a result of the technology/implementation.

    With that said, a single CPU with 64 cores is fairly significant, and keeping the performance from being horrible with that many cores is the reason AMD has been praised. Price/performance for the server market and such.

    For a long time, Intel was seen as the king when it came to clock speeds and performance, but Intel hasn't had a significant boost to IPC in over three years. Intel has also been promising 10nm for three years, and there is still no sign of it, with the latest promise being holiday 2019.

    So, Intel still has nothing; they have vague promises of ways they will improve performance, but it remains to be seen if the performance will actually be better if 10nm slips again. On the flip side, AMD clearly has significant performance boosts coming from Ryzen 3rd generation in 2019 (March/April being when many expect it). 7nm from AMD isn't a "will they?" question; it isn't even a "when?", with CES one month away and, with it, the answers. IPC improvements due to design improvements not related to chiplets at all would be good, as well as higher clock speeds. So, there is potential for 30+ percent higher performance in one generation.

    Yes, I don't expect AMD to deliver huge performance jumps again for years, but we may see things such as Gen-Z support, going beyond two memory channels for the mainstream Ryzen chips when the next socket comes out in 2020/2021, and other things that may boost system/platform performance while AMD figures out how to get more CPU performance.

    Intel is still trying to do things the same way, just faster. Faster CPU, faster links to individual devices, fabric on a system level will be Intel trying to reinvent what AMD has been working toward.

    I will also note again that some things are not always about being new, but are more about presentation and implementation. Palm really pioneered the idea of apps that users could install on a small portable device (PDA), but Apple popularized it with the iPhone. In some cases, the implementation really is good and will get the respect of the industry; in other cases, you see that something is clearly a case of following the lead of another player.

    So, in the PC industry, is Intel leading the way with innovations, or is AMD in the driver's seat?
    Reply
  • iwod - Thursday, December 13, 2018 - link

    No one insulted the Core 2 Quad for doing it, and neither did AMD. But Intel did insult AMD and went full force bad-mouthing AMD.
    Reply
  • Spunjji - Thursday, December 13, 2018 - link

    Using a term like "fantard" straight-up devalues your argument, but the blatantly false statement about the C2Q using "exactly the same technology" seals the deal.

    "Chiplets" refers to the CPU being divided into multiple sections (cores and uncore) on a single package using dedicated interconnects. It's not at all the same technology as having two discrete CPUs joined by the FSB on a single package. Both are novel approaches to particular problems, although the C2Q (and the Pentium D before it) was criticized for the inefficiency of using the FSB for inter-core communication. We don't know how "chiplets" will pan out yet, so the jury's out.

    Bash the fans for talking nonsense all you want, but maybe don't sink to their level.
    Reply
  • edzieba - Wednesday, December 12, 2018 - link

    If you think through-package interconnects compare to through-silicon interconnects, then I have some HBM on DIMMs to sell you.
    Reply
  • Spunjji - Thursday, December 13, 2018 - link

    Noice. :D
    Reply
  • III-V - Wednesday, December 12, 2018 - link

    I love how everyone thinks AMD is the pioneer with chiplets. They're not. That would be Marvell.

    And Intel themselves have been hinting that it's a good way to go, looking at their EMIB solution.

    But AMD fanboys are a special breed of stupid...
    Reply
  • sgeocla - Wednesday, December 12, 2018 - link

    The electric car was pioneered more than a hundred years ago.
    It's one thing to pioneer something and a whole different thing to actually develop it into something that is affordable to millions and drags the whole industry forward.

    If you think pioneering is all there is to it, I have hundreds of graphene battery designs you should invest your narrow-minded life savings in.
    Reply
  • evernessince - Wednesday, December 12, 2018 - link

    You have some issues, buddy. How about not being toxic next time?
    Reply
