In a very short tweet posted to their Twitter feed yesterday, Intel confirmed the launch date for their first discrete GPU developed under the company’s new dGPU initiative. The otherwise unnamed high-end GPU will launch in 2020, a short two to two-and-a-half years from now.

The tweet was posted amidst reports that Intel had given the same details to a small group of analysts last week, with the tweet being released to confirm those reports. The nature of the meeting itself hasn’t been disclosed, but Intel regularly gives analysts extremely broad timelines for new technologies as part of outlining their plans to remain on top of the market.

This new GPU would be the first GPU to come out of Intel’s revitalized GPU efforts, which kicked into high gear at the end of 2017 with the hiring of former AMD and Apple GPU boss Raja Koduri. Intel of course is in the midst of watching sometimes-ally and sometimes-rival NVIDIA grow at a nearly absurd pace thanks to the machine learning boom, so Intel’s third shot at dGPUs is ultimately an effort to establish themselves in a market for accelerators that is no longer niche but is increasingly splitting off customers who previously would have relied entirely on Intel CPUs.

Interestingly, a 2020 launch date for the new discrete GPU is inside the estimate window we had seen for the project. But the long development cycle for a high-end GPU means that this project was undoubtedly started before Raja Koduri joined Intel in late 2017 – most likely it would have needed to kick off at the start of the year, if not in 2016 – so this implies that Koduri has indeed inherited an existing Intel project, rather than starting from scratch. Whether this is an evolution of Intel’s Gen GPU or an entirely new architecture remains to be seen, as there are good arguments for both sides.

Intel isn’t saying anything else about the GPU at this time, though we do know from Intel’s statements when they hired Koduri that they’re starting with high-end GPUs – a fitting choice given the accelerator market Intel is going after. This GPU is almost certainly aimed at compute users first and foremost – especially if Intel adopts the bleeding-edge-first strategy that AMD and NVIDIA have come to favor – but Intel’s dGPU efforts are not entirely focused on professionals. Intel has also confirmed that they want to go after the gaming market as well, though what that would entail – and when – is another question entirely.

Source: Intel

Comments

  • Chaser - Wednesday, June 13, 2018 - link

    Competition is good for consumers. As it stands, Nvidia doesn't have much of it today.
  • manju_rn - Wednesday, June 13, 2018 - link

    How about having a separate socket on the motherboard just for the GPU? Then it would kick off a completely different motherboard design, and GPU chips would be sold as chips and not bulky boards. And of course a separate set of RAM for the GPU.
  • HStewart - Wednesday, June 13, 2018 - link

    This idea reminds me of the days before the 486, with a separate math coprocessor - old school.

    EMIB is much, much better - with faster access between CPU, GPU and HBM2 memory.
  • coder543 - Wednesday, June 13, 2018 - link

    so, kinda like nVidia's mezzanine connector or the older MXM standard? Oh, but you want the chip and RAM to be separate... yeah, this idea isn't going to happen any time soon. The kind of RAM that GPUs use isn't okay with being dozens of centimeters from the GPU die.
  • PeachNCream - Wednesday, June 13, 2018 - link

    A socketed GPU is possible with HBM. Intel already has one CPU package with an AMD dGPU using HBM in production, plus the Knights series accelerators use HBM. Intel seems to lean toward integration in order to exert control over performance, so the company is unlikely to care much for 3rd-party GPU companies and may therefore lean toward HBM. A socketed graphics solution is realistically possible nowadays.
  • manju_rn - Wednesday, June 13, 2018 - link

    ^Exactly, there is so much integration possible that current-generation GPU boards have redundant parts - power caps, for example. It would also establish standards across the different classes so that people could one day swap either of the GPU chips. Currently I believe that, apart from the chip, it is the additional redundant components on GPU boards that hog the cost. Historically, this has always been done with other components: the 1970s had expansion cards for everything, including mice and parallel ports. Look where we are now.
  • foxalopex - Wednesday, June 13, 2018 - link

    I suspect if I was Nvidia I'd be worried. Intel is probably far more likely to work with AMD on open standards than with anything Nvidia proprietary.
  • HStewart - Wednesday, June 13, 2018 - link

    But the advantage of EMIB is that it does not matter who makes the GPU - AMD will be gone from EMIB in 2020.
  • tomatotree - Wednesday, June 13, 2018 - link

    I'm really hoping Intel adopts FreeSync and the Nvidia tax on G-Sync displays goes away. If both AMD and Intel are using FreeSync, plus the next-gen consoles, and then some TV makers start to adopt it for console use as well, the extra $100-200 premium to be locked into G-Sync starts to make a lot less sense.
  • JackNSally - Wednesday, June 13, 2018 - link

    Intel would be shooting themselves in the foot to come up with their own solution. The easiest route for Intel would be FreeSync.
