In a short tweet posted to their Twitter feed yesterday, Intel confirmed the launch date for their first discrete GPU developed under the company’s new dGPU initiative. The otherwise unnamed high-end GPU will launch in 2020, just two to two-and-a-half years from now.

The tweet was posted amidst reports that Intel had given the same details to a small group of analysts last week, and was released to confirm those reports. The nature of the meeting itself hasn’t been disclosed, but Intel regularly gives analysts extremely broad timelines for new technologies as part of outlining their plans to remain on top of the market.

This new GPU would be the first to come out of Intel’s revitalized GPU efforts, which kicked into high gear at the end of 2017 with the hiring of former AMD and Apple GPU boss Raja Koduri. Intel, of course, is in the midst of watching sometimes-ally and sometimes-rival NVIDIA grow at a nearly absurd pace thanks to the machine learning boom. Intel’s third shot at dGPUs, then, is ultimately an effort to establish themselves in an accelerator market that is no longer niche, and one that is increasingly splitting off customers who previously would have relied entirely on Intel CPUs.

Interestingly, a 2020 launch date for the new discrete GPU is inside the estimate window we had seen for the project. But the long development cycle for a high-end GPU means that this project was undoubtedly started before Raja Koduri joined Intel in late 2017 – most likely it would have needed to kick off at the start of the year, if not in 2016 – so this implies that Koduri has indeed inherited an existing Intel project, rather than starting from scratch. Whether this is an evolution of Intel’s Gen GPU or an entirely new architecture remains to be seen, as there are good arguments for both sides.

Intel isn’t saying anything else about the GPU at this time, though we do know from Intel’s statements when they hired Koduri that they’re starting with high-end GPUs – a fitting choice given the accelerator market Intel is going after. This GPU is almost certainly aimed at compute users first and foremost, especially if Intel adopts the bleeding-edge-first strategy that AMD and NVIDIA have come to favor. But Intel’s dGPU efforts are not entirely focused on professionals: Intel has also confirmed that they want to go after the gaming market as well, though what that would entail – and when – is another question entirely.

Source: Intel

  • HStewart - Wednesday, June 13, 2018 - link

    It's not the nm that makes the difference, it's the number of transistors
  • peevee - Wednesday, June 13, 2018 - link

    Of course. These nms are just marketing BS. But the point is the current "7nm" is already better than Intel's still not-working "10 nm", and their "5 nm" will be better yet.
  • HStewart - Wednesday, June 13, 2018 - link

    All I'm saying is there is more than just nm. The technology inside makes a big difference, and yes, this is marketing, but it's the opposite of the frequency wars: where a higher number used to be better, in this case a smaller nm is used instead of frequency.
  • peevee - Thursday, June 14, 2018 - link

    "All I saying there is more than just nm"

    Way more, because "nm" BS is just branding. But behind these brands, working "7nm" is already better than Intel's still-not-working "10nm".
  • techconc - Wednesday, June 13, 2018 - link

    Yeah, but the better the process, the more transistors can be used in the same die space. So, yes, the size of the process does matter.
  • peevee - Thursday, June 14, 2018 - link

    These numbers before "nm" do not correspond to any real sizes. Just branding.
  • hammer256 - Wednesday, June 13, 2018 - link

    So, what's going on with their xeon-phi line of effort? Is this going to be a replacement for that, or is this in parallel?
    Also, what kind of software support will this have? I assume openCL, maybe Intel will actually make a good effort at it. Whatever it is, for people well entrenched in the CUDA framework, it will take some enticing for sure...
    Don't underestimate the importance of software support, I would say that's a big part of what made Nvidia so successful in the compute space. I remember reading a few years back that Nvidia actually has more software than hardware engineers...
  • jordanclock - Wednesday, June 13, 2018 - link

    Xeon Phi, while it evolved from a project involving x86-based GPUs, is not in any way related to this. This dGPU would likely be an evolution of the existing iGPUs.
  • ZipSpeed - Wednesday, June 13, 2018 - link

    I, for one, am glad that there will be a 3rd entrant. Nvidia definitely needs to be taken down a notch or two. However, looking at Intel's efforts outside their core and foundry business, well it sucks. I'm sure we all have high hopes that they finally get it right, but this is Intel we're talking about. I don't think they will be targeting the high end, not initially. Expect GPUs in the mainstream arena.
  • HStewart - Wednesday, June 13, 2018 - link

    Intel already has the low end market with their integrated GPUs, so logically this will be higher - at minimum the performance of the AMD GPU in Kaby G.
