On Monday, Intel announced that it had penned a deal with AMD to have the latter provide a discrete GPU to be integrated onto a future Intel SoC. On Tuesday, AMD announced that their chief GPU architect, Raja Koduri, was leaving the company. Now today the saga continues, as Intel is announcing that they have hired Raja Koduri to serve as their own GPU chief architect. And Raja's task will not be a small one; with his hire, Intel will be developing their own high-end discrete GPUs.

Starting from the top and following yesterday’s formal resignation from AMD, Raja Koduri has jumped ship to Intel, where he will serve as a Senior VP for the company, overseeing the new Core and Visual Computing group. As its chief architect and general manager, Intel is tasking Raja with significantly expanding their GPU business, particularly as the company re-enters the discrete GPU field. Raja of course has a long history in the GPU space as a leader in GPU architecture, twice serving as the manager of AMD’s graphics business, and in between his AMD stints serving as the director of graphics architecture on Apple’s GPU team.

Meanwhile, in perhaps the only news that can outshine the fact that Raja Koduri is joining Intel, we have what he will be doing for Intel. As part of today’s revelation, Intel has announced that they are instituting a new top-to-bottom GPU strategy. At the bottom, the company wants to extend their existing iGPU market into new classes of edge devices, and while Intel doesn’t go into much more detail than this, the fact that they use the term “edge” strongly implies that we’re talking about IoT-class devices, where edge computing goes hand-in-hand with neural network inference. This is a field Intel already plays in to some extent with their Atom processors on the GPU side, and their Movidius neural compute engines on the dedicated silicon side.

However, in what’s likely the most exciting part of this news for PC enthusiasts and the tech industry as a whole, in aiming at the top of the market Intel will once again be developing discrete GPUs. The company has tried this route twice before: once in the early days with the i740 in the late 90s, and again with the aborted Larrabee project in the late 2000s. But even though these efforts never panned out quite like Intel had hoped, the company has continued to develop their GPU architecture and GPU-like devices, the latter embodied by the massively parallel, compute-focused Xeon Phi family.

Yet while Intel has GPU-like products for certain markets, the company doesn’t have a proper GPU solution once you get beyond their existing GT4-class iGPUs, which are, roughly speaking, on par with $150 or so discrete GPUs. Which is to say that Intel doesn’t have access to the midrange market or above with their iGPUs. With the hiring of Raja and Intel’s new direction, the company is going to be expanding into full discrete GPUs for what the company calls “a broad range of computing segments.”

Reading between the lines, it’s clear that Intel will be going after both the compute and graphics sub-markets for GPUs. The former of course is an area where Intel has been fighting NVIDIA for several years now with less success than they’d like to see, while the latter would be new territory for Intel. However it’s very notable that Intel is calling these “graphics solutions”, so it’s clear that this isn’t just another move by Intel to develop a compute-only processor ala the Xeon Phi.

Intel and NVIDIA are at best frenemies; the companies’ technologies complement each other well, but at the same time NVIDIA wants Intel’s high-margin server compute business, and Intel wants a piece of the action in the rapid boom in business that NVIDIA is seeing in the high performance computing and deep learning markets. NVIDIA has already begun weaning themselves off of Intel with technologies such as the NVLink interconnect, which allows faster and cache-coherent memory transfers between NVIDIA GPUs and the forthcoming IBM POWER9 CPU. Meanwhile, developing their own high-end GPU would allow Intel to further chase developers currently in NVIDIA’s stable, while in the long run also potentially poaching customers from NVIDIA’s lucrative (and profitable) consumer and professional graphics businesses.

To that end, I’m going to be surprised if Intel doesn’t develop a true top-to-bottom product stack that contains midrange GPUs as well – something in the vein of Polaris 10 and GP106 – but for the moment the discrete GPU aspect of Intel’s announcement is focused on high-end GPUs. And, given what we typically see in PC GPU release cycles, even if Intel does develop a complete product stack, I wouldn’t be too surprised if Intel’s first released GPU was a high-end GPU, as it’s clear this is where Intel needs to start first to best combat NVIDIA.

More broadly speaking, this is an interesting shift in direction for Intel, and one that arguably indicates that Intel’s iGPU-exclusive efforts in the GPU space were not the right move. For the longest time, Intel played very conservatively with its iGPUs, maxing out with the very much low-end GT2 configuration. More recently, starting with the Haswell generation in 2013, Intel introduced more powerful GT3 and GT4 configurations. However this was primarily done at the behest of a single customer – Apple – and even to this day, we see very little OEM adoption of Intel’s higher performance graphics options by the other PC OEMs. The end result has been that Intel has spent the last decade making the kinds of CPUs that their cost-conscious customers want, with just a handful of high-performance versions.

I would happily argue that outside of Apple, most other PC OEMs don’t “get it” with respect to graphics, but at this juncture that’s beside the point. Between Monday’s strongly Apple-flavored Kaby Lake-G SoC announcement and now Intel’s vastly expanded GPU efforts, the company is finally becoming a major player in the high-performance GPU space.

Besides taking on NVIDIA though, this is going to put perpetual underdog AMD into a tough spot. AMD’s edge over Intel for the longest time has been their GPU technology. The Zen CPU core has thankfully reworked that balance in the last year, though AMD still hasn’t quite caught up to Intel here on peak performance. The concern here is that the mature PC market has strongly favored duopolies – AMD and Intel for CPUs, AMD and NVIDIA for GPUs – so Intel’s entrance into the discrete GPU space upsets the balance on the latter. And while AMD is without a doubt more experienced than Intel, Intel has the financial and fabrication resources to fight NVIDIA, something AMD has always lacked. Which isn’t to say that AMD is by any means doomed, but Intel’s growing GPU efforts and Raja’s move to Intel have definitely made AMD’s job harder.

Meanwhile, on the technical side of matters, the big question going forward with Intel’s efforts is over which GPU architecture Intel will use to build their discrete GPUs. Despite their low performance targets, Intel’s Gen9.5 graphics is a very capable architecture in terms of features and capabilities. In fact, prior to the launch of AMD’s Vega architecture a couple months back, it was arguably the most advanced PC GPU architecture, supporting higher tier graphics features than even NVIDIA’s Pascal architecture. So in terms of features alone, Gen9.5 is already a very decent base to start from.

The catch is whether Gen9.5 and its successors can efficiently scale out to the levels needed for a high-performance GPU. Architectural scalability is in some respects the unsung hero of GPU architecture design: while it’s relatively easy to design a small GPU architecture, it’s a lot harder to design an architecture that can scale up to many more functional units in a 400mm²+ die. Which isn’t to say that Gen9.5 can’t, only that we as the public have never seen anything bigger than the GT4 configuration, which is still a relatively small design by GPU standards.

Though perhaps the biggest wildcard here is Intel’s timetable. Nothing about Intel’s announcement says when the company wants to launch these high-end GPUs. If, for example, Intel wants to design a GPU from scratch under Raja, then this would be a 4+ year effort and we’d easily be talking about the first such GPU in 2022. On the other hand, if this has been an ongoing internal project that started well before Raja came on board, then Intel could be a lot closer. Given what kind of progress NVIDIA has made in just the last couple of years, I can only imagine that Intel wants to move quickly, and what this may boil down to is a tiered strategy where Intel takes both routes, if only to release a big Gen9.5(ish) GPU soon to buy time for a new architecture later.

In directing these tasks, Raja Koduri has in turn taken on a very big role at Intel. Until recently, Intel’s graphics lead was Tom Piazza, a Sr. Fellow and capable architect, but also an individual who was never all that public outside of Intel. By contrast, Raja will be a much more public individual thanks to the combination of Intel’s expanded GPU efforts, Raja’s SVP role, and the new Core and Visual Computing group that has been created just for him.

For what Intel is seeking to do, it’s clear why they picked Raja, given his experience inside and outside of AMD, and more specifically, with integrated graphics at both AMD and Apple. The flip side to that however is that while Apple’s graphics portfolio boomed under Raja during his time at the company, his most recent AMD stint didn’t go quite as well. AMD’s Vega GPU architecture has yet to live up to all of its promises, and while success and failure at this level are never the responsibility of a single individual, Intel will certainly be looking to have a better launch than Vega. Which, given the company’s immense resources, is definitely something they can do.

But at the end of the day, this is just the first step for Intel and for Raja. By hiring an experienced hand like Raja Koduri and by announcing that they are getting into high-end discrete GPUs, Intel is very clearly telegraphing their intent to become a major player in the GPU space. Given Intel’s position as a market leader it’s a logical move, and given their lack of recent discrete GPU experience it’s also an ambitious move. So while this move stands to turn the PC GPU market as we know it on its head, I’m looking forward to seeing just what a GPU-focused Intel can do over the coming years.

Source: Intel

Comments

  • SquarePeg - Wednesday, November 8, 2017 - link

    That's worth a thumbs up.
  • xype - Thursday, November 9, 2017 - link

    You forgot the


    5 years later, Apple: "Lol we’re making our own CPUs and GPUs anyway, bozos!"

    :P
  • colemar - Thursday, November 9, 2017 - link

    BOBOSTRUMF,

    But there is no IP licensing in the Intel/AMD deal just announced. It's an Intel project where Intel funded AMD for a custom GPU design.
  • Inteli - Wednesday, November 8, 2017 - link

    If Intel ends up developing enthusiast/gaming GPUs instead of focusing on just neural network/compute GPUs, I would be very excited. We haven't had 3 players in the enthusiast GPU space for a very long time, and that third option could help balance out the market, provided Intel is competitive and RTG becomes competitive.
  • realistz - Wednesday, November 8, 2017 - link

    AMD was holding Radeon team hostage. This is a promotion for Raja.
  • tipoo - Wednesday, November 8, 2017 - link

    I appreciate filing this under "Woah".
  • Ryan Smith - Wednesday, November 8, 2017 - link

    It really is that kind of a week.

    Even if you bought the rumors of Raja being hired by Intel, announcing the hire and the initiative to produce dGPUs the very next day is quite a turnabout.
  • Yojimbo - Wednesday, November 8, 2017 - link

    Yeah, Qualcomm, Broadcom, Intel, NVIDIA... There's a big melee brewing. Most likely with IBM sitting on the sidelines. NVIDIA just wants to sell their GPU accelerators on every platform, but this move by Intel could force their hand to try to develop a platform themselves. Even if NVIDIA reacts just by staying their course, Broadcom seems to be gunning for Intel's data center dominance. Maybe NVIDIA buys IBM's Power unit. I don't think it's likely, but maybe NVIDIA can buy AMD and sell the graphics portion to Qualcomm :D Could Intel revoke the x86 license without risking anti-trust investigation? What would happen to Intel's access to AMD's x86 patents if they revoke the license?

    I was just trying to imagine yesterday what the market would be like if, when AMD approached NVIDIA to buy them before AMD bought ATI, AMD's board had accepted Jensen Huang's rumored demand to lead the combined company. Would they have made the same Fusion mistake under Huang? I have no idea. Fusion must have been the strategy that led AMD to pursue the merger, but NVIDIA seemed convinced that memory incompatibility would sink it, which turned out to be correct.
  • lmotaku - Wednesday, November 8, 2017 - link

Because of how old x86 is now, it's partially open. Intel can only require a license for specific features, which AMD already pays for. AMD made the x86-64 instructions, for which Intel pays the license fee.

    Why exactly would Intel say "Stop adding new features to our old x86 instruction set!" "We never wanted to be able to use 64bit registers!" ?

    Would be kinda stupid to say "don't make improvements for us!" Since AMD made AMD64, Intel hasn't really changed x86 all that much. It's different than back in the day where you could put an AMD CPU on an Intel-based mobo and it used all the same designs. That would be a problem and Intel wouldn't let that fly today without royalties.

    Other than that, your other speculations are anyone's guess because I didn't pay close enough attention when that stuff was going on.

I think ATi got a lot of flak back in the day but they always did what they do now: provide better performance for less money. I don't care about wattage, never did. It's definitely a plus, but ppw has never been on my radar, only ppd, and AMD has continued to deliver that. They have a big enough spot on the playing field to keep going—especially now that they have dominated in consoles and iGPUs.

This era of GPUs (R9 to RX to Vega), as far as ppw goes, has been underwhelming each time. R9 did fairly well, but RX was supposed to push them further. We were disappointed a little because they continued to just be "Okay" or "Good enough". Problem is, we expected more yet again from Vega. It didn't come, and Raja will get a lot of flak for this folly while Lisa Su will be praised for how well she did with the CPUs. It could have been a bad strategy, lack of money, or anything that caused it, but I think AMD would have been better off keeping the ATi branding. Loyal fans have been waiting for ARG, or something like it, to come from AMD (Ati Radeon Group, for example). ATi stood upon "Technology you can trust", and we still trust ATi; RTG was flashy and modern for a bit, but we need a from-the-ground-up revival like Ryzen.
  • Yojimbo - Thursday, November 9, 2017 - link

    I think you misinterpreted the x86 part of my post. My point was Intel's reaction to NVIDIA buying AMD. Intel does not want NVIDIA to have an x86 license. If NVIDIA were to buy AMD and Intel could find a way to deny NVIDIA an x86 license, they would.

    Also, x86 is partially open, but that doesn't help a newcomer who wants to compete with Intel with a modern x86 CPU. A lot of software takes advantage of the newer instructions. Not being able to handle those instructions would be a no go.

    As far as ATI, look at the historical market share. NVIDIA and ATI were neck and neck back then, with NVIDIA usually a little out in front. AMD bought ATI when ATI's market share was near a high water mark (they had just passed NVIDIA for a short time, if I remember), and AMD's CPU business was just going sour. GCN was introduced in 2011, and I think it was the first architecture that was built from the ground up to enact AMD's Fusion strategy. It was designed to be an architecture for heterogeneous processors in APUs. Over the years it has showed its problems trying to be a big, discrete GPU. NVIDIA first took over the high end of the market and then with Maxwell took over most of the whole PC GPU market. AMD didn't have the money to fix the problems. I think Mantle was a desperate attempt to try to relieve some of the deficiencies of the architecture as a high-power discrete gaming GPU by exposing some of the features that were put into the architecture for AMD's heterogeneous computing plans. But, much like GPUOpen, developers don't like when a hardware company asks them to do the platform work for them, not when they have other options; it adds a lot of complexity and cost... for not much gain.

    As far as AMD's recent GPUs, they were "okay" and just "good enough" (I would argue they were not nearly good enough. From a consumer's perspective, once AMD lowered their operating margins to razor thin levels they may have been "good enough", but from a business perspective the fact that they had to operate with such low margins meant the products were not good enough) not because that's what they wanted them to be, but because that's what their architecture and their financial resources allowed them to be.

    You shouldn't be creating a dichotomy between Lisa Su succeeding with CPUs and Raja Koduri failing with GPUs. That is myopic. Lisa Su is CEO of all of AMD, and is responsible for the allocation of resources to various parts of the company. Firstly, before Lisa Su arrived, AMD made the strategic decision to pursue Fusion and design their GPU architecture around it, as discussed above. Later, after AMD's CPU business turned sour, AMD made a strategic decision to put resources into reviving their CPU business at the expense of their GPU business, which is what was bringing in money to them at the time. I think this decision must have been made when Lisa Su was at the company, but before she was CEO. However, Lisa Su continued with the strategy when she took over. In fact, I think she probably intensified it, because she enacted various cost cutting measures (as the company was facing a financial crisis). Look at AMD's R&D budget and compare it to NVIDIA's. Then consider that AMD is simultaneously trying to compete with Intel in CPUs. Finally, consider two more things: 1) The re-hiring of Jim Keller by Lisa Su (not yet CEO) in 2012, and 2) the relative success of Zen to AMD's GPUs. Just follow the probabilities to figure out where the money most likely was going. Raja Koduri and his RTG was making a Thermopylae-like stand. Only he doesn't get the adulations as Leonidas does, so he decided to go somewhere else. Could he have done better than he did? Maybe, I have no idea. Could he have taken the fight to the enemy? Certainly not.
