Final Words

Does This Make Intel and AMD the Best of Frenemies?

It does seem odd, Intel and AMD working together in a supplier-buyer scenario. Previous interactions between the two have been adversarial at best, or have required calls to lawyers at worst. So who benefits from this relationship?

AMD: A GPU That Someone Else Sells? Sure, How Many Do You Need?

Some users might point to AMD’s financials as a reason for this arrangement: had Zen not taken off, this would have been a separate source of income for AMD. Ultimately AMD is looking healthier since Ryzen, and even if Intel did rock up with piles of money, it is unclear how much volume Intel would be requesting for a product with this scope.

Or some might state that this sort of product, if positioned correctly, would encroach on some of AMD’s own markets, such as laptop APUs or laptop GPUs. My response is that it actually ends up as a win for AMD: Intel is currently aiming at 65W/100W mobile devices, which is some way away from the Ryzen Mobile parts that should come into force during 2018. Every chip AMD sells to Intel is still a sale for AMD, and it puts discrete-class graphics into a system that might otherwise have had an NVIDIA product in it. Consider that NVIDIA’s laptop GPU program is extensive: with Intel at the helm driving the finished product rather than AMD, there is scope for AMD-based graphics to appear in many more devices than if AMD went it alone. People trust Intel on this, and have done for years: if it is marketed as an Intel product, it is still a win for AMD.

Intel: What Does Intel Get Out Of This?

Intel’s internal graphics, known externally as ‘Gen’ graphics, has been third best behind NVIDIA and AMD for raw grunt. It has had trouble competing against ARM’s Mali in the low-power space, and the design has not seemed to scale up to large, 250W GPUs for desktops. If you have been following certain analysts that keep tabs on Intel’s graphics, you might have read about the woes and missed targets that have reportedly occurred behind closed doors every time there has been a node shrink. Even though Intel has competed with GT3/GT4 graphics with eDRAM in the past (known as the Crystalwell products), some of which performed well, they came at additional expense for the OEMs that used them.

So rather than scale Gen graphics out to something bigger, Intel worked with AMD to purchase Radeon Vega. It is unclear if Intel approached NVIDIA to try something similar, given that NVIDIA is the bigger company, but AMD has a history of supplying one of Intel’s big OEM partners: Apple. AMD has also had a long-term semi-custom silicon strategy in place, while NVIDIA does not advertise that part of its business as such.

What Intel gets is essentially a better version of its old Crystalwell products, albeit at a higher power consumption. The end product, Intel with Radeon RX Vega M Graphics, aims to match other solutions (namely an Intel CPU plus NVIDIA MX150/GTX 1050 graphics) but with reduced board space, allowing for thinner/lighter designs or designs with more battery. A cynic might suggest that either way it was always going to be an Intel sale, so why bother going to the effort? One of the threads of Intel’s notebook strategy in recent years has been trying to convince users to upgrade more frequently: for the last couple of years, users who bought 2-in-1s were found to refresh their units more quickly than users of clamshell devices. Intel is trying to do the same thing here with a slightly higher class of product. Whether the investment to create such a product is worth it will be borne out in the sales numbers.

It's Not Completely Straightforward

One thing is clear though: Intel’s spokespersons who gave us our briefing were trained very specifically to avoid mentioning AMD by name in connection with this product line. Every time I expected them to say ‘AMD Graphics’ in our pre-briefing, they said ‘Radeon’. As far as the official line goes, the graphics chip was purchased from ‘Radeon’, not from AMD. I can certainly understand trying to stay on brand message, and avoiding the name from an x86 competitive standpoint, but this product fires a shot across the bow of NVIDIA, not AMD. Call a spade a spade.

Aside from the three devices that will be coming with the new processors, from HP, Dell, and the Intel NUC, one interesting side story came out of this. Intel has already had interest from a cloud gaming company in these new processors. In the same way that a massive GPU-based datacenter can offer many users cloud gaming services, these new chips are set to be deployed in the datacenter for 1080p gaming at very high density, perhaps more so than current GPU solutions. An interesting thought.


Intel NUC Enthusiast 8: The Hades Canyon Platform

The HP and Dell units are set to be announced later this week during CES. For information about the Intel NUC, which uses the overclockable Core i7-8809G processor, Ganesh has the details in a separate news post.

Comments

  • MFinn3333 - Monday, January 8, 2018

    Intel and AMD were forced to work together because they hated nVidia.

    This is a relationship of Spite.
  • itonamd - Monday, January 8, 2018

    and they both hate Qualcomm's work on Windows 10
  • artk2219 - Wednesday, January 10, 2018

    Never underestimate the power of hatred against a mutual enemy. It worked for the allies in World War II, at least until that nice little cold war bit that came after :).
  • itonamd - Monday, January 8, 2018

    Good job, but still disappointed. According to ark.intel.com, Intel does not use the HBM2 as an L4 cache, or share it with the processor graphics when the user wants to use the graphics card. And it is still 4 cores, not 6 cores like the i7-8700K.
  • Cooe - Monday, January 8, 2018

    It's a laptop chip first and foremost, and the best & latest Intel has in its mobile line is 4c/8t Kaby Lake for power reasons (and the max 100W power envelope here precludes 6c/12t Coffee Lake already, even if a hypothetical mobile CL part existed). Not to be rude, but your expectations were totally unreasonable considering the primary target market (thin & light gaming laptops & mobile workstations).
  • Bullwinkle-J-Moose - Monday, January 8, 2018

    "It's a laptop chip first and foremost" ???
    -----------------------------------------------
    It may have been presented that way initially, but there were hints of other products from the very beginning

    Once the process is optimized over the next few years, we may start seeing some very capable 4K TVs without the need for Thunderbolt graphics cards

    Now, about that latency problem.......
    What's new for gaming TVs at CES?
  • Kevin G - Monday, January 8, 2018

    6c/12t would have been perfectly possible with Vega M under 100W. The catch is that both the CPU and GPU wouldn't coexist well under full load. The end result would be a base clock lower than what Intel would have liked on both parts for that fully loaded scenario. Though under average usage (including gaming where 4c/8t was enough), turbo would kick in and everything would be OK.

    The more likely scenario is that Intel simply didn't have enough time in this product's development to switch Kaby Lake for Coffer Lake and get it validated. Remember that Coffee Lake was added to the roadmap when Cannon Lake desktop chips were removed.
  • Bullwinkle-J-Moose - Monday, January 8, 2018

    You had it right the first time

    Coffer Lake holds all the cash
  • Hurr Durr - Monday, January 8, 2018

    Come on, this particular cartel is quite obvious.
  • Strunf - Monday, January 8, 2018

    This all proves that the x86 wars are a thing of the past and NVIDIA is pushing these two into a corner...
