The Larrabee Chapter Closes: Intel's Final Xeon Phi Processors Now in EOL
by Ian Cutress & Anton Shilov on May 7, 2019 2:15 PM EST - Posted in
- CPUs
- Intel
- Enterprise
- MIC
- Xeon Phi
- Servers
- Knights Mill
Intel this week initiated the product discontinuance plan for its remaining Xeon Phi 7200-series processors, codenamed Knights Mill (KNM), bringing an end to a family of processors that has since been superseded by the likes of Intel's 56-core Xeon Platinum 9200 family. Over their lifetime, Xeon Phi parts were used primarily in supercomputers.
Customers interested in the final Intel Xeon Phi 7295, 7285, and 7235 processors will have to place their last orders for these devices by August 9, 2019, and Intel will ship the final Xeon Phi CPUs by July 31, 2020. Intel's Knights Mill processors feature 64, 68, or 72 upgraded Silvermont-derived x86 cores paired with AVX-512 units and on-package MCDRAM. The parts were essentially Knights Landing parts optimized for deep learning applications.
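On Knights Landing and Knights Mill, the on-package MCDRAM could be configured in flat mode and allocated explicitly through the memkind library's hbwmalloc interface. Below is a minimal sketch of that usage, assuming memkind is installed and the MCDRAM is set to flat (rather than cache) mode:

```c
#include <stdio.h>
#include <hbwmalloc.h>  /* memkind's high-bandwidth memory interface */

int main(void)
{
    /* Check whether high-bandwidth memory (MCDRAM) is exposed to the OS. */
    if (hbw_check_available() != 0) {
        fprintf(stderr, "No MCDRAM detected; falling back to DDR.\n");
        return 1;
    }

    /* Allocate a 1 GiB buffer out of MCDRAM instead of DDR4. */
    size_t bytes = 1ull << 30;
    double *buf = hbw_malloc(bytes);
    if (!buf) {
        perror("hbw_malloc");
        return 1;
    }

    /* ... bandwidth-bound work on buf ... */

    hbw_free(buf);
    return 0;
}
```

Link with -lmemkind. In cache mode the MCDRAM instead acts as a transparent cache in front of DDR4, and no special allocation is needed.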
Intel launched several generations of Xeon Phi over the years, including Knights Ferry, Knights Corner, Knights Landing, Knights Hill (never released), and Knights Mill. The product line started off as the Larrabee project, aimed at designing a general-purpose x86 compute and graphics solution for Intel. We had a first glimpse of the initial architecture way back in 2008; however, the graphics side of the project was killed by mid-2010, and the design lived on as a many-core processor with large vector compute units.
- April 2007: Intel Developer Forum - Beijing 2007: Penryn and Intel's High End GPU
- March 2008: Opening the Kimono: Intel Details Nehalem and Tempts with Larrabee
- August 2008: Intel's Larrabee Architecture Disclosure: A Calculated First Move
- September 2009: IDF 2009 - World's First Larrabee Demo
- December 2009: Intel Cancels Larrabee Retail Products, Larrabee Project Lives On
- May 2010: Intel Kills Larrabee GPU, Will Not Bring a Discrete Graphics Product to Market
- June 2010: Intel MIC: 22nm, 50+ Cores, Larrabee for HPC Announced
In 2016, one of the original developers of Larrabee, Tom Forsyth, wrote a piece detailing the project, some of its goals, and how far the part had been developed with graphics in mind before being released as a many-core processor. Here's a quote, and the whole thing is well worth a read.
From Tom Forsyth: Why Didn't Larrabee Fail?
PRIMARY GOAL: VIRTUAL SUCCESS! It would have been a real success if it had ever shipped. Larrabee ran Compute Shaders and OpenCL very well - in many cases better (in flops/watt) than rival GPUs, and because it ran the other graphical bits of DirectX and OpenGL pretty well, if you were using graphics APIs mainly for compute, it was a compelling package. Unfortunately when the "we don't do graphics anymore" orders came down from on high, all that got thrown in the trash with the rest. It did also kickstart the development of a host of GPGPU-like programming models such as ISPC and CILK Plus, and those survive and are doing well.
AVX-512 Support Propagation by Various Intel CPUs
(a newer uArch supports the subsets of the older uArch in its column)

| Xeon | Common to Both | Xeon Phi |
|------|----------------|----------|
| Skylake-SP: AVX512BW, AVX512DQ, AVX512VL | AVX512F, AVX512CD | Knights Landing: AVX512ER, AVX512PF |
| Cannon Lake: AVX512VBMI, AVX512IFMA | | Knights Mill: AVX512_4FMAPS, AVX512_4VNNIW |
| Cascade Lake-SP: AVX512_VNNI | | |
| Cooper Lake: AVX512_BF16 | | |
| Ice Lake: AVX512_VNNI, AVX512_VBMI2, AVX512_BITALG, AVX512+VAES, AVX512+GFNI, AVX512+VPCLMULQDQ (not BF16) | AVX512_VPOPCNTDQ | |

Source: Intel Architecture Instruction Set Extensions and Future Features Programming Reference (page 16)
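Software typically keys off these subsets at runtime rather than hardcoding a CPU list: each subset in the table has its own CPUID feature bit. As a minimal sketch, here is how a few of them can be queried through CPUID leaf 7 using GCC/Clang's <cpuid.h> helper (bit positions per Intel's programming reference):

```c
#include <stdio.h>
#include <cpuid.h>  /* GCC/Clang wrapper around the CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 7, subleaf 0 reports the AVX-512 feature flags. */
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return 1;

    printf("AVX512F    (foundation, all parts)  : %d\n", (ebx >> 16) & 1);
    printf("AVX512CD   (conflict detection)     : %d\n", (ebx >> 28) & 1);
    printf("AVX512ER/PF (Knights Landing)       : %d/%d\n",
           (ebx >> 27) & 1, (ebx >> 26) & 1);
    printf("AVX512BW/DQ/VL (Skylake-SP)         : %d/%d/%d\n",
           (ebx >> 30) & 1, (ebx >> 17) & 1, (ebx >> 31) & 1);
    printf("AVX512_4FMAPS/4VNNIW (Knights Mill) : %d/%d\n",
           (edx >> 3) & 1, (edx >> 2) & 1);
    printf("AVX512_VNNI (Cascade Lake-SP)       : %d\n", (ecx >> 11) & 1);
    return 0;
}
```

On a Knights Mill part the 4FMAPS/4VNNIW bits read 1 while the Skylake-SP BW/DQ/VL trio reads 0, which is why binaries built for one line do not necessarily run on the other.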
As for the Xeon Phi family, Knights Mill was the final product. Last year Intel discontinued its Xeon Phi 7210, 7210F, 7230, 7230F, 7250, 7250F, 7290, and 7290F processors, known as Knights Landing. The final Knights Landing shipments for existing systems will be made by July 19, 2019, with the final Knights Mill shipments following on July 31, 2020.
Related Reading
- Intel Begins EOL Plan for Xeon Phi 7200-Series ‘Knights Landing’ Host Processors
- Intel Discontinues Xeon Phi 7200-Series ‘Knights Landing’ Coprocessor Cards
- Intel’s "Knights Landing" Xeon Phi Coprocessor Detailed
- Intel @ SC15: Launching Xeon Phi “Knights Landing” & Omni-Path Architecture
- SuperComputing 15: Intel’s Knights Landing / Xeon Phi Silicon on Display
- Intel Lists Knights Mill Xeon Phi on ARK: Up to 72 cores at 320W with QFMA and VNNI
- Intel Announces Knights Mill: A Xeon Phi For Deep Learning
Source: Intel
Comments
tipoo - Tuesday, May 7, 2019 - link
I still 'member that they took Project Offset down with Larrabee: https://www.youtube.com/watch?v=TWNokSt_DjA
CiccioB - Tuesday, May 7, 2019 - link
That was because it was manufactured with almost two process nodes (PP) of advantage over the competition (that's Nvidia).
As Nvidia got access to better process nodes, Intel's advantage was annihilated in a single generation, never to be regained, as the idea of using fat x86 cores with even fatter AVX-512 units to do parallel computing was a completely... stupid idea. Intel probably thought that with its (at the time) big advantage over the competition, assured by its very advanced process nodes, it could use x86 even where it was a clear loser in other circumstances.
And in fact they tried to put it everywhere, including in smartphones against ARM, losing another war miserably as ARM closed the gap from two process nodes of disadvantage to just one.
Intel's hope is that the new trend is to use more and more compute shaders to do what fixed-function units do in today's GPUs. Ray tracing is the technology expected to kill off those units (in favor of others, as we have seen with Turing), units where Intel has no experience or patents.
As ray tracing makes its way into game engines and development choices (leaving old and now quite clumsy rasterization effects and workarounds behind), their new GPUs will close the gap with those of the competition.
Their advantage, like AMD's today in the console market, will be the immense pool of users that will have Intel's technology at their disposal as a giveaway in the form of integrated graphics.
Any future game (especially the simpler ones with limited budgets) will try to exploit their technology as much as possible, to run well on the baseline hardware and enlarge the potential market.
AMD, and even more so Nvidia, will have to cope with Intel's decisions on the technology (and implementation) that will pave the future. If they do not support it the way Intel does, the chances that new features/extensions/HW units will fail are high, even if they are potentially really powerful.
AshlayW - Tuesday, May 7, 2019 - link
Shame that a company such as Intel (anti-competitive, anti-consumer, anti-innovation in the consumer market) would 'pave the future' of graphics, simply because a large number of people have their crap in their PCs. Hopefully that large number diminishes as Ryzen erodes their market share. No doubt Intel will abuse their position to squeeze every last damn cent out of the consumer at the expense of innovation and fair pricing.
Calin - Wednesday, May 8, 2019 - link
I think you are living in another world, one where Intel plays in midrange and high-end gaming. While Intel made some great leaps in integrated graphics prowess in their high-end chips (the ones with integrated DRAM), most of what they sell is scrape-of-the-bottom-of-the-barrel performance. Also, with an old process, Intel can't compete economically in the lucrative graphics market.
CiccioB - Wednesday, May 8, 2019 - link
@Calin
I'm speaking about the new future graphics processors Intel is creating, not the old ones (which, however, are not so terrible and have decent support, unlike AMD's APUs).
The process node is irrelevant in this context. They are going to fill ~80% of the PC market with their own GPUs, and so they give developers a wide market to target.
It doesn't matter if they are not the top of the hill in performance or efficiency.
AMD is neither, but they are now guiding the way game engines are developed thanks to their monopoly on console HW. This will soon change, however, as Intel is going to be a new big player in the graphics market that developers will want to consider.
However powerful AMD GPUs may be, they will still trail Intel in units sold, as Intel ships all of its consumer CPUs with an integrated GPU while AMD provides only a few APUs in laptops, since their discrete GPUs are terrible in power efficiency and no one wants them in the mobile market. On the desktop they are not (yet) putting a GPU into their Ryzen CPUs, so Intel has an easy path to invade the market with whatever (crappy or not) graphics technology they create. It just needs to be "powerful enough".
mode_13h - Thursday, May 9, 2019 - link
> AMD is neither, but they are now guiding the way game engines are developed thanks to their monopoly on console HW.
From talking to some AMD employees I know, I think this advantage is more imagined than real. In many cases, it seems the console and PC engines are developed by separate teams and have little in common.
The flip side is that if Intel dominates the PC graphics landscape, it won't necessarily give them an advantage in consoles. Plus, gaming might just end up in the cloud, like everything else. The next gen of consoles might be the last.
Oh, and AMD also sells Ryzen APUs for the desktop, in case you didn't notice.
CiccioB - Monday, May 13, 2019 - link
I didn't say that Intel will have an advantage in the console market (where it is not present).
I said that game engines and game optimizations will be written to take into account ALSO Intel's new GPUs, as they are going to deliver decent performance (even though maybe not the best) and be spread across all ultramobile/laptop/desktop devices, enlarging the potential gamer population.
> Oh, and AMD also sells Ryzen APUs for the desktop, in case you didn't notice.
Yes, I know, but how many of them did they sell?
You have probably not fully understood the difference in market presence (and share) that Intel will have once it delivers its new integrated GPUs.
mode_13h - Thursday, May 9, 2019 - link
> Intel probably thought that with its (at the time) big advantage over the competition, assured by its very advanced process nodes, it could use x86 even where it was a clear loser in other circumstances.
I think the reason was simpler than that. Intel had a history of getting burned by non-x86 (remember Itanium?). They probably learned this lesson too well, eyeing anything non-x86 with great skepticism.
CiccioB - Thursday, May 9, 2019 - link
Well, the lesson was even simpler: outside the markets where it benefits from backward compatibility, x86 simply has no chance of doing anything better than the competition.
Good for them that they managed to get rid of the entire competition in the server market, as they could produce much cheaper CPUs than the others.
But outside those markets, x86 is simply outperformed by everything else. They even failed to keep pace with Nvidia and their GPUs in parallel computing, despite the difficulties that exist in making one (or even more than one) GPU compute decently.
Not to mention that Xeon Phi exploited almost nothing of those x86 cores (which could have been anything else and enjoyed better energy efficiency); most of the crunching power came from those beefy AVX units.
mode_13h - Thursday, May 9, 2019 - link
Yeah, I'm just saying that within Intel, it was probably very difficult for anything non-x86 to succeed, politically. I get that the ISA is rather poor and there's only so much lipstick you can put on that pig.
I would be interested in knowing why they killed the successor to the i860, another big non-x86 push the company once made. I heard rumors that it was really promising, but maybe it fell victim to grand dreams of Itanium.