Intel on Thursday notified its partners and customers that it will be discontinuing its Itanium 9700-series (codenamed Kittson) processors, the last Itanium chips on the market. Under its product discontinuance plan, Intel will cease shipments of Itanium CPUs in mid-2021, a bit over two years from now. The impact on hardware vendors should be minimal – at this point HP Enterprise is the only company still buying the chips – but it nonetheless marks the end of an era for Intel and its interesting experiment with a non-x86 VLIW-style architecture.

The current-generation octa-core and quad-core Itanium 9700-series processors were introduced by Intel in 2017, in the process becoming the final processors based on the IA-64 ISA. Kittson, for its part, was a clockspeed-enhanced version of the Itanium 9500-series ‘Poulson’ microarchitecture launched in 2012, and featured a 12-instruction-per-cycle issue width, 4-way Hyper-Threading, and multiple RAS capabilities not found on Xeon processors at the time. It goes without saying that the writing has been on the wall for Itanium for a while now, and Intel has been preparing for an orderly wind-down for quite some time.

At this point, the only systems that still use Itanium 9700-series CPUs are the HPE Integrity Superdome machines, which run the HP-UX 11i v3 operating system and launched in mid-2017. Intel's sole remaining Itanium customer will have to submit its final Itanium orders – as well as orders for Intel's C112/C114 scalable memory buffers – by January 30, 2020. Intel will then ship its last Itanium CPUs by July 29, 2021. HPE, for its part, will support its systems through at least December 31, 2025, though depending on how much stock HPE wants to keep on hand, it will presumably stop selling them a few years sooner than that.

With the EOL plan for the Itanium 9700-series CPUs in place, this is the end of the road for the whole Itanium project, both for HPE and for Intel. The former has been offering Xeon-based NonStop and Integrity servers for years now, while the latter effectively ceased development of new CPUs featuring the IA-64 ISA earlier this decade. The machines running these CPUs will of course continue operating through at least late 2025 (or until HPE drops HP-UX 11i v3), simply because mission-critical systems are bought for the long haul, but Intel will cease shipments of Itaniums roughly two and a half years from now.

Source: Intel

59 Comments

  • HStewart - Friday, February 01, 2019 - link

    One thing: this does not mean Itanium is gone. Intel still has the 9700 series, but with Microsoft dropping support in Windows it is pretty much end of life. I don't see why someone needs one when they can get Xeons.
  • SarahKerrigan - Friday, February 01, 2019 - link

    It does mean Itanium is gone. That's what this article is about. In a year, no more 9700 orders.

    And until Xeon can run the full set of Itanium operating systems, this affects customers. NSK has been ported; VMS and GCOS 8 have announced ports, but neither is shipping on Xeon today AFAICT; HP-UX, likely the largest installed base of all of them, has no announced port, just a vague "we'll provide some kind of migration path to Linux."
  • rocky12345 - Friday, February 01, 2019 - link

    It has been a very long time since I heard anything about the IA64 platform. Like others have said, Intel probably hoped that at some point IA64 would be adopted on the desktop in some shape or form, so they could become the main supplier of CPUs down the road. AMD released x64 before Intel was able to bring some form of IA64 to the desktop market.

    My theory is that if AMD had not been able to get x64 out first, and given that Intel had locked down IA64 to the point nobody could clone it like x86, AMD would not be making CPUs right now, or would at least be a much smaller player in the market. Just look at what Intel did to Nvidia back in the day when they pulled Nvidia's chipset license, which disallowed Nvidia from making any Intel-based chipsets; and that was just from Nvidia's license running out and Intel not renewing it.
  • frenchy_2001 - Friday, February 01, 2019 - link

    No, Intel did not pull Nvidia's license. They changed their chipset interface from the Front Side Bus (FSB), which had been licensed to many companies (SiS, VIA, ATI, Nvidia...), to QPI in the Core iX processors.
    Intel has not licensed QPI to anybody, which explains why there are no 3rd-party chipsets anymore.
  • NuclearArmament - Friday, February 01, 2019 - link

    I hate AMD for killing this processor. It truly was the future of computing.
  • HStewart - Friday, February 01, 2019 - link

    It was not AMD that did it - more likely, advancements in Xeons. I am sure Intel is using some of the advancements from Itanium in its x86 processors, especially Xeons. By the way, only the 9300 is discontinued.
  • SarahKerrigan - Friday, February 01, 2019 - link

    Typical HStewart lies. The PCN AnandTech links to specifically lists the 9700 series as being discontinued.
  • SarahKerrigan - Friday, February 01, 2019 - link

    Eh, not really. I spent a lot of years working at a very intimate level with IPF, and while there were a lot of things I liked, they were almost all uarch, not ISA.

    The good:
    +Ridiculously fast caches. Single-cycle L1, large and fast L2 and L3.
    +Four load-store units. Meant that you could keep a LOT of memory throughput going. Doesn't apply to Poulson or Merced though.
    +Two large, fast FMA units. Paired with the above, meant some linear algebra code performed very well.
    +Speculative loads - software-controlled speculation that didn't entirely suck.

    The bad:
    -Code density was absolutely atrocious. Best case, assuming no NOP padding (ie, favorable templates for your code stream) was 128 bits for three ops. That's also assuming you don't use the extended form (82 bits IIRC) that took up two slots in your instruction word.
    -Advanced loads never worked well and had strange side effects. This is *not* software speculation done right.
    -L1 miss rate was always high IME, both on I and D side. I've assumed there was a trade-off made here that resulted in the undersized 2-way L1 that was accessible in one cycle.
    -Intel never seems to have felt SIMD beyond MMX equivalence was necessary. There were technical-compute apps that would have benefited from it.
    -Intel never seems to have taken multithreading seriously. The switch-on-event multithreading in Montecito and up offered tiny gains IME, and at least one OEM (SGI) didn't bother supporting it at all. Even FGMT would have been a welcome improvement.
    -I feel like there was a tendency in the IPF design to jump to whizz-bang features that didn't offer much in real code - RSE comes to mind.

    In summary, I had a lot of fun working with Itanium. It had a million dials and switches that appealed to me as a programmer. But in-order cores have progressively looked more and more like the way forward, and IPF was never consistently good enough to disprove that.
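    The code-density arithmetic in the comment above can be made concrete. The sketch below (purely illustrative) works out bits per operation for IA-64's 128-bit, three-slot bundles; the x86-64 average instruction size used for comparison is a rough assumed figure, not something from the comment:

    ```python
    # IA-64 packs three instruction slots into one 128-bit bundle
    # (a 5-bit template field plus three 41-bit slots).
    BUNDLE_BITS = 128
    SLOTS_PER_BUNDLE = 3

    bits_per_op = BUNDLE_BITS / SLOTS_PER_BUNDLE  # ~42.7 bits per operation
    bytes_per_op = bits_per_op / 8                # ~5.3 bytes per operation

    # For comparison: x86-64 instructions average a few bytes each
    # (3.5 bytes is an assumed figure for illustration only).
    x86_avg_bytes = 3.5
    density_penalty = bytes_per_op / x86_avg_bytes

    print(f"{bits_per_op:.1f} bits/op, {bytes_per_op:.2f} bytes/op")
    print(f"~{density_penalty:.2f}x the size of an average x86-64 instruction")

    # An extended (long-immediate) op occupies two of the three slots,
    # so a bundle containing one carries at most two operations:
    bits_per_op_extended = BUNDLE_BITS / 2  # 64 bits for that op, best case
    print(f"an extended op effectively costs {bits_per_op_extended:.0f} bits")
    ```

    And this is the best case the comment describes: any NOP padding forced by unfavorable templates pushes the effective bits-per-op figure even higher.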
  • mode_13h - Sunday, February 03, 2019 - link

    Only 2-way? Ouch!

    Lack of FP SIMD sounds like a really bad decision. Perhaps, by the time it became feasible, the writing was already on the wall.

    I assume switch-on-event is referring to events like cache-miss?

    What's RSE?

    Why do you say in-order cores looked like the way forward? Only in GPUs and ultra-low power.
  • SarahKerrigan - Sunday, February 03, 2019 - link

    I meant out-of-order was the future. Embarrassing typo. I held onto the "maybe this EPIC thing can work out!" gospel up until Power7 came out, but after that, it was pretty clear where the future was going.

    There was very basic FP SIMD, but IIRC only paired single-precision ops on existing registers. I suspect that 128b SIMD would have been seen as heretical by the original IPF design group - remember that Multiflow had NO VECTORS coffee mugs! That being said, it wasn't really an outlier - the other RISC/UNIX players were all pretty late to the SIMD party. SPARC never got a world-class vector extension until Fujitsu's HPC-ACE, and while IBM had VMX in its pocket for years, Power6 was the first mainline Power core to ship it (and VMX performance was decidedly underwhelming on P6; P7 improved it greatly.)

    RSE was the Register Stack Engine. The idea was that registers would automatically fill/spill to backing store across function calls in a way that was mostly transparent to the application.

    Switch-on-event was indeed long-running events. IIRC the main things were either a software-invoked switch hint or an L3 miss. It took something like 15 cycles to do a complete thread switch (pipeline flush of thread 1 + time for ops from thread 2 to fill the pipeline) on Montecito/Tukwila. Per my recollection, Poulson knocked a couple cycles off SoEMT thread switch times (as well as doing some funky stuff with how it was implemented internally), but it was still several.

    Yeah, 2-way L1 was pretty painful. It was offset a little bit by the fact that - especially after Montecito shipped - the L2 was fast (7 cyc L2I, 5-6 cyc L2D IIRC) and *very* large, but the hit rate for L1I and L1D was fairly embarrassing, especially on code with less-than-perfectly-regular memory access patterns.
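    To make the Register Stack Engine description above a bit more tangible, here is a toy model of fill/spill across calls. The 96-register stacked file size matches IA-64 (r32-r127), but the `ToyRSE` class and its frame-granular spill policy are simplifications invented for illustration, not the real mechanism:

    ```python
    # Toy RSE model: register-stack frames live in a fixed-size physical
    # file; when a deep call chain overflows it, the oldest frames spill
    # to a backing store in memory, and returns refill them.
    PHYS_STACKED_REGS = 96  # IA-64's stacked registers, r32-r127

    class ToyRSE:
        def __init__(self):
            self.live_frames = []    # frame sizes resident in registers
            self.backing_store = []  # oldest frames, spilled to memory

        def live_regs(self):
            return sum(self.live_frames)

        def call(self, frame_size):
            # Allocate the callee's frame (roughly what IA-64's 'alloc'
            # does); spill oldest frames if the physical file overflows.
            self.live_frames.append(frame_size)
            while self.live_regs() > PHYS_STACKED_REGS:
                self.backing_store.append(self.live_frames.pop(0))

        def ret(self):
            # Free the callee's frame; refill the most recently spilled
            # frames from memory while they fit again.
            self.live_frames.pop()
            while (self.backing_store and
                   self.live_regs() + self.backing_store[-1] <= PHYS_STACKED_REGS):
                self.live_frames.insert(0, self.backing_store.pop())

    rse = ToyRSE()
    for _ in range(20):  # a 20-deep call chain, 16 registers per frame
        rse.call(16)
    print(len(rse.backing_store))  # 14 frames spilled: 320 regs > 96
    ```

    This captures only the spill/fill bookkeeping; the real RSE could perform the memory traffic lazily in the background, which is part of what made it "mostly transparent" to the application.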
