Intel has begun its product discontinuance program for its "Poulson" Itanium 9500-series processors. Intel’s customers will have to place their final orders for these CPUs this fall, and the last Poulson chips will ship in early 2021.

Intel’s Itanium 9500-series lineup consists of four CPUs: the quad-core Itanium 9520 and 9550 as well as the eight-core Itanium 9540 and 9560. All of these processors were released in Q4 2012 and were supplanted by the newer "Kittson" Itanium 9700-series CPUs last year. Now Intel has slated the entire Poulson family for discontinuation in Q1 2021, a little more than eight years after their release.

Intel’s customers are advised to place their orders for Itanium 9500-series processors by September 17, 2018. Orders will become non-cancelable on September 28, 2018. The final Poulson chips will be shipped by March 5, 2021. Keep in mind that HP Enterprise (the only company that still uses Itanium) will cease selling servers based on the Itanium 9500-series on March 31, 2018, so demand for Poulson products is not going to be high in the coming years.

Intel’s Poulson processor (pictured above on the right, image by Silicon.fr) represented one of the most significant microarchitectural and performance advances in the history of the Itanium line: the CPU doubled issue width to 12 instructions per cycle, added 4-way Hyper-Threading, ran at higher frequencies, and offered up to eight cores. By contrast, Intel’s latest Itanium 9700-series processors run only slightly faster than the highest-end 9500-series chips.

The retirement of the Poulson family will mean that Intel's Itanium 9700-series processors will be the only Itanium parts on the market – and indeed the last Itanium processors altogether, as Intel has ceased further Itanium development. Meanwhile, only a single vendor – long-time Itanium partner HP Enterprise – is still selling Itanium-based servers. Even so, expect Itanium machines to be around for years to come; HPE’s Integrity machines are used primarily for mission- and business-critical applications, where customers are invested in the platform for the very long term.

Image Source: Silicon.fr

Source: Intel

  • mode_13h - Sunday, March 11, 2018 - link

    point = missed.

    I'm saying Intel was looking to solve all computing problems inside their CPUs. And if we were still reliant on CPUs for these things, they'd have had a point. But it turned out to be much easier for sound cards to include DSPs and graphics cards to include ever more sophisticated & programmable ASICs.
  • mode_13h - Saturday, March 10, 2018 - link

    Lol. That must've been some tech journalist not knowing what they were talking about. TriMedia was only an embedded platform. It made no attempt at an upward-compatible ISA, nor do I think it had an MMU (not that Apple even used the one they had, back then).

    If they were looking at it for any reason, then it would've been as an accelerator chip for audio & potentially graphics processing. They *did* have a lot of floating point horsepower, but I'm not sure they were ever that much faster (if at all) than the Intel or PPC CPUs of their day. They'd have excelled only on price/perf or perf/W.

    I remember even reading about concerns over VLIW running out of steam, due to the networking problems involved in moving data between ever growing register files and large numbers of execution units. Enter ideas like Transport-Triggered Architectures.

    Say, did any ideas from asynchronous computing architectures ever go mainstream? Or, are modern CPU cores still fully synchronous?
  • name99 - Saturday, March 10, 2018 - link

    Uhh, when Apple was looking at Trimedia we were already using PPC.
    I'll be the first to admit that VM under old-school macOS wasn't going to win any awards for elegance or performance, but it was there and did work.

    The real issue I wanted to clarify was that Trimedia was looked at as a media co-processor, not as a CPU replacement.
    And yeah, a SUBSTANTIAL concern (which proved correct) was that mainstream CPUs at that time were improving so fast that a dedicated SW stack for a media co-processor made little sense --- the speed advantage today would be gone next year, and the co-processor would be slower than the CPU the year after that. No-one (correctly) expected these weird non-mainstream designs to upgrade any faster than every four or five years.
  • mode_13h - Sunday, March 11, 2018 - link

    Just because they were using PPC, you brazenly assume they were using VM? Nooo...

    Even through MacOS 8, VM was disabled by default. Memory corruption and hard-crashes were a daily affair, for serious Mac users. The company I worked for built hardware/software mainly for Mac users, at the time. It was a pretty dismal affair.

    TriMedia was good for its price and power envelope. Nice little embedded chip found a lot of uses. The worst thing was they failed to stay on the same performance curve as the big CPUs, so it faded into obscurity.
  • Elstar - Sunday, March 11, 2018 - link

    If you read my original post about TriMedia, I used the word "coprocessor". I don't think anybody in this forum is seriously arguing that TriMedia was ever a realistic CPU replacement. (I also never said that TriMedia was an Apple design, but I digress.)

    It is sometimes hard to predict which "weird non-mainstream designs" might turn out to have huge hidden demand once they are reasonably available. FPUs were once weird/non-mainstream chips. So were MMUs. So were GPUs. Now TPUs / neural-nets are the trendy dedicated-chip specialization that is getting integrated into mainstream chips.

    Also, unlike the TriMedia days, weird and constantly changing instruction set architectures aren't a dealbreaker anymore. GPUs have proven that translation of partially compiled code to machine code can work quite well.
  • mode_13h - Sunday, March 11, 2018 - link

    Even coprocessor implies some level of application programmability. Of course it's not an Apple design.

    I think FPUs were never weird, per se. Intel had one for the 8086 - the 8087 - dating back to 1980. The use of floating point (and its implementation in hardware) goes back long before that.

    GPUs provide a useful admonition that it's not the destiny of all discrete processing elements to get integrated into the CPU.

    The same will necessarily hold for neural network accelerators. Sure, you might get token support, the same way you get a token iGPU, but they will never be comparable to the capabilities of the discrete chips.

    > GPUs have proven that translation of partially compiled code to machine code can work quite well.

    This is their secret weapon. It has yet to become the de facto approach for general-purpose code, although MS has probably done more than anyone to make that a reality. On a related note, see WebAssembly.
  • Elstar - Monday, March 12, 2018 - link

    Heh. Floating point hardware is decades older than the 8087. Before the 8087 and IEEE 754, FPUs all behaved in subtly different ways that were a pain to debug. If that isn't the definition of "weird", then I don't know what is.

    I wouldn't dismiss iGPUs as token. For 99.99% of people, SoCs are good enough. The fact that discrete GPUs still exist at all is proof that people are *finding* problems to solve with discrete GPUs.

    Also, if you care about performance, then WebAssembly is awful. It is basically JavaScript byte code, and JavaScript was not designed for performance. Yes, web browsers have done an amazing job making JavaScript run fast, but making a terrible design run fast will never be as good as designing something to be fast from the start.
  • mode_13h - Tuesday, March 13, 2018 - link

    I never said there was nothing weird about FP hardware, just reacting to the idea of having FP in hardware. That's what you seemed to be saying was weird.

    As for iGPUs, the point was they don't eliminate the need for dGPUs, the same way that iFPUs killed dFPUs. It's not a trend, as you seemed to be implying.

    I don't know if you've had your head in the sand, but the market for dGPUs has been white hot. Doesn't matter whether it's because of gaming, VR, crypto, or what. The point is the same - if iGPUs really were a complete replacement, then this wouldn't be so.

    > WebAssembly

    You clearly don't know what you're talking about. The initial version has some annoying limitations (e.g. 64k pages), but there's no good reason why C++ (for example) compiled to WebAssembly can't approach comparable performance to natively-compiled C++.

    Compiling to WebAssembly avoids most of the performance limitations encountered when compiling to JavaScript. It was designed to compete with native compiled code, and is lower-level (like true asm) than Java's Forth-based machine model.
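
    For illustration, here's a minimal sketch of that workflow, assuming the Emscripten toolchain (em++); the file name, function, and build flags below are placeholders, and exact flags vary by toolchain version:

        // dot.cpp -- a plain C++ hot loop; compiled to WebAssembly it keeps
        // native-style pointer arithmetic over a flat linear memory, which is
        // why it can get much closer to native speed than JavaScript can.
        //
        // Hypothetical build command (assuming Emscripten is installed):
        //   em++ -O3 dot.cpp -o dot.js
        #include <cstddef>

        extern "C" double dot(const double* a, const double* b, std::size_t n) {
            double sum = 0.0;
            for (std::size_t i = 0; i < n; ++i)
                sum += a[i] * b[i];   // no boxing, no GC, no dynamic typing
            return sum;
        }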
  • Johan Steyn - Friday, March 16, 2018 - link

    I believe the main reason Itanium failed was AMD64. AMD brought out very powerful Opterons, which caught Intel off guard. Intel was then forced to accept AMD64 and rebrand it. MS made it clear that they would not support another version of x86-64.

    It is strange that "puny" companies can change the course of history. Yes, there were other factors as well, but AMD64 was for sure the catalyst. Intel began development of x86-64 Xeons, which soon surpassed the Itaniums in performance.
  • boozed - Thursday, March 8, 2018 - link

    And the Itanic finally slips below the waves
