R.I.P: FireStream (2006 - 2012)

It goes without saying that with GCN AMD has significantly improved their GPU compute capabilities across the board. Consumer compute has already benefitted through the Radeon HD 7000 series, while professional graphics will begin benefiting with the FirePro W series, and GCN will be laying the foundation for the Heterogeneous System Architecture (HSA) in 2014. All of AMD’s product lines are benefiting from GCN… all of them but one: FireStream.

For those of you not familiar with it, FireStream is (or rather was) AMD’s line of dedicated GPU compute products. Initially launched in 2006 and based on AMD’s R580 GPU, FireStream was AMD’s first product geared exclusively towards GPU computing. Since 2006 AMD has regularly updated the product line, releasing new cards based on the RV670, RV770, and RV870 GPUs.

The most recent refresh of the product was the release of the FireStream 9300 series in 2010, which saw the FireStream family move to AMD’s first meaningfully capable OpenCL GPU, the RV870. AMD then chose to skip a 2011 refresh of the product based on their Cayman (VLIW4) GPU, a somewhat odd move at the time. While VLIW4 is not the kind of superior compute architecture that GCN is, it was still fundamentally designed to improve AMD’s compute performance, which it did thanks to the use of narrower SIMDs that allowed for a partial shift from instruction level parallelism (ILP) to thread level parallelism (TLP). Nevertheless, as we found out after the fact with the launch of GCN, major users weren’t interested in moving to VLIW4, almost certainly having early knowledge that VLIW4 would be a dead-end architecture, to be replaced by GCN in 2012.
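The ILP-to-TLP tradeoff behind VLIW4’s narrower SIMDs can be illustrated with a toy instruction-packing model. To be clear, this is purely a sketch of the packing problem, not AMD’s actual shader compiler or scheduler; the only details borrowed from the real architectures are the bundle widths (5 vs. 4):

```python
# Toy model of VLIW bundle packing. A VLIW machine issues several
# instructions per bundle, but only if they are mutually independent;
# when independent work (ILP) runs out, the remaining slots are wasted.
# Narrower bundles (VLIW4 vs. VLIW5) waste fewer slots on the same code.

def pack_bundles(deps, width):
    """deps[i] = set of earlier instruction ids that instruction i
    depends on. Returns a list of bundles (lists of instruction ids)."""
    bundles, current = [], []
    for i, d in enumerate(deps):
        # Close the current bundle if it is full, or if instruction i
        # depends on something already placed in it.
        if len(current) == width or d & set(current):
            bundles.append(current)
            current = []
        current.append(i)
    if current:
        bundles.append(current)
    return bundles

def utilization(bundles, width):
    """Fraction of issue slots actually filled."""
    return sum(len(b) for b in bundles) / (len(bundles) * width)

# Four independent ops, then one that depends on op 3:
deps = [set(), set(), set(), set(), {3}]
wide = pack_bundles(deps, 5)    # VLIW5-style bundles
narrow = pack_bundles(deps, 4)  # VLIW4-style bundles
print(utilization(wide, 5))     # 0.5   - half the slots sit idle
print(utilization(narrow, 4))   # 0.625 - narrower bundles stay fuller
```

With only four independent instructions available, the five-wide machine leaves slots empty in every bundle, while the four-wide machine fills its first bundle completely. Shader code that can’t sustain five-wide ILP is exactly the case where narrower SIMDs, fed by more threads, recover the lost throughput.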

In any case, with the release of GCN and its significant compute enhancements we have been expecting a major update to the FireStream product family. But as it turns out AMD has other plans.

Starting with the launch of the FirePro W series the FireStream family of products is being discontinued entirely. From here on the FirePro family will officially be pulling double-duty as both AMD’s professional graphics product and AMD’s compute product.

So why is AMD choosing to discontinue the FireStream family now? Officially, AMD believes the FireStream family to be redundant. AMD’s FireStream cards were nearly identical to their FirePro cards in both build and performance, the only practical difference being that FireStream cards had most of their display connectors removed. And AMD is right – by all accounts the FireStream 9300 series did very little to differentiate itself from the equivalent FirePro cards.

Meanwhile FireStream as a brand hasn’t kept up with NVIDIA’s Tesla business in the dedicated compute market. Tesla is still a fledgling business – it has grown by leaps and bounds since 2010, but not as much as NVIDIA would like – but even so the company has created several distinctions between Tesla and Quadro that AMD never replicated with FireStream. Chief among these was a compute-focused driver for Windows (NVIDIA calls it TCC), which strips away all of the graphics capabilities of the card in order to improve compute performance by freeing it from the control of the Windows display driver subsystem. Furthermore NVIDIA developed a couple of different lines of Tesla cards, branching out into both traditional self-cooled cards for workstations and servers, and purely passive cards meant for specialized rackmount servers. AMD does offer both actively and passively cooled cards in the FirePro V series, however the FireStream cards were only available with passive cooling.


FirePro V9800P(assive)

The other limitation for AMD in this arena was of course their GPUs. GCN gives AMD a very potent compute architecture that is unquestionably competitive with Fermi and little Kepler, but there’s one thing NVIDIA will do that AMD won’t: build it big. AMD doesn’t strictly adhere to a small-die strategy (300-400mm² is now their sweet spot), but they also don’t build 500mm²+ behemoths like NVIDIA. There’s a great deal more to compute performance than die size of course (especially when NVIDIA has disabled functional units on most Fermi Tesla parts for yield and power reasons), but it does lock AMD out of certain markets that desire as much individual GPU performance as possible. GPUs excel at parallel problems, however not every problem scales well across multiple GPUs (mostly due to memory sharing needs), which means there’s still a need for a very large GPU.
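That multi-GPU scaling wall can be sketched with an Amdahl-style toy model, where the serial fraction stands in for inter-GPU memory traffic that can’t be parallelized. The 90% compute fraction below is an illustrative assumption, not a measurement of any real workload or hardware:

```python
# Amdahl-style toy model of multi-GPU scaling. The share of each step
# spent exchanging shared data between GPUs does not speed up as GPUs
# are added, so speedup flattens out - while a single larger GPU
# avoids the exchange entirely.

def speedup(n_gpus, compute_fraction=0.9):
    """Ideal speedup over one GPU, given the fraction of runtime that
    parallelizes cleanly (the rest models inter-GPU memory sharing)."""
    serial = 1.0 - compute_fraction
    return 1.0 / (serial + compute_fraction / n_gpus)

for n in (1, 2, 4, 8):
    print(f"{n} GPU(s): {speedup(n):.2f}x")
# Eight GPUs yield well under 5x here - far short of the 8x that one
# GPU with eight times the resources would (ideally) deliver.
```

Even with 90% of the work parallelizing perfectly, eight small GPUs fall well short of linear scaling, which is why markets that need maximum single-GPU performance gravitate toward the biggest die available.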

As a result of these factors, AMD’s RV870 FireStream cards never did gather a great following. GCN fixes the fundamental compute performance problem that was the chief factor holding back FireStream, but this would appear to be a case of too little, too late. Of course AMD will continue to have a very important presence in the overall GPU computing market, but at least for now they’re stepping back from producing a dedicated compute product.

In lieu of a dedicated compute product, FirePro will be serving both markets. In fact AMD tells us that there are already plans to build compute clusters out of FirePro W series cards (though they can’t name names at this time), reinforcing the notion that AMD is still active in the compute market even without a dedicated product. The big question of course is how potential customers will respond; customers may not be interested in a mixed-function part even if the performance were the same. Throwing a further wrench in AMD’s plans will be pricing – they may be underpricing NVIDIA’s best Quadro, but right now they’re going to be charging well more than NVIDIA’s best Tesla card. So there’s a real risk right now that FirePro for compute may be a complete non-starter once Tesla K20 arrives at the end of the year.

Perhaps for that very reason, AMD’s promotional focus with FirePro is largely on professional graphics right now. The company is certainly proud of its achievements in compute performance, but going by AMD’s marketing materials, graphics comes first and foremost with FirePro.

Comments

  • cjb110 - Tuesday, August 14, 2012 - link

    No interest in the product unfortunately, but the article was a well written and interesting read.
  • nathanddrews - Tuesday, August 14, 2012 - link

    I certainly miss the days of softmodding consumer cards to pro cards. I think the last card I did it on was either the 8800GT or the 4850. Some of the improvements in rendering quality and drawing speed were astounding - but it certainly nerfed gaming capability. It's a shame (from a consumer perspective) to no longer be able to softmod.
  • augiem - Thursday, August 16, 2012 - link

    I miss those days too, but sadly I have to say I never saw any improvement in Maya over the course of 3 different generations of softmodded cards. And I spent so much time and effort researching the right card models, etc. I think the benefits for AutoCAD and such must have been more pronounced than for Maya.
  • mura - Tuesday, August 14, 2012 - link

    I understand how important it is to validate and bug-fix these cards – it is not the same if a card malfunctions under Battlefield 3 as under some kind of engineering software – but is such a premium price necessary?

    I know this is the market – everybody tries to achieve maximum profit – but seeing these prices, and comparing the specs with consumer cards which cost a fraction as much, I don't see the bleeding edge, I don't see the added value.
  • bhima - Tuesday, August 14, 2012 - link

    Having chatted with some of the guys at Autodesk: they use high-end gaming cards. Not sure if they ALL do, but a good portion of them do, simply because of the cost of these "professional" cards.
  • wiyosaya - Thursday, August 16, 2012 - link

    Exactly my point. If the developers at a high-end company like Autodesk use gaming cards, that speaks volumes.

    People expect that they will get better service, too, if a bug crops up. Well, even in the consumer market, I have an LG monitor that nVidia's drivers saw as an HD TV, which kept me from using the monitor's 1920x1200 resolution. I reported this to nVidia and within days there was a beta version of the drivers that fixed the problem.

    As I see it, the reality is that if you have a problem, there is no guarantee that the vendor will fix it no matter how much you paid for the card. Just look at their license agreement. Somewhere in the agreement, you will likely find a clause saying that they do not guarantee a fix for any of the problems you may report.
  • bwoochowski - Tuesday, August 14, 2012 - link

    No one seems to be asking the hard questions of AMD:

    1) What happened to the 1/2 rate double precision FP performance that we were supposed to see on professional GCN cards?

    2) Now that we're barely starting to see some support for the cl_khr_fp64 extension, when can we expect the compiler to support the full suite of options? When will OpenCL 1.2 be fully supported?

    3) Why mention the FirePro S8000 in press reports and never release it? I have to wonder about how much time and effort was wasted on adding support for the S8000 to the HMPP and other compilers.

    I suppose it's pointless to even ask about any kind of accelerated infiniband features at this point.

    With the impending shift to hybrid clusters in the HPC segment, I find it baffling that AMD would choose to kill off their dedicated compute card now. Since the release of the 4870 they had been attracting developers that were eager to capitalize on the cheap double precision fp performance. Now that these applications are ready to make the jump from a single PC to large clusters, the upgrade path doesn't exist. By this time next year there won't be anyone left developing on AMD APP, they'll all have moved back to CUDA. Brilliant move, AMD.
  • N4g4rok - Tuesday, August 14, 2012 - link

    Provided they don't develop new hardware to meet that need. Keeping older variations of dedicated compute cards wouldn't make any sense for moving into large cluster computing. They could keep that same line, but it would need an overhaul anyway. Why not end it and start something new?
  • boeush - Tuesday, August 14, 2012 - link

    "I find it baffling that AMD would choose to kill off their dedicated compute card now."

    It's not that they won't have a compute card (their graphics card is simply pulling double duty under this new plan.) The real issue is, to quote from the article:

    "they may be underpricing NVIDIA’s best Quadro, but right now they’re going to be charging well more than NVIDIA’s best Tesla card. So there’s a real risk right now that FirePro for compute may be a complete non-starter once Tesla K20 arrives at the end of the year."

    I find this approach by AMD baffling indeed. It's as if they just decided to abdicate whatever share they had of the HPC market. A very odd stance to take, particularly if they are as invested in OpenCL as they would like everyone to believe. The more time passes, and the more established code is created around CUDA, the harder it will become for AMD to push OpenCL in the HPC space.
  • CeriseCogburn - Wednesday, August 29, 2012 - link

    LOL - thank you, as the amd epic fail is written all over that.
    Mentally ill self sabotage, what else can it be when you're amd.
    They have their little fanboys yapping opencl now for years on end, and they lack full support for ver 1.2 - LOL
    It's sad - so sad, it's funny.
    Actually that really is sad, I feel sorry for them, they are such freaking failures.
