34 Comments

  • damianrobertjones - Tuesday, August 14, 2012 - link

    Over the years I've always popped over to this page to read the latest news and especially the articles, yet over the last year or so the articles haven't exactly been flowing. Is the site slowing down?

    :(
    Reply
  • bobsmith1492 - Tuesday, August 14, 2012 - link

    This engineer doesn't like the so-called "professional" cards. I think they are a rip-off. My high-end workstation computer came with a top-of-the-line Quadro and it could barely handle a second monitor. I finally got a mid-level gaming card and was much happier with its performance. Reply
  • A5 - Tuesday, August 14, 2012 - link

    Then the card you got was broken. Even integrated graphics can handle 2D on 2 displays no problem. Reply
  • bobsmith1492 - Wednesday, August 15, 2012 - link

    That's what I would have thought, but no. Even scrolling down an Excel document was slow, pausing every second to redraw the whole screen. Same thing when dragging a window around over a background that was "stretch to fit." Garbage! Tried modifying graphics settings, hardware acceleration on/off, Googling like mad, posts in the AT forums but no-go. Reply
  • wiyosaya - Tuesday, August 14, 2012 - link

    100 percent agree with your assessment that the pro cards are a rip-off. The chips are the same chips as in gaming cards; the only difference is a few firmware switches that disable options when the chip is run in a gaming card.

    That firmware is also developed by the same developers. I say this having worked in a similar field: the company I worked for marketed RIP software designed to print to copiers. All the features were in the software, but customers sometimes had to pay substantially extra to enable some of those features. In my opinion, anyone who thinks that there is a separate team developing "pro" drivers is mistaken.

    With a PC that has enough processing power, IMHO, most professionals will not need the "extra capabilities" that the pro cards offer over gaming cards. Perhaps the only time a pro would need a pro card, if you believe the marketing coming out of the pro card realm, is when working with a model that has thousands of parts.

    However, there is a recent trend of killing double precision compute performance in gaming cards. IMHO, this is going to hurt graphics card makers more than it will help them. It was for this reason that I avoided buying a GTX 680 and opted for a 580 instead for a recent build of mine.
    Reply
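The GTX 680 vs. 580 trade-off the comment describes can be sanity-checked with the commonly cited specs — a rough sketch, assuming the usual published figures (GTX 580: 512 CUDA cores at a 1544 MHz shader clock, DP at 1/8 of SP; GTX 680: 1536 cores at 1006 MHz, DP cut to 1/24 of SP):

```python
def peak_gflops(alus, clock_mhz, rate=1.0):
    # Peak throughput: ALUs x clock x 2 FLOPs per clock (FMA counts as two),
    # scaled by the precision rate, expressed in GFLOPS.
    return alus * clock_mhz * 2 * rate / 1000.0

# GTX 580 (Fermi): older card, but a more generous double precision rate.
gtx580_dp = peak_gflops(512, 1544, 1.0 / 8)    # ~198 GFLOPS DP
# GTX 680 (Kepler): much faster in single precision, crippled in double.
gtx680_dp = peak_gflops(1536, 1006, 1.0 / 24)  # ~129 GFLOPS DP

print(f"GTX 580 DP: {gtx580_dp:.0f} GFLOPS, GTX 680 DP: {gtx680_dp:.0f} GFLOPS")
```

Under those assumptions the older 580 comes out ahead in double precision despite losing badly in single precision, which is exactly the reasoning behind the build choice above.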
  • nathanddrews - Tuesday, August 14, 2012 - link

    Apparently it doesn't hurt them at all, since they're making a lot of money with pro cards. All you have to do is show this to a CAD designer and you'll make a sale:

    http://vimeo.com/user647522
    Reply
  • aguilpa1 - Wednesday, August 15, 2012 - link

    All these videos illustrate is that certain hardware-accelerated features are turned off at the BIOS and software level. It is an illustration of intentional vendor modifications to a piece of hardware, crippling one product to improve the profit margin of another.

    This was clearly illustrated with early Quadros, where a simple BIOS update would re-enable the disabled features on the consumer model. These methods may have been blocked by now as the vendor gets wise to the ways of gamers.
    Reply
  • rarson - Tuesday, August 14, 2012 - link

    Wrong. With a professional card, you pay for both the validation and the software, neither of which is professional quality in the consumer variants.

    "That firmware is also developed by the same developers"

    Firmware and drivers are two different things, by the way.
    Reply
  • wiyosaya - Thursday, August 16, 2012 - link

    "Wrong. With a professional card, you pay for both the validation and the software, neither of which are professional quality in the consumer variants."

    The marketing departments have done their jobs very well, or you are a "pro vendor" shill, IMHO, as that is what the vendor wants everyone to think. As I previously stated, having been in more than one position where the companies I worked for sold "professional" software, that is not the case.

    I expect that most pro software packages will run on most gaming cards with very little difference except, perhaps, when pushing the pro software package to its absolute limit. If anyone wants to pay $4K for a $400 card, they are certainly welcome to do that. IMHO, it is a complete waste of money.
    Reply
  • PrinceGaz - Sunday, August 19, 2012 - link

    If you really are a professional graphics (or compute) user and would rather take your chances with a non-validated $400 card instead of a fully validated $4,000 card backed by both the hardware and software vendors, then you have your priorities wrong. That, or you're being seriously underpaid for the sort of work you are doing.

    Your software vendor isn't going to be very impressed when you report a problem and they find the system you are running it on has a GeForce or Radeon card instead of one of the validated professional cards and driver versions supported by it!
    Reply
  • cjb110 - Tuesday, August 14, 2012 - link

    No interest in the product, unfortunately, but the article was a well-written and interesting read. Reply
  • nathanddrews - Tuesday, August 14, 2012 - link

    I certainly miss the days of softmodding consumer cards to pro cards. I think the last card I did it on was either the 8800GT or the 4850. Some of the improvements in rendering quality and drawing speed were astounding - but it certainly nerfed gaming capability. It's a shame (from a consumer perspective) to no longer be able to softmod. Reply
  • augiem - Thursday, August 16, 2012 - link

    I miss those days too, but I have to say I sadly never saw any improvement in Maya over the course of three different generations of softmodded cards. And I spent so much time and effort researching the right card models, etc. I think the benefits for AutoCAD and such must have been more pronounced than for Maya. Reply
  • mura - Tuesday, August 14, 2012 - link

    I understand how important it is to validate and bug-fix these cards; it is not the same if a card malfunctions under Battlefield 3 as under some kind of engineering software - but is such a premium price necessary?

    I know, this is the market - everybody tries to achieve maximum profit. But seeing these prices, and comparing the specs with consumer cards that cost a fraction as much, I don't see the bleeding edge; I don't see the added value.
    Reply
  • bhima - Tuesday, August 14, 2012 - link

    Having chatted with some of the guys at Autodesk: they use high-end gaming cards. Not sure if they ALL do, but a good portion of them do, simply because of the cost of these "professional" cards. Reply
  • wiyosaya - Thursday, August 16, 2012 - link

    Exactly my point. If the developers at a high-end company like Autodesk use gaming cards, that speaks volumes.

    People expect that they will get better service, too, if a bug crops up. Well, even in the consumer market, I have an LG monitor that nVidia's drivers saw as an HD TV, which kept me from using the monitor's 1920x1200 resolution. I reported this to nVidia, and within days there was a beta driver that fixed the problem.

    As I see it, the reality is that if you have a problem, there is no guarantee that the vendor will fix it, no matter how much you paid for the card. Just look at their license agreement. Somewhere in it, you will likely find a clause saying that they do not guarantee a fix for any problem you may report.
    Reply
  • bwoochowski - Tuesday, August 14, 2012 - link

    No one seems to be asking the hard questions of AMD:

    1) What happened to the 1/2 rate double precision FP performance that we were supposed to see on professional GCN cards?

    2) Now that we're barely starting to see some support for the cl_khr_fp64 extension, when can we expect the compiler to support the full suite of options? When will OpenCL 1.2 be fully supported?

    3) Why mention the FirePro S8000 in press reports and never release it? I have to wonder how much time and effort was wasted adding S8000 support to the HMPP and other compilers.

    I suppose it's pointless to even ask about any kind of accelerated InfiniBand features at this point.

    With the impending shift to hybrid clusters in the HPC segment, I find it baffling that AMD would choose to kill off their dedicated compute card now. Since the release of the 4870 they had been attracting developers eager to capitalize on the cheap double precision FP performance. Now that these applications are ready to make the jump from a single PC to large clusters, the upgrade path doesn't exist. By this time next year there won't be anyone left developing on AMD APP; they'll all have moved back to CUDA. Brilliant move, AMD.
    Reply
  • N4g4rok - Tuesday, August 14, 2012 - link

    Provided they don't develop new hardware to meet that need. Keeping older variations of dedicated compute cards wouldn't make any sense for moving into large-cluster computing. They could keep the same line, but it would need an overhaul anyway. Why not end it and start something new? Reply
  • boeush - Tuesday, August 14, 2012 - link

    "I find it baffling that AMD would choose to kill off their dedicated compute card now."

    It's not that they won't have a compute card (their graphics card is simply pulling double duty under this new plan.) The real issue is, to quote from the article:

    "they may be underpricing NVIDIA’s best Quadro, but right now they’re going to be charging well more than NVIDIA’s best Tesla card. So there’s a real risk right now that FirePro for compute may be a complete non-starter once Tesla K20 arrives at the end of the year."

    I find this approach by AMD baffling indeed. It's as if they just decided to abdicate whatever share they had of the HPC market. A very odd stance to take, particularly if they are as invested in OpenCL as they would like everyone to believe. The more time passes, and the more established code is created around CUDA, the harder it will become for AMD to push OpenCL in the HPC space.
    Reply
  • CeriseCogburn - Wednesday, August 29, 2012 - link

    LOL - thank you, as the amd epic fail is written all over that.
    Mentally ill self sabotage, what else can it be when you're amd.
    They have had their little fanboys yapping opencl now for years on end, and they lack full support for ver 1.2 - LOL
    It's sad - so sad, it's funny.
    Actually that really is sad, I feel sorry for them they are such freaking failures.
    Reply
  • johnthacker - Tuesday, August 14, 2012 - link

    The W7000 has some uses in specific situations, but that's because it's a single slot card. Single slot Radeon HD 7850s (much less 7870s), which also use Pitcairn, are difficult to find; IIRC there was one OEM that showed off a design. Other than that, it's hard to see exactly when someone would want these cards.

    The same generally holds for NVIDIA (the Fermi Quadro cards are cut-down GF100 based, so they can be better at compute than their gaming numbers suggest, and the old Quadro 4000 is a single slot card). Interesting that NVIDIA so far is trying to reserve GK110 for Quadro and Tesla only. We'll see if that works.
    Reply
  • Dribble - Tuesday, August 14, 2012 - link

    AMD doesn't need to provide something compatible; they need to provide something better.

    Bottom line: companies won't change GPU manufacturer. Nvidia works well, has traditionally worked better than AMD, and still has much better driver support (the team is much bigger).

    There are no AMD fanboys rooting for the underdog here; you have to provide a clear business reason to change, and "we're almost as good as Nvidia, sometimes" isn't it.
    Reply
  • CeriseCogburn - Wednesday, August 29, 2012 - link

    Compute baby ! amd compute ! compute ! GPGPU ! amd wins! amd wins!
    (that's all I've heard since 7000 series)
    Hey, winzip man.
    Reply
  • Pixelpusher6 - Wednesday, August 15, 2012 - link

    I really have to question AMD's move here to kill off Firestream and have the FirePro line serve both markets. At the present time this is where they have an advantage over Nvidia. 1 TFLOPS double precision performance is pretty good even though it's only 1/4 of single precision; even clocked low enough to be passively cooled, it should still beat Nvidia's best compute card. The K10 is not really a compute card at all, and to me it seems like they just wanted to get something, anything out until K20. And K20 is by no means a certainty for Q4 2012; my guess is it will be delayed. I just don't have confidence in Nvidia's mastery of the 28nm process yet, especially given the enormous die size of this chip, which I've heard presents some unique challenges. And when K20 does come out, it will probably be more expensive than their current compute cards.

    If I were AMD I would re-brand the compute card, dropping the Firestream name because of its association with VLIW, and come out with a new brand to highlight what really makes up GCN... compute. Does anyone know if HPC clusters use actively cooled cards or only passively cooled ones? I was under the impression that compute cards are generally clocked a little lower but passively cooled. If that is the case, then that rules out using the FirePro W9000 and W8000 in these server clusters. It seems like AMD has just conceded this market completely, right when they finally have a competitive compute GPU to gain a foothold. As someone else noted, this market will only be expanding. If AMD wants to focus only on professional graphics, I sure hope their drivers will be better than the consumer counterparts.
    Reply
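The double precision claims in the comment above can be sanity-checked with simple peak-throughput arithmetic — a rough sketch, assuming the W9000's published 2048 stream processors at 975 MHz and 1/4-rate DP; the 800 MHz "passive" clock is purely a hypothetical assumption, and 665 GFLOPS is the rated DP peak of NVIDIA's Fermi-era compute flagship, the Tesla M2090:

```python
def peak_dp_gflops(alus, clock_mhz, dp_ratio):
    # FMA counts as 2 FLOPs per ALU per clock; result in GFLOPS.
    return alus * clock_mhz * 2 * dp_ratio / 1000.0

w9000_dp = peak_dp_gflops(2048, 975, 1.0 / 4)    # ~998 GFLOPS: the "1 TFLOPS" figure
passive_dp = peak_dp_gflops(2048, 800, 1.0 / 4)  # hypothetical downclocked passive variant
tesla_m2090_dp = 665.0                           # NVIDIA's rated DP peak for the M2090

print(f"W9000: {w9000_dp:.0f}, downclocked: {passive_dp:.0f}, "
      f"M2090: {tesla_m2090_dp:.0f} GFLOPS DP")
```

Even with the made-up 800 MHz clock, the 1/4-rate GCN part stays comfortably ahead of the Fermi compute card, which is the point the comment is making.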
  • dtolios - Wednesday, August 15, 2012 - link

    When will AMD start improving compatibility with VRay RT and other similar OpenCL apps? All this computational potential remains unused outside benchmarks - at least for the CG world.

    Radeons are vastly better in OpenGL than GeForce cards, so the switch to FirePros is way less "mandatory" for such apps. And if those driver issues were solved, AMD could secure a huge increase in share of the professional CG market, which now uses nVidia (yes, mostly gaming cards) almost exclusively.
    Reply
  • AG@FirePro - Monday, August 27, 2012 - link

    You might imagine that it's in AMD's best interest to work very closely with all the important ISVs in this space - and you would be right! :) Helping our technology partners and the broader software development community implement open-standards-based GPU acceleration in their applications is an area of heavy ongoing investment for us.

    Of course, not all apps are written the same. Some applications - especially those written in years past - are architected in a way that makes it challenging to enable the best performance across all the modern GPU options on the market. Proprietary or "hybrid" codebases often make full cross-compatibility quite difficult. I can assure you that neither ISVs nor end-users want their toolsets to be tied to a particular hardware vendor or proprietary technology base. Unfortunately, it's not always as easy as flipping a switch, and sometimes this takes a while. That said, I think it's fair to say that our aim is that very soon, everybody will have the option to run the hardware of their choice in conjunction with their favorite realtime raytracer, physics solver or any other hardware-accelerated toolset.

    AMD FirePro cards fully support OpenCL in both hardware and software. Our devices offer certified and acknowledged compatibility and killer performance for a broad range of OpenCL-based applications. The same is true for tons of applications accelerated under OpenGL, DirectX and DirectCompute APIs. Compatibility and reliability are crucial. Nobody understands this more than us.

    To this end, we continue to be closely aligned with all the key ISVs in the M&E, DCC and CAD space to help them provide maximum flexibility, choice and value for their end-users. We also continue to refine and expand our range of developer tools (profilers, compilers, debuggers, etc.) while at the same time contributing heavily to the open-source community in the form of optimized libraries and other free developer resources.

    The OpenCL story gets better every day. Every day, there are new and better OpenCL libraries being written and shared. There are new compiler optimizations being made all the time which allow for faster and more flexible implementations. More and more software devs are liberating their code and their customers from proprietary APIs. While CUDA-bound apps still provide lots of value for many end-users, the writing is clearly on the wall. The age of proprietary GPU acceleration has begun to yield to a new reality of flexibility and choice for consumers.

    This is a good thing, no?

    *PS* You may have noticed an announcement about certain new server-side GCN-based FirePro GPU offerings today. Stay tuned. Things are about to get seriously fun up in here.

    Adam G.
    AMD FirePro Team
    Reply
  • CeriseCogburn - Wednesday, August 29, 2012 - link

    It's not a good thing because it has not happened, and it doesn't appear it will even in the next decade.
    It's still proprietary, and is not cross card company compatible, so it's all crap talk.
    As we saw amd do after their years of PR bashing, WinZip PROPRIETARY.

    It's gonna be all seriously vendor specific up in there for a long, long time.
    Reply
  • warpuck - Sunday, August 19, 2012 - link

    Does this mean I won't need 2 PCs, one for games and another for graphics? I did notice what appears to be a CrossFire connector. I know most companies would not go for a PC configuration like that, unless it was in the boss's office. I am one of those independents. I like taking a break when I feel like it. Not having 2 PCs would simplify things for me. Reply
  • peevee - Friday, August 24, 2012 - link

    OK, "later this week"? The review was written 8/14. "This" week ended, and then "next" week ends today... Reply
  • Death666Angel - Tuesday, August 28, 2012 - link

    Hey!
    No problem about the inconsistent data, but maybe you can present it in a more accurate way? Currently the intervals on the X axis are not to scale, and the line through the data points makes it seem as though you know how it progressed between the data points. I'd rather see a simple bar chart with the intervals shown correctly. It would be a more honest and easier-to-read diagram. :)
    Great article though! :D
    Reply
  • ManuelLP - Thursday, August 30, 2012 - link

    When will the second part be published? Tomorrow is the end of the month. Reply
  • Gadgety - Wednesday, December 19, 2012 - link

    This very interesting Part I was done back in August. It's now December. I've searched for W9000 on AnandTech but no Part II shows up. What gives? Reply
  • nitro912gr - Friday, January 18, 2013 - link

    Where is PART 2? Reply
  • theprofessionalschoice - Thursday, December 05, 2013 - link

    As an entrepreneur in Bitcoin mining, I have made roughly $1.7m. from a start-up fund of only $40,000. I can personally vouch for the W9000 cards because I've mined with them exclusively and now happily manage a small empire because of their vast throughput. Kudos to the author for going so in-depth with this review... VERY accurate and I highly recommend this card to miners of any level... I can't wait for the 20nm dual-chips to launch (exclusively in the new Mac Pro) sometime this month!!! Great professional cards - NOT for consumers! ;) Reply
