Final Thoughts

Whether or not AMD calls the Radeon HD 6790 the successor to the Radeon HD 5830, that’s what it is. So 5830 comparisons are quite appropriate, both to see what AMD did well and where the 6790 doesn’t quite escape its ancestor’s shortcomings.

Perhaps the biggest problem with the 5830 was that it was a 3rd tier part from a high-end GPU; power, temperature, and noise could approach the levels of a high-end GPU without the matching performance. Barts is not a high-end GPU, and as such even if the 6790 were as hot/loud/power-hungry as the 6870, it wouldn’t be the end of the world. Thankfully it looks like power consumption is being kept in check, so partners should be able to develop reasonably cool & quiet cards. In practice the 6790 will probably be a bit worse than the 6850 in this regard, which again is not great given that it achieves only 85% of the performance, but it’s not unreasonable. The 6790 is still a 3rd tier product, but it’s learning from the past.

The problem with the 6790, much like the 5830 before it, is pricing. When you can pick up a GeForce GTX 460 768MB for $150 or a Radeon HD 6850 for $10 more, what sense does a $150 6790 make? It doesn’t make any sense, and there’s the problem. The impression I get is that AMD wanted to make a card to thoroughly trample the GTX 550 Ti, and indeed the 6790 can do that. The problem is that they’re pricing it against the GTX 460 and 6850 right now. The GTX 550 Ti is a good $20 lower (and probably should be cheaper still).

At the end of our 5830 review last year, we said the issue came down to $20: the 5830 was $20 too expensive for what it offered. It’s fitting then that this seems to be the same problem with the 6790. If it were a $130 card it would fit in well between AMD’s other cards; it would beat the GTX 550 Ti, and it would make NVIDIA think long and hard about what to do with the GTX 460 768MB. Instead AMD is committing the same mistake as the 5830 and as the GTX 550 Ti by launching it at $150. $150 is quickly becoming a great place to mislaunch cards.

Long term I’m a bit worried that the $150 price will stick, giving AMD a price floor to bring up 6800 series prices. The 6790 is solidly ahead of the GTX 550 Ti, so if that’s the only thing AMD bases all of their pricing around they can charge more than the GTX 550 Ti, and more for the 6800 series on top of that. I hope it doesn’t come to that, but we have seen video card prices creep up before.

For the time being there’s not much to recommend the 6790. Throw some rebates on the Radeon HD 6790 to get it down to $130 and we can talk. Until then the GeForce GTX 460 768MB and the Radeon HD 6850 are both much better products.

69 Comments

  • Amoro - Tuesday, April 05, 2011 - link

    I'm pretty sure that only Cayman is VLIW4.
  • Ryan Smith - Tuesday, April 05, 2011 - link

    Correct. NI is a very broad family; it doesn't define a single architecture. Cayman is VLIW4, while Barts, Turks, and Caicos are VLIW5 and are basically optimized versions of Evergreen (5000 series) hardware.

    http://www.anandtech.com/show/3987/amds-radeon-687...
  • Amoro - Tuesday, April 05, 2011 - link

    If you look at some of the raw performance specifications for the two cards it seems to indicate that texture fillrate and raw processing power don't have as much of an impact on Anandtech's testing suite.

    Radeon HD 5830

    Fillrates
    12.8GP/s
    44.8GT/s

    Memory Bandwidth
    128GB/s

    GFLOPS
    1792

    Radeon HD 6790

    Fillrates
    13.4GP/s
    33.6GT/s

    Memory Bandwidth
    134.4GB/s

    GFLOPS
    1344

    The 6790 wins in pixel fillrate and memory bandwidth but loses badly in raw processing power and texture fillrate, yet it still manages to stay within 10% of the 5830 and even beats it in some cases.
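For context, the headline figures quoted in this thread fall straight out of each card's unit counts and clocks (1120 SPs / 56 TMUs / 16 ROPs at 800MHz for the 5830; 800 SPs / 40 TMUs / 16 ROPs at 840MHz for the 6790, both on a 256-bit bus). A quick sketch of the arithmetic:

```python
# Sketch: how the headline specs quoted above derive from unit counts and clocks.
def derive_specs(shaders, tmus, rops, core_mhz, mem_mhz_effective, bus_bits):
    core_ghz = core_mhz / 1000
    return {
        "pixel_fill_gps": rops * core_ghz,                           # GP/s, one pixel per ROP per clock
        "texture_fill_gts": tmus * core_ghz,                         # GT/s, one texel per TMU per clock
        "bandwidth_gbps": mem_mhz_effective / 1000 * bus_bits / 8,   # GB/s, effective rate x bus width
        "gflops": shaders * 2 * core_ghz,                            # 2 FLOPs (fused multiply-add) per SP per clock
    }

hd5830 = derive_specs(1120, 56, 16, 800, 4000, 256)  # 12.8 GP/s, 44.8 GT/s, 128 GB/s, 1792 GFLOPS
hd6790 = derive_specs(800, 40, 16, 840, 4200, 256)   # 13.44 GP/s, 33.6 GT/s, 134.4 GB/s, 1344 GFLOPS
```

The 6790's slightly higher core and memory clocks are why it edges out the 5830 on pixel fill and bandwidth despite having far fewer shaders and TMUs.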
  • BoFox - Wednesday, April 06, 2011 - link

    Thanks for some more of those numbers!
    We can see that the 5830 has far higher numbers in these areas:
    44.8 GT/s
    1792 GFLOPS

    And the 6790 has only
    33.6 GT/s
    1344 GFLOPS

    While the 6790 has greater pixel fillrate and memory bandwidth than the 5830.

    If it were not for VLIW4, why is the 5830 only 2-3% faster than the 6790 across all of the benchmarks in this review?

    Another way we could find out is to see how much it affects DP performance in applications like Milkyway@home. VLIW4 cards should have an FP64 rate that's 1/4 of their FP32 rate, so I wouldn't be surprised to see the 6790 being 20% faster than the similarly spec'ed 4890.
  • BoFox - Thursday, April 07, 2011 - link

    Ahh, your article reminded me that FP64 was disabled for Barts GPUs. I must've forgotten about that when I wanted to test it to prove that it's VLIW4.

    But the numbers in the replies below strongly point to the 6790 being boosted by VLIW4 in order to basically match a 5830 that has 40% more shaders and TMUs.

    Any explanation for this, sir Ryan?
  • BoFox - Friday, April 08, 2011 - link

    RE: "From a graphics point of view it's not possible to separate the performance of the ROPs from memory bandwidth. Color fill, etc are equally impacted by both. To analyze bandwidth you'd have to work from a compute point of view. However with that said I don't have any reason to believe AMD doesn't have a 256-bit; achieving identical performance with half the L2 cache will be harder though."

    1) If it's not possible to separate the performance from a "graphics" rather than "compute" point of view, then shouldn't the performance be linked for all "graphics" points of view (as it is a "graphics" card to begin with)? Even the "compute" applications (FP16 and FP32 analysis at http://www.behardware.com/articles/783- ... -5830.html ) show the card behaving as if it's 128-bit.
    2) Why does Ryan not have any reason to believe... because AMD said so? If a manufacturer of an LCD panel advertises 1ms G2G response time, but it looks like 16ms, does he still have no reason to believe it's 16ms just because the manufacturer said so?
    3) If the L2 cache is cut down in proportion with the disabled shaders/TMUs/ROPs, then it should not affect performance, let alone make identical performance "harder" to achieve.
  • Soldier1969 - Tuesday, April 05, 2011 - link

    2 x 6970s FTW at 2560 x 1600 res.
  • JimmiG - Tuesday, April 05, 2011 - link

    Is it just me or is all this talk about price difference of $10 or less getting a little ridiculous? I mean, if you're prepared to spend $150 (or $160...) on something that is completely non-essential, what difference is $10 going to make? If you're so poor that $10 is a big deal, you're probably not spending your money on gaming products anyway since you need everything for stuff like food and rent.

    It seems the video card companies are the guilty ones, constantly trying to outmaneuver each other with new pricing schemes. I miss the old days when there was one $100 card, one $200 card, one $300 card etc. Now there can easily be a dozen different models in the range of $100 - $300.
  • liveonc - Tuesday, April 05, 2011 - link

    This looks like a prime candidate for a mini-ITX build for those who'd want a desktop replacement, but don't want to pay so damn much for something that has 30 minutes of battery life, doesn't have a chance of outperforming a desktop, and costs too much.
  • lorribot - Tuesday, April 05, 2011 - link

    Might be just me, but since the 4000 series I don't actually understand AMD's numbering scheme anymore.
    There seems to be a great variety of 6000 cards, all with very similar performance and different prices.

    There is the 6990 at the top, then a couple more 69xx cards, then some 68xx and some 67xx; all well and good, but it seems the 5870 is a faster card than the 6870, which is odd and not what I would have expected. Indeed, it has similar performance to the 5850.

    The 5xxx series came in 53, 54, 55, 56, 57, 58 and 59 flavours with one, two or three sub versions in each band giving something like 15 or 16 different cards.

    It seems to me that with so many variations and a numbering scheme that changes from generation to generation, AMD actually wants to confuse the buying public.

    They really need to get a handle on this, less is more in some cases.

    Nvidia's numbering scheme on the whole seems to be much more sensible in recent times, apart from the odd hiccup with 460 and 465.
