Final Thoughts

Often it’s not until the last moment that we have all the information in hand to completely analyze a new video card, and the Radeon HD 6970 and Radeon HD 6950 were no different. With AMD not releasing pricing information to the press until Monday afternoon, we had already finished our performance benchmarks before we even knew the price, so much time was spent speculating and agonizing over which route AMD would take. So let’s jump straight into our recommendations.

Our concern was that AMD would shoot themselves in the foot by pricing the Radeon HD 6970 in particular too high. If we take a straight average across 1920x1200 and 2560x1600, its performance is more or less equal to the GeForce GTX 570’s. In practice this means that NVIDIA wins a third of our games, AMD wins a third of our games, and they effectively tie on the rest, so the position of the 6970 relative to the GTX 570 depends heavily on which games in our benchmark suite you favor. All we can say for sure is that on average the two cards are comparable.

So with that in mind a $370 launch price is neither aggressive nor overpriced. Launching at $20 over the GTX 570 isn’t going to start a price war, but it’s also not so expensive to rule the card out. Of the two the 6970 is going to take the edge on power efficiency, but it’s interesting to see just how much NVIDIA’s and AMD’s power consumption and performance under gaming have converged. It used to be much more lopsided in AMD’s favor.

Meanwhile the Radeon HD 6950 occupies an interesting spot. Above it sit the GTX 570 and 6970; below it are the soon-to-be-discontinued GTX 470 and Radeon HD 5870. Those cards were a bit of a spoiler for the GTX 570, and that is once more the case for the 6950. The 6950 is on average 7-10% faster than the 5870 for around 20% more money. I am becoming increasingly convinced that more than 1GB of VRAM will be necessary for any new card over $200, but we’re not quite there yet. When the 5870 is done and gone the 6950 will be a reasonable successor, but for the time being the 5870 at $250 is a steal of a deal if you don’t need the extra performance or new features like DP 1.2. Conversely the 6950 is itself a bit of a spoiler; the 6970 is only 10-15% faster for $70 more. If you had to have a 6900 card, the 6950 is certainly the better deal. Whichever you go with, the 5870, the 6950, or the 6970, just keep in mind that the 6900 series is in a much better position for future games thanks to AMD’s new architecture.
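
If you want to sanity-check that value math, here’s a quick back-of-the-envelope sketch in Python. The prices are the ones discussed above; the performance figures are rough placeholders chosen to match the percentages we quoted, not exact averages from our benchmark suite.

    # Rough illustration only: prices from the text above; performance values
    # are placeholders matching the quoted "7-10%" and "10-15%" gaps.
    cards = {
        "Radeon HD 5870": {"price": 250, "perf": 1.00},
        "Radeon HD 6950": {"price": 299, "perf": 1.08},
        "Radeon HD 6970": {"price": 369, "perf": 1.21},
    }

    base = cards["Radeon HD 5870"]
    for name, c in cards.items():
        extra_perf = c["perf"] / base["perf"] - 1
        extra_cost = c["price"] / base["price"] - 1
        value = c["perf"] / c["price"] * 1000
        print(f"{name}: +{extra_perf:.0%} performance for +{extra_cost:.0%} cost, "
              f"{value:.1f} perf per $1000")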

And that brings us to the final matter for today: that new architecture. Compared to the launch of Cypress in 2009 the feature set isn’t radically different like it was when AMD first added DirectX 11 support, but Cayman is radically different in its own way. After being carried by their VLIW5 architecture for nearly four years, AMD is set to hand off their future to their new VLIW4 architecture. It won’t turn the world upside down for AMD or its customers, but it’s a reasonable step forward for the company, reducing their reliance on ILP in favor of narrower, TLP-heavy workloads. For gaming this specifically means their hardware should be a better match for future DX10/DX11 games, and the second graphics engine should give them enough tessellation and rasterizing power for the time being.
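
To illustrate why a narrower design leans less on ILP, consider a toy model: a VLIW bundle can only be filled with independent instructions from a single thread, so any slots the compiler can’t fill are wasted. The Python sketch below is purely illustrative (it is not AMD’s compiler or scheduler, and the ILP figures are hypothetical), but it captures the basic trade-off: when shader code only exposes a few independent operations, the 4-wide bundle stays fuller, and the hardware instead leans on having more threads in flight.

    # Toy model: if a shader exposes only `ilp` independent instructions per
    # cycle, an n-slot VLIW bundle can be at most min(ilp, n)/n full.
    def slot_utilization(ilp, slots):
        return min(ilp, slots) / slots

    for ilp in (2, 3, 4, 5):
        u5 = slot_utilization(ilp, 5)  # Cypress-style VLIW5
        u4 = slot_utilization(ilp, 4)  # Cayman-style VLIW4
        print(f"ILP {ilp}: VLIW5 {u5:.0%} utilized, VLIW4 {u4:.0%} utilized")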

Longer term we will have to see how AMD’s computing gamble plays out. Though we’ve largely framed Cayman in terms of gaming, to AMD Cayman is first and foremost a compute GPU, in a manner very similar to another company whose compute GPU is also the fastest gaming GPU on the market. Teething issues aside this worked out rather well for NVIDIA, but will lightning strike twice for AMD? The first Cayman-based video cards are launching today, but the Cayman story is just getting started.

Comments

  • B3an - Thursday, December 16, 2010 - link

    Very stupid, uninformed, and narrow-minded comment. People like you never look to the future, which anyone should do when buying a graphics card, and you completely lack any imagination. There are already tons of uses for GPU computing, many of which the average computer user can make use of, even if it's simply encoding a video faster. And it will be used a LOT more in the future.

    Most people, especially ones that game, don't even have 17" monitors these days. The average monitor on any new computer is at least 21" with a 1680 res. Your whole comment is as if everyone has the exact same needs as YOU. You might be happy with your ridiculously small monitor, playing games at low res on lower settings, and it might get the job done, but lots of people don't want this; they have standards and large monitors and need to make use of these new GPUs. I can't exactly see many people buying these cards with a 17" monitor!
  • CeepieGeepie - Thursday, December 16, 2010 - link

    Hi Ryan,

    First, thanks for the review. I really appreciate the detail and depth on the architecture and compute capabilities.

    I wondered if you had considered using some of the GPU benchmarking suites from the academic community to give even more depth for compute capability comparisons. Both SHOC (http://ft.ornl.gov/doku/shoc/start) and Rodinia (https://www.cs.virginia.edu/~skadron/wiki/rodinia/... look like they might provide a very interesting set of benchmarks.
  • Ryan Smith - Thursday, December 16, 2010 - link

    Hi Ceepie;

    I've looked into SHOC before. Unfortunately it's *nix-only, which means we can't integrate it into our Windows-based testing environment. NVIDIA and AMD both work first and foremost on Windows drivers for their gaming card launches, so we rarely (if ever) have Linux drivers available for the launch.

    As for Rodinia, this is the first time I've seen it. But it looks like their OpenCL codepath isn't done, which means it isn't suitable for cross-vendor comparisons right now.
  • IdBuRnS - Thursday, December 16, 2010 - link

    "So with that in mind a $370 launch price is neither aggressive nor overpriced. Launching at $20 over the GTX 570 isn’t going to start a price war, but it’s also not so expensive to rule the card out. "

    At NewEgg right now:

    Cheapest GTX 570 - $509
    Cheapest 6970 - $369

    $30 difference? What are you smoking? Try $140 difference.
  • IdBuRnS - Thursday, December 16, 2010 - link

    Oops, $20 difference. Even worse.
  • IdBuRnS - Thursday, December 16, 2010 - link

    570...not 580...

    /hangsheadinshame
  • epyon96 - Thursday, December 16, 2010 - link

    This was a very interesting discussion to me in the article.

    I'm curious if Anandtech might expand on this further in a future dedicated article comparing what NVIDIA is using to what AMD is using.

    Is NVIDIA's architecture also more similar to VLIW4 or VLIW5?

    Can someone else shed some light on it?
  • Ryan Smith - Thursday, December 16, 2010 - link

    We wrote something almost exactly like what you're asking for as part of our Radeon HD 4870 review.

    http://www.anandtech.com/show/2556

    AMD's and NVIDIA's compute architectures are still fundamentally the same as they were then, so just about everything in that article still holds true. The biggest break is VLIW4 for the 6900 series, which we covered in our article this week.

    But to quickly answer your question, GF100/GF110 do not immediately compare to VLIW4 or VLIW5. NVIDIA is using a pure scalar architecture, which has a number of fundamental differences from any VLIW architecture.
  • dustcrusher - Thursday, December 16, 2010 - link

    The cheap insults are nothing but a detriment to what is otherwise an interesting argument, even if I don't agree with you.

    As far as the intellect of Anandtech readers goes, this is one of the few sites where almost all of the comments are worth reading; most sites are the opposite: one or two tiny bits of gold in a big pan of mud.

    I'm not going to "vastly overestimate" OR underestimate your intellect though; instead I'm going to assume that you got caught up in the moment. This isn't Tom's or Dailytech; a little snark is plenty.
  • Arnulf - Thursday, December 16, 2010 - link

    When you launch an application (say a game), it is likely to be the only active thread running on the system, or perhaps one of very few active threads. A CPU with a Turbo function will clock up as high as possible to run this main thread. When further threads are launched by the application, the CPU will inevitably increase its power consumption and consequently clock down.

    While CPU manufacturers don't advertise this functionality in this manner, it is really no different from PowerTune.

    Would PowerTune technology make you feel any better if it were marketed the other way around, the way CPUs are? (i.e. advertising the lower base frequencies plus a clock boost, provided the thermal cap hasn't been hit yet)
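
The mechanism Arnulf describes can be sketched as a simple feedback loop. The toy Python model below is purely illustrative; the power cap, clock steps, and power formula are made-up numbers, not AMD's PowerTune algorithm or any vendor's actual Turbo logic. The point is simply that a lightly threaded load gets the top clock, while a fully loaded chip has to back off to stay under its cap.

    # Hypothetical governor: pick the highest clock whose estimated power
    # stays under the cap.
    POWER_CAP_W = 190
    CLOCKS_MHZ = [500, 600, 700, 800, 880]

    def estimated_power(clock_mhz, busy_units, total_units=4):
        # Crude model: idle floor plus a term that grows with clock and load.
        return 40 + 0.20 * clock_mhz * (busy_units / total_units)

    def pick_clock(busy_units):
        legal = [c for c in CLOCKS_MHZ
                 if estimated_power(c, busy_units) <= POWER_CAP_W]
        return max(legal) if legal else min(CLOCKS_MHZ)

    for busy in range(1, 5):
        print(f"{busy}/4 units busy -> {pick_clock(busy)} MHz")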
