Starcraft II

Our next game is Starcraft II, Blizzard’s 2010 RTS megahit. Much like Portal 2, it’s a DX9 game designed to run on a wide range of hardware, so performance is quite peppy with most high-end cards, but it can still challenge a GPU when it needs to.

[Starcraft II benchmark charts: 2560, 1920, and 1680]

For 2560 and 1920 we’re using 4x MSAA, which must be forced through the driver control panel as Starcraft II does not natively support anti-aliasing. As is often the case with forced MSAA the resulting performance hit is rather high, which is why SC2 can still tax our high-end GPUs.

Starting at 2560, things are looking good for the 7970. At 70.2fps it takes a 19% lead over the GTX 580, and it is the only single-GPU card to crack 60fps at that resolution. Against the 6970 it also looks quite good, with a lead of just under 40%.
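
As a sanity check on those percentages, here is a minimal sketch of the arithmetic; only the 70.2fps figure is taken from the chart, while the GTX 580 and 6970 numbers are merely the values implied by the stated 19% and ~40% leads, not separately quoted results.

    # Back-of-the-envelope check of the quoted leads at 2560 (Python 3).
    # Only the 7970's 70.2fps is taken from the chart; the rival figures are
    # implied by the stated percentages rather than quoted directly.
    hd7970_fps = 70.2

    def lead_pct(card_fps, rival_fps):
        """Percentage lead of card_fps over rival_fps."""
        return (card_fps / rival_fps - 1.0) * 100.0

    implied_gtx580 = hd7970_fps / 1.19   # ~59 fps if the 7970 leads by 19%
    implied_hd6970 = hd7970_fps / 1.40   # ~50 fps if the 7970 leads by ~40%

    print(f"GTX 580 (implied): {implied_gtx580:.1f} fps, "
          f"7970 lead = {lead_pct(hd7970_fps, implied_gtx580):.0f}%")
    print(f"HD 6970 (implied): {implied_hd6970:.1f} fps, "
          f"7970 lead = {lead_pct(hd7970_fps, implied_hd6970):.0f}%")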

But when we drop down to 1920, the 7970’s tendency to lose ground as the resolution falls takes full force. Here the 7970 is only 2% ahead of the GTX 580, and at 1680 (without MSAA) it is outright outpaced by the older GTX 580 by nearly 33%. Interestingly enough, however, we don’t see the same thing happen against AMD’s own cards, as the 7970 remains ahead of the 6970 by about 35%.

While it’s primarily 1920 and 2560 we’re interested in, it’s still worth pondering 1680 for a moment. Given the consistent performance of the 7970 versus the 6970, it looks like we’re not simply seeing architectural strengths and weaknesses here. AMD simply cannot hit the high framerates of the GTX 580, and at this point we have to suspect that, unless AMD is somehow ROP-bound, we’re looking at a driver limitation of some kind that particularly manifests itself at 1920 and below.
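
To put the ROP-bound possibility in rough numbers, here is a quick sketch of theoretical peak pixel fill rates based on each card’s reference ROP count and core clock; this is only an upper bound and says nothing about where the real bottleneck actually lies.

    # Theoretical peak pixel fill rate = ROP count x core clock (reference specs).
    # This ignores bandwidth, blending cost, and drivers; it's just an upper bound.
    cards = {
        "GTX 580": (48, 772e6),   # 48 ROPs at 772 MHz
        "HD 7970": (32, 925e6),   # 32 ROPs at 925 MHz
        "HD 6970": (32, 880e6),   # 32 ROPs at 880 MHz
    }

    for name, (rops, clock_hz) in cards.items():
        print(f"{name}: {rops * clock_hz / 1e9:.1f} Gpixels/s peak")

By these peak numbers the GTX 580 holds roughly a 25% fill-rate advantage over the 7970, while the 7970’s advantage over the 6970 is only about 5%; the latter is far smaller than the consistent ~35% gap between the two AMD cards, which is part of why a driver limitation looks at least as plausible as a pure ROP bottleneck.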

In any case, while Starcraft II is not a particularly strong game for the 7970, at the very least the raw performance is there. The performance differences are largely academic as the 7970 is more than capable of powering through even 2560. As such, if the 7970 is going to struggle to beat the GTX 580 at any game, this is one of the less meaningful games to struggle at.

Comments

  • mczak - Thursday, December 22, 2011 - link

    Oh yes _for this test_ certainly 32 ROPs are sufficient (FWIW it uses FP16 render target with alpha blend). But these things have caches (which they'll never hit in the vantage fill test, but certainly not everything will have zero cache hits), and even more important than color output are the z tests ROPs are doing (which also consume bandwidth, but z buffers are highly compressed these days).
    You can't really say whether 32 ROPs are sufficient, nor whether they are somehow more efficient, judging by this vantage test (as just about ANY card from nvidia or amd hits bandwidth constraints in that particular test long before hitting ROP limits).
    Typically it would make sense to scale ROPs along with memory bandwidth, since even if it doesn't have to be as bad as in the color fill test, they are indeed a major bandwidth eater. But apparently AMD disagreed and felt 32 ROPs are enough (well, for compute that's certainly true...)
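
As a rough illustration of the bandwidth point above, here is a back-of-the-envelope sketch assuming the 7970's reference specs (32 ROPs at 925MHz, 264GB/sec of memory bandwidth) and 16 bytes of traffic per blended FP16 pixel (an 8-byte read plus an 8-byte write), ignoring caches and compression:

    # How much bandwidth would 32 ROPs need to run flat out in an FP16
    # alpha-blend fill test, versus what the card actually has?
    rops            = 32
    core_clock_hz   = 925e6    # reference HD 7970 core clock
    mem_bw_gb_s     = 264.0    # 384-bit GDDR5 at 5.5 Gbps
    bytes_per_pixel = 16       # FP16 RGBA: 8-byte read + 8-byte write for blending

    rop_limit_gpix = rops * core_clock_hz / 1e9          # ~29.6 Gpixels/s
    bw_needed_gb_s = rop_limit_gpix * bytes_per_pixel    # ~474 GB/s
    bw_limit_gpix  = mem_bw_gb_s / bytes_per_pixel       # ~16.5 Gpixels/s

    print(f"ROP-limited fill rate:       {rop_limit_gpix:.1f} Gpix/s "
          f"(would need {bw_needed_gb_s:.0f} GB/s)")
    print(f"Bandwidth-limited fill rate: {bw_limit_gpix:.1f} Gpix/s "
          f"at {mem_bw_gb_s:.0f} GB/s")

On these simplified numbers the memory bus saturates at barely more than half of what 32 ROPs could theoretically write, which is the point being made: the fill test says more about bandwidth than about ROP count.
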
  • cactusdog - Thursday, December 22, 2011 - link

    The card looks great, an undisputed win for AMD. Fan noise is the only negative; I was hoping for better performance out of the new-gen cooler, but there are always non-reference models for silent gaming.

    Temps are good too, so there's probably room to turn the fan speed down a little.
  • rimscrimley - Thursday, December 22, 2011 - link

    Terrific review. Very excited about the new test. I'm happy this card pushes the envelope, but doesn't make me regret my recent 580 purchase. As long as AMD is producing competitive cards -- and once the price on this settles to parity with the 580, this will be the market winner -- the technology benefits. Cheers!
  • nerfed08 - Thursday, December 22, 2011 - link

    Good read. By the way there is a typo in final words.

    faster and cooler al at once
  • Anand Lal Shimpi - Thursday, December 22, 2011 - link

    Fixed, thank you :)

    Take care,
    Anand
  • hechacker1 - Thursday, December 22, 2011 - link

    I think most telling are the minimum FPS results. The 7970 is 30-45% ahead of the previous generation in a "worst case" situation where the GPU can't keep up or the program is poorly coded.

    Of course they are catching up with Nvidia's already pretty good minimum FPS, but I am glad to see the improvement, because nothing is worse than stuttering during a fast-paced FPS. I can live with 60fps, or even 30fps, as long as it's consistent.

    So I bet the micro-stutter problem will also be improved in SLI with this architecture.
  • jgarcows - Thursday, December 22, 2011 - link

    While I know the bitcoin craze has died down, I would be interested to see it included in the compute benchmarks. In the past, AMD has consistently outperformed nVidia in bitcoin work; it would also be interesting to see Anandtech's take as to why, and to see if the new architecture changes that.
  • dcollins - Thursday, December 22, 2011 - link

    This architecture will most likely be a step backwards in terms of bitcoin mining performance. In the GCN architecture article, Anand mentioned that brute-force hashing was one area where a VLIW-style architecture had an advantage over a SIMD-based chip. Bitcoin mining is based on algorithms mathematically equivalent to password hashing. With GCN, AMD is changing the very thing that made their cards better miners than Nvidia's chips.

    The old architecture is superior for "pure," mathematically well-defined code, while GCN is targeted at "messy," more practical and thus more widely applicable code.
  • wifiwolf - Thursday, December 22, 2011 - link

    a bit less than expected, but not really an issue:

    http://www.tomshardware.co.uk/radeon-hd-7970-bench...
  • dcollins - Thursday, December 22, 2011 - link

    You're looking at a 5% increase in performance for a whole new generation with 35% more compute hardware, increased clock speed and increased power consumption: that's not an improvement, it's a regression. I don't fault AMD for this because Bitcoin mining is a very niche use case, but Crossfire 68x0 cards offer much better performance/watt and performance/$.
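
For readers unfamiliar with the workload being debated in the last few comments, below is a toy sketch of bitcoin-style brute-force hashing; the header bytes and difficulty target are made up for illustration, and a real miner runs this double SHA-256 loop on the GPU billions of times per second rather than in Python.

    # Toy illustration of bitcoin-style "brute force hashing": double SHA-256
    # over a block header with an incrementing nonce, until the result falls
    # below a difficulty target. Header and target here are arbitrary examples.
    import hashlib
    import struct

    def double_sha256(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def mine(header: bytes, target: int, max_nonce: int = 1_000_000):
        """Try nonces until the double SHA-256 of header+nonce is below target."""
        for nonce in range(max_nonce):
            digest = double_sha256(header + struct.pack("<I", nonce))
            if int.from_bytes(digest, "big") < target:
                return nonce, digest.hex()
        return None, None

    nonce, digest = mine(b"example block header", target=1 << 240)
    print(nonce, digest)

Every attempt is identical, essentially branch-free integer work with little memory pressure, which is why the "pure," mathematically well-defined code dcollins describes kept the old VLIW ALUs so busy.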
