Civilization V

Our final game, Civilization V, gives us an interesting look at things other strategy titles cannot match, with a much weaker focus on shading the game world and a much greater focus on creating the geometry needed to bring such a world to life. In doing so it uses a slew of DirectX 11 technologies, including tessellation for said geometry, driver command lists for reducing CPU overhead, and compute shaders for on-the-fly texture decompression.
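Compute-shader texture decompression is a good fit for the GPU because texture codecs work on small, independent blocks, so thousands of them can be decoded in parallel. As a rough CPU-side sketch of the pattern only (the run-length scheme and the thread pool here are our stand-ins for the game's actual codec and compute dispatch, which AMD and Firaxis have not detailed):

```python
from concurrent.futures import ThreadPoolExecutor

def decompress_block(block):
    # Each block is a list of (value, run_length) pairs; blocks are
    # independent, so a GPU could hand one block to each thread group.
    out = []
    for value, run in block:
        out.extend([value] * run)
    return out

# Four independent blocks of a toy run-length-encoded texture.
blocks = [
    [(0, 2), (255, 2)],
    [(128, 4)],
    [(0, 1), (64, 3)],
    [(32, 2), (16, 2)],
]

# Decompress all blocks in parallel, mirroring one dispatch per block,
# then stitch the outputs back together in block order.
with ThreadPoolExecutor(max_workers=4) as pool:
    texels = [t for block_out in pool.map(decompress_block, blocks)
              for t in block_out]
```

Because no block depends on another's output, the work scales with available parallelism, which is exactly the property a compute architecture like GCN is built to exploit.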

Because Civilization V uses driver command lists (DCLs), we originally were not going to include it in this benchmark suite as a gaming benchmark. If it were solely a DCL test it would do a good job highlighting the fact that AMD doesn't currently support the feature, but a poor job of actually explaining any hardware/architectural differences. It was only after we saw AMD's reviewer's guide that we decided to go ahead and include it, because quite frankly we didn't believe the numbers AMD had published.
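To make the DCL discussion concrete: driver command lists let an engine record rendering commands on several worker threads at once, then have the immediate context replay the finished lists in order, spreading command-generation cost across CPU cores instead of serializing it on the render thread. A minimal, language-agnostic sketch of that pattern in Python (the function names and draw-call strings are ours for illustration, not D3D11's):

```python
from concurrent.futures import ThreadPoolExecutor

def record_commands(object_ids):
    # Deferred-context analogue: each worker builds its own command
    # list without touching the GPU or any shared state.
    return [f"draw(object_{i})" for i in object_ids]

def execute(command_lists):
    # Immediate-context analogue: replay each pre-built list in order.
    frame = []
    for cl in command_lists:
        frame.extend(cl)
    return frame

# Split one frame's 16 draw calls across four "threads", as a
# DCL-aware engine splits scene submission across CPU cores.
batches = [range(i, 16, 4) for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    lists = list(pool.map(record_commands, batches))

frame = execute(lists)
```

The expensive part (recording) runs in parallel while the cheap part (replay) stays serial, which is why a driver that supports DCLs can relieve a CPU-bound game even though the GPU ultimately consumes one ordered stream.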

When we compared the GTX 580 and the 6970, the 6970 routinely lost to the GTX 580 by large margins. We had long assumed this was solely due to NVIDIA's inclusion of DCLs, as we've seen a moderate single-GPU performance deficit for AMD (and an equally moderate multi-GPU lead) melt away when NVIDIA added DCL support. The 7970 required that we rethink this.

If Civilization V were solely a DCL test, then our 2560 results would be impossible: the 7970 is winning by 12% in a game NVIDIA previously won by a massive margin. NVIDIA only regains its lead at 1680, a resolution at which we're not nearly as likely to be GPU-bound.

So what changed? AMD has yet to spill the beans, but short of a secret DCL implementation just for CivV we have to look elsewhere. Next to DCLs, CivV's other killer feature is its use of compute shaders, and GCN is a compute architecture. To that end, we believe that while AMD is still facing some kind of DCL bottleneck, they have completely opened the floodgates on whatever compute shader bottleneck was standing in their way before. This is particularly evident when comparing the 7970 to the 6970, where the 7970 enjoys a consistent 62% performance advantage. It's simply an incredible turnabout to see the 7970 do so well where the 6970 did so poorly.

Of course, if this performance boost really is all about compute shaders, it raises a particularly exciting question: just how much higher could AMD go if they had DCLs? Hopefully one day that's an answer we get to find out.

292 Comments


  • Esbornia - Thursday, December 22, 2011 - link

    Fan boy much?
  • CeriseCogburn - Thursday, March 8, 2012 - link

    Finally, piroroadkill, Esbornia: the gentleman ericore merely stated what all the articles here have offered as analysis, while the Radeonite fans repeated it ad infinitum, screaming that NVIDIA's giant core count doesn't give the percentage increase it should considering the transistor increase.
    Now, when it's AMD's turn, we get ericore under 3 attacks in a row...
    So do you three all take it back concerning Fermi?
  • maverickuw - Thursday, December 22, 2011 - link

    I want to know when the 7950 will come out, and hopefully it'll launch at $400.
  • duploxxx - Thursday, December 22, 2011 - link

    The fact that ATI can bring out a new architecture on a new process and deliver such a performance increase at that power consumption makes it a clear winner.

    Looking at the past, with Fermi's first launch and even Cayman's VLIW4, they had many more issues to start with.

    Nice job. While the nv680 will probably perform better, it will take them at least a while to release that product, and it will also need to be huge in size.
  • ecuador - Thursday, December 22, 2011 - link

    Nice review, although I really think testing at 1680x1050 for a $550 card is a big waste of time, which could have gone to perhaps multi-monitor testing etc.
  • Esbornia - Thursday, December 22, 2011 - link

    It's Anand, you should expect this kind of shiet.
  • Ryan Smith - Thursday, December 22, 2011 - link

    In this case the purpose of 1680 is to allow us to draw comparisons to low-end cards and older cards, which is something we consider to be important. The 8800GT and 3870 in particular do not offer meaningful performance at 1920.
  • poohbear - Thursday, December 22, 2011 - link

    Why do you benchmark @ 1920x1200? According to the Steam December survey only 8% of gamers have that resolution, whereas 24% have 1920x1080 and 18% use 1680x1050 (the 2 most popular). Also, minimum FPS would be nice to know in your benchmarks, that is really useful for us! Just a heads up for next time you benchmark a video card! Otherwise nice review! Lotsa good info at the beginning! :)
  • Galcobar - Thursday, December 22, 2011 - link

    Page 4, comments section.
  • Esbornia - Thursday, December 22, 2011 - link

    They don't want to show the improvements in min FPS cause they hate AMD, you should know that already.
