Civilization V

Our final game, Civilization V, gives us an interesting look at things other strategy games cannot match, with a much weaker focus on shading the game world and a much greater focus on generating the geometry needed to bring such a world to life. To do so it uses a slew of DirectX 11 technologies, including tessellation for said geometry, driver command lists (DCLs) for reducing CPU overhead, and compute shaders for on-the-fly texture decompression.
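The driver-command-list feature is easiest to picture as a record-and-replay model: worker threads record batches of draw commands into self-contained lists off the render thread, and the render thread later submits each finished list in a single cheap call (in Direct3D 11 terms, deferred contexts feeding `FinishCommandList`/`ExecuteCommandList`). The toy sketch below is plain Python with invented names, not the D3D11 API; it only illustrates the threading pattern:

```python
from concurrent.futures import ThreadPoolExecutor

class CommandList:
    """A recorded batch of draw commands, replayable in one submission."""
    def __init__(self):
        self.commands = []

    def record(self, cmd):
        self.commands.append(cmd)

def record_chunk(draws):
    # Each worker thread records its share of draw calls into its own
    # list, paying the CPU-side recording cost off the render thread.
    cl = CommandList()
    for d in draws:
        cl.record(("draw", d))
    return cl

def render_frame(all_draws, workers=4):
    # Split the frame's draw calls across worker threads for recording...
    chunks = [all_draws[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        lists = list(pool.map(record_chunk, chunks))
    # ...then the render thread replays each finished list in one call
    # (the stand-in for ExecuteCommandList).
    submitted = []
    for cl in lists:
        submitted.extend(cl.commands)
    return submitted

frame = render_frame(list(range(1000)))
print(len(frame))  # every recorded draw reaches submission
```

The win is purely CPU-side: the per-draw recording work is parallelized, so a game that is bottlenecked on a single submission thread, as draw-call-heavy strategy games often are, gains headroom without the GPU doing anything differently.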

Because Civilization V uses driver command lists, we originally weren't going to include it in this benchmark suite as a gaming benchmark. If it were solely a DCL test it would do a good job of highlighting the fact that AMD doesn't currently support the feature, but a poor job of actually explaining any hardware or architectural differences. It was only after we saw AMD's reviewer's guide that we decided to include it, because quite frankly we didn't believe the numbers AMD had published.

When we pitted the GTX 580 against the 6970, the 6970 routinely lost by large margins. We had long assumed this was solely due to NVIDIA's inclusion of DCLs, as we've seen a moderate single-GPU performance deficit and an equally moderate multi-GPU lead for AMD both melt away once NVIDIA added DCL support. The 7970 required that we rethink this.

If Civilization V were solely a DCL test, then our 2560 results would be impossible: the 7970 wins by 12% in a game NVIDIA previously won by a massive margin. NVIDIA only regains its lead at 1680, a resolution where we're far less likely to be GPU-bound.

So what changed? AMD has yet to spill the beans, but short of a secret DCL implementation just for CivV we have to look elsewhere. Next to DCLs, CivV's other killer feature is its use of compute shaders, and GCN is a compute architecture. To that end, we believe at this point that while AMD is still facing some kind of DCL bottleneck, they have completely opened the floodgates on whatever compute shader bottleneck was standing in their way before. This is particularly evident when comparing the 7970 to the 6970, where the 7970 enjoys a consistent 62% performance advantage. It's simply an incredible turnabout to see the 7970 do so well where the 6970 did so poorly.

Of course if this performance boost really was all about compute shaders, it raises a particularly exciting question: just how much higher could AMD go if they had DCLs? Hopefully one day that’s an answer we get to find out.

292 Comments

  • GTVic - Thursday, December 22, 2011 - link

    The first Fermi version they demo'd was a mock-up held together with wood screws. That is not a good launch...
  • RussianSensation - Thursday, December 22, 2011 - link

    And the real launch version produced tessellation performance that it took the HD7970 to surpass, had compute performance that the HD7970 can barely best today, had mega-texture support that the HD7970 is only adding now, 2 years later, and had a scalar SIMD architecture that took AMD 2 years to release.
  • Scali - Friday, December 23, 2011 - link

    HD7970 doesn't actually surpass Fermi's tessellation, apart from tessellation factors 10 and below:
    http://www.pcgameshardware.de/aid,860536/Test-Rade...
    From factor 11 to 64, Fermi still reigns supreme.

    (This is with AMD's SubD11 sample from the DirectX 11 SDK).
  • Scali - Friday, December 23, 2011 - link

    Uhhh no. They demo'ed a real Fermi obviously.
    It was just a development board, which didn't exactly look pretty, and was not in any way representative of the card that would be available to end-users.
    So they made a mock-up to show what a retail Fermi WOULD look like, once it hits the stores.
    Which is common practice anyway in the industry.
  • fllib19554 - Thursday, January 12, 2012 - link

    off yourself cretin.
  • futurepastnow - Thursday, December 22, 2011 - link

    You misspelled "impressive."
  • slayernine - Thursday, December 22, 2011 - link

    What Wreckage really meant to say was that it was disappointing for nVidia to get pummelled so thoroughly.
  • unaligned - Friday, December 23, 2011 - link

    A year old card pummeled by the newest technology? I would hope so.
  • MagickMan - Thursday, December 22, 2011 - link

    Go shoot yourself in the face, troll.
  • rs2 - Thursday, December 22, 2011 - link

    Yes, yes. 4+ billion transistors on a single chip is not impressive at all. Why, it's not even one transistor for every person on the planet yet.
