Company of Heroes 2

The second game in our benchmark suite is Relic Entertainment’s Company of Heroes 2, the developer’s World War II Eastern Front themed RTS. For Company of Heroes 2 Relic was kind enough to put together a very strenuous built-in benchmark, captured from one of the most demanding, snow-bound maps in the game, giving us a good look at CoH2’s performance at its worst. Consequently, if a card can do well here it should have no trouble throughout the rest of the game.

Our first strategy game is also our first game that is flat-out AFR incompatible, and as a result the only way to get the best performance out of Company of Heroes 2 is with the fastest single-GPU card available. To that end this is a very clear victory for the 290X, and in fact it will be the largest lead for the 290X in any of our benchmarks. At 2560 it’s a full 29% faster than the GTX 780, which all but puts the 290X in a class of its own. This game also shows some of the greatest gains for the 290X over the 280X, with the 290X surpassing its Tahiti-based predecessor by an equally chart-topping 41%. It’s not clear at this time just what it is about the 290X that Company of Heroes 2 loves, but as far as this game is concerned AMD has put together an architecture that maps well to the game’s needs.
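
As a quick aside, the percentage leads quoted here and throughout the review are straightforward frame rate ratios. The Python sketch below is a minimal illustration of that arithmetic only; the frame rates in it are hypothetical placeholders, not our measured results.

    # Illustration of how a percentage lead between two cards is computed.
    # The frame rates below are hypothetical placeholders, not measured data.
    def lead_percent(fps_a: float, fps_b: float) -> float:
        """Percentage by which card A outpaces card B."""
        return 100.0 * (fps_a / fps_b - 1.0)

    fps_290x, fps_780 = 50.0, 38.8  # placeholder average fps at 2560x1440
    print(f"290X lead over GTX 780: {lead_percent(fps_290x, fps_780):.0f}%")  # ~29%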

Briefly, because of the lack of AFR compatibility, 4K is only barely attainable with any kind of GPU setup; in fact we’re only throwing in the non-scaling SLI/CF numbers to showcase that fact. We had to dial down our quality settings to Low in CoH2 in order to get a framerate above 30fps; even though we can be more liberal about playable framerates in strategy games, there still needs to be a cutoff for average framerates around that point. As a result the 280X, GTX Titan, and 290X are the only cards to make that cutoff, with the 290X being the clear winner. But the loss in quality needed to make 4K achievable is hardly worth the cost.

 

Moving on to minimum framerates, we see that at its most stressful points nothing, not even the 290X, can keep its minimums above 30fps. For a strategy game this is bearable, but we certainly wouldn’t mind more performance. AMD will be pleased though, as their performance advantage over the GTX 780 only extends further here: a 29% average performance advantage becomes a 43% minimum performance advantage at 2560.

Finally, while we don’t see any performance advantages from AFR in this game, we ran our FCAT benchmarks anyhow to quickly capture the delta percentages. Company of Heroes 2 has higher than average frame time variance even among single cards, which results in deltas above 5%. The difference between 5% and 7% is not going to be too significant in practice, but along with AMD’s performance advantage they also have slightly more consistent frame times than the GTX 780. In the case of both the 280X and the 290X we’re looking at essentially the same deltas, so while the 290X improves on framerates versus the 280X, it doesn’t bring any improvement in frame time consistency.
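
For readers unfamiliar with the metric, the delta percentages summarize frame-to-frame frametime variance as a fraction of the average frametime. The sketch below is a minimal approximation of that idea, assuming a mean-absolute-difference definition rather than reproducing our exact FCAT post-processing, and the frame times in it are hypothetical.

    # Minimal sketch of a frame time "delta percentage" style metric.
    # Assumption: the metric is the mean absolute frame-to-frame change in
    # frame time, expressed as a percentage of the mean frame time. This is
    # an approximation, not the exact FCAT post-processing used here.
    def delta_percentage(frame_times_ms):
        if len(frame_times_ms) < 2:
            return 0.0
        deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
        mean_delta = sum(deltas) / len(deltas)
        mean_frame_time = sum(frame_times_ms) / len(frame_times_ms)
        return 100.0 * mean_delta / mean_frame_time

    sample = [33.1, 35.0, 32.4, 36.2, 33.8, 34.5]  # hypothetical frame times (ms)
    print(f"delta percentage: {delta_percentage(sample):.1f}%")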


396 Comments


  • mr_tawan - Tuesday, November 5, 2013

    The AMD card may suffer from a loud cooler. Let's just hope that the OEM versions ship with quieter coolers.
  • 1Angelreloaded - Monday, November 11, 2013

    I have to be honest here: it is a beast. In fact, the only thing in my mind holding it back is the lack of feature sets compared to NVIDIA, namely PhysX. To me that is a bit of a deal breaker; for $150 more the 780 Ti gives me that with a lower TDP and noise profile, and since you can only pull so much from a single 120V breaker without tripping it, the necessary modification is a deal breaker for some people due to where they live. Honestly, what I really need to see from a site is 4K gaming at max settings, plus 1600p/1200p/1080p benchmarks with single cards as well as SLI/CrossFire, to see how they scale against each other. I would also like to see a benchmark using Skyrim modded to the gills with high-resolution textures, to see how VRAM might affect these cards in games from this next-gen era, where the consoles can natively manage higher texture resolutions; that will ultimately affect PC performance as the standard moves from 1-2K textures to 4K, or in a select few cases even 8K. With a native 64-bit architecture you can also draw more system RAM into the equation (Skyrim can use a max of 3.5 before it dies). With Maxwell coming out, bringing a shared memory pool and a single-core microprocessor on the die itself, plus G-Sync for smoothness, we might see an over-engineered GPU capable of much more than we thought. ATI has their own ideas as well, which will progress; I have a strong feeling Hawaii is actually a reject of sorts, because they have to compete with Maxwell and engineer more into the cards themselves.
  • marceloviana - Monday, November 25, 2013

    I'm just wondering why this card comes with 32Gb of GDDR5 but only shows 4GB. The PCB shows 16 Elpida EDW2032BBBG chips (2Gb each). This amount of memory would help a lot in large scenes with V-Ray RT.
  • Mat3 - Thursday, March 13, 2014

    I don't get it. It's supposed to have 11 compute units per shader engine, making 44 on the entire chip. But the 2nd picture says each shader engine can only have up to 9 compute units....?
  • Mat3 - Thursday, March 13, 2014

    The 2nd picture on page three, I mean.
  • sanaris - Monday, April 14, 2014

    Who cares? This card was never meant to compute anything.

    It's supposed to be "cheap but decent".
    Initially they set a ridiculous price, but now it goes for around $200-350 on eBay.
    At $200 it is worth its price, because it can only be used to play games.
    Anyone who wants to play games at medium quality (and not future titles) may prefer it.
