Company of Heroes 2

The second benchmark in our suite is Relic Entertainment’s Company of Heroes 2, the developer’s World War II Eastern Front-themed RTS. For Company of Heroes 2, Relic was kind enough to put together a very strenuous built-in benchmark, captured from one of the most demanding, snow-bound maps in the game, giving us a great look at CoH2’s performance at its worst. Consequently, if a card can do well here, it should have no trouble throughout the rest of the game.

Company of Heroes 2 - 3840x2160 - Low Quality

Company of Heroes 2 - 2560x1440 - Maximum Quality + Med. AA

Company of Heroes 2 - 1920x1080 - Maximum Quality + Med. AA

Since CoH2 is not AFR compatible, the best performance you’re going to get out of it is whatever a single GPU can deliver, in which case the GTX 980 is the fastest card out there for this game. AMD’s R9 290XU does hold up well though; the GTX 980 may have the lead, but AMD is never more than a few percent behind at 4K and 1440p. The lead over the GTX 780 Ti, on the other hand, is much more substantial, at 13% to 22%. So NVIDIA has finally taken this game back from AMD, as it were.

Elsewhere, against the GTX 680 this is another very good performance for the GTX 980, with a performance advantage of over 80%.

On an absolute basis, at these settings you’re looking at an average framerate in the 40s, which for an RTS is solid performance.

Company of Heroes 2 - Min. Frame Rate - 3840x2160 - Low Quality

Company of Heroes 2 - Min. Frame Rate - 2560x1440 - Maximum Quality + Med. AA

Company of Heroes 2 - Min. Frame Rate - 1920x1080 - Maximum Quality + Med. AA

However, when it comes to minimum framerates, the GTX 980 can’t quite stay on top. In every case it is edged out ever so slightly by the R9 290XU, by a fraction of a frame per second. AMD seems to weather the hardest framerate drops just a bit better than NVIDIA does, though neither card can quite hold the line at 30fps at 1440p and 4K.


274 Comments


  • jmunjr - Friday, September 19, 2014 - link

Wish you had done a GTX 970 review as well, like many other sites, since way more of us care about that card than the 980, as it is cheaper.
  • Gonemad - Friday, September 19, 2014 - link

Apparently, if I want to run anything under the sun at 1080p, cranked to full, at 60fps, I will need to get me a GTX 980 and a suitable system to run with it, and forget mid-range priced cards.

That should put a huge hole in my wallet.

Oh yes, the others can run stuff at 1080p, but you have to keep tweaking drivers, turning AA on, turning AA off, what a chore. And the age-old joke: yes, it RUNS Crysis, at the resolution I'd like.

Didn't the card, by any chance, actually benefit from being fabricated at 28nm, by spreading its heat over a larger area? If the whole thing, hypothetically, just shrunk to 14nm, wouldn't all that 165W of power be dissipated over a smaller area (1/4 the area?), and wouldn't this thing hit the throttle and stay there?

Or, by being made smaller, would it actually dissipate even less heat and still get faster?
  • Yojimbo - Friday, September 19, 2014 - link

I think it depends on the process. If Dennard scaling were still in effect, then the chip should dissipate proportionally less heat. But to my understanding, Dennard scaling has broken down somewhat in recent years, so I think heat density could be a concern. However, I don't know if it would be accurate to say that the chip benefited from the 28nm process, since I think it was originally designed with the 20nm process in mind, and the problem with putting the chip on that process had to do with cost and yields. So, presumably, the heat dissipation issues were already worked out for that process?
  • AnnonymousCoward - Friday, September 26, 2014 - link

    The die size doesn't really matter for heat dissipation when the external heat sink is the same size; the thermal resistance from die to heat sink would be similar.
  • danjw - Friday, September 19, 2014 - link

I would love to see these built on Intel's 14nm process, or even the 22nm one. I think both Nvidia and AMD aren't comfortable letting Intel look at their technology, despite the NDAs and firewalls that would be part of any such agreement.

    Anyway, thanks for the great review Ryan.
  • Yojimbo - Friday, September 19, 2014 - link

Well, if one goes by comments from Jen-Hsun Huang (Nvidia's CEO) a year or two ago, Nvidia would have liked Intel to manufacture their SoCs for them, but it seems Intel was unwilling. I don't see why Nvidia would be willing to have Intel manufacture their SoCs but not their GPUs, given that by then they must have already planned to put their desktop GPU technology into their SoCs, unless the one-year delay between the parts makes a difference.
  • r13j13r13 - Friday, September 19, 2014 - link

Not until AMD's 300 series comes out with native DirectX 12 support.
  • Arakageeta - Friday, September 19, 2014 - link

    No interpretation of the compute graphs whatsoever? Could you at least report the output of CUDA's deviceQuery tool?
  • texasti89 - Friday, September 19, 2014 - link

I'm truly impressed with this new line of GPUs. To achieve this leap in efficiency at the same transistor feature size is a great incremental achievement. Bravo, TSMC & Nvidia. I feel comfortable thinking that we will soon get this amazing 980 performance level in gaming laptops once we scale to the 10nm process. Keep up the great work.
  • stateofstatic - Friday, September 19, 2014 - link

    Spoiler alert: Intel is building a new fab in Hillsboro, OR specifically for this purpose...
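The die-shrink heat question raised in the thread above can be sketched with some back-of-the-envelope arithmetic. This is purely illustrative: the 165W figure is the GTX 980's rated TDP from the review, the ~398mm² GM204 die size is approximate, and the "same power, 1/4 area" shrink is the commenter's hypothetical, not how a real process shrink behaves.

```python
# Back-of-the-envelope power-density sketch for the die-shrink question
# discussed above. All numbers are illustrative, not measured data.

def power_density(power_w: float, area_mm2: float) -> float:
    """Return power density in W/mm^2."""
    return power_w / area_mm2

TDP_W = 165.0         # GTX 980 rated board power
DIE_AREA_MM2 = 398.0  # GM204 die size at 28nm (approximate)

# Hypothetical shrink: same design dissipating the same 165W
# in one quarter of the area quadruples the power density.
shrunk_area = DIE_AREA_MM2 / 4

base = power_density(TDP_W, DIE_AREA_MM2)
shrunk = power_density(TDP_W, shrunk_area)

print(f"28nm-class density: {base:.2f} W/mm^2")
print(f"Shrunk (1/4 area):  {shrunk:.2f} W/mm^2 ({shrunk/base:.0f}x)")

# In practice a shrink also lowers switching power (capacitance and,
# historically, voltage both drop), so total power would not stay at
# 165W; whether density actually rises depends on how far voltage
# scaling (Dennard scaling) still holds at the new node.
```

As the closing comment notes, the 4x figure is an upper bound on the density increase; how close a real 14nm part would come to it depends on how much the shrink reduces total power.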
