Thief

Our latest addition to our benchmark suite is Eidos Montreal's stealth action game, Thief. Set amidst a Victorian-era fantasy environment, Thief is an Unreal Engine 3 based title which makes use of a number of supplementary Direct3D 11 effects, including tessellation and advanced lighting. Adding further quality to the game at its highest settings is support for SSAA, which can eliminate most forms of aliasing while bringing even the most powerful video cards to their knees.
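
To put some rough numbers on that cost, the short sketch below compares sample counts per frame at our test resolutions, assuming a simple supersampling model where shading work scales linearly with the number of samples rendered; this is an illustrative approximation rather than a description of Thief's exact SSAA implementation.

```python
# Rough illustration of supersampling cost. Assumption: shading work scales
# linearly with samples rendered; this is not Thief's exact SSAA implementation.
resolutions = {
    "1920x1080": 1920 * 1080,
    "2560x1440": 2560 * 1440,
    "3840x2160": 3840 * 2160,
}

for name, pixels in resolutions.items():
    for ssaa in (1, 2, 4):  # 1 = no SSAA, 4 = 4x supersampling
        samples = pixels * ssaa
        print(f"{name} @ {ssaa}x SSAA: {samples / 1e6:5.1f} M samples/frame")
```

Notably, 1440p with 4x SSAA already works out to more samples per frame than native 4K without it, which is why dropping SSAA when stepping up to 4K can still leave framerates playable.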

Thief - 3840x2160 - Very High Quality, No SSAA

Thief - 2560x1440 - Very High Quality

Thief - 1920x1080 - Very High Quality

Thief is another solid win for the GTX 980. The closest any other card gets is within 10% of it, and the lead only widens from there. Against the GTX 780 Ti this is a lead of anywhere between 10% and 16%, and against the R9 290XU it's 15-22%, with Mantle doing that card no favors for average framerates above 1080p.

The performance advantage over the GTX 780 and GTX 680 is also above average. The GTX 980 can outrun the previous x80 card by 33% or more, and the GTX 680 by at least 80%.

On an absolute basis the GTX 980 won't quite crack 60fps at 1440p, but it does come very close at 56fps. And since Thief is running an internal form of SSAA, turning up the resolution to 4K and dropping the SSAA still yields playable framerates, though at 48fps it's closer to 45 than 60. Reaching 60fps is going to require a bit more horsepower than what a single GTX 980 can deliver today.

Thief - Min. Frame Rate - 3840x2160 - Very High Quality, No SSAA

Thief - Min. Frame Rate - 2560x1440 - Very High Quality

Thief - Min. Frame Rate - 1920x1080 - Very High Quality

The GTX 980's performance advantage generally holds up when it comes to minimum framerates as well, though it is interesting to note that until we get to 4K, the GTX 980 holds a larger minimum framerate advantage over the GTX 780 Ti than it does an average framerate advantage – 20% versus about 10%. On the other hand the use of Mantle begins to close the gap for the R9 290XU a bit, but it's still not enough to make up for the GTX 980's strong overall performance advantage, especially at 1080p.

Thief - Delta Percentages

Thief - Surround/4K - Delta Percentages

Our delta percentages are once more unremarkable. All cards are consistently below 3% here.
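
For those unfamiliar with the metric, the sketch below shows one way a frame time consistency number like this can be computed from a raw frame time log. It assumes the delta percentage is the mean absolute frame-to-frame difference expressed as a percentage of the mean frame time; this is a simplified approximation for illustration, not the exact script behind our charts, and the frame times shown are made up.

```python
# Hypothetical frame time log in milliseconds (illustrative, not benchmark data).
frame_times_ms = [16.9, 17.1, 16.8, 17.0, 16.9, 17.2, 16.8, 17.0]

def delta_percentage(frame_times):
    """Mean absolute frame-to-frame delta as a percentage of the mean frame time."""
    deltas = [abs(b - a) for a, b in zip(frame_times, frame_times[1:])]
    return 100.0 * (sum(deltas) / len(deltas)) / (sum(frame_times) / len(frame_times))

print(f"Delta percentage: {delta_percentage(frame_times_ms):.2f}%")
```

A card that holds nearly identical frame times from frame to frame will score close to 0%, while one that oscillates between fast and slow frames will score much higher, even if its average framerate looks fine.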

Comments

  • jmunjr - Friday, September 19, 2014 - link

    Wish you had done a GTX 970 review as well, like many other sites did, since way more of us care about that card than the 980 given that it is cheaper.
  • Gonemad - Friday, September 19, 2014 - link

    Apparently, if I want to run anything under the sun at 1080p cranked to full at 60fps, I will need to get me one GTX 980 and a suitable system to run with it, and forget mid-range priced cards.

    That should put a huge hole in my wallet.

    Oh yes, the others can run stuff at 1080p, but you have to keep tweaking drivers, turning AA on, turning AA off, what a chore. And the age-old joke: yes, it RUNS Crysis, at the resolution I'd like.

    Didn't the card, by any chance, actually benefit from being fabricated at 28nm, by spreading its heat over a larger area? If the whole thing, hypothetically, just shrunk to 14nm, wouldn't all that 165W of power be dissipated over a smaller area (1/4 the area?), and this thing would hit the thermal throttle and stay there?

    Or by being made smaller, would it actually dissipate even less heat and still get faster?
  • Yojimbo - Friday, September 19, 2014 - link

    I think that it depends on the process. If Dennard scaling were still in effect, then it should dissipate proportionally less heat. But to my understanding, Dennard scaling has broken down somewhat in recent years, so I think heat density could be a concern. However, I don't know if it would be accurate to say that the chip benefited from the 28nm process, since I think it was originally designed with the 20nm process in mind, and the problem with putting the chip on that process had to do with cost and yields. So, presumably, the heat dissipation issues were already worked out for that process? (See the rough power density sketch after the comments.)
  • AnnonymousCoward - Friday, September 26, 2014 - link

    The die size doesn't really matter for heat dissipation when the external heat sink is the same size; the thermal resistance from die to heat sink would be similar.
  • danjw - Friday, September 19, 2014 - link

    I would love to see these built on Intel's 14nm process or even the 22nm. I think both Nvidia and AMD aren't comfortable letting Intel look at their technology, despite NDAs and firewalls that would be a part of any such agreement.

    Anyway, thanks for the great review Ryan.
  • Yojimbo - Friday, September 19, 2014 - link

    Well, if one goes by comments from Jen-Hsun Huang (Nvidia's CEO) a year or two ago, Nvidia would have liked Intel to manufacture their SoCs for them, but it seems Intel was unwilling. I don't see why Nvidia would be willing to have Intel manufacture SoCs but not GPUs, given that at that time they must have already planned to put their desktop GPU technology into their SoCs, unless the one-year delay between the parts makes a difference.
  • r13j13r13 - Friday, September 19, 2014 - link

    Not until AMD's 300 series comes out with native DirectX 12 support.
  • Arakageeta - Friday, September 19, 2014 - link

    No interpretation of the compute graphs whatsoever? Could you at least report the output of CUDA's deviceQuery tool?
  • texasti89 - Friday, September 19, 2014 - link

    I'm truly impressed with this new line of GPUs. To be able to achieve this leap in efficiency using the same transistor feature size is a great incremental achievement. Bravo TSMC & Nvidia. I feel comfortable thinking that we will soon get this amazing 980 performance level in gaming laptops once we scale the technology to the 10nm process. Keep up the great work.
  • stateofstatic - Friday, September 19, 2014 - link

    Spoiler alert: Intel is building a new fab in Hillsboro, OR specifically for this purpose...
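
On the power density question raised in the comments above, the quick back-of-the-envelope sketch below illustrates why a naive die shrink concentrates heat. It uses the GTX 980's 165W rated board power and an assumed die area of roughly 400 mm², and it deliberately ignores the per-transistor power savings a real process shrink would bring, so the numbers are illustrative only.

```python
# Back-of-the-envelope power density for a hypothetical shrink.
# Assumptions: ~400 mm^2 die at 28nm (approximate), 165W board power held
# constant, and ideal 4x area scaling for a two-node shrink to 14nm. A real
# shrink would also cut power per transistor, so this overstates the effect.
board_power_w = 165.0
area_28nm_mm2 = 400.0
area_14nm_mm2 = area_28nm_mm2 / 4

for label, area_mm2 in (("28nm", area_28nm_mm2), ("14nm, naive shrink", area_14nm_mm2)):
    print(f"{label}: {board_power_w / area_mm2:.2f} W/mm^2")
```

In other words, if power did not come down along with area, the same 165W would be concentrated into roughly a quarter of the silicon, which is the heat density concern discussed above. In practice, voltage and capacitance reductions claw much of that back, and as one commenter notes, the size of the external cooler matters more for total heat removal than the die itself.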
