DiRT: Showdown

Racing to the front of our 2013 list is our racing benchmark, DiRT: Showdown. DiRT: Showdown is based on the latest iteration of Codemasters’ EGO engine, which has continually evolved over the years to add more advanced rendering features. It was one of the first games to implement tessellation, and also one of the first to implement a DirectCompute-based, forward-rendering-compatible lighting system. And as Codemasters is by far the most prolific PC racing developer, it’s also a good proxy for their other racing games on the market, such as F1 and GRID.

DiRT: Showdown is something of a divisive game for benchmarking. The game’s advanced lighting system, while not developed by AMD, does implement many of the key concepts AMD popularized with their Leo forward lighting tech demo. As a result, performance with that lighting system turned on has been known to greatly favor AMD cards. Since we’re looking at high-end cards, there’s little reason not to test with it enabled, as even a relatively slow card can keep up. Nevertheless, this is also why we test DiRT with advanced lighting both on and off, starting at 1920x1080 Ultra.

The end result is perhaps unsurprising: NVIDIA starts at a large deficit, with the GTX 680 trailing AMD’s Radeon cards. Titan closes the gap and is enough to surpass the 7970GE at every resolution except 5760x1080, but just barely. This is the only game in our suite that behaves like this, so I don’t put a ton of stock into these results on a global level, but I thought it would make for an interesting look nonetheless.

This also settles some speculation about whether DiRT and its compute-heavy lighting system would benefit from the compute performance improvements Titan brings to the table. The answer is yes, but only by roughly as much as the increase in theoretical compute performance over the GTX 680. We’re not seeing any kind of performance increase that could be attributed to improved compute efficiency, which is why Titan can only just beat the 7970GE at 2560. However, the jury is still out on whether this means that DiRT’s lighting algorithm doesn’t map well to Kepler, period, or whether it’s an implementation issue. We also saw some unexpectedly weak DirectCompute performance out of Titan in our SystemCompute benchmark, so this may be further evidence that DirectCompute isn’t currently taking full advantage of everything Titan offers.
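To put "roughly as much as the increase in theoretical compute performance" in concrete terms, here is a quick back-of-the-envelope calculation using the cards' published CUDA core counts and base clocks (boost behavior will vary in practice, so treat this as an approximation):

```python
# Theoretical single-precision throughput: cores * clock * 2 FLOPs per cycle (FMA)
def tflops(cuda_cores, clock_mhz):
    return cuda_cores * clock_mhz * 2 / 1e6  # MHz -> TFLOPS

gtx_titan = tflops(2688, 837)   # GTX Titan at its 837MHz base clock
gtx_680   = tflops(1536, 1006)  # GTX 680 at its 1006MHz base clock

print(f"Titan:   {gtx_titan:.2f} TFLOPS")
print(f"GTX 680: {gtx_680:.2f} TFLOPS")
print(f"Ratio:   {gtx_titan / gtx_680:.2f}x")
```

The ratio works out to roughly 1.46x, which lines up with the ~47% lead Titan shows over the GTX 680 at 2560 — a scaling consistent with raw throughput rather than any per-FLOP efficiency gain.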

In any case, at 2560 Titan is roughly 47% faster than the GTX 680 but only 3% faster than the 7970GE. That’s enough to get Titan above the 60fps mark here, but at 5760 no single GPU, not even GK110, can deliver 60fps. The equivalent AMD dual-GPU products, the 7970GE in Crossfire and the 7990, have no such trouble. Dual-GPU cards will consistently win, but generally not by margins like this.


  • CeriseCogburn - Tuesday, February 26, 2013 - link

    Yes, and this is the core situation the radical chizow and others like him have chosen to completely ignore.

    Ivy Bridge is 22nm, and only 14nm now appears to be possible, as channel widths approach roughly 30 atoms and electromigration/leakage reaches a critical stage.

    So the slowdown has already occurred, Moore's law is broken (deceleration has been occurring for a long time), and the reality is nearly upon us with the "largest possible" die at Titan's node.

    The number of atoms across the "electric wire channel" and the insulator side widths is down to countable on fingers and toes, and it appears there's nearly no place left to go.
    That's why we keep hearing about quantum computing dreams, and why each shrinkage step has been less beneficial as we approach this wall.

    So, expect the crybabies to be taking up a few notches more into an ever higher pitch the next couple of releases. It's coming, or rather it's here.
  • vanwazltoff - Friday, February 22, 2013 - link

    The 690, 680 and 7970 have had almost a year to brew and improve with driver updates. I suspect that after a few drivers and an overclock, Titan will creep up on a 690, and it will probably see a price reduction after a few months. Don't clock out yet; just think what this could mean for 700 and 800 series cards. It's obvious NVIDIA can deliver.
  • initialised - Friday, February 22, 2013 - link

    When are you guys going to start posting 4K performance for high end graphics?
  • iceman-sven - Friday, February 22, 2013 - link

    I am also wondering. Anandtech need to buy the Sharp PN-K321 fast. I will upgrade from my 2560x1600 to 4k in the next 12 months.

    I hope Anandtech does a rerun of some benchmarks with 4k and Titan SLI configurations. I am planning to buy 2 Titan for this.
  • Ryan Smith - Monday, February 25, 2013 - link

    When someone releases a suitable desktop monitor and we can acquire it on a long-term basis. Something like Sharp's 32-incher is the right resolution, but it really pushes the boundary for what can be called a "desktop" monitor.
  • ElminsterTCOM - Friday, February 22, 2013 - link

    I was wondering if you could pop this card into a Mac Pro and let us know if it is compatible? This would be a great card for 3D modeling!
  • Saxie81 - Friday, February 22, 2013 - link

    I'm wondering why the other websites that give reviews, benchmarks etc, have missed the mark with this card. Everywhere I look, they are posting nothing but game benchmarks, this is why I keep coming to Anandtech. This clearly is meant for more than that. I'm looking @ it for gaming and 3d rendering. I would have loved to have seen maybe Rendering times on a simple image in Blender etc, but the compute benchmarks gives a pretty good indication of what the Titan is capable of. Great article as always, Ryan, and welcome Rahul!
  • Zoeff - Friday, February 22, 2013 - link

    Looking at the Crysis 1080p highest settings benchmark — I'm guessing the charts are the wrong way around? :)
  • Ryan Smith - Monday, February 25, 2013 - link

    D'oh. Fixed.
  • realneil - Friday, February 22, 2013 - link

    Seems like whenever `anyone` releases the ~newest/best~ video card, they always price it at a grand. So this isn't surprising to me at all. How much were the Matrix cards from ASUS when they were new?

    I just can't see spending that much for it though. A pair of 680s or 7970s would get the job done for me.
