Meet The 2013 GPU Benchmark Suite & The Test

Having taken a look at the compute side of Titan, let’s finally dive into what most of you have probably been waiting for: our gaming benchmarks.

As this is the first major launch of 2013, it's also the first time we'll be using our new 2013 GPU benchmark suite. This suite should be considered a work in progress, as it's not yet complete. With several high-profile games due in the next four weeks (and no other product launches expected), we expect to expand the suite to integrate those games as they arrive. In the meantime we have composed a slightly smaller suite of 8 games that will serve as our base.

AnandTech GPU Bench 2013 Game List

Game                 Genre
DiRT: Showdown       Racing
Total War: Shogun 2  Strategy
Hitman: Absolution   Action
Sleeping Dogs        Action/Open World
Crysis: Warhead      FPS
Far Cry 3            FPS
Battlefield 3        FPS
Civilization V       Strategy

Returning to the suite are Total War: Shogun 2, Civilization V, Battlefield 3, and of course Crysis: Warhead. With no performance-demanding AAA strategy games released in the last year, we're effectively in a holding pattern for new strategy benchmarks, hence we're bringing Shogun and Civilization forward. Even two years after its release, Shogun 2 can still put an incredible load on a system at its highest settings, and Civilization V remains one of the more advanced games in our suite thanks to its use of driver command lists for rendering. With Company of Heroes 2 due in the near future we may finally get a new strategy game worth benchmarking, while Total War will return with Rome 2 towards the end of this year.
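For readers unfamiliar with the term, "driver command lists" refers to D3D11's mechanism for recording rendering work on worker threads and replaying it on the render thread. The C++ sketch below is our own illustration of the general pattern rather than anything from Civilization V's actual code: check whether the driver implements command lists natively, record on a deferred context, then execute on the immediate context.

```cpp
// Minimal sketch of D3D11 deferred contexts + driver command lists,
// the mechanism Civilization V leans on for multithreaded rendering.
// Illustrative only -- not Firaxis' code. Error handling trimmed.
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

int main()
{
    ID3D11Device*        device    = nullptr;
    ID3D11DeviceContext* immediate = nullptr;
    D3D_FEATURE_LEVEL    level;

    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION,
                      &device, &level, &immediate);

    // Ask whether the driver supports command lists natively. Where it
    // doesn't, D3D falls back to software emulation and much of the
    // multithreading benefit is lost.
    D3D11_FEATURE_DATA_THREADING caps = {};
    device->CheckFeatureSupport(D3D11_FEATURE_THREADING, &caps, sizeof(caps));

    if (caps.DriverCommandLists)
    {
        // A worker thread records rendering commands on a deferred context...
        ID3D11DeviceContext* deferred = nullptr;
        device->CreateDeferredContext(0, &deferred);

        // (record state changes and draw calls on 'deferred' here)

        ID3D11CommandList* cmdList = nullptr;
        deferred->FinishCommandList(FALSE, &cmdList);

        // ...and the render thread replays them on the immediate context.
        immediate->ExecuteCommandList(cmdList, TRUE);

        cmdList->Release();
        deferred->Release();
    }

    immediate->Release();
    device->Release();
    return 0;
}
```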

Meanwhile Battlefield 3 is still among the most popular multiplayer FPSes, and though newer video cards have lightened its system-killer status, it still takes a lot of horsepower to play. Furthermore the engine behind it, Frostbite 2, is used in a few other action games, and will be used for Battlefield 4 at the end of this year. Finally we have the venerable Crysis: Warhead, our legacy entry. As the only DX10 title in the current lineup it’s good for tracking performance against our oldest video cards, plus it’s still such a demanding game that only the latest video cards can play it at high framerates and resolutions with MSAA.

As for the new games in our suite, we have added DiRT: Showdown, Hitman: Absolution, Sleeping Dogs, and Far Cry 3. DiRT: Showdown is the annual refresh of Codemasters' DiRT racing franchise, built on their continually evolving racing engine. Meanwhile Hitman: Absolution is last year's highly regarded third-person action game, and notably for this day and age it features a built-in benchmark, albeit a somewhat CPU-intensive one. As for Sleeping Dogs, it's a rare treat: a benchmarkable open world game, giving us an uncommon chance to measure a genre that almost never ships with a built-in benchmark. And finally we have Far Cry 3, the latest rendition of the Far Cry franchise. A popular game in its own right, it has a jungle environment that can be particularly punishing.

These will be joined throughout the year by additional titles as we find games that meet our needs and standards, and for which we can create meaningful benchmarks and validate performance. As in 2012, we're looking at having roughly 10 game benchmarks at any given time.

Meanwhile, from a settings and resolution standpoint we have finally (and I might add, begrudgingly) moved from 16:10 resolutions to 16:9 resolutions in most cases, to better match the popularity of 1080p monitors and the recent wave of 1440p IPS monitors. Our primary resolutions are now 2560x1440, 1920x1080, and 1600x900, with an emphasis on 1920x1080 at lower settings before dropping to lower resolutions, given the increasing marginalization of sub-1080p monitors. The one exception is our triple-monitor resolution, which stays at 5760x1200. This is purely for technical reasons, as NVIDIA's drivers do not consistently offer us 5760x1080 on the 1920x1200 panels we use for testing.

As for the testbed itself, we've changed very little. It remains our trusty 4.3GHz SNB-E, backed with 16GB of RAM and running off a 256GB Samsung 470 SSD. The one change we have made is that, having validated our platform as able to handle PCIe 3.0 just fine, we are forcibly enabling PCIe 3.0 on NVIDIA cards where it would otherwise be disabled. NVIDIA disables PCIe 3.0 by default on SNB-E systems due to inconsistencies in the platform, but as our goal is to remove every non-GPU bottleneck, we have little reason to leave it disabled, especially since most buyers will be on Ivy Bridge platforms where PCIe 3.0 is fully supported.
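For those who want to confirm on their own system that a forced link actually trained at Gen3, NVIDIA's NVML library (which ships with the driver) can report the negotiated PCIe link generation. Below is a minimal C++ sketch of our own; it's an illustration of the query, not part of our test methodology, and assumes the NVML headers and library are installed.

```cpp
// Minimal sketch: query a GPU's negotiated vs. maximum PCIe link
// generation via NVML, to confirm a forced PCIe 3.0 link took effect.
// Illustrative only; assumes the NVML SDK is available.
#include <cstdio>
#include <nvml.h>

int main()
{
    if (nvmlInit() != NVML_SUCCESS)
        return 1;

    nvmlDevice_t gpu;
    if (nvmlDeviceGetHandleByIndex(0, &gpu) == NVML_SUCCESS)
    {
        unsigned int curGen = 0, maxGen = 0;
        nvmlDeviceGetCurrPcieLinkGeneration(gpu, &curGen);
        nvmlDeviceGetMaxPcieLinkGeneration(gpu, &maxGen);

        // On an SNB-E board with Gen3 forced on, both should read 3.
        printf("PCIe link: running at Gen%u (GPU supports up to Gen%u)\n",
               curGen, maxGen);
    }

    nvmlShutdown();
    return 0;
}
```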

Finally, we’ve also used this opportunity to refresh a couple of our cards in our test suite. AMD’s original press sample for the 7970 GHz Edition was a reference 7970 with the 7970GE BIOS, a configuration that was more-or-less suitable for the 7970GE, but not one AMD’s partners followed. Since all of AMD’s partners are using open air cooling, we’ve replaced our AMD sample with HIS’s 7970 IceQ X2 GHz Edition, a fairly typical representation of the type of dual-fan coolers that are common on 7970GE cards. Our 7970GE temp/noise results should now be much closer to what retail cards will do, though performance is unchanged.

Unfortunately we’ve had to deviate from that almost immediately for CrossFire testing. Our second HIS card was defective, so due to time constraints we’re using our original AMD 7970GE as our second card for CF testing. This has no impact on performance, but it means that we cannot fairly measure temp or noise. We will update Bench with those results once we get a replacement card and run the necessary tests.

Finally, we also have a Powercolor Devil13 7990 as our 7990 sample. The Devil13 was a limited run part and has been replaced by the plain 7990, the difference between them being a 25MHz advantage for the Devil13. As such we’ve downclocked our Devil13 to match the basic 7990’s specs. The performance and power results should perfectly match a proper retail 7990.

CPU: Intel Core i7-3960X @ 4.3GHz
Motherboard: EVGA X79 SLI
Power Supply: Antec True Power Quattro 1200
Hard Disk: Samsung 470 (256GB)
Memory: G.Skill Ripjaws DDR3-1867 4 x 4GB (8-10-9-26)
Case: Thermaltake Spedo Advance
Monitor: Samsung 305T
Video Cards:

AMD Radeon HD 7970
AMD Radeon HD 7970 GHz Edition
PowerColor Radeon HD 7990 Devil13
NVIDIA GeForce GTX 580
NVIDIA GeForce GTX 680
NVIDIA GeForce GTX 690
NVIDIA GeForce GTX Titan

Video Drivers: NVIDIA ForceWare 314.07
NVIDIA ForceWare 314.09 (Titan)
AMD Catalyst 13.2 Beta 6
OS: Windows 8 Pro

 

Comments

  • PEJUman - Thursday, February 21, 2013 - link

    Made me wonder:
    7970 - 4.3B trans. - $500 - OK compute, 100% gaming perf.
    680 - 3.5B trans. - $500 - sucky compute, 100% gaming perf.
    Titan - 7.1B trans. - $1000 - OK compute, ~140% gaming perf.

    1. Does compute capability really take that many more transistors to build? As in, 2x the transistors only yields a ~140% improvement in gaming.
    I think this was a conscious decision by nVidia to focus on compute and the required profit margin to sustain R&D.

    2. Despite the die size shrink, I'm guessing it gets harder to find functional silicon as the process shrinks, i.e. finding 100mm^2 of functional silicon at 40nm is easier than at 28nm, since more transistors are packed into the same area. Which I think is why they designed 15 SMXs.
    Thus it'd be more expensive for nVidia to build the same area at 28nm vs. 40nm, at least until the process matures, but at 7B transistors I doubt that will ever be attainable.

    3. The AMD statement on no updates to the 7970 essentially sealed the $1000 price for Titan. I would bet that if AMD had announced an 8970, Titan would be priced at $700 today, with 3GB of memory.
  • JarredWalton - Thursday, February 21, 2013 - link

    Luxury GPU is no more silly than Extreme CPUs that cost $1000 each. And yet, Intel continues to sell those, and what's more the performance offered by Titan is a far better deal than the performance offered by a $1000 CPU vs. a $500 CPU. Then there's the Tesla argument: it's a $3500 card for the K20 and this is less than a third that price, with the only drawbacks being no ECC and no scalability beyond three cards. For the Quadro crowd, this might be a bargain at $1000 (though I suspect Titan won't get the enhanced Quadro drivers, so it's mostly a compute Tesla alternative).
  • chizow - Friday, February 22, 2013 - link

    The problem with this analogy, which I'm sure was floated around Nvidia's marketing board room in formulating the plan for Titan, is that Intel offers viable alternative SKUs based on the same ASIC. Sure there are the few who will buy the Intel EE CPU (3970X) for $1K, but the overwhelming majority in that high-end market would rather opt for the $500 option (3930K) or the $300 option (3820).

    Extend this to the GPU market and you see Nvidia clearly withheld GK100/GK110 as the flagship part for over a year, and instead of offering a viable SKU for traditional high-end market segments based on this ASIC, they created a NEW ultra-premium market. That's the ONLY reason Titan looks better compared to GK104 than Intel's $1K and $500 options, because Nvidia's offerings are truly different classes while Intel's differences are minor binning and multiplier locked parts with a bigger black box.
  • mlambert890 - Saturday, February 23, 2013 - link

    The analogy is fine, you're just choosing to not see it.

    Everything you said about Intel EE vs standard directly applies here.

    You are assuming that the Intel EE parts are nothing more than a marketing ploy, which is wrong, while at the same time assuming that the Titan is orders of magnitude beyond the 680 which is also wrong.

    You're seeing it from the point of view of someone who buys the cheapest Intel CPU, overclocks it to the point of melting, and then feels they have a solution "just as good if not better" than the Intel EE.

    Because the Titan has unlocked stream procs that the 680 lacks, and there is no way to "overclock" your way around missing SPs, you feel that NVidia has committed some great sin.

    The reality is that the EE procs give out of box performance that is superior to out of box performance of the lesser SKUs by a small, but appreciable, margin. In addition, they are unlocked, and come from a better bin, which means they will overclock *even better* than the lesser SKUs. Budget buyers never want to admit this, but it is reality in most cases. Yes you can get a "lucky part" from the lesser SKU that achieves a 100% overclock, but this is an anomaly. Most who criticize the EE SKUs have never even come close to owning one.

    Similarly, the Titan offers a small, but appreciable, margin of performance over the 680. It allows you to wait longer before going SLI. The only difference is you don't get the "roll of the dice" shot at a 680 that *might* be able to appear to match a Titan, since the SPs aren't there.

    The analogy is fine, it's just that biased perspective prevents some from seeing it.
  • chizow - Saturday, February 23, 2013 - link

    Well you obviously have trouble comprehending analogies if you think 3.6B difference in transistors and ~40% difference in performance is analogous to 3MB L3 cache, an unlocked multiplier and 5% difference in performance.

    But I guess that's the only way you could draw such an asinine parallel as this:

    "Similarly, the Titan offers a small, but appreciable, margin of performance over the 680."

    It's the only way your ridiculous analogy to Intel's EE could possibly hold true, when in reality, it couldn't be further from the truth. Titan holds a huge advantage over GTX 680, but that's expected; it's a completely different class of GPU, whereas the 3930K and 3960X are cut from the exact same wafer.
  • CeriseCogburn - Sunday, February 24, 2013 - link

    There was no manufacturing capacity you IDIOT LIAR.
    The 680 came out 6 months late, and amd BARELY had 79xx's on the shelves till a day before that.

    Articles were everywhere pointing out nVidia did not have reserve die space as the crunch was extreme, and the ONLY factory was in the process of doing a multi-billion dollar build out to try to keep up with bare minimum demand.

    Now we've got a giant GPU core with perhaps 100 attempted dies per wafer, with a not high yield, YET YOU'RE A LIAR NONETHELESS.
  • chizow - Sunday, February 24, 2013 - link

    It has nothing to do with manufacturing capacity, it had everything to do with 7970's lackluster performance and high price tag.

    GTX 680 was only late (by 3, not 6 months) because Nvidia was too busy re-formulating their high-end strategy after seeing the 7970 outperform the GTX 580 by only 15-20% while asking a 10% higher price. Horrible price:performance for a new-generation GPU on a new process node.

    This gave Nvidia the opportunity to:

    1) Position mid-range ASIC GK104 as flagship GTX 680 and still beat the 7970.
    2) Push back and most importantly, re-spin GK100 and refine it to be GK110.
    3) Screw their long-time customers and AMD/AMD fans in the process.
    4) Profit.

    So instead of launching and mass-producing their flagship ASIC first (GK100) as they've done in every single previous generation and product launch, they shifted their production allocation at TSMC to their mid-range ASIC, GK104 instead.

    Once GK110 was ready, they've had no problem churning them out; even the mfg dates of these TITAN chips prove this point, as week 31 chips fall somewhere in the July-August time frame. They were able to deliver some 19,000 K20X units to ORNL for the real TITAN in October 2012. Coupled with the fact that they're using ASICs with the same number of functional units for GTX Titanic, it goes to show yields are pretty good.

    But the real conclusion to be drawn from this is that other SKUs based on GK110 are coming. There's no way GK110 wafer yields are anywhere close to 100% for 15-SMX ASICs. I fully expect a reduced-SMX part, maybe 13 SMXs with 2304 SPs as originally rumored, to show its face as the GTX 780, with a bunch of GK114 refreshes behind it to fill out the line-up.

    The sooner people stop overpaying for TITAN, the sooner we'll see the GTX 700 series, imo, but with no new AMD GPUs on the horizon we may be waiting awhile.
  • CeriseCogburn - Sunday, February 24, 2013 - link

    Chizow I didn't read your stupid long post except for your stupid 1st line.

    you're a brainwashed lying sack of idiocy, so maybe i'll waste my time reading your idiotic lies, and maybe not, since your first line is the big fat frikkin LIE you HAVE TO BELIEVE that you made up in your frikkin head, in order to take your absolutely FALSE STANCE for the past frikkin nearly year now.
  • chizow - Monday, February 25, 2013 - link

    You should read it, you might learn something.

    Until then stfd, stfu, and gfy.
  • CeriseCogburn - Sunday, February 24, 2013 - link

    Dear Jeff, a GPU that costs $400 dollars is a luxury GPU.

    I'm not certain you disagree with that, I'd just like to point out the brainless idiots pretending $1000 for a GPU is luxury and $250 is not are clueless.
