Titan’s Compute Performance, Cont

With Rahul having covered the basics of Titan’s strong compute performance, let’s shift gears a bit and take a look at real-world usage.

On top of Rahul’s work with Titan, as part of our 2013 GPU benchmark suite we put together a larger number of compute benchmarks to try to cover real world usage, including the old standards of gaming usage (Civilization V) and ray tracing (LuxMark), along with several new tests. Unfortunately that got cut short when we discovered that OpenCL support is currently broken in the press drivers, which prevents us from using several of our tests. We still have our CUDA and DirectCompute benchmarks to look at, but a full look at Titan’s compute performance on our 2013 GPU benchmark suite will have to wait for another day.

For their part, NVIDIA of course already has OpenCL working on GK110 with Tesla. The issue is that somewhere in the process of bringing up GK110 for Titan – integrating it into NVIDIA’s mainline GeForce drivers, specifically the new R314 branch – OpenCL support was broken. We expect this will be fixed in short order, but it’s not something NVIDIA caught ahead of Titan’s press launch, and it’s not something they could fix in time for today’s article.
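We don’t know exactly where in the stack OpenCL falls over on the press drivers, but the first thing to check when OpenCL misbehaves is whether the runtime even exposes the GPU. Below is a minimal, generic sketch of that check using the standard OpenCL host API – ordinary host code, nothing NVIDIA-specific, and not taken from any of our benchmarks:

    // Minimal OpenCL sanity check: list each platform and the GPU devices it exposes.
    // Build example (Linux): gcc check_cl.c -lOpenCL -o check_cl
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void) {
        cl_uint num_platforms = 0;
        if (clGetPlatformIDs(0, NULL, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
            printf("No OpenCL platforms found - the ICD/runtime isn't exposed at all.\n");
            return 1;
        }

        cl_platform_id platforms[16];
        clGetPlatformIDs(num_platforms < 16 ? num_platforms : 16, platforms, NULL);

        for (cl_uint i = 0; i < num_platforms && i < 16; ++i) {
            char name[256] = {0};
            clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);

            cl_uint num_gpus = 0;
            cl_int err = clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_GPU, 0, NULL, &num_gpus);
            printf("Platform %u: %s - %u GPU device(s)\n",
                   i, name, (err == CL_SUCCESS) ? num_gpus : 0);
        }
        return 0;
    }

A check like this is only a starting point, but it’s how one would begin isolating where OpenCL support falls over before blaming any individual application.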

Unfortunately this means that comparisons with Tahiti will be few and far between for now. Most significant cross-platform compute programs are OpenCL-based rather than DirectCompute-based, so short of games and a couple of other cases such as Ian’s C++ AMP benchmark, we don’t have too many cross-platform benchmarks to look at. With that out of the way, let’s dive into our condensed collection of compute benchmarks.

We’ll once more start with our DirectCompute game example, Civilization V, which uses DirectCompute to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of its texture decompression algorithm by repeatedly decompressing the textures required for one of the game’s leader scenes. While DirectCompute is used in many games, this is one of the few games with a benchmark that can isolate the use of DirectCompute and its resulting performance.
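Civ V’s own shaders are HLSL/DirectCompute and aren’t public, but the underlying work is straightforward block decompression. As a rough, hypothetical illustration of what decompressing a texture on the GPU looks like – written in CUDA here rather than the game’s HLSL, and making simplifying assumptions about the format – consider decoding one 4x4 BC1/DXT1 block per thread:

    // Hypothetical sketch: one thread decodes one 4x4 BC1/DXT1 block into RGBA8 texels.
    // This mirrors the kind of work a DirectCompute decompression shader performs;
    // it is NOT Civilization V's actual shader.
    #include <cstdint>

    struct __align__(8) BC1Block {
        uint16_t color0;   // RGB565 endpoint 0
        uint16_t color1;   // RGB565 endpoint 1
        uint32_t indices;  // 2 bits per texel, 16 texels, row-major
    };

    __device__ uint32_t expand565(uint16_t c) {
        // Expand RGB565 to 8 bits per channel, packed as R | G<<8 | B<<16 | A<<24.
        uint32_t r = (c >> 11) & 0x1F, g = (c >> 5) & 0x3F, b = c & 0x1F;
        r = (r << 3) | (r >> 2);
        g = (g << 2) | (g >> 4);
        b = (b << 3) | (b >> 2);
        return r | (g << 8) | (b << 16) | 0xFF000000u;
    }

    __device__ uint32_t lerpColor(uint32_t a, uint32_t b, int num, int den) {
        // Per-channel interpolation: result = a*(den-num)/den + b*num/den.
        uint32_t out = 0;
        for (int shift = 0; shift < 32; shift += 8) {
            uint32_t ca = (a >> shift) & 0xFF, cb = (b >> shift) & 0xFF;
            out |= (((ca * (den - num) + cb * num) / den) & 0xFF) << shift;
        }
        return out;
    }

    __global__ void decodeBC1(const BC1Block* blocks, uint32_t* rgbaOut,
                              int blocksWide, int numBlocks) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= numBlocks) return;

        BC1Block blk = blocks[i];
        uint32_t c0 = expand565(blk.color0), c1 = expand565(blk.color1);
        uint32_t palette[4];
        palette[0] = c0;
        palette[1] = c1;
        if (blk.color0 > blk.color1) {            // four-color mode
            palette[2] = lerpColor(c0, c1, 1, 3);
            palette[3] = lerpColor(c0, c1, 2, 3);
        } else {                                  // three-color + transparent mode
            palette[2] = lerpColor(c0, c1, 1, 2);
            palette[3] = 0;
        }

        // Write the 16 texels of this block into the destination image.
        int bx = (i % blocksWide) * 4, by = (i / blocksWide) * 4;
        int imageWidth = blocksWide * 4;
        for (int t = 0; t < 16; ++t) {
            uint32_t idx = (blk.indices >> (2 * t)) & 0x3;
            rgbaOut[(by + t / 4) * imageWidth + (bx + t % 4)] = palette[idx];
        }
    }

Each block amounts to only a handful of integer operations, which is part of why high-end cards get through this work quickly enough to run into the CPU, as the results below show.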

Note that for 2013 we have changed the benchmark a bit, moving from using a single leader to using all of the leaders. As a result the reported numbers are higher, but they also won’t be comparable with the results from this benchmark in our 2012 articles.

With Civilization V having launched in 2010, graphics cards have become significantly more powerful since then, far outpacing growth in the CPUs that feed them. As a result we’ve rather quickly drifted from being GPU bottlenecked to being CPU bottlenecked, as we see both in our Civ V game benchmarks and our DirectCompute benchmarks. For high-end GPUs the performance difference is rather minor; the gap between GTX 680 and Titan for example is 45fps, or just less than 10%. Still, it’s at least enough to get Titan past the 7970GE in this case.

Our second test is one of our new tests, utilizing Elcomsoft’s Advanced Office Password Recovery utility to take a look at GPU password generation. AOPR has separate CUDA and OpenCL kernels for NVIDIA and AMD cards respectively, which means it doesn’t follow the same code path on every GPU, but it does use an optimized path for each GPU family it supports. Unfortunately we’re having trouble getting it to recognize AMD 7900 series cards in this build, so we only have CUDA cards for the time being.

Password generation and other forms of brute-force crypto are an area where the GTX 680 is particularly weak, thanks to the various compute aspects that have been stripped out in the name of efficiency. As a result it ends up below even the GTX 580 in these benchmarks, never mind AMD’s GCN cards. But with Titan/GK110 offering NVIDIA’s full compute performance, it rips through this task. In fact it more than doubles the performance of both the GTX 680 and the GTX 580, indicating that the huge gains we’re seeing come not just from the additional functional units, but from architectural optimizations and new instructions that improve overall efficiency and reduce the number of cycles needed to complete the work on a password.
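Elcomsoft doesn’t publish its kernels, but the general shape of GPU password attacks is simple enough to sketch: each thread turns its global index into a candidate password over a fixed character set, runs the document format’s key-derivation function over it, and compares the result against the target. The following is a hypothetical CUDA sketch of just that structure – the hash is a stand-in and every name here is made up for illustration; this is not Elcomsoft’s code:

    // Hypothetical sketch of GPU brute forcing (not Elcomsoft's code).
    // Each thread maps its global index to a candidate password over CHARSET,
    // "hashes" it (stand-in hash below), and compares against the target digest.
    #include <cstdint>

    __constant__ char CHARSET[] = "abcdefghijklmnopqrstuvwxyz0123456789";
    constexpr int CHARSET_LEN = 36;
    constexpr int PASS_LEN = 6;

    __global__ void bruteForce(uint64_t batchStart, uint64_t batchSize,
                               uint64_t targetDigest, unsigned long long* foundIndex) {
        uint64_t i = batchStart + blockIdx.x * (uint64_t)blockDim.x + threadIdx.x;
        if (i - batchStart >= batchSize) return;

        // Decode the index into a candidate password (base-CHARSET_LEN digits).
        char candidate[PASS_LEN];
        uint64_t n = i;
        for (int p = 0; p < PASS_LEN; ++p) {
            candidate[p] = CHARSET[n % CHARSET_LEN];
            n /= CHARSET_LEN;
        }

        // Stand-in "hash" (FNV-1a). A real attack runs the file format's actual KDF
        // here; that per-candidate work is where extra compute resources pay off.
        uint64_t h = 1469598103934665603ull;
        for (int p = 0; p < PASS_LEN; ++p)
            h = (h ^ (uint8_t)candidate[p]) * 1099511628211ull;

        if (h == targetDigest) *foundIndex = i;   // report the matching index
    }

The important point for our purposes is that every candidate is completely independent, making this an embarrassingly parallel, throughput-bound workload – exactly the kind of task where a GPU’s compute capabilities (or lack thereof) show up directly.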

Altogether, at 33K passwords/second Titan is not just faster than the GTX 680, it’s faster than the GTX 690 and GTX 680 SLI, making this a test where one big GPU (and its full compute performance) is better than two smaller GPUs. It will be interesting to see where the 7970 GHz Edition and other Tahiti cards place in this test once we can get them up and running.

Our final test in our abbreviated compute benchmark suite is our very own Dr. Ian Cutress’s SystemCompute benchmark, which is a collection of several different fundamental compute algorithms. Rahul went into greater detail on this back in his look at Titan’s compute performance, but I wanted to go over it again quickly with the full lineup of cards we’ve tested.
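SystemCompute itself is DirectCompute-based and we won’t reproduce it here, but “fundamental compute algorithms” typically means primitives along the lines of reductions, scans, and simple matrix or particle kernels. As a purely generic illustration (in CUDA, and not SystemCompute’s actual code), a shared-memory sum reduction is about as fundamental as these building blocks get:

    // Generic example of a fundamental GPU compute primitive: block-level sum
    // reduction in shared memory. Illustration only; not SystemCompute's code.
    __global__ void blockSum(const float* in, float* blockSums, int n) {
        extern __shared__ float sdata[];
        int tid = threadIdx.x;
        int i = blockIdx.x * blockDim.x * 2 + tid;

        // Each thread loads and pre-sums two elements to keep the memory system busy.
        float v = 0.0f;
        if (i < n)              v += in[i];
        if (i + blockDim.x < n) v += in[i + blockDim.x];
        sdata[tid] = v;
        __syncthreads();

        // Tree reduction in shared memory, halving the active threads each step.
        for (int s = blockDim.x / 2; s > 0; s >>= 1) {
            if (tid < s) sdata[tid] += sdata[tid + s];
            __syncthreads();
        }
        if (tid == 0) blockSums[blockIdx.x] = sdata[0];
    }
    // Launch sketch: blockSum<<<numBlocks, 256, 256 * sizeof(float)>>>(d_in, d_partials, n);
    // followed by a second pass (or a CPU sum) over the per-block partial sums.

Kernels like this stress shared memory, synchronization, and occupancy at least as much as raw FLOPS, which is one reason results on tests of this nature don’t always line up with theoretical throughput numbers.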

Surprisingly, for all of its performance gains relative to GTX 680, Titan still falls notably behind the 7970GE here. Given Titan’s theoretical performance and the fundamental nature of this test we would have expected it to do better. But without additional cross-platform tests it’s hard to say whether this is something where AMD’s GCN architecture continues to shine over Kepler, or if perhaps it’s a weakness in NVIDIA’s current DirectCompute implementation for GK110. Time will tell on this one, but in the meantime this is the first solid sign that Tahiti may be more of a match for GK110 than it’s typically given credit for.

Comments

  • chizow - Friday, February 22, 2013 - link

    Idiot...has the top end card cost 2x as much every time? Of course not!!! Or we'd be paying $100K for GPUs!!!
  • CeriseCogburn - Saturday, February 23, 2013 - link

    Stop being an IDIOT.

    What is the cost of the 7970 now, vs what I paid for it at release, you insane gasbag ?
    You seem to have a brainfart embedded in your cranium, maybe you should go propose to Charlie D.
  • chizow - Saturday, February 23, 2013 - link

    It's even cheaper than it was at launch, $380 vs. $550, which is the natural progression....parts at a certain performance level get CHEAPER as new parts are introduced to the market. That's called progress. Otherwise there would be NO INCENTIVE to *upgrade* (look this word up please, it has meaning).

    You will not pay the same money for the same performance unless the part breaks down, and semiconductors under normal usage have proven to be extremely durable components. People expect progress, *more* performance at the same price points. People will not pay increasing prices for things that are not essential to life (like gas, food, shelter), this is called the price inelasticity of demand.

    This is a basic lesson in business, marketing, and economics applied to the semiconductor/electronics industry. You obviously have no formal training in any of the above disciplines, so please stop commenting like a ranting and raving idiot about concepts you clearly do not understand.
  • CeriseCogburn - Saturday, February 23, 2013 - link

    They're ALREADY SOLD OUT STUPID IDIOT THEORIST.

    LOL

    The true loser, an idiot fool, wrong before he's done typing, the "education" is his brainwashed fried gourd Charlie D OWNZ.
  • chizow - Sunday, February 24, 2013 - link

    And? There's going to be some demand for this card just as there was demand for the 690, it's just going to be much lower based on the price tag than previous high-end cards. I never claimed anything otherwise.

    I outlined the expectations, economics, and buying decisions in general for the tech industry and in general, they hold true. Just look around and you'll get plenty of confirmation where people (like me) who previously bought 1, 2, 3 of these $500-650 GPUs are opting to pass on a single Titanic at $1000.

    Nvidia's introduction of an "ultra-premium" range is an unsustainable business model because it assumes Nvidia will be able to sustain this massive performance lead over AMD. Not to mention they will have a harder time justifying the price if their own next-gen offering isn't convincingly faster.
  • CeriseCogburn - Tuesday, February 26, 2013 - link

    You're not the nVidia CEO nor their bean counter, you whacked out fool.

    You're the IDIOT that babbles out stupid concepts with words like "justifying", as you purport to be an nVidia marketing hired expert.

    You're not. You're a disgruntled indoctrinated crybaby who can't move on with the times, living in a false past, and waiting for a future not here yet.
  • Oxford Guy - Thursday, February 21, 2013 - link

    The article's first page has the word luxury appearing five times. The blurb, which I read prior to reading the article's first page has luxury appearing twice.

    That is 7 uses of the word in just a bit over one page.

    Let me guess... it's a luxury product?
  • CeriseCogburn - Tuesday, February 26, 2013 - link

    It's stupid if you ask me. But that's this place, not very nVidia friendly after their little didn't get the new 98xx fiasco, just like Tom's.

    A lot of these top tier cards are a luxury, not just the Titan, as one can get by with far less. The problem is, the $500 cards fail often at 1920x resolution, and this one perhaps can be said to have conquered just that, so here we have a "luxury product" that really can't do its job entirely, or let's just say barely, maybe, as 1920x is not a luxury resolution.
    Turn OFF and down SOME in game features, and that's generally, not just extreme case.

    People are fools though, almost all the time. Thus we have this crazed "reviews" outlook distortion, and certainly no such thing as Never Settle.
    We're ALWAYS settling when it comes to video card power.
  • araczynski - Thursday, February 21, 2013 - link

    too bad there's not a single game benchmark in that whole article that I give 2 squirts about. throw in some RPG's please, like witcher/skyrim.
  • Ryan Smith - Thursday, February 21, 2013 - link

    We did test Skyrim only to ultimately pass on it for a benchmark. The problem with Skyrim (and RPGs in general) is that they're typically CPU limited. In this case our charts would be nothing but bar after bar at roughly 90fps, which wouldn't tell us anything meaningful about the GPU.
