Titan’s Compute Performance (aka Ph.D Lust)

Because GK110 is such a unique GPU from NVIDIA when it comes to compute, we’re going to shake things up a bit and look at compute performance first, before jumping into gaming performance.

On a personal note, one of the great things about working at AnandTech is all the people you get to work with. Anand himself is nothing short of fantastic, but what other review site also has a Brian Klug or a Jarred Walton? We have experts in a number of fields, and as a computer technology site that of course includes experts in computer science.

What I’m trying to say is that for the last week I’ve been fending off our CS guys, who upon hearing I had a GK110 card wanted one of their own. If you’ve ever wanted proof of just how big a deal GK110 is – and by extension Titan – you really don’t have to look much further than that.

Titan, its compute performance, and the possibilities it unlocks are a very big deal for researchers and other professionals who need every last drop of compute performance they can get, as cheaply as they can get it. This is why on the compute front Titan stands alone; in NVIDIA’s consumer product lineup there’s nothing like it, and even AMD’s Tahiti based cards (7970, etc.), while potent, are very different from GK110/Kepler in a number of ways. Titan essentially writes its own ticket here.

In any case, as this is the first GK110 product that we have had access to, we couldn’t help but run it through a battery of tests. The Tesla K20 series may have been out for a couple of months now, but at $3500 for the base K20 card, Titan is the first GK110 card many compute junkies are going to have real access to.

To that end I'd like to introduce our newest writer, Rahul Garg, who will be leading our look at Titan/GK110’s compute performance. Rahul is a Ph.D. student specializing in parallel computing and GPGPU technology, making him a prime candidate for taking a critical but nuanced look at what GK110 can do. You will be seeing more of Rahul in the future, but first and foremost he has a 7.1B transistor GPU to analyze. So let’s dive right in.

By: Rahul Garg

For compute performance, we first looked at two common benchmarks: GEMM (which measures the performance of dense matrix multiplication) and FFT (Fast Fourier Transform). These numerical operations are important in a variety of scientific fields. GEMM is highly parallel and typically compute bound, and is one of the first tests of performance and efficiency run on any parallel architecture geared towards HPC workloads. FFT is typically memory bandwidth bound but, depending upon the architecture, can be influenced by inter-core communication bandwidth. Vendors and third parties typically supply optimized libraries for these operations; for example, Intel supplies MKL for Intel processors (including Xeon Phi), while AMD supplies ACML and OpenCL-based libraries for its CPUs and GPUs respectively. Thus, these benchmarks measure the performance of the combined hardware and software stack.

For GEMM, we tested the performance of NVIDIA's CUBLAS library supplied with CUDA SDK 5.0 on SGEMM (single-precision/fp32 GEMM) and DGEMM (double-precision/fp64 GEMM), using square matrices of size 5k by 5k. For SGEMM on Titan, the data reported here was collected with boost disabled. We also ran the experiments with boost enabled on Titan, but found that performance was effectively equal to the non-boost case; we assume this is because our test ran for a very short time and perhaps did not trigger boost. Therefore, for the sake of simpler analysis, we report the data with boost disabled on Titan. If time permits, we may return to the boost question for this benchmark in a future article.
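For readers who want to reproduce this kind of measurement, the sketch below shows roughly how one would time a CUBLAS SGEMM call and convert the result to GFLOPS. This is a minimal, hypothetical harness rather than our exact test code; the specific matrix dimension (5120), the warm-up pass, and the event-based timing are our assumptions about a reasonable setup.

```cpp
#include <cstdio>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main()
{
    const int n = 5120;                          // ~5k x 5k square matrices (assumed size)
    const size_t bytes = (size_t)n * n * sizeof(float);

    float *dA, *dB, *dC;
    cudaMalloc((void**)&dA, bytes);
    cudaMalloc((void**)&dB, bytes);
    cudaMalloc((void**)&dC, bytes);
    cudaMemset(dA, 0, bytes);                    // contents don't affect GEMM timing
    cudaMemset(dB, 0, bytes);

    cublasHandle_t handle;
    cublasCreate(&handle);

    const float alpha = 1.0f, beta = 0.0f;
    // Warm-up call so one-time setup costs stay out of the timed run
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, dA, n, dB, n, &beta, dC, n);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, dA, n, dB, n, &beta, dC, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    // GEMM performs 2*n^3 floating-point operations (n^3 multiplies, n^3 adds)
    double gflops = 2.0 * n * n * n / (ms * 1.0e6);
    printf("SGEMM %dx%d: %.2f ms, %.1f GFLOPS\n", n, n, ms, gflops);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```

The DGEMM case is identical in structure, substituting double-precision buffers and cublasDgemm.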

Apart from the results we collected for the GTX Titan, GTX 680 and GTX 580, we refer to GEMM experiments on the Radeon 7970 conducted by Matsumoto, Nakasato and Sedukhin, reported in a technical report from the University of Aizu. Their exact parameters and testbed differ from ours, so we include their results for illustrative purposes, as a ballpark estimate only. The results are below.

DGEMM

Titan rules the roost amongst the three listed cards in both SGEMM and DGEMM by a wide margin. We have not included Intel's Xeon Phi in this test, but Titan's achieved performance is higher than the theoretical peak FLOPS of the current crop of Xeon Phi. Sharp-eyed readers will have observed that Titan achieves about 1.3 teraflops on DGEMM while its listed fp64 theoretical peak is also 1.3 TFlops; we were not expecting 100% of peak from Titan in DGEMM. NVIDIA clarified that the fp64 rating for Titan is a conservative estimate. At 837MHz, the calculated fp64 peak of Titan is 1.5 TFlops (14 SMXes x 64 fp64 ALUs x 2 flops per FMA x 837MHz ≈ 1.50 TFlops). However, under heavy load in fp64 mode the card may clock down below the listed 837MHz to remain within its power and thermal specifications. Thus the fp64 ALU peak can vary between 1.3 TFlops and 1.5 TFlops, and our DGEMM results are within expectations.

Next, we consider the percentage of fp32 peak achieved by the respective SGEMM implementations. These are plotted below.

Percentage of peak achieved on SGEMM

Titan achieves about 71% of its peak, while GTX 680 achieves only about 40% of its peak (at 837MHz, Titan's fp32 peak works out to 2688 ALUs x 2 flops per FMA x 837MHz ≈ 4.5 TFlops). It is clear that while both GTX 680 and Titan are nominally Kepler architecture chips, Titan is not just a bigger GTX 680; architectural tweaks have been made that enable it to reach much higher efficiency than the GTX 680 on at least some compute workloads. The GCN-based Radeon 7970 obtains about 63% of peak on SGEMM using Matsumoto et al.'s algorithm, and the Fermi-based GTX 580 also obtains about 63% of peak using CUBLAS.

For FFT, we tested the performance of 1D complex-to-complex in-place transforms of size 2^25 using the CUFFT library. Results are given below.
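As a rough illustration of what such a test looks like, here is a minimal sketch of timing a single-precision in-place transform with CUFFT. This is an assumed harness, not our exact benchmark code; in particular, the warm-up transform and the conventional 5*N*log2(N) flop count are our assumptions. The double-precision case would use CUFFT_Z2Z and cufftExecZ2Z instead.

```cpp
#include <cstdio>
#include <cuda_runtime.h>
#include <cufft.h>

int main()
{
    const int N = 1 << 25;                         // 2^25-point 1D transform
    cufftComplex* data;
    cudaMalloc((void**)&data, sizeof(cufftComplex) * (size_t)N);
    cudaMemset(data, 0, sizeof(cufftComplex) * (size_t)N);

    cufftHandle plan;
    cufftPlan1d(&plan, N, CUFFT_C2C, 1);           // single-precision complex-to-complex

    // Warm-up transform so plan initialization stays out of the timed run
    cufftExecC2C(plan, data, data, CUFFT_FORWARD);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    cufftExecC2C(plan, data, data, CUFFT_FORWARD); // in-place: input == output
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    // Common convention: a complex FFT costs roughly 5*N*log2(N) flops; log2(2^25) = 25
    double gflops = 5.0 * N * 25.0 / (ms * 1.0e6);
    printf("C2C FFT of %d points: %.2f ms, ~%.1f GFLOPS\n", N, ms, gflops);

    cufftDestroy(plan);
    cudaFree(data);
    return 0;
}
```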

FFT single precision

FFT double precision

Titan outperforms the GTX 680 in single-precision FFT by about 50%. We suspect this is primarily due to Titan's higher memory bandwidth, but we have not verified this hypothesis. GTX 580 has a slight lead over the GTX 680 here. In double precision, Titan achieves about 3.4x the performance of GTX 680, which is not surprising given the GTX 680's meager fp64 execution resources. Again, if time permits, we may return to this benchmark for a deeper analysis.

We then looked at an in-house benchmark called SystemCompute, developed by our own Ian Cutress. The benchmark tests performance on a variety of sample kernels that are representative of scientific computing applications. Ian described the CPU version of these benchmarks in a previous article; he wrote the GPU version in C++ AMP, a relatively new GPGPU API introduced by Microsoft with Visual Studio 2012.

Microsoft's implementation of AMP compiles down to DirectCompute shaders. These are all single-precision benchmarks and should run on any DX11-capable GPU. The benchmarks include 2D and 3D finite difference solvers, 3D particle movement, an n-body benchmark and a simple matrix multiplication algorithm. Boost was enabled on both Titan and GTX 680 for this benchmark. We give the scores reported by the benchmark for each card, along with the speedup of Titan over the GTX 680; a speedup greater than 1 means Titan is faster, while less than 1 would mean a slowdown.
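For readers unfamiliar with C++ AMP, the sketch below shows what a naive single-precision matrix multiplication looks like in the API. This is our own illustrative example, not Ian's benchmark code; the function name and the unoptimized inner loop are assumptions made for illustration.

```cpp
#include <amp.h>
#include <vector>
using namespace concurrency;

// Hypothetical example: C = A * B for n x n row-major matrices, in C++ AMP
void matmul_amp(const std::vector<float>& a, const std::vector<float>& b,
                std::vector<float>& c, int n)
{
    array_view<const float, 2> av(n, n, a);   // wraps host data for GPU access
    array_view<const float, 2> bv(n, n, b);
    array_view<float, 2> cv(n, n, c);
    cv.discard_data();                        // no need to copy C to the GPU

    // The restrict(amp) lambda is what gets compiled to a DirectCompute shader
    parallel_for_each(cv.extent, [=](index<2> idx) restrict(amp) {
        int row = idx[0], col = idx[1];
        float sum = 0.0f;
        for (int k = 0; k < n; ++k)
            sum += av(row, k) * bv(k, col);
        cv[idx] = sum;
    });
    cv.synchronize();                         // copy the result back to the host
}
```

Note that data movement between host and GPU is handled implicitly by the array_view objects, which is part of what makes AMP comparatively approachable next to raw DirectCompute.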

SystemCompute scores (higher is better)

Benchmark    GTX 580    GTX 680    GTX Titan    Speedup of Titan over GTX 680
2D FD           9053       8445        12461    1.47
3D FD           3133       3827         5263    1.37
3DPmo          41722      26955        40397    1.49
MatMul           172        197          229    1.16
nbody            918       1517         2418    1.59

The benchmarks show improvements ranging from 16% to about 60%, with the largest gain coming from the relatively FLOP-heavy n-body benchmark. Interestingly, the GTX 580 beats Titan in 3DPmo, and beats the GTX 680 in both 3DPmo and 2D FD.

Overall, GTX Titan is an impressive accelerator from a compute perspective, posting large gains over its predecessors.
