Final Thoughts

When we were first informed about the GeForce GTX 560 Ti With 448 Cores, I approached the matter with a great deal of skepticism. Third-tier products have not been impressive in quite some time, and NVIDIA's previous effort, the GTX 465, is a very good example of this. So imagine my surprise once we had a card in hand and benchmark results to work with: NVIDIA has impressed me and disappointed me at the same time.

The hardware is impressive enough. The GTX 570 is a good base to work from, both with respect to performance and operational characteristics – it's well balanced, and the GTX 560-448 directly inherits this. Perhaps most importantly, NVIDIA didn't make their 3rd tier product significantly worse than their 2nd tier in terms of its performance targets, and that makes a world of difference. As a result the GTX 560-448 is what we'd happily call a GTX 570 LE or GTX 565 in any other universe, because it's certainly not as slow as a GTX 560 Ti.

On a larger scale, things get murkier once we factor in AMD's products. The GTX 560-448 is definitely faster on average, but as with every other GF100 card, this is heavily dependent on the game being tested. Throwing out CivV – a game where NVIDIA has a distinct advantage due to driver features – leaves things much closer between the GTX 560-448 and the Radeon HD 6950. The 6950 is on average $40 cheaper, and that cannot be ignored. As fast as the GTX 560-448 is, unless you're specifically playing games NVIDIA has an advantage in or need their ecosystem, it's just not $40 faster. AMD has made the 6950 a good value.

So if we're generally impressed with the performance, what are we disappointed about? As you can probably guess, it's the name. Even if performance really were close to a GTX 560 Ti, it still wouldn't excuse the poor name. GF110 isn't GF114; the SM layout and superscalar execution features make these distinctly different GPUs whose differences cannot be reconciled. This is particularly evident in areas such as FP64 performance, where the GTX 560-448 is going to be much, much faster, or conversely in cases where the architectural differences mean the GTX 560-448 isn't going to pull well ahead of the GTX 560 Ti.

NVIDIA is purposely introducing namespace collisions, and while they have their reasons, I don't believe them to be good enough. The GeForce GTX 560 Ti With 448 Cores is not a GeForce GTX 560 Ti. Most of the time it's much faster, and this is a good thing. But it also requires more power and generates more heat, and this is a bad thing. My greatest concern is that someone is going to build a system around the operational attributes of a GTX 560 Ti, and then pick up one of these cards, ending up with a system that can't handle the extra load. Avoiding that is one of the many benefits of a clear, concise, non-conflicting namespace. And it only gets worse once you see the GTX 560 Ti OEM, a much lower-performing GF100 part that nevertheless shares the GTX 560 Ti name. NVIDIA can and should do better by their customers.

Ultimately NVIDIA has thrown us an interesting curveball for the holidays. We have a GTX 560 Ti that isn't really a GTX 560 Ti, but rather a card trying hard to be a GTX 570. At the same time it's a 3rd tier product, but unlike other 3rd tier products it's actually quite good. Finally, as good as it is, it will only be available for a limited time. It's a lot to take into consideration, and a name alone doesn't do the situation justice. The GeForce GTX 560 Ti With 448 Cores isn't going to significantly shake up NVIDIA's product lines – it's not meant to – but for the budget-minded among us it's a chance to get performance near a GTX 570 for just a bit less this Christmas, and that's as good a reason as any to exist.

Finally, to wrap things up we have the matter of Zotac's GeForce GTX 560 Ti 448 Cores Limited Edition. If the regular GTX 560-448 is nearly a GTX 570, then Zotac's card is a GTX 570's fraternal twin. It's close enough that the remaining performance differences cease to matter, and power consumption doesn't suffer for the factory overclock. At $299 there's a greater risk of running into the actual GTX 570, which is what makes the Zotac card a GTX 570 substitute rather than something immediately more or less desirable than the GTX 570. On the plus side, if you're in North America and don't yet have Battlefield 3, the choice becomes much clearer.

80 Comments

  • ericore - Tuesday, November 29, 2011 - link

    This card is the most perfect example of a corporation trying to milk the consumer.
    The new GeForce cards arrive just after Christmas, so what does NVIDIA do? Release a limited edition crap product, versus what's around the corner, and with a crappy name to boot. The limited run is ingenious, but I must wholeheartedly agree with Anand on the namespace issue.

    Intelligent people will forget this card and wait till after Christmas. NVIDIA will have no choice but to release graphics cards in Q1, because AMD is going to deliver a serious can of whoop-ass thanks to their ingenious decision to go with a low-power process VS a high-performance one. You see, they've managed to keep the performance but at half the power; then add that it's 28nm VS 40nm, and what a nerdy orgasm that is. NVIDIA will be on their knees, and we may finally see them offer much lower priced cards; so do you buy from the pegger or from the provider? That's a rhetorical question haha.
    Reply
  • Revdarian - Tuesday, November 29, 2011 - link

    Actually, what you can expect after Christmas is a 7800 from AMD (that is the mid range of the new generation – think around or better than the current 6900), with luck the high-end AMD part a month later, and you shouldn't expect the green camp to have a counter until March at the earliest.

    Now, that was said on a Hard website by the owner directly, so I would take it as being very accurate all in all.
    Reply
  • ericore - Tuesday, November 29, 2011 - link

    Haha, so same performance at half the power + 28nm VS 40nm + potentially Rambus memory which is twice as fast; all in all we are looking at – at least – double the frame rates. NVIDIA was an uber fail with their Fermi hype. AMD has not hyped the product at all, but rest assured it will be a bomb, and in fact it's the exact opposite story to Fermi. Clever AMD, you do me justice with your intelligent business decisions, worthy of my purchase. Reply
  • HStanford1 - Wednesday, December 07, 2011 - link

    Can't say the same about their CPU lineup

    Roflmao
    Reply
  • granulated - Tuesday, November 29, 2011 - link

    The ad placement under the headline is for the old 384 pipe card!
    If that isn't an accident I will be seriously annoyed.
    Reply
  • DanNeely - Tuesday, November 29, 2011 - link

    "It’s quite interesting to find that idle system power consumption is several watts lower than it is with the GTX 570. Truth be told we don’t have a great explanation for this; there’s the obvious difference in coolers, but it’s rare to see a single fan have this kind of an impact."

    I think it's more likely that Zotac used marginally more efficient power circuitry than the 570 you're comparing against. 1W there is a 0.6% efficiency edge; 1W on a fan at idle speed is probably at least a 30% difference.
    Reply
  • LordSojar - Tuesday, November 29, 2011 - link

    Look at all the angry anti-nVidia comments, particularly those about them releasing this card before the GTX 600 series.

    nVidia is a company. They are here to make money. If you're an uninformed consumer, then you are a company's (no matter what type they are) bread and butter, PERIOD. You people seem to forget companies aren't in the charity business...

    As for this card, it's an admirable performer, and a good alternative to the GTX 570. That's all it is.

    As for AMD... driver issues aside, their control panel is absolutely god awful (and I use a system with a fully updated CCC daily). CCC is a totally hilarious joke and should be gutted and redone completely; it's clunky, filled with overlapping/redundant options, and ad-ridden. Total garbage... if you even attempt to defend that, you are the very definition of a fanboy.

    As for microstutter, AMD's Crossfire is generally worse at first simply because of the lack of frequent CFX profile updates. Once those updates are in place, it's a non issue between the two companies, they both have it in some capacity using dual/tri/quad GPU solutions. Stop jumping around with your red or green pompoms like children.

    AMD has fewer overall features at a lower overall price. nVidia has more overall features at a higher overall price. Gee... who saw that coming...? Both companies make respectable GPUs and both have decent drivers, but it's a fact that nVidia tend to have the edge in the driver category while AMD have an edge in the actual hardware design category. One is focused on very streamlined, gaming centric graphics cards while the other is focused on more robust, computing centric graphics cards. Get a clue...

    ...and let's not even discuss CUDA vs Stream... Stream is total rubbish, and if you don't program, you have no say in countering that point, so please don't even attempt to. Any programmer worth their weight will tell you, quite simply, that for massively parallel workloads where GPU computing has an advantage that CUDA is vastly superior to ANYTHING AMD offers by several orders of magnitude and that nVidia offers far better support in the professional market when compared to AMD.

    I'm a user of both products, and personally, I do prefer nVidia, but I try not to condemn people for using AMD products until the moment they try to assert that they got a better deal or condemn me for slightly preferring nVidia due to feature sets. People will choose what they want; power users generally go with nVidia, which does carry a price premium for the premium feature sets. Mainstream and gaming enthusiasts go with AMD, because they are more affordable for every fps you get. Welcome to Graphics 101. Class dismissed.
    Reply
  • marklahn - Wednesday, November 30, 2011 - link

    Simply put, nvidia has cuda and physx, amd has higher ALU performance which can be beneficial in some scenarios - gogo OpenCL for not being vendor specific though! Reply
  • marklahn - Wednesday, November 30, 2011 - link

    Oh, and Close to the Metal, Brook, and Stream are all mainly things of the past, so don't bring that up please. ;) Reply
  • Revdarian - Wednesday, November 30, 2011 - link

    Such a long post does not make you right. In the part about "CUDA vs Stream" you actually mean "CUDA vs OpenCL and DirectCompute", for example, as those are the two vendor-agnostic standards; that just shows that what is really "rubbish" is your attempt to pose as an authority on the subject. Reply
