Final Words

While it was clear early on in our testing that the GeForce GTX 980 would be the new single-GPU champion, even after finishing our testing of NVIDIA’s new flagship we weren’t sure what to expect out of the GTX 970. On the one hand NVIDIA had opened a very significant gap between the GTX 970 and the GTX 980 in theoretical performance – on paper the GTX 970 should deliver around 80% of the GTX 980’s performance – while on the other hand the GTX 980 was in practice some 16% faster than AMD’s flagship R9 290X. Immediately you can see the potential for even a slower GM204 card to be a threat to AMD, but it’s only an assumption until you have the data in hand.

It’s only fitting then that with the GTX 970 now up and running we find ourselves in a virtual tie between NVIDIA and AMD. Despite not even being NVIDIA’s flagship GM204 card, the GTX 970 is still fast enough to race the R9 290X to a dead heat – at 1440p the GTX 970 averages just 1% faster than the R9 290X. Only at 4K can AMD’s flagship pull ahead, and even then the situation reverses entirely in NVIDIA’s favor at 1080p. As such the R9 290X still has a niche of its own, but considering that the GTX 970 is a $329 card I don’t seriously expect it to be used for 4K gaming, so the 1440p and 1080p comparisons are going to be the most appropriate comparisons here.

As is the case with AMD/NVIDIA ties in the past, this is an anything-but-equal sort of situation, but it strongly sells the idea of just how fast the GTX 970 is and just how dangerous it is to AMD. Ultimately what you are going to find is that the GTX 970 scores some big leads in some games only to fall well behind the R9 290X in others, and in other games still the two cards are tied through and through. In the end for pure performance neither card is superior, and in this case that’s a huge victory for NVIDIA.

With the GTX 970 NVIDIA only ties the R9 290X, but in the process they do so while consuming nearly 90W less power, generating far less noise, and most importantly delivering all of this at two-thirds the cost. The GTX 980 gave NVIDIA a well-earned lead over AMD, but it’s the one-two punch of the GTX 980 and GTX 970 together that so solidly cements NVIDIA’s position as the top GPU manufacturer. It’s one thing for the $499 R9 290X to lose to NVIDIA’s $549 flagship, but to be outright tied by NVIDIA’s second tier card is a slap in the face that AMD won’t soon forget.

There’s not much more that can be said at this point other than that as of this moment the high-end performance landscape is entirely in NVIDIA’s favor. They have undercut AMD with better hardware at a lower price, leaving AMD in a very tenuous position. AMD would have to cut the R9 290X’s price by nearly $200 to be performance competitive, and even then they can’t come close to matching NVIDIA’s big edge in power consumption. To that end it’s a lot like the GTX 670 launch, but even in that case NVIDIA’s overall hardware and pricing advantage wasn’t quite as immense as it is today.

At anything over $300 there are only two single-GPU cards to consider: GTX 980 and GTX 970. Nothing else matters. For much of the last year NVIDIA has been more than performance competitive but not price competitive with AMD. So I am very happy to see NVIDIA finally reverse that trend and to do so in such a big way.

Moving on, in NVIDIA’s lineup the GTX 970 is a potent performer, but NVIDIA has left themselves a big enough gap that it doesn’t completely undercut their new flagship. GTX 980 remains 15% faster than GTX 970, and that’s no small difference. For $220 more GTX 980 is certainly not the value option, but then this is an NVIDIA flagship we’re talking about, and NVIDIA has always charged a premium there. Instead if you can’t afford the high price of the GTX 980 or simply don’t want to pay that premium, the GTX 970 is an excellent alternative. By pricing the card at $329 NVIDIA has done a great job of making a significant fraction of GM204’s performance available at a better price, and for this reason GTX 970 stands a very good chance of being the value champion for this generation.

Finally let’s talk about EVGA’s GeForce GTX 970 FTW ACX 2.0. EVGA has clearly put a lot of thought into their card and there is a good reason they remain one of NVIDIA’s closest partners. As the only GTX 970 we’ve seen thus far we’re really only able to judge it on a pass-fail basis, but on that basis it’s a clear pass. The ACX 2.0 cooler is incredibly powerful when paired with the 145W GTX 970 (almost too powerful, in fact) and EVGA continues to deliver some great features and software through items such as their triple BIOS capability and their software suite.

At the same time the FTW in particular is a solid value proposition, though how solid will depend on what you compare it to. Compared to GTX 980 it’s going to fall short by 7% or so despite the overclock, but in the process it essentially cuts the GTX 980’s lead in half. Given the $220 price difference between the reference GTX 970 and GTX 980, that $40 FTW premium is a great alternative. On the other hand this is in the end a factory overclocked card, and it’s entirely likely that most reference clocked GTX 970s could achieve similar clock speeds without paying the $40 premium. As is usually the case with factory overclocked cards, with the GTX 970 FTW you are paying for the peace of mind that comes from a sure thing and the support behind it, as opposed to playing the overclocking lottery.

With that said, I feel like EVGA does walk away from this launch with one vulnerability exposed, and that is the ACX 2.0 cooler. EVGA’s amazing cooling performance is undercut by their middle of the road noise performance, which, while still very good in light of the GTX 970’s overall gaming performance, is not as good as what we have seen other open air coolers do in the past. It’s by no means a deal breaker – especially given all of EVGA’s other advantages – but given the kind of quiet cooling possibilities that a 145W GPU should enable, EVGA is not exploiting them as well as they could.

And while this configuration isn’t going to be optimal for stock users, I don’t doubt for a second that EVGA’s overclocking community will have a field day with this one. With this much cooling headroom to work with, the ACX 2.0 cooler is going to hold up very well for users who want to overvolt on air.


  • Casecutter - Friday, September 26, 2014 - link

    I’m confident that if we had two of what are the normal "AIB OC customs" of both a 970 and a 290, things between them might not appear so skewed. First, as much as folks want this level of card to get them into 4K, they’re not... So it really just boils down to seeing what similarly generic OC customs offer and letting them "spar back and forth" @2560x1440 depending on the titles.

    As to power, I wish these reviews would halt the inadequate testing like it’s still 2004! The power (complete PC) should be benchmarked for each game, recording in real time the oscillation of power in milliseconds, then outputting the 'mean' over the test duration. As we know, boost frequency fluctuates in every title, so the 'mean' for each game is different. Then each 'mean' can be added up and the average across the number of titles would offer the most straightforward evaluation of power while gaming. Also, as most folks today "Sleep" their computers (and not many idle for more than 10-20min), I believe the best calculation for power is what a graphics card "suckles" while doing nothing, which is something like 80% of each month. I’d like to see how AMD ZeroCore impacts a machine's power usage over a month's time, versus the savings only during gaming. Consider gaming 3hr a day, which constitutes 12.5% of a month: does the 25% difference in power while gaming beat the 5W saved with ZeroCore over the 80% of the month the machine sits idle? Saving energy while using and enjoying something is fine, but wasting watts while doing nothing is incomprehensible.
  • Impulses - Sunday, September 28, 2014 - link

    Ehh, I recently bought 2x custom 290s, but I've no doubt that even with a decent OC the 970 can at the very least still match them in most games... I don't regret the 290s, but I also only paid $350/360 for my WF Gigabyte cards; had I paid closer to $400 I'd be kicking myself right about now.
  • Iketh - Monday, September 29, 2014 - link

    most PCs default to sleeping during long idles and most people shut theirs off
  • dragonsqrrl - Friday, September 26, 2014 - link

    Maxwell truly is an impressive architecture, I just wish Nvidia would stop further gimping double precision performance relative to single precision with each successive generation of their consumer cards. GF100/110 were capped at 1/8, GK110 was capped at 1/24, and now GM204 (and likely GM210) is capped at 1/32... What's still yet to be seen is how they're capping the performance on GM204, whether it's a hardware limitation like GK104, or a clock speed limitation in firmware like GK110.

    Nvidia: You peasants want any sort of reasonable upgrade in FP64 performance? Pay up.
  • D. Lister - Friday, September 26, 2014 - link

    "Company X: You peasants want any sort of reasonable upgrade in product Y? Pay up."

    Well, that's capitalism for ya... :p. Seriously though, if less DP ability means a cheaper GPU then as a gamer I'm all for it. If a dozen niche DP hobbyists get screwed over, and a thousand gamers get a better deal on a gaming card then why not? Remember what all that bit mining nonsense did to the North American prices of the Radeons?
  • D. Lister - Friday, September 26, 2014 - link

    Woah, it seems they do tags differently here at AT :(. Sorry if the above message appears improperly formatted.
  • Mr Perfect - Friday, September 26, 2014 - link

    It's not you, the italic tag throws in a couple extra line breaks. Bold might too, I seem to remember that mangling a post of mine in the past.
  • D. Lister - Sunday, September 28, 2014 - link

    Oh, okay, thanks for the explanation :).
  • wetwareinterface - Saturday, September 27, 2014 - link

    this^

    You seem to be under the illusion that NVIDIA intended to keep shooting themselves in the foot forever by releasing their high end GPGPU chip under a gaming designation and relying on the driver (which is easy to hack) to keep people from buying a gamer card for workstation loads. Face it, they wised up and charge extra for FP64 and the higher RAM count now. No more cheap workstation cards. The benefit, as already described, is cheaper gaming cards that are designed to be more efficient at gaming and leave the workstation loads to the workstation cards.
  • dragonsqrrl - Saturday, September 27, 2014 - link

    This is only partially true, and I think D. Lister basically suggested the same thing so I'll just make a single response for both. The argument for price and efficiency would really only be the case for a GK104 type scenario, where on die FP64 performance is physically limited to 1/24 FP32 due to there being 1/24 the Cuda cores. But what about GK110? There is no reason to limit it to 1/24 SP other than segmentation. There's pretty much no efficiency or price argument there, and we see proof of that in the Titan, no less efficient at gaming and really no more expensive to manufacture outside the additional memory and maybe some additional validation. In other words there's really no justification (or at least certainly not the justification you guys are suggesting) for why the GTX780 Ti couldn't have had 1/12 SP with 3GB GDDR5 at the same $700 MSRP, for instance. Of course other than further (and in my opinion unreasonable) segmentation.

    This is why I was wondering how they're capping performance in GM204.
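Casecutter's monthly power arithmetic a few comments up can be sketched as a quick back-of-the-envelope calculation. The wattage figures below are illustrative assumptions (the review's ~90W load delta, and the commenter's ~5W ZeroCore idle savings and 80% idle share), not measured values:

```python
# Back-of-the-envelope comparison of a gaming power delta vs. idle savings,
# following the scenario in Casecutter's comment: gaming 3 hours a day
# (12.5% of a 30-day month), machine idle for roughly 80% of the month.
# All wattages are assumptions for illustration, not measurements.

HOURS_PER_MONTH = 30 * 24  # 720 hours in a 30-day month

def monthly_kwh(watts: float, hours: float) -> float:
    """Energy in kWh consumed by a constant load over the given hours."""
    return watts * hours / 1000.0

gaming_hours = 3 * 30                     # 90 h, i.e. 12.5% of the month
idle_hours = 0.8 * HOURS_PER_MONTH        # 576 h, i.e. 80% of the month

gaming_delta_w = 90   # assumed card-to-card power difference under load
idle_savings_w = 5    # assumed ZeroCore-style savings while idle

gaming_delta_kwh = monthly_kwh(gaming_delta_w, gaming_hours)
idle_savings_kwh = monthly_kwh(idle_savings_w, idle_hours)

print(f"Gaming power delta: {gaming_delta_kwh:.2f} kWh/month")  # 8.10
print(f"Idle savings:       {idle_savings_kwh:.2f} kWh/month")  # 2.88
```

Under these assumed numbers the load-power difference dominates the idle savings over a month, though the balance shifts quickly for lighter gaming schedules.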
