The 2GB Question & The Test

Before diving into our test results, I wanted to spend a moment mulling over NVIDIA's choice of default memory configuration for the GTX 770. Due to the use of a 256-bit bus on GK104, NVIDIA's practical memory choices are limited to either 2GB or 4GB of RAM. A year ago 2GB was fine, even if it wasn't as large as AMD's 3GB memory pool, but that was, after all, a year ago.

Much like the 1GB/2GB situation on mainstream ($150+) cards, we're at a precipice with these enthusiast-class cards. Having 2GB of RAM doesn't impose any real problems today, but I'm left to wonder how much longer that's going to be true. The wildcard in all of this will be the next-generation consoles, each of which packs 8GB of RAM, quite a lot for video operations even after everything else is accounted for. With most PC games being ports of console games, there's a decent risk of 2GB cards being undersized at high resolutions with the highest quality art assets. The worst case scenario is merely that those highest quality assets won't be usable at playable framerates, but considering how strong every other aspect of the GTX 770 is, that would be a distinct and unfortunate bottleneck.

The solution, for better or worse, is doubling the GTX 770's memory to 4GB. The card is capable of housing 4GB, and NVIDIA's partners will be selling 4GB cards in the near future, so that option will at least exist. The price premium for 4GB of RAM looks to be around $20-$30, and I expect that will come down some as 4Gb chips start to replace 2Gb chips. 4GB would certainly make the GTX 770 future-proof in that respect, and I suspect it's a good idea for anyone on a long upgrade cycle, but as always this is a bit of a gamble.

Still, I can't help but feel NVIDIA could have simply sidestepped the whole issue by making 4GB the default rather than an optional upgrade. As it stands, 2GB feels shortsighted and, for a $400 card, a bit small. Given the low cost of the additional RAM, a 4GB baseline would likely have been bearable.

The Test

For today's launch article we're using NVIDIA's 320.18 drivers for the GTX 780 and GTX 770, and AMD's Catalyst 13.5b2 drivers for all AMD cards.

CPU: Intel Core i7-3960X @ 4.3GHz
Motherboard: EVGA X79 SLI
Power Supply: Antec True Power Quattro 1200
Hard Disk: Samsung 470 (256GB)
Memory: G.Skill Ripjaws DDR3-1867 4 x 4GB (8-10-9-26)
Case: Thermaltake Spedo Advance
Monitor: Samsung 305T
Video Cards: AMD Radeon HD 7970 GHz Edition
AMD Radeon HD 7990
NVIDIA GeForce GTX 580
NVIDIA GeForce GTX 680
NVIDIA GeForce GTX 690
NVIDIA GeForce GTX 770
NVIDIA GeForce GTX 780
NVIDIA GeForce GTX Titan
Video Drivers: NVIDIA ForceWare 320.14
NVIDIA ForceWare 320.18
AMD Catalyst 13.5 Beta 2
OS: Windows 8 Pro

117 Comments


  • JDG1980 - Thursday, May 30, 2013 - link

    TechPowerUp ran tests of three GTX 770s with third-party coolers (Asus DirectCU, Gigabyte WindForce, and Palit JetStream). All three beat the GTX 770 reference on thermals for both idle and load. Noise levels varied, but the DirectCU seemed to be the winner since it was quieter than the reference cooler on both idle and load. That card also was a bit faster in benchmarks than the reference.

    That said, I agree the build quality of the reference cooler is better than the aftermarket substitutes - but Asus is probably a close second. Their DirectCU series has always been very good.
  • ArmedandDangerous - Thursday, May 30, 2013 - link

    This article is in desperate need of some editing work. Spelling and comprehension errors throughout.
  • Nighyal - Thursday, May 30, 2013 - link

    I asked this on the 780 review, and it seems like it might be even more interesting for the 770 considering Nvidia basically threw more power at a 680: a performance per watt comparison would be great. If there were something that clearly showed the efficiency of each card (maybe using a fixed workload), it would be interesting to see, especially when comparing similar architectures, or AMD's efforts with the GHz Editions.
  • ThIrD-EyE - Thursday, May 30, 2013 - link

    Since when did 70-80C temperatures become acceptable? I had been looking to upgrade my MSI Cyclone GTX 460, which would never hit higher than 62C, and I got a great deal on two 560 Tis for less than half the cost of them new. I have run them in single card and SLI; I see 80C+ when I run an overclocking program like MSI Afterburner. I use a custom fan profile to bring the temps down to 75C or less at a higher fan speed, but still at reasonable noise levels. It's still not quite enough.

    All these video cards may be fine at these temperatures, but when you are sitting next to the case with all that 80C heat being pumped out, you really feel it, especially now with summer heat finally hitting where I live. My $25 Hyper 212+ keeps my OC'ed i7 2600K at a good 45-50C when playing games. I would buy aftermarket coolers if they weren't going to take up 3 slots each (I have a card I need that would have to be removed) and didn't cost nearly as much as I paid for the cards.

    AMD, NVIDIA and card partners need to work on bringing temperatures down.
  • quorm - Thursday, May 30, 2013 - link

    Lower temperature readings do not mean less heat produced. Better cooling just moves the heat from the GPU to your room more efficiently.
  • ThIrD-EyE - Thursday, May 30, 2013 - link

    The architecture of these video cards was obviously made for performance first. That does not mean they can't also work on lowering power consumption to lower the heat produced. One thing that I've found to help my situation is to set all games to run at 60fps without vsync if possible, which thankfully covers most of the games I play. Some games become unplayable or wonky with vsync and with other ways of limiting fps, so for those I just deal with the heat of running without any fps limit.

    I hope that the developers of console ports from the PS4 and Xbox One put in an fps limit option like Borderlands 2 has, if they don't allow dev console access.
  • MattM_Super - Friday, May 31, 2013 - link

    Although it's not currently accessible from the driver control panel, Nvidia's drivers have a built-in fps limiter that I use in every game I play (I've never had any issues with it). You can access it with NvidiaInspector.
  • DanNeely - Thursday, May 30, 2013 - link

    Since 70-80C has always been about the best a blower-style cooler can do on a high-power GPU without getting obscenely loud, and blowers have proven to be the best option for avoiding a fried GPU in a case with horrible ventilation. IOW, about when both nVidia and ATI adopted blowers for their reference designs.
  • JPForums - Thursday, May 30, 2013 - link

    70C-80C temperatures became acceptable after nVidia decided to release Fermi-based cards that regularly hit the mid-90s C. Since then, the temperatures have in fact come down. Of course, they are still higher than I'd like, and I pay extra for cards with better coolers (e.g. MSI Twin Frozr, Asus DirectCU). That said, there is only so much you can do when pushing 3 times the TDP of an Intel Core i7-3770K while cooling it with a cooler that is both lighter and less ideally shaped for the task (comparing some of the best GPU coolers to any number of heatsinks from Noctua, Thermalright, etc.). Water cooling loops work wonders, but not everyone wants the expense or hassle.
  • Rick83 - Friday, May 31, 2013 - link

    The higher the temperatures, the less fan speed you need, because you have a higher delta-theta between the air entering the cooler and the cooling fins, which results in more energy transfer at a lower volume throughput.
    Obviously the temperature under load is purely a function of the fan curve and has very little to do with the actual chip (unless you go so far down in energy output that you can rely on passive convection).
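
Rick83's point can be put into rough numbers. Below is a minimal Python sketch, assuming the standard bulk air-heating relation (heat = air density x specific heat x volume flow x temperature rise), room-temperature air properties, and the GTX 770's 230W TDP as the heat load; the flow figures are illustrative estimates, not measurements from this review.

```python
# Illustrative only: airflow needed to carry a fixed heat load away as a
# function of the temperature rise of the air (intake vs. exhaust).
RHO_AIR = 1.2         # kg/m^3, approximate density of room-temperature air
CP_AIR = 1005.0       # J/(kg*K), specific heat of air at constant pressure
M3S_TO_CFM = 2118.88  # cubic meters per second to cubic feet per minute

def required_airflow_cfm(heat_watts: float, delta_t_kelvin: float) -> float:
    """Volume flow needed to remove heat_watts with a given air temperature rise."""
    volume_flow_m3s = heat_watts / (RHO_AIR * CP_AIR * delta_t_kelvin)
    return volume_flow_m3s * M3S_TO_CFM

if __name__ == "__main__":
    heat = 230.0  # W, the GTX 770's rated TDP
    for delta_t in (20, 30, 50):
        print(f"dT = {delta_t:2d} K -> ~{required_airflow_cfm(heat, delta_t):.1f} CFM")
```

With these assumptions the script prints roughly 20, 14, and 8 CFM for 20K, 30K, and 50K air temperature rises: letting the card and its exhaust run hotter moves the same 230W on much less airflow, which is where the fan speed and noise savings come from.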
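
On the performance-per-watt comparison Nighyal asks for further up the thread, the metric itself is simple to compute once you have an average frame rate and an average board power over a fixed workload. The sketch below only demonstrates the arithmetic; the FPS and wattage values are made-up placeholders, not benchmark results from this article.

```python
# Performance per watt = frames/second divided by joules/second = frames per joule.
# The numbers below are placeholders purely to show the calculation.
cards = {
    # card name: (average fps over a fixed workload, average board power in watts)
    "Card A": (60.0, 185.0),
    "Card B": (68.0, 220.0),
    "Card C": (65.0, 230.0),
}

for name, (fps, watts) in sorted(cards.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True):
    print(f"{name:8s} {fps / watts:.3f} frames/joule ({watts / fps:.2f} J per frame)")
```

Frames per joule (or its inverse, energy per frame) is the fixed-workload efficiency figure the comment describes: two cards rendering the same scene can then be compared directly on how much energy each burns to do it.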
