Meet The GeForce GTX 670

Because of GK104’s low power consumption relative to past high-end NVIDIA GPUs, NVIDIA has developed a penchant for small cards. While the GTX 680 was a rather standard 10” long, NVIDIA also managed to keep even the dual-GPU GTX 690 surprisingly compact. Meanwhile the GTX 670 takes this to a whole new level.

We’ll start at the back, as this is really where NVIDIA’s fascination with small size makes itself apparent. The complete card is 9.5” long; however, the actual PCB is far shorter at only 6.75”, a full 3.25” shorter than the GTX 680’s PCB. In fact it would be fair to say that rather than strapping a cooler onto a card, NVIDIA strapped a card onto a cooler. NVIDIA has certainly done short PCBs before – such as with one of the latest GTX 560 Ti designs – but never on a GTX x70 part. Given the similarities between GK104 and GF114, however, this isn’t wholly surprising, and arguably was to be expected.

In any case this odd pairing of a small PCB with a large cooler is no accident. With a TDP of only 170W NVIDIA doesn’t necessarily need a huge PCB, but because they wanted a blower they still needed a long cooler. The positioning of the GPU and various electronic components meant that the only place to put a blower fan was off of the PCB entirely, as the GK104 GPU is already fairly close to the rear of the card. Meanwhile the choice of a blower seems largely driven by the fact that this is an x70 card – NVIDIA did an excellent job with the GTX 560 Ti’s open air cooler, which was designed for the same 170W TDP, so the choice is effectively arbitrary from a technical standpoint (there’s no reason to believe $400 customers are any less likely to have a well-ventilated case than $250 buyers). Accordingly, it will be NVIDIA’s partners stepping in with open air coolers of their own design.

Starting as always at the top, as we previously mentioned, the reference GTX 670 is outfitted with a 9.5” long, fully shrouded blower. NVIDIA tells us that the GTX 670 uses the same fan as the GTX 680, but while the two are nearly identical in design, based on our noise tests they’re likely not quite identical. On that note, unlike the GTX 680 the fan is no longer placed high to line up with the exhaust vent, so the GTX 670 is a bit more symmetrical in design than the GTX 680 was.


Note: We disassembled the virtually identical EVGA card here instead

Lifting the cooler we can see that NVIDIA has gone with a fairly simple design here. The fan vents into a block-style aluminum heatsink with a copper baseplate, providing cooling for the GPU. Elsewhere we’ll see a moderately sized aluminum heatsink clamped down on top of the VRMs towards the front of the card. There is no cooling provided for the GDDR5 RAM.


Note: We disassembled the virtually identical EVGA card here instead

As for the PCB, as we mentioned previously, the lower TDP of the GTX 670 has allowed NVIDIA to save some space. The VRM circuitry has been moved to the front of the card, leaving the GPU and the RAM towards the rear and allowing NVIDIA to simply omit a fair bit of PCB space. Of course with such small VRM circuitry the reference GTX 670 isn’t built for heavy overclocking – like the other GTX 600 cards, NVIDIA isn’t even allowing overvolting on reference GTX 670 PCBs – so it will be up to partners with custom PCBs to enable that kind of functionality. Curiously, only 4 of the 8 Hynix R0C GDDR5 RAM chips are on the front side of the PCB; the other 4 are on the rear. We typically only see rear-mounted RAM on cards with 16/24 chips, as 8/12 chips will easily fit on the same side.

Elsewhere at the top of the card we’ll find the PCIe power sockets and SLI connectors. Since NVIDIA isn’t scrambling to save space like they were with the GTX 680, the GTX 670’s PCIe power sockets are laid out in a traditional side-by-side manner. As for the SLI connectors, since this is a high-end GeForce card NVIDIA provides 2 connectors, allowing for the card to be used in 3-way SLI.

Finally at the front of the card NVIDIA is using the same I/O port configuration and bracket that we first saw with the GTX 680. This means 1 DL-DVI-D port, 1 DL-DVI-I port, 1 full size HDMI 1.4 port, and 1 full size DisplayPort 1.2. This also means the GTX 670 follows the same rules as the GTX 680 when it comes to being able to idle with multiple monitors.

Comments

  • Morg. - Thursday, May 10, 2012 - link

    No.
    I am saying that Tahiti XT, paired with a 384-bit memory bus AND clocked at the same speed as a GTX 680 with its 256-bit bus, clearly has more raw power.

    The thing is, two years from now nVidia will be boosting other new games for the NEW nVidia hardware, and you will not benefit from it on the old H/W.

    However, raw power will remain, 3GB of RAM will still be 3GB of RAM, and you will thank god for the added graphics you get out of that last 1GB that cost you nothing more.

    The two games that have been GPU benchmarks for years and haven't been sponsored by either nVidia or AMD are Crysis Warhead and Metro 2033.

    If you wanna trash those results because BF3 is everything to you, you should totally do it though.
  • scook9 - Thursday, May 10, 2012 - link

    Crysis: Warhead is a "The Way It's Meant To Be Played" title...

    You see that every time you start it up as well as on the box.
    http://image.com.com/gamespot/images/bigboxshots/3...
  • eddman - Thursday, May 10, 2012 - link

    Two years from now 7970 won't be powerful enough anyway.

    As scook9 mentioned, Warhead is a TWIMTBP title and yet runs better on the 7970.
    It'd be better if you removed that tin foil hat. TWIMTBP and Gaming Evolved are programs to help developers code their games better.
    There are countless TWIMTBP games that run better on radeons.

    Crysis and Warhead use an old engine that isn't going to be used anymore. Nowadays they are just obsolete benchmarks.

    Metro 2033 is a very nice game and I really liked it, but it's not that popular and has a proprietary engine. Most gamers don't care about such an engine.

    Frostbite, OTOH, matters because it belongs to a major publisher/developer, which means we'll see many games based on it in the future.
  • SlyNine - Thursday, May 10, 2012 - link

    I'm pretty sure a 4870 (basically a 6770) is powerful enough today, so why wouldn't a 7970 be powerful enough by then?

    Just because an engine isn't going to be used anymore doesn't mean it isn't useful for gauging certain aspects of a video card. Many engines that will be used are not even developed yet, and some may push a card the way Crytek's engine did.

    CryEngine 2 is going to be used for MechWarrior Online, baby. (I'm glad it uses a good engine, and it looks like they are using it to good effect.)
  • eddman - Thursday, May 10, 2012 - link

    Because 3GB of memory is for high resolutions and high AA settings, and 2 years from now the 7970 won't have enough power to run those games at those settings at good frame rates.

    That doesn't make sense. Card A might run Max Payne 1 twice as fast as card B, but what would be the point?

    No, MechWarrior Online uses CryEngine 3, not 2. CryEngine 2, which was used in Crysis and Warhead, is dead.
  • SlyNine - Saturday, May 12, 2012 - link

    I meant CryEngine 3. Not sure why I said 2.

    There is no proof that 3GB won't be enough for high res by then. Yeah, maybe not (or maybe) with AA.

    Besides, you didn't say anything about running everything maxed out; you made a blanket statement that the 7970 won't be powerful enough, period.

    That means that card A does something that card B cannot, and depending on what that is, it may have an effect on engines that focus on certain things.
  • eddman - Saturday, May 12, 2012 - link

    I meant 7970 won't have enough shader power 2 years from now, so 3GB won't help then either.

    Yes, everything maxed out with high AA. After all, that's what large memory is for.

    Obsolete engine is obsolete. Deal with it. CryEngine 2 won't be used in any other AAA game. It's gone.
  • SlyNine - Saturday, May 12, 2012 - link

    A realtime engine will always tell you something about the card. Obsolete or not.

    If 3GB gives it some sort of advantage then it was worth it. In many games it's already showing an advantage at ultra high res.

    You're the only one saying that the sole use of a large video cache is AA at ultra settings, but that is simply a questionable premise.

    I really don't care if CryEngine 2 is used for a AAA game ever again. I still play Crysis. Furthermore, I don't give a damn about AAA games; most of them are dumbed down for mass appeal.
  • CeriseCogburn - Monday, June 11, 2012 - link

    At 7000X what rez is 3GB showing an advantage?

    ROFL - desperation
  • theprodigalrebel - Thursday, May 10, 2012 - link

    BF3 has sold 1.9 million copies worldwide.
    Metro 2033 has sold 0.16 million copies worldwide.
    Crysis is an old game that I don't see (m)any people playing.

    BF3 is also scheduled for three DLC releases (two this year, third next year).

    I see a perfectly good reason why BF3 performance matters. You are speculating that the 7900-series will have great Unreal Engine 4 performance. That's just silly, since nobody knows anything about Unreal Engine 4 performance yet.

    The only thing I could find was Hexus.net reporting that nVidia chose Kepler to demonstrate the Unreal Engine 4 at GDC.
