Meet The GeForce GTX 670

Because of GK104’s relatively low power consumption compared to past high-end NVIDIA GPUs, NVIDIA has developed a penchant for small cards. While the GTX 680 was a rather standard 10” long, NVIDIA also managed to cram the GTX 690 into the same amount of space. Meanwhile the GTX 670 takes this to a whole new level.

We’ll start at the back, as this is really where NVIDIA’s fascination with small size makes itself apparent. The complete card is 9.5” long; however, the actual PCB is far shorter at only 6.75”, making it 3.25” shorter than the GTX 680’s PCB. In fact it would be fair to say that rather than strapping a cooler onto a card, NVIDIA strapped a card onto a cooler. NVIDIA has certainly done short PCBs before – such as with one of the latest GTX 560 Ti designs – but never on a GTX x70 part. Given the similarities between GK104 and GF114, however, this isn’t wholly surprising – if not outright expected.

In any case this odd pairing of a small PCB with a large cooler is no accident. With a TDP of only 170W NVIDIA doesn’t necessarily need a huge PCB, but because they wanted a blower for a cooler, the cooler itself needed to be large. The positioning of the GPU and various electronic components meant that the only place to put a blower fan was off of the PCB entirely, as the GK104 GPU is already fairly close to the rear of the card. Meanwhile the choice of a blower seems largely driven by the fact that this is an x70 card – NVIDIA did an excellent job with the GTX 560 Ti’s open air cooler, which was designed for the same 170W TDP, so the choice is effectively arbitrary from a technical standpoint (there’s no reason to believe $400 customers are any less likely to have a well-ventilated case than $250 buyers). Accordingly, it will be NVIDIA’s partners stepping in with open air coolers of their own design.

Starting as always at the top, as we previously mentioned, the reference GTX 670 is outfitted with a 9.5”-long, fully shrouded blower. NVIDIA tells us that the GTX 670 uses the same fan as the GTX 680, and while the two are nearly identical in design, based on our noise tests they’re likely not identical. On that note, unlike on the GTX 680, the fan is no longer placed high to line up with the exhaust vent, so the GTX 670 is a bit more symmetrical in design than the GTX 680 was.


Note: We disassembled the virtually identical EVGA card here instead

Lifting the cooler we can see that NVIDIA has gone with a fairly simple design here. The fan vents into a block-style aluminum heatsink with a copper baseplate, providing cooling for the GPU. Elsewhere we’ll see a moderately sized aluminum heatsink clamped down on top of the VRMs towards the front of the card. There is no cooling provided for the GDDR5 RAM.


Note: We disassembled the virtually identical EVGA card here instead

As for the PCB, as we mentioned previously, the lower TDP of the GTX 670 has allowed NVIDIA to save some space. The VRM circuitry has been moved to the front of the card, leaving the GPU and the RAM towards the rear and allowing NVIDIA to simply omit a fair bit of PCB space. Of course with such small VRM circuitry the reference GTX 670 isn’t built for heavy overclocking – like the other GTX 600 cards, NVIDIA isn’t even allowing overvolting on reference GTX 670 PCBs – so it will be up to partners with custom PCBs to enable that kind of functionality. Curiously, only 4 of the 8 Hynix R0C GDDR5 RAM chips are on the front side of the PCB; the other 4 are on the rear. We typically only see rear-mounted RAM on cards with 16/24 chips, as 8/12 chips will easily fit on the same side.
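
As a quick sanity check on that layout, here is a minimal sketch of the memory arithmetic, assuming the GTX 670’s standard 2GB frame buffer and 256-bit memory bus (figures from NVIDIA’s published specs rather than from this paragraph): with 8 GDDR5 packages, each chip contributes 32 bits of bus width and 256MB (2Gb) of capacity, which is why 8 chips would also fit on a single side.

```python
# Minimal sketch of the GTX 670 memory layout arithmetic.
# Assumes the card's published 2GB / 256-bit GDDR5 configuration,
# spread across the 8 Hynix packages noted above (4 front, 4 rear).

total_capacity_mb = 2048   # 2GB of GDDR5 on the reference card
bus_width_bits = 256       # aggregate memory bus width
chip_count = 8             # GDDR5 packages on the PCB

bits_per_chip = bus_width_bits // chip_count             # 32 bits per chip
capacity_per_chip_mb = total_capacity_mb // chip_count   # 256MB (2Gb) per chip

print(f"{chip_count} chips x {bits_per_chip}-bit = {bus_width_bits}-bit bus")
print(f"{capacity_per_chip_mb}MB (2Gb) per chip")
```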

Elsewhere at the top of the card we’ll find the PCIe power sockets and SLI connectors. Since NVIDIA isn’t scrambling to save space like they were with the GTX 680, the GTX 670’s PCIe power sockets are laid out in a traditional side-by-side manner. As for the SLI connectors, since this is a high-end GeForce card NVIDIA provides 2 connectors, allowing for the card to be used in 3-way SLI.
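
Speaking of those power sockets, the power delivery math leaves plenty of headroom. The snippet below is a rough sketch assuming the reference design’s two 6-pin PCIe connectors; under the PCIe specification a 6-pin connector and the x16 slot are each good for 75W, which adds up to 225W available against the 170W TDP.

```python
# Rough power-budget sketch for the reference GTX 670.
# Assumes the two 6-pin PCIe connectors on the reference board;
# the PCIe spec rates a 6-pin connector and the x16 slot at 75W each.

tdp_watts = 170
slot_watts = 75
six_pin_watts = 75
six_pin_connectors = 2

available_watts = slot_watts + six_pin_connectors * six_pin_watts
headroom_watts = available_watts - tdp_watts

print(f"Power available: {available_watts}W vs. {tdp_watts}W TDP "
      f"({headroom_watts}W of headroom)")
```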

Finally at the front of the card NVIDIA is using the same I/O port configuration and bracket that we first saw with the GTX 680. This means 1 DL-DVI-D port, 1 DL-DVI-I port, 1 full size HDMI 1.4 port, and 1 full size DisplayPort 1.2. This also means the GTX 670 follows the same rules as the GTX 680 when it comes to being able to idle with multiple monitors.

Comments

  • SlyNine - Saturday, May 12, 2012 - link

    No, the 5870 was replaced by the 6970. The 5870 was faster than the 6870.

    The wall was coming: from the 9700 Pro that needed a power adapter, to video cards that need 2 power adapters and take 2 slots. That was how they got those 2x and even 4x increases. The 9700 Pro was as much as 6x faster than a 4600 at times.

    But like I said, this wall was coming, and from now on expect all performance improvements to be based on architecture and node improvements.
  • CeriseCogburn - Sunday, May 13, 2012 - link

    My text > " 4890-5870-6970 ???? "

    It was a typo earlier, dippy do.

    The 9700 Pro was not 6X faster than a 4600 ever, period - once again we have your spew and nothing else. But below we have the near EQUAL benchmarks.

    http://www.anandtech.com/show/947/20

    http://www.anandtech.com/show/947/22

    6X, 4X, 2X your rear end... another gigantic lie.

    Congrats on lies so big - hey at least your insane amd fanboy imagination and brainwashing of endless lies is being exposed.

    Keep up the good work.
  • Iketh - Thursday, May 10, 2012 - link

    do you listen to yourself? you're just as bad as wreckage....

    you have never and will never run a corporation
  • CeriseCogburn - Thursday, May 10, 2012 - link

    How can I disagree as obviously you are another internet blogger CEO - one of the many thousands we now have online with corporate business school degrees and endless babbling about profits without a single price cost for a single component of a single video card discussed under your belts.
    It's amazing how many of you tell us who can cut prices and remain profitable - when none of you have even the tiniest inkling of the cost of any component whatsoever, let alone the price it's sold at by nVidia or amd for that matter.
    I'm glad so many of you are astute and learned CEO mind masters, though.
  • chizow - Thursday, May 10, 2012 - link

    You really don't need to be an internet blogger or CEO, you don't even need a business degree although it certainly wouldn't hurt (especially in accounting).

    Just a rudimentary understanding of financial statements and you can easily understand Nvidia's business model, then see when and why they are most successful financially by looking at the market landscape and what products were selling and for how much.

    I can tell you right now, Nvidia was at its most profitable during G80 and G92's run of success (6 straight quarters of record profits that have been unmatched since), so we know for a fact what kind of revenues, margins and ASPs for components they can succeed with by looking at historical data.
  • CeriseCogburn - Friday, May 11, 2012 - link

    G92's were the most wide ranging selection of various cores hacks, bit width, memory config, etc- and released a enormous amount of different card versions - while this release is a flagship only tier thus far - so they don't relate at all.
    So no, you're stuck in the know exactly nothing spot I claimed you are, no matter what you spew about former releases.
    Worse than that, nVidia profit came from chipset sales and high end cards then - and getting information to show the G80 G92 G92b G94 etc profitability by itself will cost you a lot of money buying industry information.
    So you know nothing again, and tried to use a false equivalency.
    Thanks for trying though, and I certainly won't say you should change your personal stance on pricing of the "mid tier" 680, on the other hand I don't see you making a reasonable historical pricing/ performance/current prices release analysis - you haven't done that, and I've been reading all of your comments of course, and otherwise often agree with you.
    As I've said, the GTX580 was this year $499 - the 7970 released and 2.5 months later we're supposed to see the 580 killer not just at $499, but at $299 as the semi-accurate rumors and purported and unbelievable "insider anonymous information" rumors told us - that $299, since it was so unbelievable if examined at all, has become $399, or maybe $449, or $420, whatever the moaner wants it to be...
    I frankly don't buy any of it - and for good reason - this 680 came in as it did because it's a new core and they stripped it down for power/perf and that's that - and they drove amd pricing down.
    Now they're driving it down further.
    If the 680 hit at $299 like everyone claimed it was going to (bouncing off Charlie D's less than honest cranium and falling back on unquoted and anonymous "industry wide" claimed rumors or a single nVidia slide or posted trash prediction charts proven to be incorrect), then where would the 670 be priced at now ? $250 ?
    I suggest the performance increase along with the massive driver improvement bundle and keeping within the 300watt power requirements means that there is nowhere else to go right now.
    The "secret" "held back" performance is nowhere - the rumored card not here yet is a compute monster - so goodbye power/perf win and the giant PR advantage not to mention the vast body of amd fanboys standing on that alone - something nVidia NEVER planned to lead with this time - the big Kepler.
    It's not that nVidia outperformed itself, it's that their secrecy outperformed all the minds of the rabble - and all that's left is complainers who aren't getting something for nothing or something for half price as they hoped.
  • chizow - Thursday, May 10, 2012 - link

    I don't need to run a corporation to understand good and bad business. The fact there are *OUTRAGED* GTX 680 buyers who feel *CHEATED* after seeing the GTX 670 price:performance drives the point home.

    Nvidia really needs to be careful here as they've successfully upset their high-end target market on two fronts:

    1) High-end enthusiasts like myself who are upset they decided to follow AMD's lackluster price:performance curve and market a clearly mid-range ASIC (GK104) as a high-end SKU (GTX 670, 680, 690) and charge high-end premiums for it.

    2) High-end enthusiasts who actually felt the GTX 680 was worthy of its premium price tag and paid the $500 asking price, and often more, to get them - only to see that premium completely eroded by a card that performs within a few percentage points, yet costs 20% less and is readily available on the market.

    Talk about losing insane value overnight, you don't need to run a business to understand the kind of anger and angst that can cause.
  • CeriseCogburn - Friday, May 11, 2012 - link

    Well, the $$ BURN $$ is still less than the $$ BURN $$ the amd flagship cost - $130 + and that's the same card, not a need to be overclocked lower shader cut version.
    So as far as angry dollar burning, yeah, except amd has done worse in dollar costs than nvidia, and with the same card.
    Nice to know, hopefully your theory has a lot of strong teeth, then the high end buyers can hold back and drive the price down...
    ( seems a dream doesn't it )
  • CeriseCogburn - Friday, May 11, 2012 - link

    Let's not forget there rage guy, that 7970 burn of $130+ bucks just turned into a $180 or $200 burn.

    Yet, CURRENTLY, all GTX680 owners can unload for upwards of $500... LOL

    Not so for 7970 owners, they are already perma burned.

    I guess you just didn't think it through, it was more important to share a falsity and rage against nVidia.
    Nice try, you've failed.
  • chizow - Sunday, May 13, 2012 - link

    Yes I've said from Day 1 the 7970 was horribly overpriced; it was just an extension of the 40nm price:performance curve 18 months after the fact.

    But that doesn't completely let Nvidia off the hook since they obviously used AMD's weak offering as the launching point to use a mid-range ASIC as their high-end SKU.

    End result is the consumer gets the SMALLEST increase in performance for their money in the last decade of GPUs. I don't understand why this is so hard for you to understand. Look at the benchmarks, do the math and have a seat.
