Meet The EVGA GeForce GTX 680 Classified

Breaking the card down as we always do, we’ll start with the fundamentals. As we typically see with these premium overclocking cards, the GTX 680 Classified is far bigger than a standard GTX 680. EVGA is still using a double-wide design, but with the need to fit additional VRM circuitry for overclocking the card is longer and taller than a standard GTX 680. Altogether the GTX 680 Classified measures 11 inches long and 4.95 inches tall, making it roughly an inch longer and an inch taller than NVIDIA’s reference design. Consequently this definitely isn’t a card that will fit in every case – the extra length should be fine in most cases, but depending on the case the extra height may be an issue.

Along with giving EVGA additional space on their board for power circuitry, the larger card also lets them use a larger cooler, which is necessary for the higher power levels the GTX 680 Classified is intended to operate at. EVGA is well known for their use of NVIDIA’s reference designs, but even when they strike out on their own they like to stick with blowers, making the GTX 680 Classified one of the few custom cards you’ll see with such a cooler.

Like the board itself, the cooler is oversized, with EVGA affixing an 80mm radial fan to the card. As with axial fans, radial fans can move more air as they increase in size, which means that EVGA can not only push more air than a standard GTX 680, but can do so at lower RPMs and with less noise. In fact the card’s fan speed is limited to just 55% in order to pass NVIDIA’s noise requirements, which means the card never gets the chance to reach the full potential of what this 24W fan can do.

On the other end of the cooling equation we have EVGA’s heatsink, which like the fan takes advantage of the larger card in order to fit a larger heatsink. This heatsink is a larger version of the stacked fin heatsinks we’ve seen NVIDIA use as of late, with a combination of a copper baseplate and a flat copper heatpipe providing transport to the rest of the aluminum heatsink. Truth be told, compared to the full-card heatsinks we often see on open-air coolers these stacked fin heatsinks are not particularly big, but the design proves to be quite efficient, as we’ll see.

Sitting below the primary heatsink is EVGA’s aluminum baseplate, which primarily serves to reinforce the card and protect the components, with a secondary duty of serving as a basic heatsink for the RAM and VRM circuitry. We’ve seen these baseplates take on heatsink-like features before – such as with the GTX 690 and its grooves/fins – but this is the first baseplate we’ve seen that uses pin fins instead of full fins.

Last but not least of course we have the real star of the show, EVGA’s custom PCB. As we alluded to earlier, most of the PCB’s additional size goes towards housing extra VRM circuitry, leaving the back half of the PCB densely populated while the top inch of the front half is effectively empty. EVGA is using a 14+3 phase VRM configuration here, with 14 phases supplying power to the GPU and another 3 phases supplying power to the GDDR5 RAM, as compared to a 4+2 configuration for the reference GTX 680.

VRM phases have become something of a competition between vendors, and while it’s not immediately clear whether 14 VRM phases are strictly necessary for this card, going above and beyond the reference GTX 680 certainly is. Not only are the additional phases necessary to smoothly supply power to the GPU and RAM when overclocking (and especially overvolting), but even in its stock state EVGA needs the extra phases thanks to the card’s higher default power target. EVGA’s default power target on the GTX 680 Classified is 250W (330W with the +132% power target), versus 170W (225W w/ +132%) on the reference GTX 680; this not only provides some headroom for overclocking, but it means the GTX 680 Classified can reach its top boost bin more often, as it doesn’t have to clock down to stay within the 170W power envelope.
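For those keeping score at home, the power target figures above are a straight percentage of the default target. A quick sketch of the math (the helper function is our own illustration, not vendor software):

```python
# Board power limit at a given power target percentage.
# Wattages and the 132% ceiling are the figures quoted above;
# max_power() is illustrative, not an EVGA/NVIDIA API.

def max_power(base_watts, target_percent):
    """Power limit in watts for a base target and a target percentage."""
    return base_watts * target_percent / 100.0

# GTX 680 Classified: 250W default, 330W at the +132% power target
print(max_power(250, 132))  # 330.0

# Reference GTX 680: 170W default, ~225W at +132%
print(max_power(170, 132))  # 224.4, which rounds to the quoted 225W
```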

Supplying this power is a pair of 8pin PCIe power sockets, which means on paper the GTX 680 Classified can safely draw up to 375W. In practice it’s not clear whether GK104 can actually take that, at least with air cooling, so pushing this card much beyond 300W is mostly in the realm of hardcore water and liquid nitrogen overclockers. Still, the GTX 680 Classified is meant to be overclocked and it definitely has the power delivery system necessary to achieve it.
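The 375W figure comes straight from the PCIe power delivery specs: 75W from the x16 slot plus 150W for each 8pin connector. A minimal sketch of that arithmetic (connector wattages are per the PCIe spec; the function itself is our own illustration):

```python
# Spec power budget for a PCIe card: 75W from the x16 slot,
# plus 75W per 6pin and 150W per 8pin external connector.

PCIE_SLOT_W = 75
CONNECTOR_W = {"6pin": 75, "8pin": 150}

def board_power_limit(connectors):
    """Maximum in-spec power draw for a card with the given connectors."""
    return PCIE_SLOT_W + sum(CONNECTOR_W[c] for c in connectors)

print(board_power_limit(["8pin", "8pin"]))  # GTX 680 Classified: 375
print(board_power_limit(["6pin", "6pin"]))  # reference GTX 680: 225
```

Note that the reference GTX 680’s 2x 6pin arrangement works out to 225W, which is exactly the card’s maximum power target mentioned earlier.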

This brings us to EVGA’s additional features on the PCB specifically for overclockers. First and foremost, to the right of the 8pin PCIe power sockets is the EVGA EVBot header. We will get into EVBot in a moment when we discuss voltage control in general, but this header is the key to maximizing the GTX 680 Classified’s potential. Further to the right we find EVGA’s BIOS selection switch, which has 3 states: Normal, OC, and LN2. The latter 2 BIOSes are in fact identical and exist to disable the card’s power target, enabling extreme overclocking. In our (admittedly conservative) experience disabling the power target is best left to water and LN2 overclockers, as the default BIOS seems to offer enough headroom (and heat generation) to keep the reference cooler busy.

Moving on, further still to the right we have a voltage monitoring header. Unfortunately the pins are not labeled and are difficult to access with probes, as EVGA intends for the header to be used with a not-yet-released adapter that will make it easier to attach probes. Without the adapter, a combination of EVBot and software monitoring is the best way to monitor the card’s voltage.

With overclocking out of the way, let’s take a look at the rest of the card. The GTX 680 Classified is a 4GB card, which means EVGA is using 16 GDDR5 RAM chips in a 16bit configuration – 8 on the front, with the other 8 on the rear. These are the same 6GHz Hynix chips we saw on the GTX 680, which explains why EVGA isn’t shipping the card with a memory overclock. As near as we can tell neither Samsung nor Hynix are actually shipping 6.5GHz/7GHz GDDR5 in volume, which would mean that EVGA cannot put faster RAM on the card (though they could still overclock).
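The memory configuration described above can be worked out numerically: sixteen chips filling 4GB implies 2Gb per chip, and spreading GK104’s 256-bit bus across 16 chips is what yields the 16bit mode. A quick sketch (the 2Gb chip density is inferred from 4GB / 16 chips; variable names are our own):

```python
# Memory configuration math for the GTX 680 Classified, per the
# figures above: 16 GDDR5 chips, 256-bit bus, 6GHz data rate.

CHIPS = 16
CHIP_GBIT = 2             # 2Gb per chip (inferred from 4GB / 16 chips)
BUS_WIDTH_BITS = 256      # GK104's memory bus width
DATA_RATE_GBPS = 6        # 6GHz effective GDDR5 data rate

capacity_gb = CHIPS * CHIP_GBIT / 8             # gigabits -> gigabytes
width_per_chip = BUS_WIDTH_BITS // CHIPS        # bus bits per chip
bandwidth = DATA_RATE_GBPS * BUS_WIDTH_BITS / 8 # GB/s

print(capacity_gb)     # 4.0 GB
print(width_per_chip)  # 16 bits per chip, hence the 16bit mode
print(bandwidth)       # 192.0 GB/s, same as a reference GTX 680
```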

Finally we have the GTX 680 Classified’s display output configuration. EVGA is using NVIDIA’s reference configuration here, which means 2 DL-DVI ports, 1 DisplayPort, and 1 HDMI port. The HDMI port is admittedly a bit of a head scratcher – we can’t seriously imagine anyone using such a card to drive a TV, but there you go. EVGA has put the EVBot header here on previous Classified cards, and that may have been a better use of the space in this case.

Wrapping up our look at the GTX 680 Classified, let’s talk about marketing, pricing and availability. One of the first things we asked EVGA about the GTX 680 Classified is what market segment they’re shooting for, since ultra-premium is a rather broad category. The GTX 680 Classified is very much a halo part for EVGA, and while it will grab the attention of the press and fans with headlines like hitting 2GHz on LN2, EVGA tells us that most buyers will stick to air cooling. So while the card was built for insane overclocks and abuse on water and LN2 cooling, EVGA clearly expects to sell many of these cards to buyers that will never go beyond the card’s shipping configuration.

Moving on to volume and pricing, Classified cards are usually low-volume parts. Despite that the card has regularly been available from EVGA, so all indications are that the volume of cards is high enough for the market segment EVGA is going after. The toughest part to get over will be pricing: the GTX 680 Classified is a premium variant of what was already a premium product (GTX 680), so EVGA is charging a premium price. The GTX 680 Classified will set you back $660, making it the most expensive GTX 680 currently available. Much of this price hike comes down to the RAM – 4GB GTX 680s start at $590 on Newegg – but even then there’s a further premium thanks to the customized hardware and the fact that this is EVGA’s highest factory overclock. Compared to their next-cheapest 4GB GTX 680, the GTX 680 FTW+, the Classified still carries a $40 premium.

Of course, this doesn’t include the cost of the EVBot controller. If you want one of those – and if you’re intending to overvolt you will – you’ll need to shell out an additional $80, which brings the price of an entire GTX 680 Classified kit to $740. If the hardware isn’t a strong argument that the GTX 680 Classified is an ultra-premium product, the pricing will be an even better argument.

75 Comments

  • HisDivineOrder - Tuesday, July 24, 2012 - link

    To be fair, AMD started the gouging with the 7970 series, its moderate boost over the 580 series, and its modest mark-up over that line.

    When nVidia saw what AMD had launched, they must have laughed and rubbed their hands together with glee. Because their mainstream part was beating it and it cost them a LOT less to make. So they COULD have passed those savings onto the customer and launched at nominal pricing, pressuring AMD with extremely low prices that AMD could probably not afford to match...

    ...or they could join with the gouging. They joined with the gouging. They knocked the price down by $50 and AMD's pricing (besides the 78xx series) has been in a freefall ever since.
  • CeriseCogburn - Tuesday, July 24, 2012 - link

    You people are using way too much tin foil, it's already impinged bloodflow to the brain from its weight crimping that toothpick neck... at least the amd housefire heatrays won't further cook the egg under said foil hat.

    Since nVidia just barely has recently gotten a few 680's and 670's in stock on the shelves, how, pray tell, would they produce a gigantic 7 billion transistor chip that it appears no forge, even the one in Asgard, could possibly currently have produced on time for any launch, up to and perhaps past even today?

    See that's what I find so interesting. Forget reality, the Charlie D semi accurate brain fart smell is a fine delicacy for so many, that they will never stop inhaling.

    Wow.

    I'll ask again - at what price exactly was the 680 "midrange" chip supposed to land at ? Recall the GTX580 was still $499+ when amd launched - let's just say since nVidia was holding back according to the 20lbs of tinfoil you guys have lofted, they could have released GTX680 midrange when their GTX580 was still $499+ - right when AMD launched... so what price exactly was GTX680 supposed to be, and where would that put the rest of the lineups on down the price alley ?

    Has one of you wanderers EVER contemplated that ? Where are you going to put the card lineups with GTX680 at the $250-$299 midrange in January ? Heck ... even right now, you absolute geniuses ?
  • natsume - Sunday, July 22, 2012 - link

    For that price, I would rather have the Sapphire HD 7970 Toxic 6GB @ 1200MHz
  • CeriseCogburn - Tuesday, July 24, 2012 - link

    Currently unavailable it appears.

    And amd fan boys have told us the 7970 overclocks so well (to 1300mhz, they claim), so who cares.

    Toxic starts at 1100, and no amd fan boy would admit the run of the mill 7970 can't do that out of the box, as it's all we've heard now since January.

    It's nice seeing 6GB on a card though that cannot use even 3GB and maintain a playable frame rate at any resolution or settings, including attempts with 100+ Skyrim mods at once.
  • CeriseCogburn - Tuesday, July 24, 2012 - link

    Sad how it loses so often to a reference GTX680 in 1920 and at triple monitor resolutions.

    http://www.overclockersclub.com/reviews/sapphire__...
  • Sabresiberian - Sunday, July 22, 2012 - link

    One good reason not to have it is the fact that software overclocking can sometimes be rather wonky. I can see Nvidia erring on the cautious side to protect their customers from untidy programs.

    EVGA is a company I want to love, but they are, in my opinion, one that "almost" goes the extra mile. This card is a good example, I think. Their customers expressed a desire for unlocked voltage and 4GB cards (or "more than 2GB"), and they made it for us.

    But they leave the little things out. Where do you go to find out what those little letters mean on the EVBot display? I'll tell you where I went - to this article. I looked in the EVBot manual, looked up the manual online to see if it was updated - it wasn't; scoured the website and forums, and nowhere could I find a breakdown of what the list of voltage settings meant from EVGA!

    I'm not regretting my purchase of this card; it is a very nice piece of hardware. It just doesn't have the 100% commitment to it a piece of hardware like this should.

    But then, EVGA, in my opinion, does at least as well as anybody. MSI is an excellent company, but they released their Lightning that was supposed to be over-voltable without a way to do it. Asus makes some of the best stuff in the business - if their manufacturing doesn't bungle the job and leave film that needs to be removed between heatsinks and what they should be attached to.

    Cards like this are necessarily problematic. To make them worth their money in a strict results sense, EVGA would have to guarantee they overclock to something like 1400MHz. If they bin to that strict of a standard, why don't they just factory overclock to 1400 to begin with?

    And, what's going to be the cost of a chip guaranteed to overclock that high? I don't know; I don't know what EVGA's current standards are for a "binning for the Classified" pass, but my guess is it would drive the price up, so that cost value target will be missed again.

    No, you can judge these cards strictly by value for yourself, that's quite a reasonable thing to do, but to be fair you must understand that some people are interested in getting value from something other than better frame rates in the games they are playing. For this card, that means the hours spent overclocking - not just the results, the results are almost beside the point, but the time spent itself. In the OC world that often means people will be disappointed in the final results, and it's too bad companies can't guarantee better success - but if they could, really what would be the point for the hard-core overclocker? They would be running a fixed race, and for people like that it would make the race not worth running.

    These cards aren't meant for the general-population overclocker that wants a guaranteed more "bang for the buck" out of his purchase. Great OCing CPUs like Nehalem and Sandy Bridge bring a lot of people into the overclocking world that expect to get great results easily, that don't understand the game it is for those who are actually responsible for discovering those great overclocking items, and that kind of person talks down a card like this.

    Bottom line - if you want a GTX 680 with a guaranteed value equivalent to a stock card, then don't buy this card! It's no more meant for you than a Mack truck is meant to be a family car. However, if you are a serious overclocker that likes to tinker and wants the best starting point, this may be exactly what you want.

    ;)
  • Oxford Guy - Sunday, July 22, 2012 - link

    Nvidia wasn't happy with the partners' designs, eh? Oh please. We all remember the GTX 480. That was Nvidia's doing, including the reference card and cooler. Their partners, the ones who didn't use the awful reference design, did Nvidia a favor by putting three fans on it and such.

    Then there's the lack of mention of Big Kepler on the first page of this review, even though it's very important for framing since this card is being presented as "monstrous". It's not so impressive when compared to Big Kepler.

    And there's the lack of mention that the regular 680's cooler doesn't use a vapor chamber like the previous generation card (580). That's not the 680 being a "jack of all trades and a master of none". That's Nvidia making an inferior cooler in comparison with the previous generation.
  • CeriseCogburn - Tuesday, July 24, 2012 - link

    I, for one, find the 3rd to the last paragraph of the 1st review page a sad joke.

    Let's take this sentence for instance, and keep in mind the nVidia reference cooler does everything better than the amd reference:
    " Even just replacing the cooler while maintaining the reference board – what we call a semi-custom card – can have a big impact on noise, temperatures, and can improve overclocking. "

    One wonders why amd epic failure in comparison never gets that kind of treatment.

    If nVidia doesn't find that sentence I mentioned a ridiculous insult, I'd be surprised, because just before that, they got treated to this one: " NVIDIA’s reference design is a jack of all trades but master of none "

    I guess I wouldn't mind one bit if the statements were accompanied by flat out remarks that despite the attitude presented, amd's mock up is a freaking embarrassingly hot and loud disaster in every metric of comparison...

    I do wonder where all these people store all their mind bending twisted hate for nVidia, I really do.

    The 480 cooler was awesome because one could simply remove the gpu sink and still have a metal covered front of the pcb card and thus a better gpu HS would solve OC limits, which were already 10-15% faster than 5870 at stock and gaining more from OC than the 5870.

    Speaking of that, we're supposed to still love the 5870; this site claimed the 5850 that lost to the 480 and 470 was the best card to buy, and to this day our amd fans proclaim the 5870 a king, compare it to their new best bang 6870 and 6850 that were derided for lack of performance when they came out, and now 6870 CF is some wonderkin for the fan boys.

    I'm pretty sick of it. nVidia spanked the 5000 series with their 400 series, then slammed the GTX460 down their throats to boot - the card all amd fans never mention now - pretending it never existed and still doesn't exist...
    It's amazing to me. All the blabbing stupid praise about amd cards and either don't mention nVidia cards or just cut them down and attack, since amd always loses, that must be why.
  • Oxford Guy - Tuesday, July 24, 2012 - link

    Nvidia cheaped out and didn't use a vapor chamber for the 680 as it did with the 580. AMD is irrelevant to that fact.

    The GF100 has far worse performance per watt, according to techpowerup's calculations than anything AMD released in 40nm. The 480 was very hot and very loud, regardless of whether AMD even existed in the market.

    AMD may have a history of using loud inefficient cooling, but my beef with the article is that Nvidia developed a more efficient cooler (580's vapor chamber) and then didn't bother to use it for the 680, probably to save a little money.
  • CeriseCogburn - Wednesday, July 25, 2012 - link

    The 680 is cooler and quieter and higher performing all at the same time than anything we've seen in a long time, hence "your beef" is a big pile of STUPID dung, and you should know it, but of course, idiocy never ends here with people like you.

    Let me put it another way for the faux educated OWS corporate "profit watcher" jack***: " It DOESN'T NEED A VAPOR CHAMBER YOU M*R*N ! "

    Hopefully that penetrates the inbred "Oxford" stupidity.

    Thank so much for being such a doof. I really appreciate it. I love encountering complete stupidity and utter idiocy all the time.
