OCP Refined, A Word On Marketing, & The Test

As you may recall from our GTX 580 launch article, NVIDIA added a rudimentary OverCurrent Protection (OCP) feature to the GTX 500 series. At the time of the GTX 580 launch, OCP would clamp down on Furmark and OCCT to keep those programs from performing at full speed, as the load they generated was so high that it risked damaging the card. As a matter of principle we have been disabling OCP in all of our tests up until now, as OCP only targeted Furmark and OCCT and therefore didn’t provide any real protection for the card in any other situation. Our advice to NVIDIA at the time was to expand it to cover the hardware at a generic level, similar to how AMD’s PowerTune operates.

We’re glad to report that NVIDIA has taken up at least some of our advice, and that OCP is taking its first step forward since then. Starting with the ForceWare 267 series drivers, NVIDIA is now using OCP at all times, meaning OCP now protects against any possible program that would generate an excessive load (as defined by NVIDIA), and not just Furmark and OCCT. At this time there’s definitely still a driver component involved as NVIDIA still throttles Furmark and OCCT right off the bat, but everything else seems to be covered by their generic detection methods.

At this point our biggest complaint is that OCP’s operation is still not transparent to the end user. If you trigger it you have no way of knowing unless you already know how the game/application should be performing. NVIDIA tells us that at some point this will be exposed through NVIDIA’s driver API, but today is not that day. Along those lines, at least in the case of Furmark and OCCT, OCP still throttles to an excessive degree—whereas AMD gets this right and caps anything and everything at the PowerTune limit, we still see OCP clamp these programs so heavily that our GTX 590 draws 100W more under games than it does under Furmark. Clamping down on a program to bring power consumption down to safe levels is a good idea, but clamping down beyond that just hurts the user, and we hope to see NVIDIA change this.
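
To make the distinction concrete, here is a minimal sketch contrasting the two behaviors. The numbers are entirely hypothetical and the assumption that power scales linearly with clockspeed is a deliberate simplification; this is not AMD’s or NVIDIA’s actual algorithm, just an illustration of capping at the limit versus clamping well past it:

```python
def proportional_cap(clock_mhz, power_w, limit_w):
    """Throttle only as far as needed to bring power down to the limit.

    Assumes power scales roughly linearly with clockspeed, which is a
    simplification used purely for illustration.
    """
    if power_w <= limit_w:
        return clock_mhz
    return clock_mhz * (limit_w / power_w)


def fixed_clamp(clock_mhz, power_w, limit_w, clamp_ratio=0.5):
    """Throttle to a fixed fraction of the clock whenever the limit is exceeded."""
    if power_w <= limit_w:
        return clock_mhz
    return clock_mhz * clamp_ratio


# Hypothetical numbers: a 607MHz card drawing 500W against a 450W limit.
print(proportional_cap(607, 500, 450))  # ~546MHz: just enough throttling to hit the limit
print(fixed_clamp(607, 500, 450))       # 303.5MHz: far below what safety actually requires
```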

Finally, the expansion of OCP’s capabilities is going to have an impact on overclocking. As with reporting when OCP is active, NVIDIA isn’t being fully transparent here, so there’s a bit of feeling around at the moment. The OCP limit for any card is roughly 25% higher than the official TDP, which in the case of the GTX 590 translates into a roughly 450W limit. This limit cannot currently be changed by the end user, so overclocking—particularly overvolting—risks triggering OCP. Depending on how well its generic detection mode works, it may limit extreme overclocking on any NVIDIA card with OCP hardware. Even in our own overclock testing we have some results that may be compromised by OCP, so it’s definitely something that needs to be considered.
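
For reference, the 450W figure is simple arithmetic; here is a quick back-of-the-envelope sketch, assuming the GTX 590’s official 365W TDP (a published spec, not a number NVIDIA has confirmed for OCP purposes):

```python
# Back-of-the-envelope estimate of the OCP limit, assuming the GTX 590's
# official 365W TDP and the ~25% headroom described above.
tdp_w = 365
ocp_headroom = 0.25

ocp_limit_w = tdp_w * (1 + ocp_headroom)
print(f"Estimated OCP limit: {ocp_limit_w:.0f}W")  # ~456W, i.e. roughly 450W
```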

Moving on, I’d like to hit upon marketing quickly. Normally the intended market and uses of most video cards are rather straightforward. For the 6990 AMD pushed raw performance and Eyefinity (particularly 5x1P), while for the GTX 590 NVIDIA is pushing raw performance, 3D Vision Surround, and PhysX (even if dedicating a GF110 to PhysX is overkill). However every now and then something comes across that catches my eye. In this case NVIDIA is also using the GTX 590 to reiterate its support for SuperSample Anti-Aliasing under DX9, DX10, and DX11. SSAA is easily the best hidden feature of the GTX 400/500 series, and it’s one that doesn’t get much marketing attention from NVIDIA, so it’s good to see it getting some here—certainly there’s no card better suited for it than the GTX 590.

Last, but not least, we have the test. For the launch of the GTX 590 NVIDIA is providing us with the ForceWare 267.71 beta, which adds support for the GTX 590; there are no other significant changes. For cooling purposes we have removed the case fan behind PEG1 on our test rig—while an 11” card is short enough for the fan to fit, it’s counterproductive with a dual-exhaust cooler. Finally, in order to better compare the GTX 590 to the 6990’s OC/Uber mode, we’ve given our GTX 590 a slight overclock. Our GTX 590 OC is clocked at 750/900MHz, a 143MHz (23%) core and 47MHz (5%) memory overclock, while the core voltage was raised from 0.912v to 0.987v. Given the poor transparency of OCP’s operation, however, we are not 100% confident that we haven’t triggered OCP, so please keep that in mind when looking at the overclocked results.
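
For those who want to check our math, the overclock percentages work out as in the sketch below, assuming the GTX 590’s reference clocks of 607MHz core and 853.5MHz memory (published specs, not restated above):

```python
# Sanity check of the quoted overclock figures, assuming the GTX 590's
# reference clocks of 607MHz (core) and 853.5MHz (memory).
stock_core_mhz, stock_mem_mhz = 607, 853.5
oc_core_mhz, oc_mem_mhz = 750, 900

core_delta = oc_core_mhz - stock_core_mhz   # 143MHz
mem_delta = oc_mem_mhz - stock_mem_mhz      # 46.5MHz, i.e. ~47MHz
print(f"Core: +{core_delta}MHz ({core_delta / stock_core_mhz:.1%})")  # +143MHz (23.6%)
print(f"Memory: +{mem_delta}MHz ({mem_delta / stock_mem_mhz:.1%})")   # +46.5MHz (5.4%)
```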

Update, April 2nd, 2011: Starting with the 267.91 drivers and the release 270 drivers, NVIDIA has disabled overvolting on the GTX 590 entirely. This is likely a consequence of several highly-publicized incidents where GTX 590 cards died as a result of overvolting. Although it's unusual to see a card locked out of overvolting entirely, clearly this is where NVIDIA intends to be.

As an editorial matter we never remove anything from a published article, so our GTX 590 OC results will remain. However, with these newer drivers it is simply not possible to attain them.

CPU: Intel Core i7-920 @ 3.33GHz
Motherboard: Asus Rampage II Extreme
Chipset Drivers: Intel 9.1.1.1015
Hard Disk: OCZ Summit (120GB)
Memory: Patriot Viper DDR3-1333 3x2GB (7-7-7-20)
Video Cards: AMD Radeon HD 6990
AMD Radeon HD 6970
AMD Radeon HD 6950 2GB
AMD Radeon HD 6870
AMD Radeon HD 6850
AMD Radeon HD 5970
AMD Radeon HD 5870
AMD Radeon HD 5850
AMD Radeon HD 5770
AMD Radeon HD 4870X2
AMD Radeon HD 4870
EVGA GeForce GTX 590 Classified
NVIDIA GeForce GTX 580
NVIDIA GeForce GTX 570
NVIDIA GeForce GTX 560 Ti
NVIDIA GeForce GTX 480
NVIDIA GeForce GTX 470
NVIDIA GeForce GTX 460 1GB
NVIDIA GeForce GTX 460 768MB
NVIDIA GeForce GTS 450
NVIDIA GeForce GTX 295
NVIDIA GeForce GTX 285
NVIDIA GeForce GTX 260 Core 216
Video Drivers: NVIDIA ForceWare 262.99
NVIDIA ForceWare 266.56 Beta
NVIDIA ForceWare 266.58
NVIDIA ForceWare 267.71
AMD Catalyst 10.10e
AMD Catalyst 11.1a Hotfix
AMD Catalyst 11.4 Preview
OS: Windows 7 Ultimate 64-bit

Comments (123)

  • Ryan Smith - Thursday, March 24, 2011

    One way or another we will be including multi-monitor stuff. The problem right now is getting ahold of a set of matching monitors, which will take some time to resolve.
  • fausto412 - Thursday, March 24, 2011

    also would be nice to test 1680x1050 on at least a couple of demanding games. illustrate to people who have 22" screens that these cards are a waste of money at their resolution.
  • bigboxes - Thursday, March 24, 2011

    It has been a waste for that low resolution since two generations ago. But you knew that. Troll...
  • tynopik - Thursday, March 24, 2011

    matching monitors might matter for image quality or something, but for straight benchmarking, who cares?

    surely you have 3 monitors capable of 1920x1080

    it's not like the card cares if one is 20" and another is 24"
  • 7Enigma - Thursday, March 24, 2011

    I don't understand this either. There is no need for anything fancy, heck you don't even need to have them actually outputting anything, just fool the drivers into THINKING they are driving multiple monitors!
  • DanNeely - Thursday, March 24, 2011

    I don't entirely agree. While it doesn't matter much for simple average FPS benches like Anandtech is currently doing, they fall well short of the maximum playable settings testing done by sites like HardOCP.
  • strikeback03 - Thursday, March 24, 2011

    Remember, the AT editors are spread all over. So while between them they certainly have at least 3 1920x1080/1200 monitors, Ryan (doing the testing) probably doesn't.

    Plus with different monitors wouldn't response times possibly be different? I'd imagine that would be odd in gaming.
  • tynopik - Thursday, March 24, 2011

    > Remember, the AT editors are spread all over. So while between them they certainly have at least 3 1920x1080/1200 monitors, Ryan (doing the testing) probably doesn't.

    This has been a need for a while, and it's not like this review was completely unexpected, so not sure why they don't have a multi-monitor setup yet

    > Plus with different monitors wouldn't response times possibly be different? I'd imagine that would be odd in gaming.

    Well that's sort of the point, they wouldn't actually be gaming, so who cares?
  • Martin Schou - Thursday, March 24, 2011

    I would have thought that the marketing departments of companies like Asus, BenQ, Dell, Eizo, Fujitsu, HP, LaCie, LG, NEC, Philips, Samsung and ViewSonic would cream their pants at what is really very cheap PR.

    Supply sets of 3 or 5 1920x1080/1920x1200 displays and 3 or 5 2560x1440/2560x1600 displays in exchange for at least a full year's advertisement on a prominent tech website.

    If we use Dell as an example, they could supply a set of five U2211H and three U3011 monitors for a total cost of less than 5,900 USD per set. The 5,900 USD is what us regular people would have to pay, but in a marketing campaign it's really just a blip on the radar.

    Now, excuse me while I go dream of a setup that could pull games at 9,600x1080/5,400x1920 or 7,680x1600/4,800x2560 :D
  • Ryan Smith - Friday, March 25, 2011

    I'd just like to note that advertising is handled separately from editorial content. The two are completely compartmentalized so that ad buyers can't influence editorial control. Conversely as an editor I can't sell ad space.
