The Test

CPU: Intel Core i7-920 @ 3.33GHz
Motherboard: Intel DX58SO (Intel X58)
Chipset Drivers: Intel 9.1.1.1015 (Intel)
Hard Disk: Intel X25-M SSD (80GB)
Memory: Patriot Viper DDR3-1333 3 x 2GB (7-7-7-20)
Video Cards:

ATI Radeon HD 5870
ATI Radeon HD 5850
ATI Radeon HD 4870 X2
ATI Radeon HD 4890
ATI Radeon HD 4870 1GB
ATI Radeon HD 4850
ATI Radeon HD 3870
NVIDIA GeForce GTX 295
NVIDIA GeForce GTX 285
NVIDIA GeForce GTX 275
Gigabyte GTX 260 Super Overclock
NVIDIA GeForce GTX 260 Core 216
NVIDIA GeForce GTS 250
NVIDIA GeForce 8800GT

Video Drivers:

NVIDIA ForceWare 190.62
ATI Catalyst Beta 8.66
ATI Catalyst 9.9

OS: Windows 7 Ultimate 64-bit

Comments (29)

  • Leyawiin - Thursday, February 18, 2010 - link

    At this late stage of the game I decided to buy one of these. I game at 1680 x 1050 on XP, and at that resolution this seems to be the best-performing card for the money. The way it was built (better-quality techniques and materials) is just icing on the cake. I don't want to wait for Fermi any longer (and I bet those cards will be out of my price range when/if they appear), and I don't want to spend $85-100 more for an HD 5850 for the small improvement it would give me on a 22" monitor. Should be a good match for my X4 955.
  • chizow - Monday, October 12, 2009 - link

    While it's understandable why it took so long to do a review like this (it's the first retail part clocked this high), these kinds of OC scaling results would be much more useful closer to the launch of a product line, to better determine the impact of individual clockspeeds and core/functional-unit modifications.

    Derek did a few of these OC scaling drill-downs for the ATI 4890 and GTX 275, I believe, but those were also very late given that the GTX 260/280 and 4850/4870 had already been on sale for months. They would have been much more helpful if done at launch, alongside the main reviews, to give prospective buyers a better idea of the impact of actual hardware differences vs. software/artificial differences like clockspeed.

    The problem is that Nvidia and ATI both mix hardware and clockspeed differences on these parts to obfuscate the actual performance delta between them, which is particularly significant because the ASICs are typically the same sans artificially neutered functional units. At the very least, launch reviews should normalize clockspeeds when possible to give a better idea of the impact of the actual hardware differences.

    For example, with the 4850 vs. the 4870, you have 625MHz vs. 750MHz on the core along with GDDR3 vs. GDDR5. You can't really do much about the bandwidth disparity, but you can clock that 4850 up to 750MHz, which would give you a much better idea of the impact of the actual hardware differences and bandwidth (a worked sketch of this idea follows this comment). Similarly for the GTX 260 vs. the 275: the original GTX 260s were clocked at 576/1242 and the 275s at 633/1350. Normalizing clocks would then isolate the difference coming from the additional cluster of 24 SPs and 8 TMUs rather than the artificial/binned difference in clockspeeds.

    To bring it full circle, doing these comparisons earlier wouldn't guarantee that a given product would reach max overclocks the way this factory-OC'd part does; it would just set expectations when it actually mattered, so that when an OC'd part like this comes along, you could refer back to the comparison done months ago and say, "Yeah, it performs and scales just as we thought it would when we did this overclocking comparison on GT200 parts 18 months ago."
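For readers who want to see the arithmetic behind the clock-normalization idea in the comment above, here is a minimal sketch. The FPS figures are hypothetical placeholders, and the assumption of perfectly linear scaling with core clock is an idealization, not a measured result from this review:

```python
# Hypothetical illustration of clock-normalizing two cards to separate
# binned clockspeed differences from real hardware differences.
# None of these numbers are measured results; they are placeholders.

def clock_normalized_gap(fps_slow, clock_slow_mhz, fps_fast, clock_fast_mhz):
    """Scale the slower card's result up to the faster card's core clock
    (assuming ideal linear scaling) and report the gap that remains,
    i.e. the part attributable to hardware/bandwidth differences."""
    fps_slow_at_fast_clock = fps_slow * (clock_fast_mhz / clock_slow_mhz)
    remaining_gap = fps_fast / fps_slow_at_fast_clock - 1.0
    return fps_slow_at_fast_clock, remaining_gap

# e.g. a hypothetical HD 4850 (625 MHz) vs. HD 4870 (750 MHz) result:
normalized_fps, gap = clock_normalized_gap(40.0, 625, 55.0, 750)
print(f"4850 scaled to 750 MHz: {normalized_fps:.1f} fps "
      f"(remaining hardware/bandwidth gap: {gap:.0%})")
```

In practice the linear-scaling assumption overstates the clockspeed contribution, since memory bandwidth and other limits cap real scaling; that is why the comment suggests actually reclocking the slower card rather than extrapolating.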
  • SirKronan - Monday, October 12, 2009 - link

    There are only two conclusions that matter here:

    Performance per dollar
    and
    Performance per watt

    They got the performance per dollar, and the results aren't surprising at all. But this is half an article without the performance-per-watt aspect. As soon as they get a new Kill A Watt (or measure with a UPS that has a meter) and update the results, this article will actually be meaningful.

    An article comparing a lesser card overclocked against a faster card at stock MUST contain a power-usage comparison, or it's missing half its teeth. (A quick sketch of both metrics follows this comment.)
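As an illustrative aside, the two metrics the comment asks for reduce to simple ratios. The numbers below are made-up placeholders, with wall power standing in for whatever a Kill A Watt or metered UPS would report:

```python
# Hypothetical illustration of frames per second per dollar and per watt.
# All inputs are placeholder values, not measurements from this review.

def perf_per_dollar(avg_fps: float, price_usd: float) -> float:
    return avg_fps / price_usd

def perf_per_watt(avg_fps: float, wall_power_w: float) -> float:
    return avg_fps / wall_power_w

# e.g. a hypothetical $200 card averaging 60 fps at 280 W system draw:
print(f"{perf_per_dollar(60, 200):.2f} fps/$  |  {perf_per_watt(60, 280):.2f} fps/W")
```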
  • chizow - Tuesday, October 13, 2009 - link

    I typically don't find vertical price-to-performance comparisons relevant, because you're always going to have trouble competing with those dirt-cheap options that are free after rebate. Sure, you may get great FPS per dollar, but what good does that do you when the aggregate FPS isn't playable? Similarly, with CF or SLI, it's very difficult for the higher-end parts to keep ahead of the price and performance that multi-GPU setups of lower-end parts offer.

    Performance per watt in this range of parts isn't hugely useful either, in my opinion; it's been this way for some time: high-end graphics cards consume a lot of power. There's no panacea that's going to fix that. If a card is in the upper tier of performance for its time, it's probably going to consume a similar amount of power to its predecessors. This only changes as newer processes and faster parts are introduced, which typically outperform older parts in the same segment while using similar power, or slightly less.

    I personally prefer comparisons to be made based on performance; then you can do a lateral comparison of price and power consumption within that performance segment. That way you have an expected performance level and can make an informed decision about the other key characteristics, like price and power consumption.
  • shotage - Monday, October 12, 2009 - link

    I agree with this. Power utilization, followed by noise, is something I would be concerned about with overclocked cards.

    Personally, I don't think it's worth the money for a DX10 card anymore, even if it is fast. If I want a card, it's going to be DX11-capable.
  • Leyawiin - Monday, October 12, 2009 - link

    From the early reviews I've read, there's no contest between the HD 5770 and a stock GTX 260 (216) in all but the most ATI-friendly titles. The gap will be even greater with this card. The fact that it's very cool and quiet for the performance makes it even more compelling. And yes, the HD 5770 will be $160, if you can get it for that price (or get it at all for weeks after its release). I agree with Ryan on this one. Darn good deal for the price.
  • macs - Monday, October 12, 2009 - link

    My stock GTX 260 can overclock to those frequencies as well. I can't see any reason to buy these overclocked (and overpriced) video cards...
  • mobutu - Monday, October 12, 2009 - link

    Yep. At $160, the 5770 will (probably; we'll find out tomorrow) be very, very close in performance to this card priced at $200.
    So the choice is, or soon will be, clear.
  • poohbear - Monday, October 12, 2009 - link

    I'm not quite sure what the point of this review is since the 5800 series has been released. In your conclusion you didn't even mention anything about DX11, or the fact that the GTX 260 isn't even capable of handling DX10.1, let alone the upcoming DX11. For people who keep their graphics cards for two years before upgrading, this is an important factor.
  • Alexstarfire - Monday, October 12, 2009 - link

    I fail to see how DX11 matters, since no game is even slated to use it yet, that I've heard of anyway. It might matter if you keep your card for two years, but I'm guessing that by then this card probably wouldn't handle DX11 well anyway, much like the first DX10 cards couldn't handle DX10 very well when it debuted. Of course, just going by pricing and performance, I wouldn't get this card anyway, or any NVIDIA card for that matter. I'm just saying.
