Conclusion

Typically, vendor-overclocked cards are a tool for patching pricing gaps in a product lineup. They let a vendor offer better performance than a stock card, justifying a higher price and, for the vendor, a higher profit margin.

Gigabyte has gone beyond just filling the void between the GTX 260 Core 216 and the GTX 275. With the GTX 260 Super Overclock, they have produced a card just as fast as a GTX 275 at a lower price. Gigabyte has set the MSRP on this card at $199, while we can’t find a GTX 275 for anything under $209. It’s only $10, but then, what’s the difference between a GTX 275 and a GTX 260 that performs at the same level?

At this time it’s hard to justify purchasing a GTX 275 with the GTX 260 Super Overclock on the shelves. Certainly, if you intend to do further overclocking on your own, the GTX 260 Super Overclock is a poor choice, since Gigabyte has already squeezed out most of what the card can give. But if you’re not the kind of person who overclocks their video cards, this card’s performance matches a GTX 275’s for less money. We would also laud it as a well-built card after it turned in such impressive temperature and noise results, but as it turns out Gigabyte is their own enemy here: their GTX 275 is also an Ultra Durable card, and it’s that $209 GTX 275 we’ve been talking about. So build quality really doesn’t come into play, since we can get a similarly well-built GTX 275; the bottom line is all about price.

Competing on price alone is a dangerous place to be, however. With the launch of the 5800 series, AMD has a very fast DirectX 11 card for only $60 more at $259. As NVIDIA has not adjusted prices to meet the 5800 series, Gigabyte is left with little wiggle room, since the cost of acquiring the basic parts for a GTX 260 from NVIDIA hasn’t changed. We asked Gigabyte about this last week, and in spite of the 5800 launch they have no intention (or ability) to lower the price of the GTX 260 Super Overclock.

The performance difference between the GTX 260 Super Overclock (or a GTX 275) and a 5850 comes out to around 25%, depending on the game and resolution we’re looking at. With a price difference of 30%, the GTX 260 Super Overclock is still the better value based solely on performance per dollar, but it’s very close.
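To make that value comparison concrete, here is a minimal sketch of the math, using the prices and the roughly 25% performance gap cited above (the exact gap varies by game and resolution, so the performance figure is an assumption, not a benchmark result):

```python
# Rough value comparison: GTX 260 Super Overclock vs. Radeon HD 5850,
# using this review's figures. The ~25% performance delta is an average
# assumption; it varies by game and resolution.
gtx260_so_price = 199        # Gigabyte's MSRP
hd5850_price = 259           # 5850 price at the time
relative_perf_5850 = 1.25    # 5850 is ~25% faster

price_premium = hd5850_price / gtx260_so_price - 1   # what the 5850 costs extra
perf_premium = relative_perf_5850 - 1                # what the 5850 delivers extra

# Performance per dollar, normalized so the GTX 260 SO = 1.0
value_260so = 1.0 / gtx260_so_price
value_5850 = relative_perf_5850 / hd5850_price

print(f"5850 costs {price_premium:.0%} more for {perf_premium:.0%} more performance")
print(f"Performance per dollar, 5850 vs. 260 SO: {value_5850 / value_260so:.2f}")
# Prints roughly 30% more money for 25% more performance, i.e. the 5850
# delivers about 0.96x the performance per dollar of the GTX 260 SO.
```

A ratio below 1.0 means the GTX 260 Super Overclock edges out the 5850 on pure performance per dollar, though as the review notes, the margin is slim enough that the 5850's DirectX 11 support can easily tip the scales.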

Meanwhile, the Radeon 4890 is around $20 cheaper and trades blows with the GTX 260 Super Overclock depending on the game. Their noise and thermal characteristics differ greatly, but that is a product of our 4890 clearly having been tuned for lower temperatures at the expense of noise. Here the right card depends entirely on which games you’re interested in: if it’s a game the GTX 260 Super Overclock wins at, the margin is enough to justify the price difference.

Ultimately, Gigabyte would be in a better position if they could bring this card in at a lower price. By creating a GTX 260 with the performance of a GTX 275, they’ve thrown it into the ongoing fight between the GTX 275 and AMD’s offerings, a fight NVIDIA and its partners aren’t in a good position to win. Another $10 off would go a long way toward cementing this card’s position.

In conclusion, this leaves us with a four-tier recommendation depending on your situation. If you can afford a Radeon HD 5850, consider it. If you can’t, look at the games you want to play and see whether the Radeon HD 4890 is faster in them. If it isn’t, or you’re otherwise set on a $200 NVIDIA card, Gigabyte’s GTX 260 Super Overclock is a great choice: it offers every bit of the more expensive GTX 275’s performance at a lower price. Finally, if you’re willing to overclock on your own (your mileage may vary), Gigabyte’s GTX 275 with user overclocking should be able to separate itself from the heavily overclocked GTX 260 SO for just $10 more.

29 Comments

  • Leyawiin - Thursday, February 18, 2010 - link

    At this late stage of the game I decided to buy one of these. I game at 1680 x 1050 on XP and at that resolution this seems to be the best performing card for the money. The way it was built (better quality techniques and materials) is just icing on the cake. I don't want to wait for Fermi any longer (and I bet they will be out of my price range when/if they appear) and I don't want to spend $85-100 more for an HD 5850 for the small improvement it would give me on a 22" monitor. Should be a good match for my X4 955.
  • chizow - Monday, October 12, 2009 - link

    While it's understandable why it took so long to do a review like this (it's the first retail part clocked this high), these kinds of OC scaling results would be much more useful closer to the launch of a product line, to better determine the impact of individual clockspeeds and core/functional unit modifications.

    Derek did a few of these OC scaling drill-downs for ATI 4890 and GTX 275 I believe, but they were also very late given the GTX 260/280 and 4850/4870 had already been released for months. They would've been much more helpful if they were done at launch alongside the main reviews to give prospective buyers a better idea of the impact of actual hardware differences vs. software/artificial differences like clockspeed.

    The problem is Nvidia and ATI both mix hardware and clockspeed differences on these parts to obfuscate the actual performance delta between the parts, which is particularly significant because the ASICs are typically the same sans artificially neutered functional units. At the very least, launch reviews should normalize clockspeeds when possible to give a better idea of the impact of actual hardware differences.

    For example, with the 4850 vs. 4870, you have 625MHz vs 750MHz on the core along with GDDR3 and GDDR5. You can't really do much about the bandwidth disparity, but you can try to clock that 4850 up to 750MHz, which would give you a much better idea of the impact of the actual hardware differences and bandwidth. Similarly for the GTX 260 to 275, the original GTX 260s were clocked at 576/1242 and 275s were clocked at 633/1350. Normalizing clocks would then isolate the differences from the additional 8 TMU and 24 SP cluster rather than the artificial/binned difference in clockspeeds.

    To bring it full circle, doing these comparisons earlier wouldn't give you the guarantee a product would run at max overclocks like this factory OC'd part would, it would just set expectations when it actually mattered so that when an OC'd part like this does come along, you could just refer to the comparison done months ago and say "Yeah it performs and scales just as we thought it would when we did this Overclocking Comparison on GT200 parts 18 months ago".
  • SirKronan - Monday, October 12, 2009 - link

    There are two conclusions that would matter at all here:

    Performance per dollar
    and
    Performance per watt

    They got the performance per dollar, and the results aren't surprising at all. But this is half an article without the performance-per-watt aspect. As soon as they get a new Kill A Watt (or measure with a UPS that has a meter) and update the results, this article will actually be meaningful.

    An article comparing a lesser card overclocked to a faster card at stock MUST contain a power usage comparison, or it's missing half its teeth.
  • chizow - Tuesday, October 13, 2009 - link

    I typically don't find vertical price-to-performance comparisons relevant, because you're always going to have trouble competing with those dirt cheap options that are free after rebate. Sure, you may get great FPS per dollar, but what good does that do you when the aggregate FPS aren't playable? Similarly, with CF or SLI, it's very difficult for the higher-end parts to keep ahead of the price and performance that multi-GPU offers from lower-end parts.

    Performance per watt in this range of parts isn't hugely useful either, IMO; it's been this way for some time: high-end graphics cards consume a lot of power. There's no panacea that's going to fix this; if a card is in the upper tier of performance for its time, it's probably going to consume a similar amount of power to its predecessors. This only changes as newer processes and faster parts are introduced, where they typically outperform older parts in the same segment using similar power, or slightly less.

    I personally prefer comparisons to be made based on performance; then you can do a lateral comparison of price and power consumption within that performance segment. That way you have an expected performance level and can make an informed decision about the other key characteristics, like price and power consumption.
  • shotage - Monday, October 12, 2009 - link

    I agree with this. Power Utilization followed by noise is something I would be concerned about with overclocked cards.

    Personally, I don't think it's worth the money for a DX10 card anymore, even if it is fast. If I want a card, it's going to be DX11 capable.
  • Leyawiin - Monday, October 12, 2009 - link

    From the early reviews I've read, there's no contest between the HD 5770 and a stock GTX 260 (216) in all but the most ATI-friendly titles. The gap will be even greater with this card. The fact that it's very cool and quiet for the performance makes it even more compelling. And yes, the HD 5770 will be $160, if you can get it for that price (or get it at all for weeks after its release). I agree with Ryan on this one. Darn good deal for the price.
  • macs - Monday, October 12, 2009 - link

    My stock GTX 260 can overclock to those frequencies as well. I can't see any reason to buy these overclocked (and overpriced) video cards...
  • mobutu - Monday, October 12, 2009 - link

    Yep. At $160 the 5770 will (probably; we'll find out tomorrow) be very, very close in performance to this card priced at $200.
    So the choice is, or will be, clear.
  • poohbear - Monday, October 12, 2009 - link

    I'm not quite sure what the point of this review is, since the 5800 series has been released. In your conclusion you didn't even mention anything about DX11, and how the 260 isn't even capable of handling DX10.1, let alone the future DX11. For people who keep their graphics cards for two years before upgrading, this is an important factor.
  • Alexstarfire - Monday, October 12, 2009 - link

    I fail to see how DX11 matters, since no game is even slated to use it yet, that I've heard of anyway. It might be useful if you keep your card two years, but I'm guessing by that time the card probably won't be able to handle DX11, much like the first DX10 cards couldn't handle DX10 very well when it first debuted. Of course, going just off pricing and performance I wouldn't get this card anyway, or any NVIDIA card for that matter. I'm just saying.
