The Cards and The Test

In the AMD department, we received two cards: one an overclocked part from HIS, the other a stock-clocked part from ASUS. Guess which one AMD sent us for the review. No, it's no problem; we're used to it. This is what happens when we get cards from NVIDIA all the time: they argue and argue for the inclusion of overclocked numbers in GPU reviews when it's their GPU we're looking at. Of course, when the tables are turned, so are the opinions. We sincerely appreciate ASUS sending us this card, and we used it for our tests in this article. The original intent in trying to get hold of two cards was to run CrossFire numbers, but we only have one GTX 275, and we would prefer to wait until we can compare the two multi-GPU setups before getting into that angle.

The ASUS card also includes a utility called Voltage Tweaker that allows gamers to increase some voltages on their hardware to help improve overclocking. We didn't have the chance to play with the feature ourselves, but more control is always a nice feature to have.

For the Radeon HD 4890, our hardware specs are pretty simple: take a 4870 1GB and overclock it. Crank the core up 100 MHz to 850 MHz and the memory clock up 75 MHz to 975 MHz. That's the Radeon HD 4890 in a nutshell. However, to reach these clock levels, AMD revised the core, adding decoupling capacitors and new timing algorithms and altering the ASIC power distribution for enhanced operation. These slight changes increased the transistor count from 956M to 959M. Otherwise, the core features/specifications (texture units, ROPs, z/stencil) remain the same as the HD 4850/HD 4870 series.

Most vendors will also be selling overclocked variants that run the core at 900 MHz. AMD would like to treat these overclocked parts as a separate entity altogether, but we will continue to treat them as enhancements of the stock version, whether they come from NVIDIA or AMD. In our eyes, the difference between, say, an XFX GTX 275 and an XFX GTX 275 XXX is XFX's call; the latter is their part enhancing the stock version. We aren't going to look at the XFX 4890 and the XFX 4890 XXX any differently. In reviews of vendors' cards, we'll consider overclocked performance closely, but for a GPU launch we will focus on the baseline version of the card.

On the NVIDIA side, we received a reference version of the GTX 275. It looks similar in design to the other GT200-based hardware.

Under the hood is the same setup as half of a GTX 295, but with higher clock speeds. That means the GTX 275 has the memory amount and bandwidth of the GTX 260 (448-bit wide bus) but the shader count of the GTX 280 (240 SPs). On top of that, the GTX 275 posts clock speeds closer to the GTX 285 than the GTX 280: core clock is up 31 MHz from the GTX 280 to 633 MHz, shader clock is up 108 MHz to 1404 MHz, and memory clock is also up 108 MHz to 2322 MHz (effective). This means that in shader-limited cases we should see performance closer to the GTX 285, and in bandwidth-limited cases we'll still be faster than the GTX 260 Core 216 because of the clock speed boost across the board.

Rather than just an overclock of a pre-existing card, this is a blend of the two configurations it was born from, with an overclock on top. And sure, it's also half a GTX 295, which is convenient for NVIDIA. It's not just that it's different; this setup should have a lot to offer, especially in games that aren't bandwidth limited.

That wraps it up for the cards we're focusing on today. Here's our test system, which is the same as in our GTS 250 article except for the addition of a couple of new drivers.

The Test

Test Setup

CPU: Intel Core i7-965 (3.2GHz)
Motherboard: ASUS Rampage II Extreme (X58)
Video Cards: ATI Radeon HD 4890
             ATI Radeon HD 4870 1GB
             ATI Radeon HD 4870 512MB
             ATI Radeon HD 4850
             NVIDIA GeForce GTX 285
             NVIDIA GeForce GTX 280
             NVIDIA GeForce GTX 275
             NVIDIA GeForce GTX 260 Core 216
Video Drivers: Catalyst 8.12 hotfix (9.4 Beta for HD 4890)
               ForceWare 185.65
Hard Drive: Intel X25-M 80GB SSD
RAM: 6 x 1GB DDR3-1066 7-7-7-20
Operating System: Windows Vista Ultimate 64-bit SP1
PSU: PC Power & Cooling Turbo Cool 1200W
294 Comments

  • piesquared - Thursday, April 2, 2009 - link

    Must be tough trying to write a balanced review when you clearly favour one side of the equation. Seriously, you toe NV's line without hesitation, including soon-to-be-extinct physx, a reviewer-released card, and drivers unreleased at the time of your review. And here's the kicker: you ignore the OC potential of AMD's new card, which, as you know, is one of its major selling points.

    Could you possibly bend over any further for NV? Obviously you are perfectly willing to do so. F'n frauds
  • Chlorus - Friday, April 3, 2009 - link

    What?! Did you even read the article? They specifically say they cannot really endorse PhysX or CUDA and note the lack of support in any games. I think you're the one toeing a line here.
  • SiliconDoc - Monday, April 6, 2009 - link

    The red fanboys have to chime in with insanities so the reviewers can claim they're fair because "both sides complain".
    Yes, the red rooster whiner never read the article, because if he had he would remember the line that neither card overclocked well, and that overclocking would come in a future review (in other words, they were rushed again, or got a chum card and knew it - whatever).
    So they didn't ignore it, they failed on execution - and delayed it for later, so they say.
    Yeah, red rooster boy didn't read.
  • tamalero - Thursday, April 9, 2009 - link

    jesus dude, you have a strong persecution complex, right?
    It's like "oh noes, they're going against my beloved nvidia, I MUST STOP THEM AT ALL COSTS".
    I wonder how much nvidia pays you? (if not, you're sad..)
  • SiliconDoc - Thursday, April 23, 2009 - link

    That's interesting, not a single counterpoint, just two whining personal attacks.
    Better luck next time - keep flapping those red rooster wings.
    (You don't have any decent counterpoints to the truth, do you, flapper?)
    Sometimes things are so out of hand someone has to say it - I'm still waiting for the logical rebuttals - but you don't have any, neither does anyone else.
  • aguilpa1 - Thursday, April 2, 2009 - link

    All these guys talking about how irrelevant physx is and how few games use it don't get it. The power of physx is bringing the full strength of those GPUs to bear on everyday apps like CS4 or Badaboom video encoding. I used to think it was kind of gimmicky myself, until I bought the "very" inexpensive badaboom encoder and wow, how awesome was that! I forgot all about the games.
  • Rhino2 - Monday, April 13, 2009 - link

    You forgot all about gaming because you can encode video faster? I guess we are just 2 different people. I don't think I've ever needed to encode a video for my ipod in 60 seconds or less, but I do play a lot of games.
  • z3R0C00L - Thursday, April 2, 2009 - link

    You're talking about CUDA not Physx.

    Physx is useless as HavokFX will replace it as a standard through OpenCL.
  • sbuckler - Thursday, April 2, 2009 - link

    No, physx has the market; HavokFX is currently demoing what physx did 2 years ago.

    What will happen is the moment HavokFX becomes anything approaching a threat nvidia will port Physx to OpenCL and kill it.

    As far as ATI users are concerned the end result is the same - you'll be able to use physics acceleration on your card.
  • z3R0C00L - Thursday, April 2, 2009 - link

    You do realize that Havok Physics is used in more games than Physx, right (including all the Source engine based games)?

    And that Diablo 3 makes use of Havok Physics right? Just thought I'd mention that to give you time to change your conclusion.
