The Cards and The Test

In the AMD department, we received two cards. One was an overclocked part from HIS and the other was a stock clocked part from ASUS. Guess which one AMD sent us for the review. No, it's no problem, we're used to it. This is what happens when we get cards from NVIDIA all the time: they argue and argue for the inclusion of overclocked numbers in GPU reviews when it's their GPU we're looking at. Of course, when the tables are turned, so are the opinions. We sincerely appreciate ASUS sending us this card, and we used it for our tests in this article. The original intent in trying to get hold of two cards was to run CrossFire numbers, but we only have one GTX 275, and we would prefer to wait until we can compare the two before getting into that angle.

The ASUS card also includes a utility called Voltage Tweaker that allows gamers to increase some voltages on their hardware to help improve overclocking. We didn't have the chance to play with the feature ourselves, but more control is always a nice feature to have.

For the Radeon HD 4890, the hardware specs are pretty simple: take a 4870 1GB and overclock it. Crank the core up 100 MHz to 850 MHz and the memory clock up 75 MHz to 975 MHz. That's the Radeon HD 4890 in a nutshell. However, to reach these clock levels, AMD revised the core, adding decoupling capacitors and new timing algorithms and altering the ASIC power distribution for more robust operation. These slight changes increased the transistor count from 956M to 959M. Otherwise, the core features/specifications (texture units, ROPs, z/stencil) remain the same as the HD 4850/HD 4870 series.
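As a rough sketch of what those clock bumps buy, here's a back-of-the-envelope calculation. It assumes the 4870's configuration carries over unchanged (800 stream processors at 2 flops each per clock, 256-bit GDDR5 bus at 4 transfers per clock), which matches the "core features remain the same" point above:

```python
# Theoretical throughput deltas from the HD 4890's overclock.
# Assumes the HD 4870 configuration is unchanged: 800 SPs (MAD = 2 flops
# per clock each) and a 256-bit GDDR5 bus (4 transfers per pin per clock).

def gflops(core_mhz, sps=800, flops_per_sp=2):
    """Peak shader throughput in GFLOPS."""
    return core_mhz * sps * flops_per_sp / 1000

def bandwidth_gbps(mem_mhz, bus_bits=256, transfers_per_clock=4):
    """Peak memory bandwidth in GB/s."""
    return mem_mhz * 1e6 * transfers_per_clock * bus_bits / 8 / 1e9

for name, core, mem in [("HD 4870", 750, 900), ("HD 4890", 850, 975)]:
    print(f"{name}: {gflops(core):.0f} GFLOPS, {bandwidth_gbps(mem):.1f} GB/s")
```

The 100 MHz core bump works out to roughly a 13% shader throughput increase, while the memory bump is about 8%, so shader-bound titles should see the bigger gain.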

Most vendors will also be selling overclocked variants that run the core at 900 MHz. AMD would like to treat these overclocked parts like they are a separate entity altogether. But we will continue to treat these parts as enhancements of the stock version whether they come from NVIDIA or AMD. In our eyes, the difference between, say, an XFX GTX 275 and an XFX GTX 275 XXX is XFX's call; the latter is their part enhancing the stock version. We aren't going to look at the XFX 4890 and the XFX 4890 XXX any differently. In doing reviews of vendor's cards, we'll consider overclocked performance closely, but for a GPU launch, we will be focusing on the baseline version of the card.

On the NVIDIA side, we received a reference version of the GTX 275. It looks similar to the design of the other GT200 based hardware.

Under the hood is the same setup as half of a GTX 295, but with higher clock speeds. That means the GTX 275 has the memory amount and bandwidth of the GTX 260 (448-bit wide bus) but the shader count of the GTX 280 (240 SPs). On top of that, the GTX 275 posts clock speeds closer to the GTX 285 than the GTX 280: core clock is up 31 MHz from a GTX 280 to 633 MHz, shader clock is up 108 MHz to 1404 MHz, and effective memory clock is also up 108 MHz to 2322 MHz. This means that in shader limited cases we should see performance closer to the GTX 285, and in bandwidth limited cases we'll still be faster than the GTX 260 core 216 because of the clock speed boost across the board.
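To see where that lands the card, here's a rough peak-throughput comparison using the clocks quoted above. The GTX 260 core 216, GTX 280, and GTX 285 figures are NVIDIA's published reference specs, assumed here for comparison, with each SP counted at 3 flops per clock (dual-issue MAD + MUL):

```python
# Peak shader throughput and memory bandwidth across the GT200 lineup.
# GTX 275 numbers come from the clocks quoted above; the other cards use
# NVIDIA's published reference specs (an assumption for this comparison).

def gflops(sps, shader_mhz, flops_per_sp=3):
    """Peak shader throughput in GFLOPS (MAD + MUL per SP per clock)."""
    return sps * shader_mhz * flops_per_sp / 1000

def bandwidth_gbps(effective_mhz, bus_bits):
    """Peak memory bandwidth in GB/s from effective data rate and bus width."""
    return effective_mhz * 1e6 * bus_bits / 8 / 1e9

cards = [
    # name, SPs, shader clock (MHz), effective memory clock (MHz), bus (bits)
    ("GTX 260 core 216", 216, 1242, 1998, 448),
    ("GTX 275",          240, 1404, 2322, 448),
    ("GTX 280",          240, 1296, 2214, 512),
    ("GTX 285",          240, 1476, 2484, 512),
]
for name, sps, shader, mem, bus in cards:
    print(f"{name}: {gflops(sps, shader):.0f} GFLOPS, "
          f"{bandwidth_gbps(mem, bus):.1f} GB/s")
```

On paper the GTX 275 lands within about 5% of the GTX 285 in shader throughput but well behind it in bandwidth, which is exactly the shader-limited vs. bandwidth-limited split described above.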

Rather than just an overclock of a pre-existing card, this is a blend of the two configurations it was born from, with a clock bump on top. And sure, it's also half a GTX 295, which is convenient for NVIDIA. The point isn't just that it's different: this setup should have a lot to offer, especially in games that aren't bandwidth limited.

That wraps it up for the cards we're focusing on today. Here's our test system, which is the same as for our GTS 250 article except for the addition of a couple of new drivers.

The Test

Test Setup
CPU: Intel Core i7-965 3.2GHz
Motherboard: ASUS Rampage II Extreme X58
Video Cards: ATI Radeon HD 4890
             ATI Radeon HD 4870 1GB
             ATI Radeon HD 4870 512MB
             ATI Radeon HD 4850
             NVIDIA GeForce GTX 285
             NVIDIA GeForce GTX 280
             NVIDIA GeForce GTX 275
             NVIDIA GeForce GTX 260 core 216
Video Drivers: Catalyst 8.12 hotfix, 9.4 Beta for HD 4890
               ForceWare 185.65
Hard Drive: Intel X25-M 80GB SSD
RAM: 6 x 1GB DDR3-1066 7-7-7-20
Operating System: Windows Vista Ultimate 64-bit SP1
PSU: PC Power & Cooling Turbo Cool 1200W

294 Comments


  • tamalero - Sunday, April 5, 2009 - link

    arggh, lots of typoos.

    "Mirrors Edge's PhysX in other hand does show indeed add a lot of graphical feel. " should have been : Mirrors Edge's physx in other hand, does indeed show a lot of new details.
  • magnetar68 - Thursday, April 2, 2009 - link

    Firstly, I agree with the article's basic premise that the lack of convincing titles for PhysX/CUDA means this is not a weighted factor for most people.

    I am not most people, however, and I enjoy running NVIDIA's PhysX and CUDA SDK samples and learning how they work, so I would sacrifice some performance/quality to have access to these features (even spend a little more for them).

    The main point I would like to make, however, is that I like the fact that NVIDIA is out there pushing these capabilities. Yes, until we have cross-platform OpenCL, physics and GPGPU apps will not be ubiquitous; but NVIDIA is working with developers to push these capabilities (and 3D Stereo with 3D VISION) and this is pulling the broader market to head in this direction. I think that vision/leadership is a great thing and therefore I buy NVIDIA GPUs.

    I realize that ATI was pushing physics with Havok and GPGPU programming early (I think before NVIDIA), but NVIDIA has done a better job of executing on these technologies (you don't get credit for thinking about it, you get credit for doing it).

    The reality is that games will be way cooler when you extrapolate from Mirror's Edge to what will be around down the road. Without companies like NVIDIA out there making solid progress on delivering these capabilities, we will never get there. That has value to me that I'm willing to pay a little for. Having said that, performance has to be reasonably close for this to be true.
  • JarredWalton - Thursday, April 2, 2009 - link

    Games will be better when we get better effects, and PhysX has some potential to do that. However, the past is a clear indication that developers aren't going to fully support PhysX until it works on every mainstream card out there. Pretty much it means NVIDIA pays people to add PhysX support (either in hardware or other ways), and OpenCL is what will really create an installed user base for that sort of functionality.

    If you're a dev, what would you rather do: work on separate code paths for CUDA and PhysX and forget about all other GPUs, or wait for OpenCL and support all GPUs with one code path? Look at the number of "DX10.1" titles for a good indication.
  • josh6079 - Thursday, April 2, 2009 - link

    Agreed.

    NVIDIA has certainly received credit for getting accelerated physics moving, but that momentum stops when they couple it to CUDA and withhold it from discrete graphics cards outside of the GeForce family.
  • Hrel - Thursday, April 2, 2009 - link

    Still no 3D Mark scores, STILL no low-med resolutions.

    Thanks for including the HD4850, where's the GTS250??? Or do you guys still not have one? Well, you could always use a 9800GTX+ instead, and actually label it correctly this time. Anyway, thanks for the review and all the info on CUDA and PhysX; pretty much just confirmed what I already knew; none of it matters until it's cross-platform.
  • 7Enigma - Friday, April 3, 2009 - link

    3DMark can be found in just about every other review. I personally don't care, but I realize people compete on the Orb, and since it's just a simple benchmark to run it probably could be included without much work. The only problem I see (and agree with) is the highly optimized nature of both NVIDIA's and ATI's treatment of the Vantage/3DMark benchmarks. They don't really tell you much about anything IMO. I'd argue they don't tell you about future games (since, to my knowledge, no game has ever used an engine from the benchmarks), and they don't tell you much between cards from different brands either, since both vendors look for every opportunity to tweak for the highest score, regardless of whether it has any effect on real-world performance.

    What low-med resolution are you asking for? 1280x1024 is the only one I'd like to see (as that's what I and probably 25-50% of all gamers are still using), but I can see why in most cases they don't test it (you have to go to low end cards to have an issue with playable framerates on anything 4850 and above at that resolution). Xbitlabs' review did include 1280x1024, but as you'll see, unless you are playing Crysis: Warhead, and to a lesser extent Far Cry 2 with max graphics settings and high levels of AA, you are normally in the high double to triple digits in terms of framerate. At any resolution lower than that, you've got to be on integrated video to care!
