I'm not really sure why we have NDAs on these products anymore. Before we even got our Radeon HD 4890, before we were even briefed on it, NVIDIA contacted us and told us that if we were working on a review, we should wait. NVIDIA wanted to send us something special.

Then, in the middle of our Radeon HD 4890 briefing, what do we see but a reference to a GeForce GTX 275 in the slides. We hadn't even laid hands on the GTX 275, but AMD knew what it was and where it was going to be priced.

If you asked NVIDIA what the Radeon HD 4890 was, you'd probably hear something like "an overclocked 4870". If you asked AMD what the GeForce GTX 275 was, you'd probably get "half of a GTX 295".

The truth of the matter is that neither of these cards is particularly new; both are a rebalancing of processors, memory, and clock speeds at a new price point.

As prices fell on the cards that already offered very good value, higher-end and dual-GPU cards remained priced significantly higher, creating a gap between about $190 and $300. AMD and NVIDIA both saw this as an opportunity to release cards into that gap, and they are battling intensely over price. Both companies withheld final pricing information until the very last minute; in fact, when I started writing this intro (Wednesday morning), I still had no idea what the prices for these parts would actually be.

Now we know that both the Radeon HD 4890 and the GeForce GTX 275 will be priced at $250. This has historically been a pricing sweet spot, offering a very good balance of performance and cost before returns on our investment start to diminish sharply. What we hope for here is a significant performance bump over the GTX 260 Core 216 and Radeon HD 4870 1GB class of cards. We'll wait until we get to the benchmarks to reveal whether that's what we actually get, or whether we should just stick with what's already good enough.

At a high level, here's what we're looking at:

                             GTX 285     GTX 275     GTX 260 Core 216   GTS 250 / 9800 GTX+
Stream Processors            240         240         216                128
Texture Address / Filtering  80 / 80     80 / 80     72 / 72            64 / 64
ROPs                         32          28          28                 16
Core Clock                   648MHz      633MHz      576MHz             738MHz
Shader Clock                 1476MHz     1404MHz     1242MHz            1836MHz
Memory Clock                 1242MHz     1134MHz     999MHz             1100MHz
Memory Bus Width             512-bit     448-bit     448-bit            256-bit
Frame Buffer                 1GB         896MB       896MB              512MB
Transistor Count             1.4B        1.4B        1.4B               754M
Manufacturing Process        TSMC 55nm   TSMC 55nm   TSMC 65nm          TSMC 55nm
Price Point                  $360        ~$250       $205               $140

                       ATI Radeon HD 4890                 ATI Radeon HD 4870                 ATI Radeon HD 4850
Stream Processors      800                                800                                800
Texture Units          40                                 40                                 40
ROPs                   16                                 16                                 16
Core Clock             850MHz                             750MHz                             625MHz
Memory Clock           975MHz (3900MHz data rate) GDDR5   900MHz (3600MHz data rate) GDDR5   993MHz (1986MHz data rate) GDDR3
Memory Bus Width       256-bit                            256-bit                            256-bit
Frame Buffer           1GB                                1GB                                512MB
Transistor Count       959M                               956M                               956M
Manufacturing Process  TSMC 55nm                          TSMC 55nm                          TSMC 55nm
Price Point            ~$250                              ~$200                              $150
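
One sanity check worth running on the tables above: theoretical peak memory bandwidth falls straight out of the listed numbers, as memory clock times the data-rate multiplier (2x for GDDR3, 4x for GDDR5) times the bus width in bytes. Here's a small C sketch of that arithmetic; this is strictly our own illustration, not anything from AMD or NVIDIA:

```c
/* peak_bw.c - derive peak memory bandwidth from the spec tables above.
 * GB/s = memory clock (MHz) * data-rate multiplier * bus width (bytes) / 1000
 */
#include <stdio.h>

static double peak_bw_gbs(double mem_clock_mhz, int multiplier, int bus_bits)
{
    return mem_clock_mhz * multiplier * (bus_bits / 8) / 1000.0;
}

int main(void)
{
    /* GDDR3 transfers data twice per clock; GDDR5 transfers four times. */
    printf("GTX 285: %6.1f GB/s\n", peak_bw_gbs(1242.0, 2, 512)); /* 159.0 */
    printf("GTX 275: %6.1f GB/s\n", peak_bw_gbs(1134.0, 2, 448)); /* 127.0 */
    printf("HD 4890: %6.1f GB/s\n", peak_bw_gbs( 975.0, 4, 256)); /* 124.8 */
    printf("HD 4870: %6.1f GB/s\n", peak_bw_gbs( 900.0, 4, 256)); /* 115.2 */
    return 0;
}
```

Those results match the official figures, and they show why the 4890's 256-bit bus isn't the handicap it looks like next to the GTX 275's 448-bit bus: GDDR5's quad data rate nearly makes up for the missing width.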

We suspect this will be quite an interesting battle, and we might have some surprises on our hands. NVIDIA has been talking about its new drivers, which will be released to the public early Thursday morning; they offer performance improvements across the board as well as some cool new features. And because it's been a while since we covered them, we will also explore PhysX and CUDA in a bit more depth than we usually do in GPU reviews.

We do want to bring up availability. This will be a hard launch for AMD but not for NVIDIA (though some European retailers should have the GTX 275 on sale this week). On the AMD side, we've seen plenty of retail samples from partners, and we expect good availability starting today; if that turns out not to be the case, we will certainly update this article to reflect it. NVIDIA won't have availability until the middle of the month (we are hearing April 14th).

NVIDIA hasn't been delivering hard launches lately, and we've taken the company to task for it in past reviews. This time we're going to be easier on them. The fact of the matter is that they have a competitive part arriving very near the launch of an AMD part at the same price point. We have no desire to return to the "old days" of paper-launched parts that were only ever seen in the pages of hardware review sites, but we understand the need for companies to get their side of the story out when launches fall this close together, and we won't fault anyone for that. Not being available for purchase is its own problem.

From the summer of 2008 to today we've seen one of the most heated and exciting battles in the history of the GPU. NVIDIA and AMD have pushed back and forth with differing features, strong baseline performance with strengths in different areas, and incredible pricing battles in the most popular market segments. While AMD and NVIDIA fight with all their strength to win customers, the real beneficiary has consistently been the end user, and this launch is no exception. If you've got $250 to spend on graphics and have been wondering whether to save up for a GTX 285 or save money and grab a sub-$200 part, your worries are over. There is now a card for you. And it is good.

294 Comments

  • SiliconDoc - Monday, April 6, 2009 - link

    Well thanks for stomping the red rooster into the ground, definitively, after proving, once again, that what an idiot blabbering pussbag red spews about without a clue should not be swallowed with lust like a loose girl.
    I mean it's about time the reds just shut their stupid traps - 6 months of bs and lies will piss any decent human being off. Heck, it pissed off NVidia, and they're paid to not get angry. lol
  • tamalero - Sunday, April 5, 2009 - link

    arggh, lots of typoos.

    "Mirrors Edge's PhysX in other hand does show indeed add a lot of graphical feel. " should have been : Mirrors Edge's physx in other hand, does indeed show a lot of new details.
  • lk7600 - Friday, April 3, 2009 - link


    Can you please die? Preferably by getting crushed to death, or by getting your face cut to shreds with a
    pocketknife.

    I hope that you get curb-stomped, f ucking retard

    Shut the *beep* up f aggot, before you get your face bashed in and cut
    to ribbons, and your throat slit.
  • papapapapapapapababy - Saturday, April 4, 2009 - link

    Yes, i love you too, silly girl.
  • lk7600 - Friday, April 3, 2009 - link


    Can you please remove yourself from the gene pool? Preferably in the most painful and agonizing way possible? Retard

  • magnetar68 - Thursday, April 2, 2009 - link

    Firstly, I agree with the article's basic premise that the lack of convincing titles for PhysX/CUDA means this is not a weighted factor for most people.

    I am not most people, however, and I enjoy running NVIDIA's PhysX and CUDA SDK samples and learning how they work, so I would sacrifice some performance/quality to have access to these features (even spend a little more for them).

    The main point I would like to make, however, is that I like the fact that NVIDIA is out there pushing these capabilities. Yes, until we have cross-platform OpenCL, physics and GPGPU apps will not be ubiquitous; but NVIDIA is working with developers to push these capabilities (and 3D Stereo with 3D VISION) and this is pulling the broader market to head in this direction. I think that vision/leadership is a great thing and therefore I buy NVIDIA GPUs.

    I realize that ATI was pushing physics with Havok and GPGPU programming early (I think before NVIDIA), but NVIDIA has done a better job of executing on these technologies (you don't get credit for thinking about it, you get credit for doing it).

    The reality is that games will be way cooler when you extrapolate from Mirror's Edge to what will be around down the road. Without companies like NVIDIA out there making solid progress on delivering these capabilities, we will never get there. That has value to me that I am willing to pay a little for. Having said that, performance has to be reasonably close for this to be true.
  • JarredWalton - Thursday, April 2, 2009 - link

    Games will be better when we get better effects, and PhysX has some potential to do that. However, the past is a clear indication that developers aren't going to fully support PhysX until it works on every mainstream card out there. In practice that means NVIDIA pays people to add PhysX support (either in hardware or other ways), and OpenCL is what will really create an installed user base for that sort of functionality.

    If you're a dev, what would you rather do: work on separate code paths for CUDA and PhysX and forget about all other GPUs, or wait for OpenCL and support all GPUs with one code path? Look at the number of "DX10.1" titles for a good indication. (For a concrete look at what those two code paths involve, see the sketch at the end of the comments.)
  • josh6079 - Thursday, April 2, 2009 - link

    Agreed.

    NVIDIA has certainly received credit for getting accelerated physics moving, but that momentum stops where PhysX stays coupled to CUDA instead of being offered on discrete graphics cards outside of the GeForce family.
  • Hrel - Thursday, April 2, 2009 - link

    Still no 3D Mark scores, STILL no low-med resolutions.

    Thanks for including the HD4850, where's the GTS250??? Or do you guys still not have one? Well, you could always use a 9800GTX+ instead, and actually label it correctly this time. Anyway, thanks for the review and all the info on CUDA and PhysX; pretty much just confirmed what I already knew; none of it matters until it's cross-platform.
  • 7Enigma - Friday, April 3, 2009 - link

    3DMark can be found in just about every other review. I personally don't care, but I realize people compete on the ORB, and since it's a simple benchmark to run it could probably be included without much work. The only problem I see (and agree with) is how heavily both NVIDIA and ATI optimize for the 3DMark Vantage/3DMark benchmarks. They don't really tell you much about anything, IMO. I'd argue they neither tell you about future games (since to my knowledge no games, or maybe one, have ever used an engine from the benchmarks) nor tell you much about the differences between cards from different brands, since both companies look for every opportunity to tweak for the highest score, regardless of whether it has any effect on real-world performance.

    What low-to-mid resolution are you asking for? 1280x1024 is the only one I'd like to see (as that's what I, and probably 25-50% of all gamers, are still using), but I can see why in most cases they don't test it: you have to go to low-end cards to have an issue with playable framerates at that resolution on anything 4850 and above. Xbitlabs' review did include 1280x1024, and as you'll see, unless you are playing Crysis: Warhead, or to a lesser extent Far Cry 2 with maxed graphics settings and high levels of AA, you are normally in the high double to triple digits in terms of framerate. At any resolution lower than that, you'd have to be on integrated video to care!
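
Returning to JarredWalton's point about duplicate code paths: here is a minimal vector-add sketch of our own (not anything from NVIDIA's SDK). The CUDA half is a complete program, while the equivalent OpenCL kernel, shown in the trailing comment, is nearly identical as source but rides on an entirely different host API, which is exactly the duplication a developer would have to maintain.

```c
/* vecadd.cu - minimal CUDA code path; build with: nvcc vecadd.cu -o vecadd */
#include <stdio.h>
#include <stdlib.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    /* Host buffers. */
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    /* Device buffers and host-to-device copies. */
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    /* One thread per element, 256 threads per block. */
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("hc[0] = %f\n", hc[0]); /* expect 3.000000 */

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

/* The OpenCL kernel for the same operation is nearly identical as source:
 *
 *   __kernel void vecAdd(__global const float *a, __global const float *b,
 *                        __global float *c, int n)
 *   {
 *       int i = get_global_id(0);
 *       if (i < n) c[i] = a[i] + b[i];
 *   }
 *
 * but its host side (platform and device discovery, contexts, command
 * queues, runtime program compilation) shares nothing with the CUDA
 * runtime calls above, so supporting both really does mean two paths.
 */
```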
