I'm not really sure why we have NDAs on these products anymore. Before we even got our Radeon HD 4890, before we were even briefed on it, NVIDIA contacted us and told us that if we were working on a review to wait. NVIDIA wanted to send us something special.

Then in the middle of our Radeon HD 4890 briefing what do we see but a reference to a GeForce GTX 275 in the slides. We hadn't even laid hands on the 275, but AMD knew what it was and where it was going to be priced.

If you asked NVIDIA what the Radeon HD 4890 was, you'd probably hear something like "an overclocked 4870". If you asked AMD what the GeForce GTX 275 was, you'd probably get "half of a GTX 295".

The truth of the matter is that neither of these cards is particularly new; each is simply a new balance of processors, memory, and clock speeds at a new price point.

As prices fell on the cards that already offered very good value, higher-end and dual-GPU cards remained significantly more expensive, creating a pricing gap between about $190 and $300. AMD and NVIDIA both saw this as an opportunity to release cards within that gap, and they are battling intensely over price. Both companies withheld final pricing information until the very last minute; in fact, when I started writing this intro (Wednesday morning) I still had no idea what these parts would actually cost.

Now we know that both the Radeon HD 4890 and the GeForce GTX 275 will be priced at $250. This has historically been a pricing sweet spot, offering a very good balance of performance and cost before hugely diminishing returns set in. What we hope for here is a significant performance bump over the GTX 260 Core 216 and Radeon HD 4870 1GB class of cards. We'll wait until the benchmarks to see whether that's what we actually get, or whether we should just stick with what's already good enough.

At a high level, here's what we're looking at:

| | GTX 285 | GTX 275 | GTX 260 Core 216 | GTS 250 / 9800 GTX+ |
|---|---|---|---|---|
| Stream Processors | 240 | 240 | 216 | 128 |
| Texture Address / Filtering | 80 / 80 | 80 / 80 | 72 / 72 | 64 / 64 |
| ROPs | 32 | 28 | 28 | 16 |
| Core Clock | 648MHz | 633MHz | 576MHz | 738MHz |
| Shader Clock | 1476MHz | 1404MHz | 1242MHz | 1836MHz |
| Memory Clock | 1242MHz | 1134MHz | 999MHz | 1100MHz |
| Memory Bus Width | 512-bit | 448-bit | 448-bit | 256-bit |
| Frame Buffer | 1GB | 896MB | 896MB | 512MB |
| Transistor Count | 1.4B | 1.4B | 1.4B | 754M |
| Manufacturing Process | TSMC 55nm | TSMC 55nm | TSMC 65nm | TSMC 55nm |
| Price Point | $360 | ~$250 | $205 | $140 |

| | ATI Radeon HD 4890 | ATI Radeon HD 4870 | ATI Radeon HD 4850 |
|---|---|---|---|
| Stream Processors | 800 | 800 | 800 |
| Texture Units | 40 | 40 | 40 |
| ROPs | 16 | 16 | 16 |
| Core Clock | 850MHz | 750MHz | 625MHz |
| Memory Clock | 975MHz (3900MHz data rate) GDDR5 | 900MHz (3600MHz data rate) GDDR5 | 993MHz (1986MHz data rate) GDDR3 |
| Memory Bus Width | 256-bit | 256-bit | 256-bit |
| Frame Buffer | 1GB | 1GB | 512MB |
| Transistor Count | 959M | 956M | 956M |
| Manufacturing Process | TSMC 55nm | TSMC 55nm | TSMC 55nm |
| Price Point | ~$250 | ~$200 | $150 |

We suspect this will be quite an interesting battle, and we might have some surprises on our hands. NVIDIA has been talking up its new drivers, which will be released to the public early Thursday morning. These drivers offer performance improvements across the board as well as some cool new features. And because it's been a while since we covered them, we will also explore PhysX and CUDA in a bit more depth than we usually do in GPU reviews.

We do want to bring up availability. This will be a hard launch for AMD but not for NVIDIA (though some European retailers should have the GTX 275 on sale this week). We've seen plenty of retail samples from AMD partners and expect good availability starting today; if that ends up not being the case, we will update the article to reflect it. NVIDIA won't have availability until the middle of the month (we are hearing April 14th).

NVIDIA hasn't been hitting its launches as hard lately, and we've called the company out for that in past reviews. This time we're not going to be as hard on them: they have a competitive part coming out very near the launch of an AMD part at the same price point. We have no interest in returning to the "old days" of paper launches, when parts were only ever seen in the pages of hardware review sites, but we certainly understand the need for companies to get their side of the story out when launches fall this close together, and we won't fault anyone for that. Not being available for purchase is its own problem.

From the summer of 2008 to today we've seen one of the most heated and exciting battles in the history of the GPU. NVIDIA and AMD have been pushing back and forth with differing features, good baseline performance with strengths in different areas, and incredible pricing battles in the most popular market segments. While AMD and NVIDIA fight with all their strength to win customers, the real beneficiary has consistently been the end user. And we certainly feel this launch is no exception. If you've got $250 to spend on graphics and were wondering whether you should save up for the GTX 285 or save money and grab a sub-$200 part, your worries are over. There is now a card for you. And it is good.


  • sbuckler - Thursday, April 2, 2009 - link

    Big difference between Havok Physics and HavokFX physics. With physx you can just turn on hardware acceleration and it works, with havok this is not possible - unlike physx it was never developed to be run on the gpu. Hence havok have had to develop a new physics engine to do that.

    No game uses the HavokFX engine - it's not even available to developers yet let alone in shipped games. The ati demo was all we have seen of it for several years. It's not even clear HavokFX is even a fully accelerated hardware physics engine - i.e. the version showed in the past (before intel took over havok) was basically the havok engine with some hw acceleration for effects. i.e. hardware accel could only be used to make it prettier explosions and rippling cloth - it could not be used to do anything game changing.

    Hence havok have a way to go before they can even claim to support what physX already does, let alone shipping it to developers and then seeing them use it in games. Like I said the moment that comes close to happening nvidia will just release an OpenCL version of physX and that will be that.
  • z3R0C00L - Thursday, April 2, 2009 - link

    It's integrated in the same way. Many game developers are already familiar with coding for Havok effects.

    Not to mention that OpenCL has chosen HavokFX (which is simply using either a CPU or a GPU to render physics effects, as seen here: http://www.youtube.com/watch?v=MCaGb40Bz58).

    Again... Physx is dead. OpenCL is HavokFX, it's what the consortium has chosen and it runs on any CPU or GPU including Intel's upcoming Larrabee.

    Like I said before (you seem to not understand logic). Physx is dead.. it's proprietary and not as flexible as Havok. Many studios are also familiar with Havok's tools.

    C'est fini, as they say in French.
  • erple2 - Friday, April 3, 2009 - link

    I think you're mistaken - OpenCL is analogous to CUDA, not to PhysX. HavokFX is analogous to PhysX. OpenCL is the GPGPU compiler that runs on any GPU (and theoretically, it should run on any CPU too, I think). It's what Apple is now trying to push (curious, given that their laptop base is all nVidia now).

    However, if NVidia ports PhysX to OpenCL, that's a win for everyone. Sort of. Except for NVIdia that paid a lot of money for the PhysX IP. I think that the conclusions given are accurate - NVidia is banking on "everyone" (ie Game Developers) coding for PhysX (and by extension, CUDA) rather than HavokFX (and by extension, OpenCL). However, if Developers are smart, they'll go with the actually open format (OpenCL, not CUDA). That means that any physics processing they do will work on ANY GPU, (NVidia and ATI). I personally think that NVidia banked badly this time.

    While I do believe that doing physics calculations on unused GPU cycles is a great thing (and the Razor's Edge demo shows some of the interesting things that can be done), I think that NVidia's pushing of PhysX (and therefore CUDA) is like what 3dfx did with pushing GLide. Everyone supported Direct3D and OpenGL, but only 3dfx supported Glide. While Glide was more efficient (it was catering to a single hardware vendor that made Glide, afterall), the fact that Game Developers could instead program for OpenGL (or Direct3D) and get all 3D accelerators supported meant that the days of Glide were ultimately numbered.

    I wonder if NVidia is trying to pull the industry to adopting its CUDA as a "standard". I think it's ultimately going to fail, however, given that the industry recognizes now that OpenCL is available.

    Is OpenCL as mature as CUDA is? Or are they still kind of finalizing it? Maybe that's the issue - OpenCL isn't complete yet, so NVidia is trying to snatch up support in the Developer community early?
  • sbuckler - Friday, April 3, 2009 - link

    CUDA is in many ways a simplified version of OpenCL - in that CUDA knows what hardware it will run on so has set functions to access it, OpenCL is obviously much more generic as it has to run on any hardware so it's not quite as easy. That part of the reason why CUDA is initially at least more popular then OpenCL - it's easier to work with. That said they are very similar so to port from one to the other won't be hard - hence develop for CUDA now then just port to OpenCL when the market demands it.
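    To make that concrete, here's a rough sketch of what a complete CUDA program looks like (untested, names made up, just a vector add). The kernel body itself would be nearly identical in OpenCL; the difference is that the OpenCL host side also has to enumerate platforms and devices, create a context and command queue, and compile the kernel source at runtime, which is where most of the extra work comes from.

    ```cuda
    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread adds one pair of elements; the OpenCL kernel would be
    // almost line-for-line the same, with get_global_id(0) as the index.
    __global__ void vecAdd(const float *a, const float *b, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1024;
        const size_t bytes = n * sizeof(float);
        float ha[1024], hb[1024], hout[1024];
        for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

        // In CUDA the host setup is just allocate + copy; no platform or
        // context enumeration, no runtime kernel compilation.
        float *da, *db, *dout;
        cudaMalloc(&da, bytes);
        cudaMalloc(&db, bytes);
        cudaMalloc(&dout, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // 256 threads per block, enough blocks to cover all n elements.
        vecAdd<<<(n + 255) / 256, 256>>>(da, db, dout, n);
        cudaMemcpy(hout, dout, bytes, cudaMemcpyDeviceToHost);

        printf("%.1f\n", hout[10]);  // 10 + 20 = 30.0

        cudaFree(da);
        cudaFree(db);
        cudaFree(dout);
        return 0;
    }
    ```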

    All in my opinion Ati want is their hardware to run with whatever physics standard is out there. Right now they are at a growing competitive disadvantage as hardware physics slowly takes off. Hence they demo HavokFX in the hope that either (a) it takes off or (b) nvidia are forced to port PhysX to openCL. I don't think they care which one wins - both products belong to a competitor.

    Nvidia who have put a lot of money into PhysX want to maximise their investment so they will keep PhysX closed as long as possible to get people to buy their cards, but in the end I am sure they are fully aware they will have to open it up to everyone - it's just a matter of when. From our standpoint the sooner the better.
  • erple2 - Friday, April 3, 2009 - link

    Sure, but my point was simply that HavokFX and PhysX are physics API's, whereas OpenCL and CUDA are "general" purpose computing languages designed to run on a GPU.

    Is CUDA easier to work with? I don't really know, as I've never programmed for either. Is OpenGL harder to program for than Glide was? Again, I don't know, I'm not a developer.

    ATI's "CUDA" was "Stream" (I think). I recall ATI abandoning that for (or folding that into) OpenCL. That's a sound strategic decision, I think.

    If PhysX is ported to OpenCL, then that's a major win for ATI, and a lesser one for NVidia - the PhysX SDK is already free for any developer that wants it (support costs money, of course). NVidia's position in that market is that PhysX currently only works on NVidia cards. Once it works elsewhere (via OpenCL or Stream), NVidia loses that "edge". However, that's a good thing...
  • SiliconDoc - Monday, April 6, 2009 - link

    I guess you're forgetting that recently NVidia supported a rogue software coder that was porting PhysX to ATI drivers. Those drivers hit the web and the downloads went wild high - and ATI stepped in and slammed the door shut with a lawsuit and threats.
    Oh well, ATI didn't want you to enjoy PhysX effects. You got screwed, even as NVidia tried to help you.
    So now all you and Derek and anand are left to do is sourpuss and whine PhysX sucks and means nothing.
    Then anand tries Mirror's Edge ( because he HAS TO - cause dingo is gone - unavailable ) and falls in love with PhysX and the game. LOL
    His conclusion ? He's an ati fabbyboi so cannot recommend it.
  • tamalero - Monday, April 20, 2009 - link

    the amount of fud you spit is staggering
  • z3R0C00L - Thursday, April 2, 2009 - link

    On one hand you have OpenCL, Havok, ATi, AMD and Intel on the other you have nVIDIA.

    Seriously.
  • z3R0C00L - Thursday, April 2, 2009 - link

    I'm an nVIDIA fan.. I'll admit. I like that you added CUDA and Physx.. but are we reading the same results?

    The Radeon HD 4890 is the clear winner here. I don't understand how it could be any different.

  • CrystalBay - Thursday, April 2, 2009 - link

    I agree if nV wants to sell more cards they need to include the video software at no charge...
