Compute

Update 3/30/2010: After hearing word following the launch that NVIDIA has artificially capped the GTX 400 series' double precision (FP64) performance, we asked NVIDIA for confirmation. NVIDIA has confirmed it - the GTX 400 series' FP64 performance is capped at 1/8th (12.5%) of its FP32 performance, as opposed to the 1/2 (50%) of FP32 that the hardware can natively do. This is a market segmentation choice - Tesla, of course, will not be handicapped in this manner. All of our compute benchmarks are FP32 based, so they remain unaffected by this cap.
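
For reference, here is what the cap works out to in raw numbers. This is our own back-of-the-envelope sketch using NVIDIA's published figures for the GTX 480 (480 CUDA cores, 1401MHz shader clock, one FMA per core per clock in FP32); it is illustrative only:

```cuda
// Peak-throughput arithmetic for the GTX 480 (back-of-the-envelope only).
#include <stdio.h>

int main(void)
{
    const double fp32Peak = 480 * 1401e6 * 2;  // cores * clock * 2 FLOPs (FMA)
    printf("FP32 peak:         %6.1f GFLOPS\n", fp32Peak / 1e9);        // ~1345
    printf("FP64 native (1/2): %6.1f GFLOPS\n", fp32Peak / 2.0 / 1e9);  // ~672
    printf("FP64 capped (1/8): %6.1f GFLOPS\n", fp32Peak / 8.0 / 1e9);  // ~168
    return 0;
}
```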

Continuing our look at compute performance, we're moving on to more generalized compute tasks. GPGPU has long been heralded as the next big thing for GPUs, as in the right hands and on the right task they can be much faster than a CPU. Fermi in turn is a serious bet on GPGPU/HPC use of the GPU, with a number of architectural tweaks going into Fermi to get the most out of it as a compute platform. The GTX 480 may be targeted as a gaming product, but it has the capability to be a GPGPU powerhouse when given the right task.

The downside to GPGPU use, however, is that a great deal of GPGPU applications are specialized number-crunching programs for business use. The consumer side of GPGPU continues to be underrepresented, both due to a lack of obvious, high-profile tasks that would be well-suited to GPGPU use, and due to fragmentation in the marketplace caused by competing APIs. OpenCL and DirectCompute will slowly solve the API issue, but there is still the matter of getting consumer-oriented GPGPU applications out in the first place.

With the introduction of OpenCL last year, we were hoping that by the time Fermi launched we would see some suitable consumer applications to help us evaluate the compute capabilities of both AMD's and NVIDIA's cards. That has yet to come to pass, so at this point we're basically left with synthetic benchmarks for doing cross-GPU comparisons. With that in mind we've run a couple of different things, but the results should be taken with a grain of salt, as they don't represent any single truth about compute performance on NVIDIA's or AMD's cards.

Out of our two OpenCL benchmarks, we'll start with an OpenCL implementation of an N-Queens solver from PCChen of Beyond3D. This benchmark uses OpenCL to find the number of solutions for the N-Queens problem on a board of a given size, with the result measured as the time taken in seconds. For this test we use a 17x17 board and measure the time it takes to generate all of the solutions.
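
For a sense of what such a solver is doing under the hood, below is a minimal sketch of the classic bitmask backtracking technique that N-Queens solvers are built on. Note that this is our own CUDA rendition for illustration, not PCChen's OpenCL code, and the host-side step that enumerates the partial boards (placements for the first few rows) is omitted:

```cuda
// Each thread receives a partial board (columns/diagonals already occupied
// by queens the host placed in the first few rows) and counts all of its
// completions with an explicit-stack backtracking search.
__device__ unsigned long long countCompletions(unsigned mask, unsigned cols,
                                               unsigned diagL, unsigned diagR)
{
    unsigned sCols[32], sDiagL[32], sDiagR[32], sAvail[32];  // explicit stack
    int sp = 0;
    unsigned long long count = 0;
    unsigned avail = mask & ~(cols | diagL | diagR);  // free squares, this row

    for (;;) {
        if (avail) {
            unsigned bit = avail & (0u - avail);      // lowest free square
            avail ^= bit;
            if ((cols | bit) == mask) {
                ++count;                              // last row placed: solution
            } else {
                sCols[sp] = cols; sDiagL[sp] = diagL; // save alternatives,
                sDiagR[sp] = diagR; sAvail[sp] = avail; ++sp;  // descend a row
                cols |= bit;
                diagL = (diagL | bit) << 1;
                diagR = (diagR | bit) >> 1;
                avail = mask & ~(cols | diagL | diagR);
            }
        } else if (sp > 0) {                          // dead end: backtrack
            --sp;
            cols = sCols[sp]; diagL = sDiagL[sp];
            diagR = sDiagR[sp]; avail = sAvail[sp];
        } else {
            return count;
        }
    }
}

__global__ void nQueensKernel(int n, unsigned mask, const unsigned *cols,
                              const unsigned *diagL, const unsigned *diagR,
                              unsigned long long *results)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)  // one partial board per thread; host sums the results
        results[i] = countCompletions(mask, cols[i], diagL[i], diagR[i]);
}
```

For the 17x17 board used here, mask would be (1 << 17) - 1. The branch-heavy, irregular nature of this search is exactly the kind of workload where the two vendors' very different shader architectures can produce very different results.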

This benchmark offers a distinct advantage to NVIDIA GPUs, with the GTX cards not only beating their AMD counterparts, but the GTX 285 also beating the Radeon HD 5870. Due to the significant underlying differences between AMD's and NVIDIA's shaders, even with a common API like OpenCL the nature of the algorithm still plays a big part in the performance of the resulting code, so that may be what we're seeing here. In any case, the GTX 480 is the fastest of the GPUs by far, finishing in less than half the time of the GTX 285 and coming in nearly 5 times faster than the Radeon HD 5870.

Our second OpenCL benchmark is a post-processing benchmark from the GPU Caps Viewer utility. Here a torus is drawn using OpenGL, and then an OpenCL kernel is used to apply post-processing to the image. We measure the framerate of this process.
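
Structurally these post-processing passes are simple: one work-item per pixel that reads, transforms, and writes. The sketch below is a hypothetical CUDA equivalent with a placeholder desaturation effect; the actual demo runs an OpenCL kernel against the OpenGL framebuffer via interop, and its effect is more elaborate:

```cuda
// One thread per pixel: read RGBA, apply a filter, write the result.
// The Rec. 601 luma desaturation here is a stand-in for the demo's effect.
__global__ void postProcess(const uchar4 *src, uchar4 *dst, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;              // guard the image edges

    uchar4 p = src[y * w + x];
    unsigned char l = (unsigned char)(0.299f * p.x + 0.587f * p.y + 0.114f * p.z);
    dst[y * w + x] = make_uchar4(l, l, l, p.w);
}
```

With one read and one write per pixel and trivial math in between, a pass like this tends to be bound by memory bandwidth, which is part of why it is reported as a framerate.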

Once again the NVIDIA cards do exceptionally well, with the GTX 480 the clear winner, while even the GTX 285 beats out both Radeon cards. This could once again be down to the nature of the algorithm, or it could be that the GeForce cards really are that much better at OpenCL processing. These results are going to be worth keeping in mind as real OpenCL applications eventually start arriving.

Moving on from cross-GPU benchmarks, we turn our attention to CUDA benchmarks. Better established than OpenCL, CUDA has several real GPGPU applications, the limitation being that we can't bring the Radeons into the fold here. So we can see how much faster the GTX 480 is than the GTX 285, but not how this compares to AMD's cards.

We'll start with Badaboom, Elemental Technologies' GPU-accelerated video encoder for CUDA. Here we encode a 2-minute 1080i clip and measure the framerate of the encoding process.

The performance difference with Badaboom is rather straightforward: we have twice the shaders running at similar clockspeeds, and as a result we get twice the performance. The GTX 480 encodes our test clip in a little over half the time it took the GTX 285.
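
As a sanity check on that "twice the shaders, twice the performance" intuition, here is the naive scaling math, using each card's published shader count and clock and ignoring everything but the shader array (illustrative only):

```cuda
// Naive shader-throughput ratio, GTX 480 vs. GTX 285 (illustrative only).
#include <stdio.h>

int main(void)
{
    const double gtx285 = 240 * 1476e6;  // 240 CUDA cores @ 1476MHz
    const double gtx480 = 480 * 1401e6;  // 480 CUDA cores @ 1401MHz
    printf("Expected scaling: %.2fx\n", gtx480 / gtx285);  // ~1.90x
    return 0;
}
```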

Up next is a special benchmark version of Folding@Home that adds Fermi compatibility. Folding@Home is a Stanford research project that simulates protein folding in order to better understand how misfolded proteins lead to diseases. It has been a poster child of GPGPU use, having been made available on GPUs as early as 2006 as a Close-To-Metal application for AMD's X1K series of GPUs. Here we measure the time it takes to fully process a sample work unit so that we can project how many nodes (units of work) a GPU could complete per day when running Folding@Home.
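
The projection itself is just extrapolation from the timed work unit; a minimal sketch follows (the one-hour WU time is made up for illustration, and it assumes the card folds around the clock on uniformly sized work units):

```cuda
// Projecting Folding@Home throughput from a single timed work unit.
#include <stdio.h>

int main(void)
{
    const double secondsPerWU = 3600.0;            // hypothetical: 1 hour per WU
    const double perDay = 86400.0 / secondsPerWU;  // seconds per day / WU time
    printf("Projected work units per day: %.1f\n", perDay);  // 24.0
    return 0;
}
```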

Folding@Home is the first benchmark we've seen that really showcases the compute potential of Fermi. Unlike everything else, where the GTX 480 runs twice as fast as the GTX 285, the GTX 480 is a few times faster than the GTX 285 when it comes to folding: it would get roughly 3.5x as much work done per day. And while this is admittedly more of a business/science application than a home user application (even if it's home users running it), it gives us a glance at what Fermi is capable of when it comes to compute.

Last, but not least, for our look at compute we have another tech demo from NVIDIA. This one is called Design Garage, and it's a ray tracing tech demo that we first saw at CES. Ray tracing has risen in popularity as of late, thanks in large part to Intel, who has been pushing the concept both as part of their CPU showcases and as part of their Larrabee project.

In turn, Design Garage is a GPU-powered ray tracing demo that uses ray tracing to draw and illuminate a variety of cars. If you've never seen ray tracing before, it looks quite good, but it's also quite resource-intensive. Even with a GTX 480, the high quality rendering mode only manages a couple of frames per second.
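
To see why, consider what even a heavily stripped-down ray tracer has to do per pixel. The hypothetical CUDA sketch below traces just one primary ray per pixel against a single sphere; Design Garage's real renderer layers acceleration structures, reflections, and physically based materials on top of this, multiplying the per-pixel cost many times over:

```cuda
struct Sphere { float cx, cy, cz, r; };

// Ray/sphere test: solve |o + t*d - c|^2 = r^2 for the nearest t > 0.
// Assumes the direction (dx,dy,dz) is normalized; returns -1 on a miss.
__device__ float hitSphere(Sphere s, float ox, float oy, float oz,
                           float dx, float dy, float dz)
{
    float ocx = ox - s.cx, ocy = oy - s.cy, ocz = oz - s.cz;
    float b = ocx * dx + ocy * dy + ocz * dz;
    float c = ocx * ocx + ocy * ocy + ocz * ocz - s.r * s.r;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;
    float t = -b - sqrtf(disc);
    return (t > 0.0f) ? t : -1.0f;
}

// One thread per pixel: build a camera ray, intersect, shade crudely.
// A real ray tracer would now spawn shadow/reflection rays for each hit.
__global__ void primaryRays(uchar4 *fb, int w, int h, Sphere s)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    // Pinhole camera at the origin, looking down -z.
    float dx = (2.0f * x / w - 1.0f) * ((float)w / h);
    float dy = 1.0f - 2.0f * y / h;
    float dz = -1.5f;
    float inv = rsqrtf(dx * dx + dy * dy + dz * dz);
    dx *= inv; dy *= inv; dz *= inv;

    float t = hitSphere(s, 0.0f, 0.0f, 0.0f, dx, dy, dz);
    unsigned char v = (t > 0.0f)
        ? (unsigned char)(255.0f * fminf(1.0f, 1.5f / t))  // nearer is brighter
        : 40;                                              // background grey
    fb[y * w + x] = make_uchar4(v, v, v, 255);
}
```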

On a competitive note, it's interesting to see NVIDIA go after ray tracing, since that has been Intel's thing. Certainly they don't want to let Intel run around unchecked in case ray tracing and Larrabee do take off, but at the same time it's rasterization, not ray tracing, that is Intel's weak spot. At this point in time it wouldn't necessarily be a good thing for NVIDIA if ray tracing suddenly took off.

Much like the Folding@Home demo, this is one of the best compute demos for Fermi. Compared to the GTX 285, the GTX 480 is eight times faster at the task. A lot of this comes down to Fermi's redesigned cache, as ray tracing has a high rate of cache hits, which helps to avoid hitting up the GPU's main memory any more than necessary. Programs that benefit from Fermi's optimizations to cache, concurrency, and fast task switching apparently stand to gain the most in the move from GT200 to Fermi.

Comments

  • Saiko Kila - Sunday, March 28, 2010 - link

    These MSRPs are not entirely, I mean historically, correct... The first MSRP (list price) for the HD 5850 was $259, and that was the price you had to pay when buying on sites like newegg (there were some rebates, and some differences depending on manufacturer, but you still had to have a very potent hunting sense to get a card from any manufacturer; I got lucky twice). Shortly after launch (about one month later, in October) the MSRP (set by AMD) hiked to $279, and problems with supply not only continued but even worsened. Now, since November 2009, it's $299. The HD 5870 followed a generally similar path, though the HD 5850 hiked more, which is no wonder. Note that this is for the reference design only; some manufacturers had higher MSRPs - after all, AMD and nvidia sell only chips, not gaming cards.

    If you believe anandtech, here you've got a link from the day the cards were announced:
    http://www.anandtech.com/video/showdoc.aspx?i=3643

    The whole pricing thing with the HD 5xxx series is quite unusual (though not unexpected), since normally you'd anticipate the street price being quite a bit lower than MSRP, and then dropping even further, and you would be right. I remember buying an EVGA GTX 260 just after its launch, and the price was a good $20 lower than the suggested price. That's why we need more competition, and for now the outlook isn't very bright, with nvidia not quite delivering...


    And these European prices - most if not all European countries have a heavy tax (VAT); this tax is always included and you have to pay it, and there are other taxes too. In the US the sales tax is not included in the street price, and usually you can evade it after all (harder for Californians). Europeans usually get higher prices. Comparing US prices is thereby better, particularly in US dollars (most electronics deliveries in Europe are priced in dollars). So prices in the rest of the world were also boosted, even in Europe, despite the weak dollar and other factors :)

    One note - HD5xxx cards are really very big, and most of them have a very unfriendly location for the power sockets, so you'd expect to pay more for a proper, huge case. Also note that if you have a 600 W PSU or so, you'd be smarter to keep it and not upgrade unless REALLY necessary. A lower load means lower efficiency, especially when plugged into a 115V/60Hz grid. So if you have a bigger PSU, you pay more for electricity. And it seems that more gamers are concerned with that bill than at any time before... You couldn't blame them for that, and it's sad in its own way.
  • LuxZg - Tuesday, March 30, 2010 - link

    Well, the current MSRP is as I wrote it above. If there is no competition and/or demand is very high, prices always tend to go up. We're just lucky it doesn't happen often, because in IT competition is usually very good.

    As for European prices, what do taxes have to do with it? We've got 23% taxes here, but they're included in all prices, so if nVidia goes up 23%, so do AMD cards as well. If I'm looking at prices in the same country (and city, and sometimes the same store as well), and if nVidia is 300$ and ATI is 100$ and 500$, then I just can't compare them and say "hey, nVidia is faster than this 100$ ATI card, I'll buy that"... no, you can't compare like that. The only thing you can do in that case is say something like "OK, so I have 300$ and the fastest I can afford is nVidia"... or "I want the fastest there is, and I don't mind the cost", and then you'll take the HD5970. Or you can't afford any of those. So again, I don't get why the cards in this review are so rigidly compared to one another as if they had the exact same price (or a +/- 10$ difference). And on the one hand they compare a MORE expensive nVidia card to a QUITE CHEAPER AMD card, but won't compare that same nVidia card to a more expensive AMD card.. WHY?

    And AMD cards are no bigger than nVidia ones, and last time I checked, a bigger case is way, way cheaper than a new PSU. And I'm running my computer on, get this, a 450W PSU, so I'm not wasting any excessive power on inefficiencies at low loads ;) And since this PSU handles an overclocked HD4890, it should work just fine with a non-overclocked HD5870. While I'm pretty sure that a GTX470 would already mean a new PSU, a new PSU that costs ~100$/80€.. So I'd pay more $ in total, and get a slower card.

    Again, I'm not getting why there's such a rigid idea of GTX470=HD5850 & GTX480=HD5870 ..
  • LuxZg - Saturday, March 27, 2010 - link

    Just re-read the conclusion.. something is lacking in this sentence:
    "If you need the fastest thing you can get then the choice is clear, .."
    Shouldn't it finish with "... choice is clear, HD5970..."? That's what I'm saying: the HD5970 wasn't mentioned in the entire conclusion. Gone are the days of the "single-GPU crown".. That's just for nVidia to feel better. ATI doesn't want the "single GPU crown", they want the fastest graphics CARD. And they have it.. A serious omission in this article, serious.. And again, there is the exact same amount of money dividing the GTX480 and HD5870 as there is between the GTX480 and HD5970..
  • blindbox - Saturday, March 27, 2010 - link

    I know this is going to take quite a bit of work, but can't you colour the main cards and their competition in this review? By main cards, I mean the GTX 470 and 480 and the 5850 and 5870. It's giving me a hard time making comparisons. I'm sure you guys did this before.. I think.

    It's funny how you guys only coloured the 480.

    PS: I'm sorry for the spam, my comments are not appearing, and I'm sorry for replying to this guy when it is completely off topic, lol.
  • JarredWalton - Saturday, March 27, 2010 - link

    Yes, it did take a bit of work, but I did it for Ryan. The HD 5870/5970 results are in orange and the 5850 is in red. It makes more of a difference on crowded graphs, but it should help pick out the new parts and their competition. I'm guessing Ryan did it to save time, because frankly the graphing engine is a pain in the butt. Thankfully, the new engine should be up and running in the near future. :-)
  • Finally - Saturday, March 27, 2010 - link

    Further improvement idea:
    Give the dual-chip/SLI cards another colour tone as well.
  • lemonadesoda - Sunday, March 28, 2010 - link

    No. Keep colouring simple. Just 3 or 4 colours max. More creates noise. If you need to highlight other results, colour the label, circle it, add a drop shadow, or put a red * at the end.

    Just NO rainbow charts!
  • IceDread - Tuesday, March 30, 2010 - link

    The article does not contain the HD 5970 in CF. The article does not mention the HD 5970 at all in the conclusion. This is really weird. It is my belief that anandtech has become pro-nvidia and is no longer an objective site. Objectivity is looking at performance + functionality / price. The HD 5970 is a clear winner here. After all, who cares if a card has 1, 2 or 20 GPUs? It's the performance / price that matters.
  • Kegetys - Tuesday, March 30, 2010 - link

    According to a test at legitreviews.com, having two monitors attached to the card causes the idle power use to rise quite a bit. I guess the anand test was done with just one monitor attached? It would be nice to see power consumption numbers for dual monitor use as well; I don't mind high power use during load, but if the card does not idle properly (with two monitors) then that is quite a showstopper.
  • Ryan Smith - Wednesday, March 31, 2010 - link

    I have a second monitor (albeit 1680), however I don't use it for anything except 3D Vision reviews. But if dual monitor power usage is going to become an issue, it may be prudent to start including that.
