Odds & Ends: ECC & NVIDIA Surround Missing

One of the things we have been discussing with NVIDIA for this launch is ECC. As we just went over in our GF100 Recap, Fermi offers ECC support for its register file, L1 cache, L2 cache, and RAM. The latter is the most interesting case, as under normal circumstances implementing ECC requires a wider bus and additional memory chips. The GTX 400 series will not be using ECC, but we asked NVIDIA how ECC will work on Fermi products anyway.

To put things in perspective, an ECC PC DIMM uses 9 chips per channel (9 bits per byte) hooked up to a 72-bit bus instead of 8 chips on a 64-bit bus. However, NVIDIA has neither the ability nor the desire to add even more RAM channels to their products, not to mention that 8 doesn't divide cleanly into 10/12 memory channels. So how do they implement ECC?
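As a quick back-of-the-envelope check, the DIMM arithmetic above works out as follows (a Python sketch using the generic DIMM figures from this paragraph, not any NVIDIA-specific numbers):

```python
# ECC DIMM arithmetic: a 9th memory chip widens each 64-bit channel to 72 bits.
chips_per_channel = 8    # standard non-ECC DIMM
bits_per_chip = 8        # each chip contributes 8 bits to the bus

data_bus = chips_per_channel * bits_per_chip        # 64-bit data bus
ecc_bus = (chips_per_channel + 1) * bits_per_chip   # 72-bit bus with the ECC chip
extra_width = ecc_bus / data_bus - 1                # fractional extra bus width

print(f"{data_bus}-bit data bus -> {ecc_bus}-bit ECC bus (+{extra_width:.1%})")
# -> 64-bit data bus -> 72-bit ECC bus (+12.5%)
```

That 12.5% of extra bus width and chip count is exactly the overhead NVIDIA avoids paying in hardware.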

The short answer is that when NVIDIA wants to enable ECC they can just allocate RAM for the storage of ECC data. When ECC is enabled the available RAM will be reduced by 1/8th (to account for the 9th ECC bit) and then ECC data will be distributed among the RAM using that reserved space. This allows NVIDIA to implement ECC without the need for additional memory channels, at the cost of some RAM and some performance.
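The capacity cost of that reservation is easy to sketch (the 3GB board below is a hypothetical example for illustration; NVIDIA hasn't published per-board figures):

```python
def usable_after_ecc(total_mb: int) -> float:
    """RAM left for data once 1/8th is reserved for in-band ECC storage."""
    return total_mb * 7 / 8

# Hypothetical example: a 3GB (3072MB) board would drop to 2688MB usable.
print(usable_after_ecc(3072))
# -> 2688.0
```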

On the technical side, despite this difference in implementation NVIDIA tells us that they're still using standard Single Error Correction / Double Error Detection (SECDED) algorithms, so data reliability is the same as in a traditional implementation. Furthermore, NVIDIA tells us that the performance hit isn't a straight-up 12.5% reduction in effective memory bandwidth; rather, they have ways to minimize the hit. This is their "secret sauce," as they call it, and it's something they don't intend to discuss in detail at this time.
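NVIDIA hasn't disclosed its actual code layout, but SECDED itself is the standard extended Hamming scheme: any single flipped bit can be corrected, and any two flipped bits are detected. A minimal, purely illustrative Python sketch over a single byte (13-bit codeword: 8 data bits, 4 Hamming parity bits, 1 overall parity bit):

```python
DATA_POS = [3, 5, 6, 7, 9, 10, 11, 12]   # non-power-of-two Hamming positions
PARITY_POS = [1, 2, 4, 8]                # power-of-two parity positions

def secded_encode(byte):
    """Encode an 8-bit value into a 13-bit SECDED codeword (list of bits).
    Index 0 is the overall parity bit; 1..12 are standard Hamming positions."""
    code = [0] * 13
    for i, pos in enumerate(DATA_POS):
        code[pos] = (byte >> i) & 1
    for p in PARITY_POS:
        parity = 0
        for i in range(1, 13):
            if i & p:               # parity bit p covers positions with bit p set
                parity ^= code[i]
        code[p] = parity
    for b in code[1:]:
        code[0] ^= b                # overall parity turns SEC into SECDED
    return code

def secded_decode(code):
    """Return (status, data). Corrects single-bit errors, detects double."""
    code = code[:]
    syndrome = 0
    for p in PARITY_POS:
        parity = 0
        for i in range(1, 13):
            if i & p:
                parity ^= code[i]
        if parity:
            syndrome |= p           # syndrome spells out the failing position
    overall = 0
    for b in code:
        overall ^= b
    if syndrome == 0 and overall == 0:
        status = "ok"
    elif overall == 1:              # odd number of flips: exactly one bit wrong
        code[syndrome] ^= 1         # syndrome 0 means the overall parity bit itself
        status = "corrected"
    else:                           # even flips with nonzero syndrome: two errors
        return "double-bit error detected", None
    data = 0
    for i, pos in enumerate(DATA_POS):
        data |= code[pos] << i
    return status, data

word = secded_encode(0xA5)
word[6] ^= 1                        # flip one bit
print(secded_decode(word))          # -> ('corrected', 165)
word2 = secded_encode(0x3C)
word2[3] ^= 1
word2[9] ^= 1                       # flip two bits
print(secded_decode(word2))         # -> ('double-bit error detected', None)
```

A real GDDR5 implementation would apply a wider code (e.g. protecting 64-bit words) and interleave the check bits into the reserved memory region, but the correct-one/detect-two guarantee is the same.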

Shifting gears to the consumer side, back in January NVIDIA was showing off their Eyefinity-like solutions, 3DVision Surround and NVIDIA Surround, on the CES show floor. At the time we were told that the feature would launch with what is now the GTX 400 series, but as with everything else related to Fermi, it's late.

Neither 3DVision Surround nor NVIDIA Surround is available in the drivers sampled to us for this review. NVIDIA tells us that these features will be available in their Release 256 drivers due in April. There hasn't been any guidance on when in April these drivers will be released, so at this point it's anyone's guess whether they'll arrive in time for the GTX 400 series retail launch.

196 Comments

  • henrikfm - Tuesday, March 30, 2010 - link

    Now it would be easier to believe only idiots buy ultra-high end PC hardware parts.
  • ryta1203 - Tuesday, March 30, 2010 - link

    Is it irresponsible to use benchmarks designed for one card to measure the performance of another card?

    Sadly, the "community" tries to hold the belief that all GPU architectures are the same, which is of course not true.

    The N-queen solver is poorly coded for ATI GPUs, so of course, you can post benchmarks that say whatever you want them to say if they are coded that way.

    Personally, I find this fact invalidates the entire article, or at least the "compute" section of this article.
  • Ryan Smith - Wednesday, March 31, 2010 - link

    One of the things we absolutely wanted to do starting with Fermi is to include compute benchmarks. It's going to be a big deal if AMD and NVIDIA have anything to say about it, and in the case of Fermi it's a big part of the design decision.

    Our hope was that we'd have some proper OpenCL/DirectCompute apps by the time of the Fermi launch, but this hasn't happened. So our decision was to go ahead with what we had, and to try to make it clear that our OpenCL benchmarks were to explore the state of GPGPU rather than to make any significant claims about the compute capabilities of NVIDIA or AMD's GPUs. We would rather do this than to ignore compute entirely.

    It sounds like we didn't make this clear enough for your liking, and if so I apologize. But it doesn't make the results invalid - these are OpenCL programs and this is what we got. It just doesn't mean that these results will carry over to how a commercial OpenCL program may perform. In fact, if anything it adds fuel to the notion that OpenCL/DirectCompute will not be the great unifiers we had hoped for if it means developers are going to have to write paths optimized around NVIDIA's and AMD's different shader structures.
  • ryta1203 - Tuesday, March 30, 2010 - link

    The compute section of this article is just nonsense. Is this guy a journalist? What does he know about programming GPUs?
  • Firen - Tuesday, March 30, 2010 - link

    Thanks for this comprehensive review; it covers some very interesting topics between Team Green and Team Red.

    Yet, I agree with one of the comments here: you missed how easily the ATI 5850 and 5870 can be overclocked thanks to their lean design. A 5870 can easily deliver more or less the same performance as a 480 card while still running cooler and consuming less power.

    Some people might point out that our new 'champion' card can be overclocked as well. That's true. However, doesn't it feel terrifying to have a graphics card running hotter than boiling water?
  • Fulle - Tuesday, March 30, 2010 - link

    I wonder what kind of overclocking headroom the 470 has.... since someone with a 5850 can easily bump the voltage up a smidge, and get about a 30% overclock with minimal effort... people who tinker can usually safely reach about 1GHz core, for about a 37% overclock.

    Unless the 470 has a bit of overclocking headroom, someone with a 5850 could easily overclock to have superior performance, lower heat, lower noise, and lower power consumption.

    After all these months and months of waiting, Nvidia has basically released a few products that ATI can defeat by just binning their current GPUs and bumping up the clockspeed? *sigh* I really don't know who would buy these cards.
  • Shadowmaster625 - Tuesday, March 30, 2010 - link

    You're being way too kind to Nvidia. Up to 50% more power consumption for a very slight (at best) price/performance advantage? This isn't a repeat of the AMD/Intel thing. This is a massive difference in power consumption. We're talking about approximately $1 a year per hour a week of gaming. If you game for 20 hours a week, expect to pay $20 a year more for using the GTX 470 vs a 5850. May as well add that right to the price of the card.

    But the real issue is what happens to these cards when they get even a modest coating of dust in them? They're going to detonate...

    Even if the 470 outperformed the 5850 by 30%, I don't think it would be worth it. I can't stand loud video cards. It is totally unacceptable to me. I again have to ask the question I find myself asking quite often: what kind of world are you guys living in? nVidia should get nothing more than a poop-in-a-box award for this.
  • jujumedia - Wednesday, March 31, 2010 - link

    With those power draws and the temps it reaches in daily operation, I see GPU failure rates being high on the GTX 480 and 470, as they are already faulty from the fab lab. I'll stick with ATI for 10 fps less.
  • njs72 - Wednesday, March 31, 2010 - link

    I've been holding out for months to see what Fermi would bring to the world of GPUs. After reading countless reviews of this card I don't think it's a justifiable upgrade for my GTX 260. I mean, yeah, the performance is much higher, but in most benchmark reviews with games like Crysis this card barely wins against the 5870, and buying this card I would need to upgrade the PSU and possibly get a new case for ventilation. I keep loading up Novatech's website and almost adding a 5870 to the basket, instead of pre-ordering the GTX 480 like I was intending. What puts me off more than anything with the new NVIDIA card is its noise and temps. I can't see this card living for very long.

    I've been an NVIDIA fan ever since the first GeForce card came out, which I still have tucked away in a drawer somewhere. I find myself thinking of switching to ATI, but I've read too many horror stories about their driver implementation that put me off. Maybe I should just wait for NVIDIA to refresh its new card and keep hold of my 260 for a bit longer. I really don't know :-(
  • Zaitsev - Wednesday, March 31, 2010 - link

    There is an error with the Bad Company 2 image mouseovers for the GTX 480. I think the images for 2xAA and 4xAA have been mixed up; the 2xAA image clearly has more AA than the 4xAA image.

    Compare GTX 480 2x with GTX 285 4x and they look very similar. Also compare 480 4x with 285 2x.

    Very nice article, Ryan! I really enjoyed the tessellation tests. Keep up the good work.
