Odds & Ends: ECC & NVIDIA Surround Missing

One of the things we have been discussing with NVIDIA for this launch is ECC. As we just went over in our GF100 Recap, Fermi offers ECC support for its register file, L1 cache, L2 cache, and RAM. The latter is the most interesting, as under normal circumstances implementing ECC requires a wider bus and additional memory chips. The GTX 400 series will not be using ECC, but we went ahead and asked NVIDIA how ECC will work on Fermi products anyhow.

To put things in perspective, for PC DIMMs an ECC DIMM has 9 chips per channel (9 bits per byte) hooked up to a 72-bit bus, instead of 8 chips on a 64-bit bus. However, NVIDIA doesn't have the ability or the desire to add even more RAM channels to their products, not to mention that 8 doesn't divide cleanly into 10/12 memory channels. So how do they implement ECC?

The short answer is that when NVIDIA wants to enable ECC, they simply allocate a portion of RAM for the storage of ECC data. When ECC is enabled the available RAM is reduced by 1/8th (to account for the 9th ECC bit), and ECC data is then distributed among the RAM using that reserved space. This allows NVIDIA to implement ECC without the need for additional memory channels, at the cost of some RAM and some performance.
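
The memory cost of that carve-out is simple to model. The sketch below is our own illustration (the 1.5GB figure is a hypothetical board size, not a disclosed product spec), showing how much RAM survives once 1 bit per byte is reserved:

```python
# Hypothetical sketch of the ECC carve-out described above: with ECC
# enabled, 1/8th of the frame buffer is reserved to hold check bits,
# standing in for the 9th bit per byte that an ECC DIMM would provide.
def usable_after_ecc(total_bytes: int) -> int:
    """Bytes left for applications once the ECC region is reserved."""
    return total_bytes - total_bytes // 8

total_mb = 1536                       # e.g. a board with 1.5GB of GDDR5
usable_mb = usable_after_ecc(total_mb * 1024 * 1024) // (1024 * 1024)
print(usable_mb)                      # 1344 -> 192MB set aside for ECC
```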

On the technical side, despite this difference in implementation NVIDIA tells us that they're still using standard Single Error Correction / Double Error Detection (SECDED) algorithms, so data reliability is the same as in a traditional implementation. Furthermore, NVIDIA tells us that the performance hit isn't a straight-up 12.5% reduction in effective memory bandwidth; rather, they have ways to minimize the hit. This is their "secret sauce," as they call it, and it's something they don't intend to discuss in detail at this time.
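
SECDED itself is standard textbook material: an extended Hamming code that corrects any single-bit error and detects (but cannot correct) any double-bit error. The toy sketch below implements it over a single byte (a Hamming(12,8) code plus an overall parity bit, 13 bits total); real implementations such as the (72,64) code on ECC DIMMs work the same way over a 64-bit word. This is purely illustrative, since NVIDIA has not disclosed the specifics of their implementation.

```python
# Toy SECDED over one byte: 4 Hamming check bits plus an overall parity
# bit. Illustrative only -- not NVIDIA's actual hardware logic.
DATA_POS = [3, 5, 6, 7, 9, 10, 11, 12]   # non-power-of-2 positions hold data
CHECK_POS = [1, 2, 4, 8]                 # power-of-2 positions hold parity

def encode(data: int) -> list:
    """Encode an 8-bit value into a 13-bit SECDED codeword."""
    bits = [0] * 13                      # bits[0] is the overall parity bit
    for i, pos in enumerate(DATA_POS):
        bits[pos] = (data >> i) & 1
    for p in CHECK_POS:                  # each check bit covers the positions
        for pos in range(1, 13):         # whose index has bit p set
            if pos != p and pos & p:
                bits[p] ^= bits[pos]
    for pos in range(1, 13):
        bits[0] ^= bits[pos]             # overall parity over the codeword
    return bits

def decode(bits: list):
    """Return (data, status): status is 'ok', 'corrected' or 'double'."""
    syndrome = 0
    for p in CHECK_POS:
        parity = 0
        for pos in range(1, 13):
            if pos & p:
                parity ^= bits[pos]
        if parity:
            syndrome |= p                # syndrome = position of a lone error
    overall = 0
    for b in bits:
        overall ^= b                     # 0 for an even number of bit flips
    if syndrome == 0 and overall == 0:
        status = 'ok'
    elif overall == 1:                   # odd flip count: assume one, fix it
        bits[syndrome] ^= 1              # syndrome 0 means bits[0] itself
        status = 'corrected'
    else:                                # even flips, bad syndrome: two errors
        status = 'double'
    data = 0
    for i, pos in enumerate(DATA_POS):
        data |= bits[pos] << i
    return data, status
```

Flipping any one bit of `encode(0xA7)` decodes back to `0xA7` with status `'corrected'`; flipping any two bits yields `'double'`, which is exactly the guarantee SECDED makes.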

Shifting gears to the consumer side, back in January NVIDIA was showing off their Eyefinity-like solutions, 3DVision Surround and NVIDIA Surround, on the CES show floor. At the time we were told that the feature would launch with what is now the GTX 400 series, but as with everything else related to Fermi, it's late.

Neither 3DVision Surround nor NVIDIA Surround is available in the drivers sampled to us for this review. NVIDIA tells us that these features will be available in their release 256 drivers, due in April. There hasn't been any guidance on when in April these drivers will be released, so at this point it's anyone's guess whether they'll arrive in time for the GTX 400 series retail launch.

197 Comments

  • GTaudiophile - Saturday, March 27, 2010 - link

    Me thinks that Cypress really blindsided nVidia. And then on top of it being such an efficient chip, you throw in Eyefinity and all of the audio over HDMI features, etc.

    Talk about a smack down.
    Reply
  • AnnihilatorX - Saturday, March 27, 2010 - link

    Page 2:

    Finally we bad news: availability. This is a paper launch;
    Reply
  • simtex - Saturday, March 27, 2010 - link

    With the current console generation being the primary focus of game developers, I find it hard to believe that tessellation will get its big breakthrough anytime soon. With the next-gen consoles it will come, but that is a few years from now, and hopefully by then we will have seen at least one new generation of GPUs.
    Reply
  • viewwin - Saturday, March 27, 2010 - link

    I would like a test to see how the new cards do at video encoding.
    Reply
  • Philip123 - Saturday, March 27, 2010 - link

    These things are not "single slot cards." They are double-slot; they take 2 slots. No review should be published without pointing out performance per watt. If you don't publish performance per dollar, including the 100 watt premium over 3 years, you are not doing your job. Only an idiot would buy anything from NVIDIA. You really think anyone is going to want fan noise from these monstrosities anywhere near them?

    SHAME SHAME SHAME.
    Throw this bullshit in the garbage and tell NVIDIA to f-off until it releases an actual computer graphics product instead of a space heater.
    Reply
  • AnnonymousCoward - Saturday, March 27, 2010 - link

    You are so wrong.

    You talk about dual-slot cards as if it's a bad thing--it's the best design currently possible, since it allows for efficient cooling without much fan noise, and the heat goes outside your case. Plus, AMD's 5870 & 5850 are also dual slot!

    "No review should be published without pointing out performance per watt" - what gamer cares about that? That's a concern for server farmers!
    Reply
  • 529th - Saturday, March 27, 2010 - link

    I think people would be interested in seeing these cards overclocked, and also in seeing the 470 in SLI.
    Reply
  • cobra32 - Saturday, March 27, 2010 - link

    So NVIDIA's fastest card is 11% faster than AMD's mid-level card, the 5870, and AMD's top card, the 5970, is a lot faster than the 480 GTX. Do not give me that the 5970 is a two-chip card and cannot be compared to a single-chip card. Sorry guys, the 5970 takes up one slot just like the 480 GTX, and it is faster and consumes less energy to move things on the screen faster. If I've got 3 slots on my motherboard I can have 6 video chips with an ATI setup, while with an NVIDIA setup I can have 3 at the most. Until NVIDIA has a two-chip version, which looks impossible with this power-hungry design, ATI has the top single card you can buy, be it two chips or not. It took them 6 months and you still cannot buy one; paper launches suck. I have bought several NVIDIA cards and liked them all, but this one really looks to fall short. If I've got one slot to put my video card in, ATI has the highest-performing card I can buy, the 5970. It's like choosing between a single-core and a dual-core CPU, and that's a no-brainer; two is always better than one.
    Reply
  • Roland00 - Saturday, March 27, 2010 - link

    Two things:
    You can only have 4 GPUs in NVIDIA or ATI multi-GPU setups. That means two 5970s, not 3. (Well, you can have 4 GPUs plus one dedicated PhysX card.)

    Second, CrossFire doesn't scale well past 2 cards, and SLI doesn't scale well past 3.
    Reply
  • derrida - Saturday, March 27, 2010 - link

    Thank you Ryan for including OpenCL benchmarks.
    Reply
