ECC Support

AMD's Radeon HD 5870 can detect errors on the memory bus, but it can't correct them. Fermi goes further: the register file, L1 cache, L2 cache and DRAM all have full ECC support. This is one of those Tesla-specific features.

Many Tesla customers won't even talk to NVIDIA about moving their algorithms to GPUs unless NVIDIA can deliver ECC support. The scale of their installations is so large that ECC is absolutely necessary (or at least perceived to be).

Unified 64-bit Memory Addressing

In previous architectures there was a different load instruction depending on the type of memory: local (per thread), shared (per group of threads) or global (per kernel). This created issues with pointers and generally made a mess that programmers had to clean up.

Fermi unifies the address space so that there's only one instruction and the address of the memory is what determines where it's stored. The lowest bits are for local memory, the next set is for shared and then the remainder of the address space is global.
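To make that concrete, here's a minimal CUDA sketch (assuming a Fermi-class card compiled with nvcc -arch=sm_20; all names are illustrative) of what unified addressing buys you: one device function can take a plain pointer and be handed global, shared or thread-local memory, because the hardware resolves the memory space from the address itself.

    // On pre-Fermi parts the compiler had to emit a space-specific
    // load (ld.global, ld.shared, ld.local) and so needed to know at
    // compile time which space a pointer referred to. With Fermi's
    // generic addressing, one load instruction serves every caller.
    __device__ float sum3(const float *p) {
        return p[0] + p[1] + p[2];
    }

    __global__ void demo(const float *global_in, float *out) {
        __shared__ float tile[3];
        float local_copy[3];  // thread-private (local) memory

        if (threadIdx.x < 3)
            tile[threadIdx.x] = global_in[threadIdx.x];
        __syncthreads();

        if (threadIdx.x == 0) {
            for (int i = 0; i < 3; i++)
                local_copy[i] = tile[i];

            // The same function handles all three memory spaces.
            out[0] = sum3(global_in);   // global memory
            out[1] = sum3(tile);        // shared memory
            out[2] = sum3(local_copy);  // local memory
        }
    }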

The unified address space is apparently necessary to enable C++ support on NVIDIA GPUs, something Fermi is explicitly designed to deliver.

The other big change to memory addressability is the size of the address space itself. G80 and GT200 had a 32-bit address space, but next year NVIDIA expects to see Tesla boards with more than 4GB of GDDR5 on board. Fermi moves to 64-bit addresses, although the chip can physically address only 40 bits' worth of memory: 2^40 bytes, or 1TB. That should be enough for now.

Both the unified address space and 64-bit addressing are almost exclusively for the compute space at this point; consumer graphics cards won't need more than 4GB of memory for at least another couple of years. These changes were painful for NVIDIA to implement and ultimately contributed to Fermi's delay, but in NVIDIA's eyes they were necessary.

New ISA Changes Enable DX11, OpenCL, C++ and Visual Studio Support

Now this is cool. NVIDIA is announcing Nexus (no, not the thing from Star Trek Generations), a Visual Studio plugin that enables hardware debugging of CUDA code. With Nexus you can treat the GPU like a CPU: step into functions and inspect GPU state, all from within Visual Studio. This is a huge step forward for CUDA developers.


Nexus running in Visual Studio on a CUDA GPU

Simply enabling DX11 support is a big enough change for a GPU - AMD had to go through that with RV870. Fermi goes further, implementing a wide set of ISA changes aimed primarily at enabling C++ support: virtual functions, new/delete and try/catch are all C++ features now supported on Fermi. A sketch of what that can look like in device code follows.
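As a rough illustration (not NVIDIA's sample code), the sketch below shows the kind of device-side C++ this enables, assuming a Fermi-class card (compute capability 2.0) and a CUDA toolkit recent enough to expose device-side new/delete. The ISA groundwork for try/catch is there, but CUDA hasn't exposed device-side exceptions, so the example sticks to virtual functions and heap allocation; all names are illustrative.

    #include <cstdio>

    // A small class hierarchy living entirely in device code.
    struct Shape {
        __device__ virtual float area() const = 0;
        __device__ virtual ~Shape() {}
    };

    struct Square : Shape {
        float side;
        __device__ explicit Square(float s) : side(s) {}
        __device__ float area() const { return side * side; }
    };

    __global__ void demo() {
        // Device-side new/delete: the object lives on the GPU heap.
        Shape *s = new Square(3.0f);
        // The virtual call is dispatched on the GPU, using the
        // indirect-call support added to Fermi's ISA.
        printf("area = %f\n", s->area());
        delete s;
    }

    int main() {
        demo<<<1, 1>>>();         // launch a single thread
        cudaDeviceSynchronize();  // wait for the device-side printf
        return 0;
    }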

Comments

  • shotage - Thursday, October 1, 2009 - link

    lol

    *shakes head*
  • palladium - Thursday, October 1, 2009 - link

    Ahh, he said a 9800 GTX + GDDR5 = 4870!
  • blindbox - Thursday, October 1, 2009 - link

    Oops, I think I should say something on-topic at least. Could anyone tell me if the OpenCL SDK is out yet? Or DirectCompute? It has been over a year since GPU computing was announced and nothing useful for consumers has come of it (I don't count folding as a consumer application).
  • habibo - Thursday, October 1, 2009 - link

    Yes, both OpenCL and DirectCompute are available for development. It will take time for developers to release applications that use these APIs.

    There are already consumer applications that use CUDA, although these are mostly video encoding, Folding@Home/SETI@home, and PhysX-based games. Possibly not too exciting to you, but hopefully more will be coming as GPU computing gains traction.
  • PorscheRacer - Thursday, October 1, 2009 - link

    Does anyone know if the 5000 series supports hardware virtualisation? I think this will be the killer feature once AMD's 800 series chipsets debut here shortly. Being able to virtualise the GPU and other hardware with your virtual machines is the last stop to pure bliss.
  • dgz - Thursday, October 1, 2009 - link

    I am also curious. Right now only nVidia's Quadro cards support this.

    The thing is, though, that your CPU and chipset also have to support what Intel calls VT-d.

    Being able to play 3D games in virtual OS with little to no performance would be great and useful.

    Not going to happen soon, though. It's also funny that virtually no one mentioned the lack of VT-d in the Lynnfield 750 in his "deep" review. Huge disappointment.
  • wifiwolf - Thursday, October 1, 2009 - link

    If there's any technology that seems to scratch that virtualization itch, I think this new GT300 is the one. Reading about NVIDIA making the card compute-oriented just drove my mind to that thought. Hope I'm right. To be fair to AMD, I think their doubled stream processors could be a step in that direction too, coupled with DX11 DirectCompute. Virtual machines just need to acknowledge the cards and their capabilities.
  • dgz - Friday, October 2, 2009 - link

    They already do. vmware and vbox have such capabilities. Not everything is possible atm, though.
  • dgz - Thursday, October 1, 2009 - link

    oops, I meant "little to no performance penalty" :)
  • sigmatau - Thursday, October 1, 2009 - link

    According to the super troll who keeps screeching about bandwidth, the GT300 must be a lesser card since it doesn't have a 512-bit connection like the GT200.

    LOL @ Trolls.
