ECC Support

AMD's Radeon HD 5870 can detect errors on the memory bus, but it can't correct them. Fermi, by contrast, has full ECC support on the register file, L1 cache, L2 cache and DRAM. ECC is one of Fermi's Tesla-specific features.

Many Tesla customers won't even talk to NVIDIA about moving their algorithms to GPUs unless NVIDIA can deliver ECC support. The scale of their installations is so large that ECC is absolutely necessary (or at least perceived to be).

Unified 64-bit Memory Addressing

In previous architectures there was a different load instruction depending on the type of memory: local (per thread), shared (per group of threads) or global (per kernel). This created issues with pointers and generally made a mess that programmers had to clean up.

Fermi unifies the address space so that there's only one load instruction, and the address itself determines where the data lives. The lowest addresses map to local memory, the next window to shared memory, and the remainder of the address space is global.
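The windowed layout described above can be sketched as a tiny address decoder. Note that the boundary values below are purely illustrative assumptions for the sake of the example, not NVIDIA's actual address map:

```cpp
#include <cstdint>

// Sketch of a windowed unified address space: the low region maps
// to per-thread local memory, the next window to shared memory, and
// everything above is global. Boundaries are invented for illustration.
enum class Space { Local, Shared, Global };

constexpr uint64_t kLocalEnd  = 0x01000000;  // assumed window boundary
constexpr uint64_t kSharedEnd = 0x02000000;  // assumed window boundary

constexpr Space classify(uint64_t addr) {
    // One generic address; the range it falls in decides the memory space,
    // so a single load instruction can serve all three.
    if (addr < kLocalEnd)  return Space::Local;
    if (addr < kSharedEnd) return Space::Shared;
    return Space::Global;
}
```

The payoff is that a pointer can be passed around without the compiler having to know, at the call site, which memory space it refers to.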

The unified address space is apparently necessary to enable C++ support for NVIDIA GPUs, which Fermi is designed to do.

The other big change to memory addressability is in the size of the address space. G80 and GT200 had a 32-bit address space, but next year NVIDIA expects to see Tesla boards with over 4GB of GDDR5 on board. Fermi now supports 64-bit addresses, but the chip can physically address 40 bits of memory, or 1TB. That should be enough for now.
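The arithmetic behind that 1TB figure is a quick check: compare the 40-bit physical space against the 32-bit space of G80/GT200.

```cpp
#include <cstdint>

// 2^40 bytes (Fermi's 40-bit physical addressing) versus
// 2^32 bytes (the G80/GT200 address space).
constexpr uint64_t kBytes40 = 1ULL << 40;  // 1 TB
constexpr uint64_t kBytes32 = 1ULL << 32;  // 4 GB

// How many times more memory 40-bit addressing can reach.
constexpr uint64_t address_ratio() { return kBytes40 / kBytes32; }
```

2^40 bytes is 1,099,511,627,776 bytes, i.e. 1TB, a 256x jump over the 4GB ceiling of a 32-bit space.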

Both the unified address space and 64-bit addressing are almost exclusively for the compute space at this point. Consumer graphics cards won't need more than 4GB of memory for at least another couple of years. These changes were painful for NVIDIA to implement, and ultimately contributed to Fermi's delay, but they were necessary in NVIDIA's eyes.

New ISA Changes Enable DX11, OpenCL and C++, Visual Studio Support

Now this is cool. NVIDIA is announcing Nexus (no, not the thing from Star Trek Generations), a Visual Studio plugin that enables hardware debugging of CUDA code. With Nexus you can treat the GPU like a CPU: step into functions and inspect the state of the GPU, all within Visual Studio. This is a huge step forward for CUDA developers.


Nexus running in Visual Studio on a CUDA GPU

Simply enabling DX11 support is a big enough change for a GPU - AMD had to go through that with RV870. Fermi also implements a wide set of ISA changes, primarily aimed at enabling C++ support. Virtual functions, new/delete, and try/catch are all C++ features now supported on Fermi.
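For a sense of what that means, these are the kinds of constructs Fermi's ISA makes expressible on the GPU. The sketch below is plain host-side C++, not CUDA device code; it just demonstrates the three features in question:

```cpp
// Host-side illustration of the C++ features Fermi enables in device
// code: virtual dispatch, dynamic allocation with new/delete, and
// try/catch exception handling.
struct Shape {
    virtual ~Shape() {}
    virtual int sides() const = 0;   // virtual function
};
struct Triangle : Shape {
    int sides() const override { return 3; }
};

int count_sides() {
    Shape* s = new Triangle();       // dynamic allocation via new
    int n = 0;
    try {
        n = s->sides();              // virtual dispatch through a base pointer
        if (n == 0) throw 1;         // try/catch control flow
    } catch (...) {
        n = -1;
    }
    delete s;                        // matching delete
    return n;
}
```

All three constructs require runtime machinery (vtables, a heap, unwinding) that earlier GPU ISAs simply couldn't express.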


  • yacoub - Thursday, October 1, 2009 - link

    uh-oh, boys, he's foaming at the mouth. time to put him down.
  • SiliconDoc - Thursday, October 1, 2009 - link

    Ah, another coward defeated. No surprise.
  • yacoub - Wednesday, September 30, 2009 - link

    "The motivation behind AMD's "sweet spot" strategy wasn't just die size, it was price."

    LOL, no it wasn't. Not when everyone, even Anandtech staff, anticipated the pricing for the two Cypress chips to be closer to $199 and $259, not the $299 and $399 they MSRP'd at.

    This return to high GPU prices is disheartening, particularly in this economy. We had better prices for cutting edge GPUs two years ago at the peak of the economic bubble. Today in the midst of the burst, they're coming out with high-priced chips again. But that's okay, they'll have to come down when they don't get enough sales.
  • SiliconDoc - Thursday, October 1, 2009 - link

    It was fun for half a year as the red fans were strung along with the pricing fantasy here.
    Now of course, well, the bitter disappointment: not as fast as expected and much more costly. "low yields" - you know, that problem that makes ati "smaller dies" price like "big green monsters" (that have good yields on the GT300).
    --
    But, no "nothing is wrong, this is great!" Anyone not agreeing is "a problem". A paid agent, too, of that evil money bloated you know who.
  • the zorro - Thursday, October 1, 2009 - link

    silicon duck, please take a valium. i'm worried about you.
  • SiliconDoc - Thursday, October 1, 2009 - link

    Another lie, no worry, you're no physician, but I am SiliconDoc, so grab your gallon red water bottle reserve for your overheating ati card and bend over and self-administer your enema, as usual.
  • araczynski - Wednesday, September 30, 2009 - link

    sounds like ati will win the bang for the buck war this time as well. at least it makes the choice easier for me.
  • marc1000 - Wednesday, September 30, 2009 - link

    Some time ago I heard that the next gen of consoles would run DX11 (PlayStation 2 and Xbox were DX7, PS3 and X360 DX9. So PS4 and X720 could perfectly be DX11). If this is the case, we are about to see new consoles with really awesome graphics - and then the GPU race would need to start over to reach more and more performance.

    Do you guys have any news on those new consoles development? It could complete the figure in the new GPU articles this year.
  • Penti - Friday, October 2, 2009 - link

    I think you mean DX9-class hardware. The PS3 has zero DX9 support, and the XBOX360 has DX9c-class support, but a console-specific version. The PS3 was using OpenGL ES 1.0 with shaders and other features from 2.0, as it was released before the OpenGL ES 2.0 spec. The game engines don't need the DX API. It doesn't matter to game content developers anyway.

    Xbox was actually DirectX 8.1 equivalent. As said, next gen consoles are years away. Larrabee and Fermi will have been long out by then.
  • haukionkannel - Thursday, October 1, 2009 - link

    Rumours say that next generation consoles will be released around 2013-2014...
    But who can say...
