ECC Support

AMD's Radeon HD 5870 can detect errors on the memory bus, but it can't correct them. In Fermi, the register file, L1 cache, L2 cache and DRAM all have full ECC support. This is one of Fermi's Tesla-specific features.

Many Tesla customers won't even talk to NVIDIA about moving their algorithms to GPUs unless NVIDIA can deliver ECC support. The scale of their installations is so large that ECC is absolutely necessary (or at least perceived to be).

Unified 64-bit Memory Addressing

In previous architectures there was a different load instruction depending on the type of memory: local (per thread), shared (per group of threads) or global (per kernel). This created issues with pointers and generally made a mess that programmers had to clean up.

Fermi unifies the address space so that there's only one load instruction, and the address itself determines where the data lives: the lowest portion of the address space maps to local memory, the next window to shared memory, and the remainder is global.

The unified address space is apparently necessary to enable C++ support for NVIDIA GPUs, which Fermi is designed to do.
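As a rough sketch of the partitioning described above, a single load path can decode the target space from the address alone. The window boundaries below are invented purely for illustration; the article doesn't specify Fermi's actual generic-address map.

```cpp
#include <cstdint>

// Which physical memory a generic address resolves to. Pre-Fermi ISAs needed
// a distinct load instruction per space; with a unified address space the
// address itself carries this information.
enum class Space { Local, Shared, Global };

// Hypothetical window boundaries, for illustration only.
constexpr uint64_t kLocalWindowEnd  = 1ULL << 24;  // first 16MB -> local
constexpr uint64_t kSharedWindowEnd = 1ULL << 25;  // next 16MB -> shared

Space classify(uint64_t addr) {
    if (addr < kLocalWindowEnd)  return Space::Local;
    if (addr < kSharedWindowEnd) return Space::Shared;
    return Space::Global;  // everything else -> global memory
}
```

With a scheme like this, a pointer can be passed between functions without the compiler needing to know which memory it points into, which is exactly the mess the old per-space load instructions created.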

The other big change to memory addressability is in the size of the address space. G80 and GT200 had a 32-bit address space, but next year NVIDIA expects to see Tesla boards with over 4GB of GDDR5 on board. Fermi now supports 64-bit addresses, although the chip can physically address 40 bits' worth of memory, or 1TB. That should be enough for now.
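A quick sanity check on that figure: 40 physical address bits give 2^40 addressable bytes, which is exactly 1TB in the binary sense.

```cpp
#include <cstdint>

// 40 physical address bits -> 2^40 addressable bytes.
constexpr uint64_t kPhysicalAddressBits = 40;
constexpr uint64_t kAddressableBytes = 1ULL << kPhysicalAddressBits;

// One terabyte (binary): 1024^4 bytes.
constexpr uint64_t kTerabyte = 1024ULL * 1024 * 1024 * 1024;
static_assert(kAddressableBytes == kTerabyte, "40-bit addressing covers 1TB");
```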

Both the unified address space and 64-bit addressing are almost exclusively for the compute space at this point. Consumer graphics cards won't need more than 4GB of memory for at least another couple of years. These changes were painful for NVIDIA to implement, and ultimately contributed to Fermi's delay, but were necessary in NVIDIA's eyes.

New ISA Changes Enable DX11, OpenCL and C++, Visual Studio Support

Now this is cool. NVIDIA is announcing Nexus (no, not the thing from Star Trek Generations), a Visual Studio plugin that enables hardware debugging of CUDA code in Visual Studio. You can treat the GPU like a CPU, step into functions and inspect the state of the GPU, all within Visual Studio with Nexus. This is a huge step forward for CUDA developers.


Nexus running in Visual Studio on a CUDA GPU

Simply enabling DX11 support is a big enough change for a GPU - AMD had to go through that with RV870. Fermi implements a wide set of changes to its ISA, primarily aimed at enabling C++ support. Virtual functions, new/delete and try/catch are all parts of C++ now enabled on Fermi.
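To make that concrete, here is a minimal sketch using exactly those three features. On Fermi this kind of code would live in a device kernel compiled by nvcc; it's shown here as plain host-side C++, and the `Filter`/`Clamp` names are invented for illustration.

```cpp
#include <new>
#include <stdexcept>

// Virtual dispatch: the kind of abstract interface C++ support enables.
struct Filter {
    virtual ~Filter() = default;
    virtual float apply(float x) const = 0;  // virtual function
};

struct Clamp : Filter {
    float apply(float x) const override { return x < 0.0f ? 0.0f : x; }
};

float run(float x) {
    Filter* f = new Clamp();  // dynamic allocation via new/delete
    float y;
    try {                     // exception handling via try/catch
        y = f->apply(x);
    } catch (const std::exception&) {
        y = 0.0f;
    }
    delete f;
    return y;
}
```

None of this was expressible on G80/GT200-class hardware, which is why the ISA changes matter for moving C++ codebases onto the GPU.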

415 Comments

  • silverblue - Thursday, October 1, 2009 - link

    People will buy nVidia hardware for their HTPCs regardless of it having PhysX, AO, CUDA or whatever. Price is a very attractive factor, but so is noise and temperature, so people will go for what suits them the best. If people think nVidia offers more for the price, they will buy it, some may go for another option if they want less heat, or less speed or whatever. It's their choice, and not one made out of malice.

    This thread isn't full of nVidia-haters like you want to believe it is. Keep thinking that if you feel more comfortable doing so. In the end, we as consumers have a choice as to what we buy and nothing of what you are saying here has any bearing on that decision making process.
  • SiliconDoc - Thursday, October 1, 2009 - link

    I think I'll just ignore you, since you seem to have acquired a Svengali mind read on your big "we" extension, and somehow think you represent every person here.
    I don't put any stock in your idiotic lunatic demi-god musings.
    --
    If you ever say anything worth more than a piece of scat, I will however respond appropriately.
    I'll remind you, you can't even prevent YOURSELF from being influenced by me, let alone "everyone here".
    Now if you don't have any KNOWLEDGE on the HTPC issues and questions I brought up with this other poster and his HTPC dreams, please excuse your mind reading self, and keep yourself just as deluded as possible.
    I find this a classic IDIOCY : " we as consumers have a choice as to what we buy (oh no problem there)

    and nothing of what you are saying here has any bearing on that decision making process. "

    You just keep telling yourself that, you unbelievably deranged goofball. LOL, and maybe it will become true for you, if you just keep repeating it.
    The first sign of your own cracked shield in that area is you actually saying that. You've already been influenced, and you're so goofy, you just had to go in text and claim no one ever will be.
    I mean, you are so much worse than anything I've done here it is just amazing.
    How often do you tell yourself fantasies that there is no chance you can possibly believe or prove, and in fact, have likely already failed yourself ?
    Really, I mean absolutely.
  • silverblue - Friday, October 2, 2009 - link

    If you had a mind left to form any sort of coherent thought patterns, we might take you seriously here. You have just admitted (in your own incoherent, babbling way) that you are trying to actively (and forcibly, I might add) influence people to buy nVidia cards over ATI. I'm telling you that you've failed and will continue to fail as long as you keep shimmying up and down the green flag pole in the name of progress. I wonder if anyone at nVidia reads these comments; what must they think of you? If they considered AT a biased publication then they wouldn't speak with Anand as cordially as they do.

    I say "we" because, unless you've opened your eyes, "we" as a community are becoming even more united against no-brained deluded fanboys such as yourself. We DON'T hate nVidia, a lot of people here own nVidia cards, some only have nVidia cards, some own nVidia and ATI, and some own ATI. This isn't about hatred or bias is misinformation; this is about one socially inept weasel who has been attempting to shove his knowledge down everyone else's throats on this (and other subjects) whether there's any factual basis to it or not.

    You disagree with me using the term "we", fine. I personally want to see the GT300 launch. I personally want nVidia to bring out a mainstream flavour to compete with the 5850. I personally want prices to fall. I personally don't have anything against PhysX, CUDA or AO. I personally want to see 3D gaming gather momentum.

    Now ask yourself - can you be as objective and impartial as that?

    You just seem to read what you like and completely miss the point of any post you reply to. There's no way someone can be impartial on this site with you around because any word of praise about ATI equates to bias in your head.

    There's only so far you can go before someone clicks the Ban button but I'm sure you'll come back with another account.
  • shotage - Thursday, October 1, 2009 - link

    Still voting to get you banned SiliconDoc.
  • Zool - Thursday, October 1, 2009 - link

    I don't think it's too fair of nVidia to make everyone pay the extra design and manufacturing cost of the GPGPU bloat. They launched the Tesla card because it costs insane money and they can get away with current yields. For the majority of graphics, SIMD with almost no branching is utterly enough. I mean, if they made standalone CUDA cards without the useless graphics pipeline parts, they could be smaller or faster. And that goes for graphics too.
    I mean, how hard would it be for AMD or Intel to put some similar low-transistor-budget SIMD units into the CPU pipeline, like those in a GPU? They could run at CPU clocks and would be an integral part of the CPU (latencies, cache, etc.).
    I don't think that's the right strategy for nVidia.
  • silverblue - Thursday, October 1, 2009 - link

    nVidia could charge a premium for the Tesla-badged cards due to their potential savings over the more traditional method of using masses of general-purpose servers, however they may want to really establish Tesla as a viable option so they can't very well charge too much for it.

    I'm interested in seeing the peak performance figures for both Cypress and Fermi; will the AMD part still have an advantage in raw processing power due to having many many more, if weaker, SPs/cores? And will it matter in the working environment?
  • Zool - Thursday, October 1, 2009 - link

    nVidia's dreams of 500x performance in the coming years are actually only for GPGPU, not graphics.
    The current-gen cards are beginning to show some strange scaling (I think nVidia won't be different in this case either).
    They will need more changes than just doubling everything if they want to utilize more shader processors. If you think about it, feeding 1600 shaders (actually 320 is more realistic) at 850 MHz is quite a transistor challenge (CPUs, with their large caches and far fewer cores, look like baby toys next to these).
    Actually, there are physical limits to transistors too. Increasing to 4 billion transistors and 3200 shaders in the next card would need even more internal speed. It might be easier to place two RV870 dies in one GPU than to double everything.
  • neomocos - Thursday, October 1, 2009 - link

    We all like our freedom of opinion at Anand, and this article was very interesting, as were the comments - that is, until SiliconDoc started trashing everything. As stated by a lot of other users, I ask Anand to take some action against this user; he is ruining my experience, and others', of calmly reading the articles in the morning with a coffee :). All his arguments, and the way he throws them around, are so random and make no sense; he sounds like a man who needs his drug of praising nVidia and trashing the red rooster any way he can, even with no real arguments. I read with pleasure the comments of the smart, non-biased guys posting here, but this guy is just talking crap to fill the lines.
    On topic ... considering what 5850 has : eyefinity , performance/price, directx 11, power cons, and most important availability i was smiling to myself and thinking that ATI will have killer sales this 3 months left of 2009. I personally will wait for nVidia to bring out Fermi, and with it the price war, because we all know that all prices will go down then; I estimate $150 for the 5850 and about $200 for the 5870 around June, and if nVidia has better price/perf I will definitely buy it.
  • SiliconDoc - Thursday, October 1, 2009 - link

    And, here we have your contribution, after whining about me claiming no points, the usual bs from red lovers, here is the evidence of your bloodshot eyes, at least you've accepted my direct orders and forced yourself to talk topic.
    -
    " On topic ... considering what 5850 has : eyefinity , performance/price, directx 11, power cons, and most important availability i was smiling to myself and thinking that ATI will have killer sales this 3 months left of 2009. "
    --
    And after you realize what a red rooster you just were, whether you thought it was a good jab at me, since you know I'll read your attack and that's what the attack was about, or whether you couldn't help yourself, you went on to claim how fair and balanced you are after you hoped for 2 cheap ati cards. LOL The afterthought, barely surfacing from the lack of wattage, added at the end, "if nvidia has better I'll blah blah"..
    FUNNY how you talk about THOSE CARDS in the TESLA THREAD, when you are ON TOPIC !
    roflmao !
    Wowzie!
    Cocka ! Doodle ! Doooo !
    Let me ask you, since you considered eyefinity so great, do you
  • shotage - Thursday, October 1, 2009 - link

    1 vote to get you banned.
