Efficiency Gets Another Boon: Parallel Kernel Support

In GPU programming, a kernel is the function or small program running across the GPU hardware. Kernels are parallel in nature and perform the same task(s) on a very large dataset.
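A minimal sketch of what such a kernel looks like in CUDA C (the function name and the scaling operation are illustrative, not from the article):

```cuda
// Illustrative CUDA kernel: every thread runs the same code,
// each thread operating on one element of a large array.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard against running past the array
        data[i] *= factor;
}
```

Launched with enough blocks to cover the array, thousands of these threads execute in parallel across the GPU's SMs.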

Typically, companies like NVIDIA don't disclose their hardware limitations until a developer bumps into one of them. In GT200/G80, the entire chip could only be working on one kernel at a time.

When dealing with graphics this isn't usually a problem. There are millions of pixels to render. The problem is wider than the machine. But as you start to do more general purpose computing, not all kernels are going to be wide enough to fill the entire machine. If a single kernel couldn't fill every SM with threads/instructions, then those SMs just went idle. That's bad.


GT200 (left) vs. Fermi (right)

Fermi, once again, fixes this. Fermi's global dispatch logic can now issue multiple kernels in parallel to the entire system. And with a chip more than twice the size of GT200, the likelihood of idle SMs goes up tremendously; NVIDIA needs to be able to dispatch multiple kernels in parallel to keep Fermi fed.
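In CUDA terms, concurrent kernel execution is exposed through streams: kernels launched into different streams may run on the chip at the same time. A hedged sketch (kernelA/kernelB and their launch parameters are hypothetical):

```cuda
// Hypothetical sketch: two independent kernels issued into separate
// streams. On Fermi-class hardware (compute capability 2.0+) the
// dispatcher may run them concurrently instead of serializing them.
cudaStream_t s0, s1;
cudaStreamCreate(&s0);
cudaStreamCreate(&s1);

kernelA<<<gridA, blockA, 0, s0>>>(bufA);  // narrow kernel, fills only a few SMs
kernelB<<<gridB, blockB, 0, s1>>>(bufB);  // can run on the SMs kernelA leaves idle

cudaStreamSynchronize(s0);
cudaStreamSynchronize(s1);
cudaStreamDestroy(s0);
cudaStreamDestroy(s1);
```

On GT200-class hardware the same two launches would simply execute back to back.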

Application switch time (moving between graphics and CUDA modes) is also much faster on Fermi. NVIDIA says the transition is now 10x faster than on GT200, and fast enough to be performed multiple times within a single frame. This is very important for implementing more elaborate GPU accelerated physics (PhysX, for example).

The connections to the outside world have also been improved. Fermi now supports parallel transfers to/from the CPU. Previously CPU->GPU and GPU->CPU transfers had to happen serially.
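Overlapped transfers are driven from the same stream API; the sketch below (buffer names and sizes assumed) queues an upload and a download that Fermi's dual DMA engines can service simultaneously:

```cuda
// Hypothetical sketch: with pinned (page-locked) host buffers, an
// upload and a download queued on different streams can proceed at
// the same time on hardware with two copy engines.
cudaMemcpyAsync(d_in,  h_in,  bytes, cudaMemcpyHostToDevice, s0);
cudaMemcpyAsync(h_out, d_out, bytes, cudaMemcpyDeviceToHost, s1);
cudaStreamSynchronize(s0);
cudaStreamSynchronize(s1);
```

Note that the host buffers must be allocated with cudaHostAlloc for the copies to be truly asynchronous; with ordinary pageable memory they fall back to synchronous behavior.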


  • silverblue - Thursday, October 1, 2009 - link

    People will buy nVidia hardware for their HTPCs regardless of it having PhysX, AO, CUDA or whatever. Price is a very attractive factor, but so is noise and temperature, so people will go for what suits them the best. If people think nVidia offers more for the price, they will buy it, some may go for another option if they want less heat, or less speed or whatever. It's their choice, and not one made out of malice.

    This thread isn't full of nVidia-haters like you want to believe it is. Keep thinking that if you feel more comfortable doing so. In the end, we as consumers have a choice as to what we buy and nothing of what you are saying here has any bearing on that decision making process.
  • SiliconDoc - Thursday, October 1, 2009 - link

    I think I'll just ignore you, since you seem to have acquired a Svengali mind read on your big "we" extension, and somehow think you represent every person here.
    I don't put any stock in your idiotic lunatic demi-god musings.
    --
    If you ever say anything worth more than a piece of scat, I will however respond appropriately.
    I'll remind you, you can't even prevent YOURSELF from being influenced by me, let alone "everyone here".
    Now if you don't have any KNOWLEDGE on the HTPC issues and questions I brought up with this other poster and his HTPC dreams, please excuse your mind reading self, and keep yourself just as deluded as possible.
    I find this a classic IDIOCY : " we as consumers have a choice as to what we buy (oh no problem there)

    and nothing of what you are saying here has any bearing on that decision making process. "

    You just keep telling yourself that, you unbelievably deranged goofball. LOL, and maybe it will become true for you, if you just keep repeating it.
    The first sign of your own cracked shield in that area is you actually saying that. You've already been influenced, and you're so goofy, you just had to go in text and claim no one ever will be.
    I mean, you are so much worse than anything I've done here, it is just amazing.
    How often do you tell yourself fantasies that you have no chance of possibly believing or proving, and have in fact likely already failed yourself?
    Really, I mean absolutely.
  • silverblue - Friday, October 2, 2009 - link

    If you had a mind left to form any sort of coherent thought patterns, we might take you seriously here. You have just admitted (in your own incoherent, babbling way) that you are trying to actively (and forcibly, I might add) influence people to buy nVidia cards over ATI. I'm telling you that you've failed and will continue to fail as long as you keep shimmying up and down the green flag pole in the name of progress. I wonder if anyone at nVidia reads these comments; what must they think of you? If they considered AT a biased publication then they wouldn't speak with Anand as cordially as they do.

    I say "we" because, unless you've opened your eyes, "we" as a community are becoming even more united against no-brained deluded fanboys such as yourself. We DON'T hate nVidia, a lot of people here own nVidia cards, some only have nVidia cards, some own nVidia and ATI, and some own ATI. This isn't about hatred or bias is misinformation; this is about one socially inept weasel who has been attempting to shove his knowledge down everyone else's throats on this (and other subjects) whether there's any factual basis to it or not.

    You disagree with me using the term "we", fine. I personally want to see the GT300 launch. I personally want nVidia to bring out a mainstream flavour to compete with the 5850. I personally want prices to fall. I personally don't have anything against PhysX, CUDA or AO. I personally want to see 3D gaming gather momentum.

    Now ask yourself - can you be as objective and impartial as that?

    You just seem to read what you like and completely miss the point of any post you reply to. There's no way someone can be impartial on this site with you around because any word of praise about ATI equates to bias in your head.

    There's only so far you can go before someone clicks the Ban button but I'm sure you'll come back with another account.
  • shotage - Thursday, October 1, 2009 - link

    Still voting to get you banned SiliconDoc.
  • Zool - Thursday, October 1, 2009 - link

    I don't think it's fair of nvidia to make everyone pay the extra design and manufacturing cost of the GPGPU bloat. They launched the Tesla card because it costs insane money and they can get away with current yields. For the majority of graphics, SIMD with almost no branching is utterly enough. I mean, if they made standalone CUDA cards without the useless graphics pipeline parts, they could be smaller or faster. And that goes for graphics too.
    I mean, how hard would it be for AMD or Intel to put some similar low-transistor-budget SIMD units like those in GPUs into the CPU pipeline? They could run at CPU clocks and would be an integral part of the CPU (latencies, cache etc).
    I don't think that's the right strategy for nvidia.
  • silverblue - Thursday, October 1, 2009 - link

    nVidia could charge a premium for the Tesla-badged cards due to their potential savings over the more traditional method of using masses of general-purpose servers; however, they may want to really establish Tesla as a viable option, so they can't very well charge too much for it.

    I'm interested in seeing the peak performance figures for both Cypress and Fermi; will the AMD part still have an advantage in raw processing power due to having many many more, if weaker, SPs/cores? And will it matter in the working environment?
  • Zool - Thursday, October 1, 2009 - link

    Nvidia's dreams of those 500x performance gains in the coming years are actually only for GPGPU, not graphics.
    The current gen cards are beginning to show some strange scaling. (I don't think nvidia will be any different in this case either.)
    They will need some more changes than just doubling everything if they want to utilize more shader processors. If you think about it, at 850 MHz, feeding 1600 shaders (actually 320 is more realistic) is quite a transistor challenge. (CPUs look like baby toys next to these, with their large caches and far fewer cores.)
    Actually, there are some physical limits to transistors too. Increasing to 4 billion transistors and 3200 shaders in the next card would need even more internal speed. It would maybe be easier to place two RV870 dies in one GPU than to double everything.
  • neomocos - Thursday, October 1, 2009 - link

    We all like our freedom of opinion at anand, and this article was very interesting, as were the comments, at least until SiliconDoc started trashing everything. As stated by a lot of other users, I ask anand to take some action against the user; he is ruining my experience, and that of others, of calmly reading the articles in the morning with a coffee :). All his arguments, and the way he throws them around, are so random and make no sense; he sounds like a man who needs his drug of praising nvidia and trashing the red rooster any way he can, even if it's with no real arguments. I read with pleasure the comments of the smart non-biased guys posting here, but this guy is just talking crap to fill the lines.
    On topic... considering what the 5850 has (eyefinity, performance/price, DirectX 11, power consumption, and most importantly availability), I was smiling to myself and thinking that ATI will have killer sales in these 3 months left of 2009. I personally will wait for nvidia to bring out Fermi and with it the price war, because we all know that all prices will go down then; I estimate $150 for the 5850 and about $200 for the 5870 around June, and if nvidia has better price/perf I will definitely buy it.
  • SiliconDoc - Thursday, October 1, 2009 - link

    And, here we have your contribution, after whining about me claiming no points, the usual bs from red lovers, here is the evidence of your bloodshot eyes, at least you've accepted my direct orders and forced yourself to talk topic.
    -
    " On topic ... considering what 5850 has : eyefinity , performance/price, directx 11, power cons, and most important availability i was smiling to myself and thinking that ATI will have killer sales this 3 months left of 2009. "
    --
    And after you realize what a red rooster you just were, whether you thought it was a good jab at me, since you know I'll read your attack and that's what the attack was about, or whether you couldn't help yourself, you went on to claim how fair and balanced you are after you hoped for 2 cheap ati cards. LOL The afterthought, barely surfacing from the lack of wattage, added at the end, "if nvidia has better I'll blah blah"..
    FUNNY how you talk about THOSE CARDS in the TESLA THREAD, when you are ON TOPIC !
    roflmao !
    Wowzie!
    Cocka ! Doodle ! Doooo !
    Let me ask you, since you considered eyefinity so great, do you
  • shotage - Thursday, October 1, 2009 - link

    1 vote to get you banned.
