A Different Sort of Launch

Fermi will support DirectX 11 and NVIDIA believes it'll be faster than the Radeon HD 5870 in 3D games. With 3 billion transistors, it had better be. But that's the extent of what NVIDIA is willing to talk about with regard to Fermi as a gaming GPU. Sorry folks, today's launch is targeted entirely at Tesla.


A GeForce GTX 280 with 4GB of memory is the foundation for the Tesla C1060 cards

Tesla is NVIDIA's High Performance Computing (HPC) business. NVIDIA takes its consumer GPUs, equips them with far more memory, and sells them as Tesla cards for personal supercomputers or datacenter computing clusters. If you have an application that can run well on a GPU, the upside is tremendous.


Four of those C1060 cards in a 1U chassis make the Tesla S1070. PCIe connects the S1070 to the host server.

NVIDIA loves to cite examples where algorithms ported to GPUs run far better than they do on CPUs. One such example is a seismic processing application that HESS found ran very well on NVIDIA GPUs. HESS migrated a cluster of 2,000 servers to 32 Tesla S1070s, bringing total cost down from $8M to $400K and total power from 1,200kW down to 45kW.

HESS Seismic Processing Example
                   Tesla                CPU
Performance        1x                   1x
# of Machines      32 Tesla S1070s      2000 x86 servers
Total Cost         ~$400K               ~$8M
Total Power        45kW                 1200kW


Obviously this doesn't include the servers needed to drive the Teslas, but presumably that's not a significant cost. Either way, the potential is there; it's just a matter of how many similar applications exist in the world.
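A quick back-of-the-envelope check makes the scale of the win concrete. This is a minimal sketch in plain Python; all numbers come straight from NVIDIA's quoted HESS figures above:

```python
# Sanity-check the HESS migration figures NVIDIA quotes.
cpu_cost, tesla_cost = 8_000_000, 400_000   # total system cost, USD
cpu_power, tesla_power = 1200, 45           # total power draw, kW

cost_ratio = cpu_cost / tesla_cost          # 20x cheaper
power_ratio = cpu_power / tesla_power       # ~26.7x less power

print(f"Cost reduction:  {cost_ratio:.1f}x")
print(f"Power reduction: {power_ratio:.1f}x")
```

A 20x cost reduction and a ~27x power reduction at equal performance; even if real deployments only get part of the way there, the economics are compelling.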

According to NVIDIA, there are many more cases like this in the market. The table below shows what NVIDIA believes is the total available market in the next 18 months for these various HPC segments:

Segment    Seismic    Supercomputing    Universities    Defence    Finance
GPU TAM    $300M      $200M             $150M           $250M      $230M
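Summing NVIDIA's own segment estimates puts the projected opportunity at a little over a billion dollars. A quick tally (the per-segment figures are from the table above):

```python
# NVIDIA's projected 18-month GPU TAM per HPC segment, in millions of USD.
tam = {
    "Seismic": 300,
    "Supercomputing": 200,
    "Universities": 150,
    "Defence": 250,
    "Finance": 230,
}

total = sum(tam.values())
print(f"Total projected GPU TAM: ${total}M (~${total / 1000:.2f}B)")
```

Roughly $1.13B over 18 months, which is the basis for NVIDIA's optimism about the segment.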


These figures were calculated by looking at the algorithms used in each segment, the number of Hess-like Tesla installations that could be done, and the current budget for non-GPU based computing in those markets. If NVIDIA meets its goals here, the Tesla business could be bigger than the GeForce one. There's just one problem:

As you'll soon see, many of the architectural features of Fermi are targeted specifically for Tesla markets. The same could be said about GT200, albeit to a lesser degree. Yet Tesla accounted for less than 1.3% of NVIDIA's total revenue last quarter.

Given these numbers it looks like NVIDIA is building GPUs for a world that doesn't exist. NVIDIA doesn't agree.

The Evolution of GPU Computing

When matched with the right algorithms and programming efforts, GPU computing can provide some real speedups. Much of Fermi's architecture is designed to improve performance in these HPC and other GPU compute applications.

Ever since G80, NVIDIA has been on a path to bring GPU computing to reality. I rarely get a non-marketing answer out of NVIDIA, but in talking to Jonah Alben (VP of GPU Engineering) I had an unusually frank discussion.

From the outside, G80 looks to be a GPU architected for compute. Internally, NVIDIA viewed it as an opportunistic way to enable more general purpose computing on its GPUs. The transition to a unified shader architecture gave NVIDIA the chance to, relatively easily, turn G80 into more than just a GPU. NVIDIA viewed GPU computing as a future strength for the company, so G80 led a dual life. Awesome graphics chip by day, the foundation for CUDA by night.

Remember that G80 was hashed out back in 2002 - 2003. NVIDIA had some ideas of where it wanted to take GPU computing, but it wasn't until G80 hit that customers started providing feedback that ultimately shaped the way GT200 and Fermi turned out.

One key example was support for double precision floating point. The feature wasn't added until GT200, and even then it was only added in response to computing customer feedback on G80. Fermi kicks double precision performance up another notch, as it now executes FP64 ops at half of its FP32 rate (more on this later).
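To see what a half-rate FP64 design means for peak throughput, here's a hypothetical sketch. The core count and clock below are illustrative placeholders, not confirmed Fermi specifications; only the 2:1 FP32:FP64 ratio comes from the text above:

```python
# Hypothetical peak-throughput sketch for a half-rate FP64 GPU.
# Core count and clock are assumptions for illustration, NOT Fermi's real specs.
cores = 512          # scalar ALUs, assumed
clock_ghz = 1.5      # shader clock in GHz, assumed
fma_flops = 2        # a fused multiply-add counts as 2 FLOPs per cycle

peak_fp32 = cores * clock_ghz * fma_flops   # GFLOPS, single precision
peak_fp64 = peak_fp32 / 2                   # half rate, per the article

print(f"Peak FP32: {peak_fp32:.0f} GFLOPS")
print(f"Peak FP64: {peak_fp64:.0f} GFLOPS")
```

The point of the exercise is the ratio, not the absolute numbers: on earlier parts FP64 ran at a much smaller fraction of the FP32 rate, which is what makes 2:1 notable for HPC workloads.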

While G80 and GT200 were still primarily graphics chips, NVIDIA views Fermi as a processor that takes compute just as seriously as graphics. NVIDIA believes it's on a different course, at least for the short term, than AMD. And you'll see this in many of Fermi's architectural features.


415 Comments


  • silverblue - Thursday, October 1, 2009 - link

    People will buy nVidia hardware for their HTPCs regardless of whether it has PhysX, AO, CUDA or whatever. Price is a very attractive factor, but so are noise and temperature, so people will go for whatever suits them best. If people think nVidia offers more for the price, they will buy it; some may go for another option if they want less heat, or less speed, or whatever. It's their choice, and not one made out of malice.

    This thread isn't full of nVidia-haters like you want to believe it is. Keep thinking that if you feel more comfortable doing so. In the end, we as consumers have a choice as to what we buy and nothing of what you are saying here has any bearing on that decision making process.
    Reply
  • SiliconDoc - Thursday, October 1, 2009 - link

    I think I'll just ignore you, since you seem to have acquired a Svengali mind read on your big "we" extension, and somehow think you represent every person here.
    I don't put any stock in your idiotic lunatic demi-god musings.
    --
    If you ever say anything worth more than a piece of scat, I will however respond appropriately.
    I'll remind you, you can't even prevent YOURSELF from being influenced by me, let alone "everyone here".
    Now if you don't have any KNOWLEDGE on the HTPC issues and questions I brought up with this other poster and his HTPC dreams, please excuse your mind reading self, and keep yourself just as deluded as possible.
    I find this a classic IDIOCY : " we as consumers have a choice as to what we buy (oh no problem there)

    and nothing of what you are saying here has any bearing on that decision making process. "

    You just keep telling yourself that, you unbelievably deranged goofball. LOL, and maybe it will become true for you, if you just keep repeating it.
    The first sign of your own cracked shield in that area is you actually saying that. You've already been influenced, and you're so goofy, you just had to go in text and claim no one ever will be.
    I mean, you are so much worse than anything I've done here it is just amazing.
    How often do you tell yourself fantasies that there is no chance you can possibly believe or prove, and in fact, have likely already failed yourself?
    Really, I mean absolutely.
    Reply
  • silverblue - Friday, October 2, 2009 - link

    If you had a mind left to form any sort of coherent thought patterns, we might take you seriously here. You have just admitted (in your own incoherent, babbling way) that you are trying to actively (and forcibly, I might add) influence people to buy nVidia cards over ATI. I'm telling you that you've failed and will continue to fail as long as you keep shimmying up and down the green flag pole in the name of progress. I wonder if anyone at nVidia reads these comments; what must they think of you? If they considered AT a biased publication then they wouldn't speak with Anand as cordially as they do.

    I say "we" because, unless you've opened your eyes, "we" as a community are becoming ever more united against no-brained, deluded fanboys such as yourself. We DON'T hate nVidia; a lot of people here own nVidia cards, some only have nVidia cards, some own nVidia and ATI, and some own ATI. This isn't about hatred, bias or misinformation; this is about one socially inept weasel who has been attempting to shove his knowledge down everyone else's throats on this (and other subjects) whether there's any factual basis to it or not.

    You disagree with me using the term "we", fine. I personally want to see the GT300 launch. I personally want nVidia to bring out a mainstream flavour to compete with the 5850. I personally want prices to fall. I personally don't have anything against PhysX, CUDA or AO. I personally want to see 3D gaming gather momentum.

    Now ask yourself - can you be as objective and impartial as that?

    You just seem to read what you like and completely miss the point of any post you reply to. There's no way someone can be impartial on this site with you around because any word of praise about ATI equates to bias in your head.

    There's only so far you can go before someone clicks the Ban button but I'm sure you'll come back with another account.
    Reply
  • shotage - Thursday, October 1, 2009 - link

    Still voting to get you banned, SiliconDoc. Reply
  • Zool - Thursday, October 1, 2009 - link

    I don't think it's too fair of NVIDIA to make everyone pay the extra design and manufacturing cost of the GPGPU bloat. They launched the Tesla cards because those cost insane money and they can get away with current yields. For the majority of graphics, SIMD with almost no branching is utterly enough. I mean, if they made standalone CUDA cards without the useless graphics pipeline parts, those could be smaller or faster. And that goes for graphics too.
    I mean, how hard would it be for AMD or Intel to put some similar low-transistor-budget SIMD units, like those in a GPU, into the CPU pipeline? They could run at CPU clocks and would be an integral part of the CPU (latencies, cache, etc.).
    I don't think that's the right strategy for NVIDIA.
    Reply
  • silverblue - Thursday, October 1, 2009 - link

    nVidia could charge a premium for the Tesla-badged cards due to their potential savings over the more traditional method of using masses of general-purpose servers; however, they may want to really establish Tesla as a viable option, so they can't very well charge too much for it.

    I'm interested in seeing the peak performance figures for both Cypress and Fermi; will the AMD part still have an advantage in raw processing power due to having many, many more, if weaker, SPs/cores? And will it matter in the working environment?
    Reply
  • Zool - Thursday, October 1, 2009 - link

    NVIDIA's dreams of 500x performance gains in the coming years are actually only for GPGPU, not graphics.
    The current-gen cards are beginning to show some strange scaling (I think NVIDIA won't be any different in this case either).
    They will need more changes than just doubling everything if they want to utilize more shader processors. If you think about it, feeding 1600 shaders (actually 320 is more realistic) at 850 MHz is quite a transistor challenge (CPUs, with their large caches and far fewer cores, look like baby toys next to these).
    Actually, there are physical limits to transistors too. Increasing to 4 billion transistors and 3200 shaders in the next card would need even more internal speed. It would maybe be easier to place two RV870 dies in one GPU than to double everything.
    Reply
  • neomocos - Thursday, October 1, 2009 - link

    We all like our freedom of opinion here at Anand, and this article was very interesting, as were the comments... that is, until SiliconDoc started trashing everything. As stated by a lot of other users, I ask Anand to take some action against this user; he is ruining my experience, and others', of calmly reading the articles in the morning with a coffee :). All his arguments, and the way he throws them around, are so random and make no sense; he sounds like a man who needs his drug of praising nVidia and trashing the red rooster any way he can, even if it's with no real arguments. I read with pleasure the comments of the smart, non-biased guys posting here, but this guy is just talking crap to fill the lines.
    On topic... considering what the 5850 has (Eyefinity, performance/price, DirectX 11, power consumption, and most importantly availability), I was smiling to myself and thinking that ATI will have killer sales in the three months left of 2009. I personally will wait for nVidia to bring out Fermi, and with it the price war, because we all know that all prices will go down then; I estimate $150 for the 5850 and about $200 for the 5870 around June, and if nVidia has better price/performance I will definitely buy it.
    Reply
  • SiliconDoc - Thursday, October 1, 2009 - link

    And, here we have your contribution, after whining about me claiming no points, the usual bs from red lovers, here is the evidence of your bloodshot eyes, at least you've accepted my direct orders and forced yourself to talk topic.
    -
    " On topic ... considering what 5850 has : eyefinity , performance/price, directx 11, power cons, and most important availability i was smiling to myself and thinking that ATI will have killer sales this 3 months left of 2009. "
    --
    And after you realize what a red rooster you just were, whether you thought it was a good jab at me, since you know I'll read your attack and that's what the attack was about, or whether you couldn't help yourself, you went on to claim how fair and balanced you are after you hoped for 2 cheap ati cards. LOL The afterthought, barely surfacing from the lack of wattage, added at the end, "if nvidia has better I'll blah blah"..
    FUNNY how you talk about THOSE CARDS in the TESLA THREAD, when you are ON TOPIC !
    roflmao !
    Wowzie!
    Cocka ! Doodle ! Doooo !
    Let me ask you, since you considered eyefinity so great, do you
    Reply
  • shotage - Thursday, October 1, 2009 - link

    1 vote to get you banned. Reply
