NVIDIA's Bumpy Ride: A Q4 2009 Update

by Anand Lal Shimpi on 10/14/2009 12:00 AM EST



  • medi01 - Saturday, October 17, 2009 - link

    Since when are sales based only on product quality?
    How many buyers are actually aware of performance details?
    And how many of those who are aware are NOT loyal to one brand?

    To me it rather seems that nVidia, despite not having an answer to the 5800 series for a few months, will still successfully sell overpriced cards. It's AMD that will continue to make huge losses, despite having good products and a "better value" pricing policy.

    Customers should be interested in healthy competition. "I buy the inferior product from the company that already dominates the market" will simply kill the underdog, and then it'll show us... :(
  • kashifme21 - Wednesday, October 21, 2009 - link

    By supporting consoles, they might have gotten sales in the short term. In the long term, however, it has been a disaster for both of them.

    Console sales, even putting the Xbox 360 and PS3 together, do not exceed 60 million, which IMO is not much of an amount for these two companies.

    What has happened now is that the focus of developers has shifted to the consoles, which is why jumps in graphics have become stagnant. The result is that even PC users don't need constant upgrades the way they used to, which means fewer sales from the PC market for Nvidia and ATI.

    Also, as a note: previously a PC user would need to upgrade about every 2 years to stay up to date with the latest games. Now, since games are developed with consoles in mind, the cycle has become 7 years. So where a PC user needed to upgrade once every 2 years, it's only going to be once every 7 years now; there simply won't be games out to take advantage of the hardware.

    The console market ATI and Nvidia thought to cater to is in the same situation. A console user will simply buy one console and then make no hardware purchase for the rest of the generation unless the console fails.

    IMO, going for console sales might have given a sale in the short term, but in the long term it's been bad for all the PC hardware makers, be it CPU, GPU, RAM, chipsets, etc. As time goes on this will get worse, especially if another console generation is supported.
  • KhadgarTWN - Sunday, October 25, 2009 - link

    On the console part, I have a slightly different thought.
    For a very short period, consoles boost GPU sales; a bit longer, and consoles devour PC gaming and hurt hardware sales. That's true.

    But a little longer than that? If selling GPUs to consoles was a mistake, could they "fix" it?
    Think about it: if AMD(ATi)/nVidia had refused to develop the graphics part of the consoles, and Larrabee failed, where would the next-gen console be?

    In the PS2/DC/Xbox years, that was no big deal. Sony had its EE + GS, and Nintendo had its own parts. Maybe the Xbox would never have been born, but no big deal; consoles would still live strong and prosperous.
    Now? No AMD, no XB360; no nVidia, no PS3. And as for Nintendo? Doubtful they could build their own parts.

  • msroadkill612 - Wednesday, October 28, 2009 - link

    I'm trying to make amends for being a bit of a leech on geek sites and not contributing. This is a bit off topic, but I hope some of you will find the post below useful.

    There is a deluge of requests around the geek forums about which graphics card to buy, but never have I seen a requester specify whether they plan an always-on PC vs. one that's on only when in use.

    A$0.157/kWh, Sydney, Australia - Oct '09 (A$ = ~0.92 US$ now)

    usa prices link:

    (so Sydney prices are very similar to California - 15.29 US cents/kWh)


    In an always-on PC, consider a graphics card which draws 10 watts less at IDLE than an alternative card.
    (My logic here: if you're having fun at FULL LOAD, who cares what the load power draw/cost is? In most cases that's a small portion of the week and can reasonably be ignored - IF, and this is a big IF, your power-save/sleep settings are set and working correctly.)

    $0.157/1000 × 10 W = $0.00157 per hour (substitute your local rate - always increasing, though, negating you bean counters' net-present-worth objections)

    × 24 × 365

    = A$13.75 p.a. for each extra 10 W of idle draw (I hope the math is right)

    If you use air conditioning all day, all year, you can theoretically double this cost.

    If, however, you use electric bar radiators for heating all day, all year, then I'm afraid, my dear Watson, the "elementary" laws of physics dictate that you waste no further time on this argument. It does not concern you, except to say that you are in the enviable position of soon being able to buy a formerly high-end graphics card (I'm thinking dual-GPU Nvidia here) for about the cost of a decent electric radiator, and getting quite decent framerates thrown in free with your heating bill.

    Using the specs below and the prices above, an HD 4870 (90 W) costs US$84.38 more to idle all year in California than a 5850/5870 (27 W) at current prices. In about 18 months the better card should repay the premium paid for it.

    Hope this helps some of you cost-justify your preferred card to those who must be obeyed.

    ATI HD 5870 & 5850 idle power: 27 W
    ATI Radeon HD 4870 idle: 90 W
    5770: 18 W
    5750: 16 W

    780G chipset mobo - AMD claims idle power consumption of the IGP is just 0.95 W!

    Some Nvidia card idle power specs (GPU / idle watts):

    NVIDIA GeForce GTX 280 - 30 W

    Gigabyte GeForce 9800 GX2 GV-NX98X1GHI-B - 85 W

    ZOTAC GeForce 9800 GTX AMP! Edition ZT-98XES2P-FCP - 50 W

    FOXCONN GeForce 9800 GTX Standard OC Edition 9800GTX-512N - 48 W

    ZOTAC GeForce 9800 GTX 512MB ZT-98XES2P-FSP - 53 W

    MSI NX8800GTX-T2D768E-HD OC GeForce 8800 GTX - 76 W

    ZOTAC GeForce 8800 GT 512MB AMP! Edition ZT-88TES3P-FCP - 33 W

    Palit GeForce 9600 GT 1GB Sonic NE/960TSX0202 - 30 W

    FOXCONN GeForce 8800 GTS FV-N88SMBD2-OD - 59 W
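    The back-of-the-envelope method above can be written out as a tiny script. This is only a sketch of the commenter's own arithmetic; the rates and idle wattages are their figures, not independently verified:

```python
# Annual running cost of extra idle power draw, per the method above.
# Assumes the PC idles 24/7 and the electricity rate stays constant.

def annual_idle_cost(extra_watts: float, rate_per_kwh: float) -> float:
    """Cost per year of drawing `extra_watts` more at idle, around the clock."""
    kwh_per_year = extra_watts / 1000 * 24 * 365
    return kwh_per_year * rate_per_kwh

# Each extra 10 W at Sydney's A$0.157/kWh:
print(round(annual_idle_cost(10, 0.157), 2))        # 13.75 (A$ per year)

# HD 4870 (90 W idle) vs HD 5850/5870 (27 W) in California at US$0.1529/kWh:
print(round(annual_idle_cost(90 - 27, 0.1529), 2))  # 84.38 (US$ per year)
```

    Substitute your own rate and wattage delta; the shape of the calculation is the same either way.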
  • Wolfpup - Friday, October 16, 2009 - link

    Okay, huh? I get it for the low-end stuff, but for mid-range, does this mean we're going to be wasting a ton of transistors on worthless integrated video that won't be used, taking up space that could have been more cache or another core or two?!?

    I have NO use for integrated graphics. Never have, never will.
  • Seramics - Friday, October 16, 2009 - link

    Nvidia is useless and going down. I don't mind if they get out of the market at all.
  • Sandersann - Friday, October 16, 2009 - link

    Even if money is not an issue for you, you want Nvidia and ATI to both do well, or at least stay competitive, because the competition encourages innovation. Without it, you will have to pay more for fewer features and less speed. We might get a taste of that this Christmas and into Q1 2010.
  • medi01 - Friday, October 16, 2009 - link

    AMD - 27%
    nVidia - 65% (ouch)

  • shin0bi272 - Friday, October 16, 2009 - link

    Hey, can you post the most common models too? The last time I looked at that survey, the 7800 GTX was the fastest card and most people on the survey were using 5200s or something outrageous like that.
  • tamalero - Sunday, October 18, 2009 - link


    Most people are on the 8800 series, so it doesn't reflect the "current" gen.
    In the last gen, the 48xx series has increased to 11%, while the 260 series is around 4%.
  • Scali - Sunday, October 18, 2009 - link

    If you sort by percentage change, then add up all the increases from nVidia, you get +1.42%.
    Do the same for all the ATi parts, and you get +1.06%.

    (Obviously you can't do a direct comparison of the 260 to the entire 4800-series, as the 260 is just a single part. You would then ignore various other parts that also compete with the 4800 series, such as the 9800GTX).
  • shin0bi272 - Thursday, October 15, 2009 - link

    It does MIMD processing and parallel kernel processing.
    True 32-bit floating-point processing in one clock (the previous generation only did 24-bit and emulated 32-bit), and it now uses the IEEE 754-2008 standard instead of the 1985 one. Its 64-bit (double-precision) performance is now 1/2 of single precision, while their previous generation was 1/8th as fast and AMD's is 1/5th.
    With the addition of a second dispatch unit, they can send special functions to the SFU at the same time they are sending general-purpose shader work... meaning they no longer have to take up an entire SM (a group of shader cores acting as a unit, from what I can gather) to do an interpolation calculation.
    They added an actual cache hierarchy instead of the software-managed memory in the GT200 series, which eliminates a large problem they were having with shared memory size, and that means increased performance.
    Nvidia says that switching between CUDA and GPU modes is now 10x faster, which means that PhysX performance will increase dramatically, and it supports parallel transfers to the CPU vs. serial in previous cards... so multiple CPU cores and multiple connections to the GPU mean much better performance.

    Nvidia claims they learned their lesson on moving to different die sizes and guessing on their pricing. They really pushed the new process with the old GeForce FX 5800 chip and got beaten badly by ATI... conversely, they took their time with the GT200s and ended up pricing them according to what they thought the ATI 48xx would cost, overshooting by a LOT. Whether or not they are full of it remains to be seen, but hopefully we can expect a $450-500 GT300 flagship card early in 2010. The biggest issue is that ATI already has boards in production, with 1600 SIMD (single instruction, multiple data) cores vs. 512 MIMD cores in Fermi. The ATI card also runs at 850 MHz vs. probably 650 MHz for the Nvidia part. We will just have to wait and see the benchmarks, I guess.

    Plus, add in the horrible state of the economy (which is only going to get worse when Congress does the budget for 2011 in March), the falling value of the dollar, the continuing increase in the unemployment rate, the impending deficit/debt issues, the oil-producing countries talking of dumping the dollar, the UN wanting to dump the dollar as the international reserve currency, etc., etc., and Nvidia cutting back on production might not be such a bad idea. If the economy collapses again once the government has to raise interest rates and taxes to pay for its spending, then AMD might be hurting with so much product sitting on the shelves collecting dust.

    (fermi data from http://anandtech.com/video/showdoc.aspx?i=3651&...">http://anandtech.com/video/showdoc.aspx?i=3651&...
  • Scali - Thursday, October 15, 2009 - link

    You should put the 1600 SIMD vs 512 SIMD in the proper perspective though...
    Last generation it was 800 SIMD (HD4870/HD4890) vs 240 SIMD (GTX280/GTX285).
    With 1600 vs 512, the ratio actually improves in nVidia's favour, going from 3.33:1 to 3.1:1.
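    As a quick sanity check of those ratios, using the unit counts quoted in the posts above (and ignoring clock speeds and per-unit throughput, which the thread discusses separately):

```python
# Shader-unit count ratios, ATI vs nVidia, per the figures quoted above.
prev_gen = 800 / 240    # HD4870/HD4890 vs GTX280/GTX285
this_gen = 1600 / 512   # Cypress vs Fermi (announced counts)
print(round(prev_gen, 2), round(this_gen, 2))  # 3.33 3.12 -- the gap narrows
```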
  • shin0bi272 - Friday, October 16, 2009 - link

    Ahh, that's true, I didn't think about that. With all the other improvements and the move to GDDR5, it should be a great card.
  • Zool - Thursday, October 15, 2009 - link

    In the worst-case scenario ATI has 320 ALUs, and in the best case 1600.
    Nvidia has the same 512 ALUs in both the best and worst case, and the Nvidia shaders run at a much higher frequency. Let's say double, although 1700 MHz shaders won't be too realistic for the GT300.
    So actually, in the worst-case scenario the GT300 has more than 3 times the ALUs of the Radeon. In the best-case scenario the Radeon has around a 60% advantage over Nvidia (with the unrealistic shader clocks).
    And in the end, some instructions may take more clock cycles on the GT300, some on the Radeon 5800.
    So your perspective is way off.
  • Scali - Thursday, October 15, 2009 - link

    My perspective was purely that the gap in the number of SIMD units between AMD and nVidia becomes smaller with this generation, not larger. It's spot on, as there simply ARE 800 and 240 units in the respective parts I mentioned.

    Now you can go and argue about clock speeds and best case/worst case, but that's not going to do much good, since we don't know any of that information for Fermi. We'll just have to wait for some actual benchmarks.
  • Zool - Friday, October 16, 2009 - link

    The message of my reply was that plain shader-vs-shader comparisons are very inaccurate and usually mean nothing.
  • Scali - Friday, October 16, 2009 - link

    I never claimed otherwise. I just pointed out that this particular ratio moves in nVidia's favour, as the post I was replying to appeared to assume the opposite, voicing a concern about the large difference in raw numbers.
  • Zool - Thursday, October 15, 2009 - link

    The GT240 doesn't seem to be much better if they follow the GT220 pricing scenario.
    link http://translate.google.com/translate?prev=hp&...">http://translate.google.com/translate?p...sl=zh-CN...
    The core/shader clocks are pretty low for 40nm. Let's hope the GT300 runs higher than that.
  • shotage - Thursday, October 15, 2009 - link

    Just read this article and found it pretty insightful. Its focus is the shift in GPU microarchitecture - the primary focus is on Nvidia and their upcoming Fermi architecture. A very good technical read for those who are interested:


  • iwodo - Wednesday, October 14, 2009 - link

    Why has no one thought of an Nvidia chipset using PCI Express 8x?

    Couldn't you theoretically make an mGPU with I/O functions (the only things left are SATA, USB and Ethernet) and another PCI Express 8x link, so the mGPU communicates with another Nvidia GPU via its own lane without going back to the CPU?
  • chizow - Wednesday, October 14, 2009 - link

    [quote]Let’s look at what we do know. GT200b has around 1.4 billion transistors and is made at TSMC on a 55nm process. Wikipedia lists the die at 470mm^2, that’s roughly 80% the size of the original 65nm GT200 die. In either case it’s a lot bigger and still more expensive than Cypress’ 334mm^2 40nm die.[/quote]

    Anand, why perpetuate this myth comparing die sizes and price on different process nodes? Surely someone with intimate knowledge of the semiconductor industry like yourself isn't claiming a single TSMC 300mm wafer on 40nm costs the same as 55nm or 65nm?

    A wafer is just sand with some copper interconnects; the raw material price means nothing for the end price tag. Cost is determined by capitalization of the assets used to manufacture goods; the raw material involved means very little. The uncapitalized investment in the new 40nm process obviously exceeds that of 55nm or 65nm, so prices would need to be higher to compensate.

    I can't think of anything "old" that costs more than the "new" despite the "old" being larger. If you think so, I have about 3-4 100 lb CRT TVs I want to sell you for current LCD prices. ;)

    In any case, I think the concerns about selling GT200b parts are a bit unfounded, and mostly there to justify the channel supply deficiency. We already know the lower bound of GT200b pricing; the GTX 260 has been selling for $150 or less with rebates for quite some time already. If anything, the somewhat artificial supply deficiency has kept demand for Nvidia parts high.

    I think it was more of a calculated risk by Nvidia to limit their exposure to excess channel inventory, which was reportedly a big issue during the 65nm G92/GT200 to 55nm G92b/GT200b transition. There were also some rumors about Nvidia going to more of a JIT delivery system to avoid some of the purchasing discounts some of the major partners were exploiting: they basically waited for the last day of the quarter for Nvidia to discount and unload inventory in an effort to beef up quarterly results.
  • chizow - Wednesday, October 14, 2009 - link


    Let’s look at what we do know. GT200b has around 1.4 billion transistors and is made at TSMC on a 55nm process. Wikipedia lists the die at 470mm^2, that’s roughly 80% the size of the original 65nm GT200 die. In either case it’s a lot bigger and still more expensive than Cypress’ 334mm^2 40nm die.

    Properly formatted portion meant to be quoted in above post for emphasis.

  • MadMan007 - Wednesday, October 14, 2009 - link

    Possibly the most important question for desktop PC discrete graphics, from gamers who aren't worried about business analysis, is: what will be the rollout of the Fermi architecture to non-high-end, non-high-cost graphics cards?

    Is NV going to basically shaft that market by going with cut-down GT200-series DX10.1 chips like the GT220? (OK, that's a little *too* low-end, but I mean the architecture.) As much as we harped on G92 renaming, at least it was competitive versus the HD4000 series in certain segments, and the large GT200s, the GTX260 in particular, were OK after price cuts. The same is very likely not going to be true for DX10.1 GT200 cards, especially when you consider less tangible things like DX11, which people will feel better buying anyway.

    Answer that question and you'll know the shape of desktop discrete graphics for this generation.
  • vlado08 - Wednesday, October 14, 2009 - link

    I think that Nvidia is preparing Fermi to meet Larrabee. They are probably confident that AMD isn't a big threat: they know them very well, and know what they are capable of and what to expect, but they don't know their new opponent. Everybody knows that Intel has very strong financial power, and if they want something, they do it; sometimes it takes more time, but eventually they bring everything to an end. They are a force not to be underestimated. If Nvidia has any time advantage, they should use it.
  • piesquared - Wednesday, October 14, 2009 - link

    [quote]Other than Intel, I don’t know of any company that could’ve recovered from NV30.[/quote]

    How about recovering from both Barcelona AND R600?
  • Scali - Thursday, October 15, 2009 - link

    AMD hasn't recovered yet. They've been making losses quarter after quarter. I expect more losses in the future, now that Nehalem has gone mainstream.
    Some financial experts have put AMD on their lists of companies most likely to go bankrupt in 2010:
  • bhougha10 - Wednesday, October 14, 2009 - link

    The one thing that is not considered in these articles is the real world. In the real world, people were not waiting for the ATI 5800s to come out, but they are waiting for the GTX 300s. The perception in the marketplace is that NVIDIA is the name brand and ATI is the generic.
    It is not a big deal at all that the GTX 300s are late. Nvidia had the top cards for three quarters of the year; it is only healthy that ATI have the lead for a quarter of it. The only part that is bad for NVIDIA is that they don't have this stuff out for Christmas, and I am not sure that is even a big deal. Even so, these high-end cards are not gifts; they are budget items (i.e., "I plan to wait till the beginning of next year to buy this or that").
    Go do some online gaming if you think this is all made up. You will have your AMD fanboys, but the percentage is low. (Sorry, didn't want to say fanboy.)
    These new 210/220 GTs sound like crap, but the people buying them won't know that; they will pay the money and be none the wiser that they could have gotten a better value. They only thought of the NVIDIA name brand.
    Anyway, I say this in regard to the predictions of the eventual demise of NVIDIA. Same reason these financial analysts can't predict the market well.

    Another great article; a lot of smart guys working for these tech sites.
  • Zool - Wednesday, October 14, 2009 - link

    So you think that the company that has the top card and is a name brand is the winner? Even with the top cards for three quarters of the year, they lost quite a lot of money against their predicted margins with the 4k Radeons on the market. They couldn't even get down to midrange cards with the GT200; the price for them was too high. And you think that with that monster GT300 it will be better?
    They will surely sell it, but at what cost to them?
  • bhougha10 - Wednesday, October 14, 2009 - link

    AMD has lost money for the last 3 years (as far back as I looked), and not just a little money - a lot of money. NVIDIA, on the other hand, lost a little money last year. So I'm not sure of the point of that last reply.
    The point of the original post was that there is an intrinsic value or novelty that comes with a company. AMD has very little novelty. These are just the facts. I have AMD and NVIDIA stock; I want them both to win. It's good for everyone if they both win.
  • Zool - Thursday, October 15, 2009 - link

    And what is such a novelty at Nvidia that others don't have? Oh wait, maybe the PR team, or the shiny, flashy Nvidia page that lets you believe that even the most useless product is the customer's never-ending dream.
    I have to admit that AMD is way behind in those areas.
    I wouldn't say it's novelty, but it seems to be working for the shareholders.
  • jasperjones - Wednesday, October 14, 2009 - link

    Anand, while I generally agree with your comment, I believe there is one area where Nvidia has a competitive advantage: drivers and software.

    Two examples:
    - the GPGPU market. On the business side, double-precision (DP) arithmetic is of tremendous importance to scientists. GT200 is vastly inferior in DP arithmetic to R700, yet people bought Nvidia due to better software (ATI is also better in DP performance if you compare its enterprise GPUs to similarly-priced Nvidia GPUs). On the consumer side, look at the sheer number of apps that use CUDA vs the tiny number of apps that use Stream. If I look at other things (OpenCL or C/C++/Fortran support), I also see Nvidia ahead of AMD/ATI.

    - Linux drivers. AMD has stepped things up but Nvidia drivers are still vastly superior to ATI drivers.

    I know they're trading blows in some other areas closely related to software (Nvidia has Physx, AMD DX11) but my own experience still is Nvidia has the better software.
  • medi01 - Friday, October 16, 2009 - link

    Pardon me, but isn't 95+% of the gaming market Windows, not Linux?
  • AtwaterFS - Wednesday, October 14, 2009 - link

    Where is Silicon Doc? Is he busy shoving a remote up his a$$ as Nvidia takes it up theirs? MUAHAHAHAHA!

    Seriously though, I need Nvidia to push prices down on the 58xx cards so I can buy one for TES 5 / Fallout 4 - WTF!
  • tamalero - Sunday, October 18, 2009 - link

    He was banned, did you forget?
  • Transisto - Wednesday, October 14, 2009 - link

    I never play games, but I spend a lot of money (to me) and time on a folding farm.

    Winter is setting in, and I plan on heating the whole house with these: 9600, 9800 and 260 cards. So the timing is now!

    To me this is a bad time to invest in folding GPUs, because the decisions are taking up too much precious brainwidth.

    My worries are:

    Will there still be demand for low-end 9600 and GTS 250, even G200, cards once the G300 comes out?

    And more importantly, will the performance of these antique G92 and G200 parts still be efficient for research computing (price-wise)?

    I was actually purchasing more GTX 260s because these could still sell after I upgrade to something better. But these days I am more into buying used G92s on the cheap.

    Thank you.
  • The0ne - Wednesday, October 14, 2009 - link

    This doesn't come as a shock to the "few" of us that had believed NVidia was in trouble, wouldn't be competitive, and would most likely leave the high-end video business. I even posted a link in a DailyTech article, albeit from an ironically named website.

    What I'm concerned about is why there aren't reports on the issues around NVidia's business/management. They have been undergoing restructuring for at least a year now, and workers have been laid off here and there. Does Anandtech have any info on this matter? I haven't checked lately, but the CA-based company has been struggling.

    Personally, I think some "body" has gone financial on the business, and NVidia is suffering. Planning and strategies are either nonexistent, poor, or poorly communicated/executed. I think they're going to fail and go the way of Matrox. It's too bad, since they were doing well. I've been around too many companies like this not to know that upper management just got greedy and lazy. Again, that's just my personal opinion from what I've read and know.

    Does anyone have any direct info regarding my concerns?
  • iwodo - Wednesday, October 14, 2009 - link

    Where did you hear that they have laid people off? As far as I'm aware, they are actually hiring MORE engineers to work on their products.
  • frozentundra123456 - Wednesday, October 14, 2009 - link

    I have to say that I am somewhat disappointed with AMD's new lineup, even though I would prefer AMD to do well. They sure need it.

    If nVidia could switch to GDDR5 and keep a larger bus, it seems like they could really outperform AMD. Even the current generation of nVidia cards is relatively competitive with AMD's new cards, except of course for DX11. The "problem" is that nVidia seems to be trying to do too much with the GPU instead of just delivering good graphics performance.

    Anyway, let's hope that the new generation from nVidia is at least competitive enough to push down prices on AMD's new lineup, which seems overpriced for the performance, especially the 5770 and, to a lesser extent, the 5750.
  • MonkeyPaw - Wednesday, October 14, 2009 - link

    The problem is, it's all about making a profit. The GT200 was in theory a great product, but AMD essentially matched it with RV770, vastly eroding nVidia's profits on their entire top line. GT300 might be fantastic, but let's be realistic. The economy kinda sucks, and people are worried about becoming unemployed (if they aren't already). Being the best at all costs only works in prosperous times, but being good enough at the best price means much more in times like these. And considering that it takes 3 monitors to stress a 5870 on today's titles, how much better does the GT300 need to be?
  • neomatrix724 - Wednesday, October 14, 2009 - link

    Were you looking at the same cards as everyone else? AMD has always aimed for the best price-to-performance ratio. nVidia has always won hands down on raw performance... but that came at the expense of costlier cards.

    AMD hit one out of the park with their new cards. OpenCL, Eyefinity and a strong improvement over previous cards is a very strong feature set. I'm not sure about Fermi and I'm curious to see where nVidia is going with it...but their moves have been confusing me lately.
  • shin0bi272 - Thursday, October 15, 2009 - link

    Actually, nvidia hasn't always won. Their entire first two generations of DX9 cards were slower than ATI's, because nvidia boycotted the meetings on the DX9 spec and made a card based on the beefier specs they wanted, and that card (the 5800) turned out to be 20% slower than the ATI 9700 Pro. This trend sort of continued for a couple of years, but nvidia got closer with the 5900 and eked out a win with the 6800 a little later. Keep in mind I haven't owned an ATI card for gaming since the 9700 Pro (and that was a gift), so I am in no way an ATI fan, but facts are facts. Nvidia has made great cards, but not always the fastest.
  • Griswold - Wednesday, October 14, 2009 - link

    That didn't make a lot of sense...
  • vlado08 - Wednesday, October 14, 2009 - link

    I am wondering about WDDM 2.0 and multitasking on the GPU - are they coming soon? Maybe Fermi is better prepared for them?
  • Scali - Wednesday, October 14, 2009 - link

    WDDM 2.0 is not part of Windows 7, so we'll need to wait for at least another Windows generation before that becomes available. By then, Fermi will most probably have been replaced by a newer generation of GPUs anyway.
    Multitasking on the GPU is possible for the first time on Fermi, as it can run multiple GPGPU kernels concurrently (I believe up to 16 different kernels).
  • vlado08 - Wednesday, October 14, 2009 - link

    You are right that we are going to wait, but what about Microsoft, and what about Nvidia? They should be working on it. Nvidia probably doesn't want to be late again; maybe they want to be first this time, seeing where things are going. If their hardware is more prepared for WDDM 2.0 today, then they will have more time to gain experience and polish their drivers. ATI (AMD) had a hardware-only launch of DirectX 11 - they are missing the "soft" part of it (drivers not ready). ATI needs to win; they have to make DX11 work, and they are putting a lot of effort into it. So Nvidia is skipping the DX11 battle and starting to get ready for the next one. Everything is getting more complex and needs more time to mature. We are also getting more demanding and less forgiving. So for the next Windows to be ready in 2 or 3 years, they need to start now - at least the planning.
  • Mills - Wednesday, October 14, 2009 - link

    Couldn't these 'extra transistors' be utilized in games as well, similar to how NVIDIA handled PhysX? In other words, incorporate NVIDIA-specific game enhancements that utilize these functions in NVIDIA sponsored titles?

    Is it too late to do this? Perhaps they will just extend the PhysX API.

    Though, PhysX has been out for quite some time and there are only 13(?) PhysX supported titles. NVIDIA better pick up its game here if they plan to leverage PhysX to out-value ATI. Does anyone know if there are any big name titles that have announced PhysX support?
  • Griswold - Wednesday, October 14, 2009 - link

    PhysX is a sinking ship, didn't you get the memo?
  • shin0bi272 - Thursday, October 15, 2009 - link

    Nvidia says that switching between GPU and CUDA modes is going to be 10x faster in Fermi, meaning that PhysX performance will more than double.
  • Scali - Thursday, October 15, 2009 - link

    Yup, and that's a hardware feature, which applies equally to any language, be it C/C++ for Cuda, OpenCL or DirectCompute.
    So not only PhysX will benefit, but also Bullet or Havok, or whatever other GPU-accelerated physics library might surface.
  • sbuckler - Wednesday, October 14, 2009 - link

    Not all doom and gloom: http://www.brightsideofnews.com/news/2009/10/13/nv...">http://www.brightsideofnews.com/news/20...contract...

    Which also puts them in the running for the next Wii I would have thought?
  • Zapp Brannigan - Thursday, October 15, 2009 - link

    Unlikely. Tegra is basically just an ARM11 processor, allowing full backwards compatibility with the current ARM9 and ARM7 processors in the DS. If Nintendo want full backwards compatibility in the Wii 2, then they'll have to stick with the current IBM/ATI combo.
  • papapapapapapapababy - Wednesday, October 14, 2009 - link

    1) ATI launches crappy cards, 2) Anand realizes "crappy cards, we might need Nvidia after all", 3) Anand does some Nvidia damage control, 4) the damage control sounds like wishful thinking to me, 5) lol

    "While RV770 caught NVIDIA off guard, Cypress did not". XD
    NVIDIA knew they were going to -FAIL- and made a conscious decision to KEEP FAILING? Guys, guys, they were caught off guard AGAIN. It does not matter if they knew it! IT IS STILL A BIG FAIL! THEY KNEW? What kind of nonsense is that? BTW, they could not shrink anything at launch except that OEM garbage... what makes you think Fermi is going to be any different?
  • whowantstobepopular - Wednesday, October 14, 2009 - link

    "1) ati launches crappy cards, 2) anand realizes "crappy cards, we might need nvidia after all" 3) anand does some nvidia damage control"


    Maybe Anand wrote this article to lay to rest the last vestiges of SiliconDoc's recent rantings.

    Seriously, Anand and team...

    You guys do a fine job of thoroughly covering the latest official developments in the enthusiast PC space. You're doing the right thing by sticking to info that's confirmed. Charlie, Fudo and others are covering the rumours just fine, so what we really need is what we get here at Anandtech: Thorough, prompt reviews of tech products that have just released, and interesting commentary on PC market developments and directions (such as the above article).

    I like the fact that you add a little bit of your own interpretation into these sorts of commentaries, and at the same time make sure we know what is fact and what is interpretation.
    I guess I see it this way: You've been commentating on this IT game for quite a few years now, and the articles show it. There are plenty of references to parallels between current situations and historic ones, and these are both interesting and informative. This is one of many aspects of the articles here at Anandtech that make me (and others, it seems) keep coming back. Your knowledge of the important points in IT history is confidence inspiring when it comes to weighing up the value of your commentaries.

    Finally, I have to commend the way that everyone on the Anandtech team appears to read through the comments under their articles. It's rather encouraging when suggestions and corrections for articles are noted and acted upon promptly, even when it involves extra work (re-running benchmarks, creating new graphs etc.). And the touch of humour that comes across in some of the replies (and articles) from the team makes a good comedic interlude during an otherwise somewhat bland day at work.

    Keep up the good work Anandtech!

  • Transisto - Wednesday, October 14, 2009 - link

    I like this place. . .
  • shotage - Wednesday, October 14, 2009 - link

    Thumbs up to this post. These are my thoughts and sentiments also. Thank you to all @ Anandtech for excellent reading! Comments included ;)
  • Shayd - Wednesday, October 14, 2009 - link

    Ditto, thanks!
  • Pastuch - Wednesday, October 14, 2009 - link

    Fantastic post. I couldn't have said it better myself.
  • Scali - Wednesday, October 14, 2009 - link

    We'll have to see. nVidia competed just fine against AMD with G80 and G92. The biggest problem with GT200 was that they went 65 nm rather than 55 nm, but even so, they still held up against AMD's parts because of the performance advantage. G92 especially was hugely successful: incredible performance at a good price. Yes, the chip was larger than a 3870, but who cared?

    Don't forget that GT200 is based on a design that is now 3 years old, which is ancient. Just going to GDDR5 alone will already make the chip significantly smaller and less complex, because you only need half the bus width for the same performance.
    Then there's probably tons of other optimizations that nVidia can do to the execution core to make it more compact and/or more efficient.
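    The bus-width point checks out with quick arithmetic. A sketch in Python, using memory clocks for representative retail parts (GTX 280 and HD 5870, figures from memory, so treat them as approximate rather than official):

    ```python
    # GDDR5 moves twice the data per pin per clock as GDDR3, so half the
    # bus width delivers roughly the same bandwidth.

    def bandwidth_gb_s(bus_width_bits, memory_clock_mhz, transfers_per_clock):
        """Peak memory bandwidth in GB/s: bytes per transfer x transfer rate."""
        return bus_width_bits / 8 * memory_clock_mhz * transfers_per_clock / 1000

    # GTX 280 (GT200): 512-bit GDDR3 (double data rate) at 1107 MHz
    gt200 = bandwidth_gb_s(512, 1107, 2)   # ~141.7 GB/s

    # HD 5870 (Cypress): 256-bit GDDR5 (quad data rate) at 1200 MHz
    cypress = bandwidth_gb_s(256, 1200, 4)  # ~153.6 GB/s

    print(f"GT200: {gt200:.1f} GB/s, Cypress: {cypress:.1f} GB/s")
    ```

    Half the bus width means far fewer memory controller pins and less perimeter on the die, which is where the size saving Scali describes comes from.
    
    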

    I saw someone who estimated the number of transistors per shader processor based on the current specs of Fermi, compared to G80/G92/GT200. The result was that they were all around 5.5M transistors per SP, I believe. So that means that effectively nVidia gets the extra flexibility 'for free'.
    Combine that with the fact that 40 nm allows them to scale to higher clockspeeds, and allows them to pack more than twice the number of SPs on a single chip, and the chip as a whole will probably be more efficient anyway, and it seems very likely that this chip will be a great performer.
    And if you have the performance, you dictate the prices. It will then be the salvage parts and the scaled down versions of this architecture that will do the actual competing against AMD's parts, and those nVidia chips will obviously be in a better position to compete on price than the 'full' Fermi.
    If Fermi can make the 5870 look like a 3870, nVidia is golden.
  • AnandThenMan - Wednesday, October 14, 2009 - link

    Leave it to Scali to regurgitate the same old same old.
  • TGressus - Wednesday, October 14, 2009 - link

    It's always the same, man. When ATI/AMD is down people get interested in their comeback story too.

    I've always wondered why people bother to "take a side". How'd that work out with Blu-Ray? Purchased many BD-R DL recently?

    Personally, I'd like to see more CPU and GPU companies. Not less.
  • Scali - Thursday, October 15, 2009 - link

    What comeback story?
    My point was that it wouldn't be the first time that the bigger, more expensive GPU was the best bang for the buck.
    It isn't about taking sides or comebacks at all.
    I'm interested in Fermi because I'm a technology enthusiast and developer. It sounds like an incredible architecture. It has nothing to do with the fact that it happens to have the 'nVidia' brand attached to it. If it was AMD that came up with this architecture, I'd be equally interested.
    But let's just view it from a neutral, technical point of view. AMD didn't do all that much to its architecture this time, apart from extending it to support the full DX11 featureset. It will not do C++, it doesn't have a new cache hierarchy approach, it won't be able to run multiple kernels concurrently, etc etc. There just isn't as much to be excited about.
    Intel however... now their Larrabee is also really cool. I'm excited to see what that is going to lead to. I just like companies that go off the beaten path and try new approaches, take risks. That's why I'm an enthusiast. I like new technology.
    At the end of the day, if both Fermi and Larrabee fail, I'll just buy a Radeon. Boring, but safe.
  • Scali - Wednesday, October 14, 2009 - link

    "Fermi devotes a significant portion of its die to features that are designed for a market that currently isn’t generating much revenue."

    The word 'devotes' is in sharp contrast with what Fermi aims to achieve: a more generic programmable processor.
    In a generic processor, you don't really 'devote' anything to anything, your execution resources are just flexible and can be used for many tasks.
    Even today's designs from nVidia do the same. The execution units can be used for standard D3D/OpenGL rendering, but they can also be used for PhysX (gaming market), video encoding (different market), Folding@Home (different market again), PhotoShop (another different market), HPC (yet another market), to name but a few things.
    So 'devoted', and 'designed for a market'? Hardly.
    Sure, the gaming market may generate the most revenue, but nVidia is starting to tap into all these other markets now. It's just added revenue, as long as the gaming performance doesn't suffer. And I don't see any reason for Fermi's gaming performance to suffer. I think nVidia's next generation is going to outperform AMD's offerings by a margin.
  • wumpus - Thursday, October 15, 2009 - link

    Go back and read the white paper. Nvidia plans to produce a chip that performs double-precision floating-point multiplies at roughly half the rate of single-precision ones. This means they have doubled the amount of transistors in the multipliers so that the multipliers can keep up with the rest of the chip in double mode (one double or two singles both produce 8 bytes that need to be routed around the chip).

    There is no way to deny that this takes more transistors. Simply put, if each letter represents 16 bits, two singles represent:

    AB × CD and EF × GH (two 32×32-bit multiplies, 4 + 4 = 8 partial products)

    But if you have to multiply one double you get:

    ABCD × EFGH (one 64×64-bit multiply, 16 partial products)

    Which works out to twice the work. Of course, the entire chip isn't multipliers, but they make up a huge chunk. Somehow I don't think either ATI or Nvidia is going to say exactly what percentage of the chip is made up of multipliers. I do expect that it is steadily going down, and if such arrays keep being made, they will all eventually use double precision (and possibly full IEEE 754 with all the rounding that entails).
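    The letter-counting above can be made concrete with schoolbook multiplication, where a k-digit multiply needs k² digit-by-digit products. This is only a rough model (real multiplier arrays use Booth encoding and compression trees), so the ratio is meaningful rather than the absolute counts:

    ```python
    def partial_products(operand_bits, digit_bits=16):
        """Digit-by-digit products a schoolbook multiply of this width needs."""
        digits = operand_bits // digit_bits
        return digits * digits

    two_singles = 2 * partial_products(32)   # two 32x32 multiplies: 2 * 4 = 8
    one_double = partial_products(64)        # one 64x64 multiply: 16

    print(one_double / two_singles)  # 2.0 -> one double is twice the work
    ```
    
    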
  • Scali - Saturday, October 17, 2009 - link

    My point is that the transistors aren't 'dedicated' to DP.
    They just make each single unit capable of both SP and DP. So the same logic that is used for DP is also re-used for SP, and as such the unit isn't dedicated. It's multi-functional.

    Besides, they probably didn't just double up the transistor count to get from SP to DP.
    I think it's more likely that they'll use a scheme like Intel's SSE units. In Intel's case you can either process 4 packed SP floats in parallel, or 2 packed DP floats, with the same unit. This would also make it more logical why the difference in speed is a factor of 2.
    Namely, if you take the x87 unit, it can always process only one number at a time, but SP isn't twice as fast as DP. Since you always use a full DP unit, SP only benefits from early-out, which doesn't gain much on most operations (e.g. add/sub/mul).
    So I don't think that Fermi is just a bunch of full DP ALUs which will run with 'half the transistors' when doing SP math. Rather, I think they will just 'split' the DP units in some clever way so that they can process two SP numbers at a time (or fuse two SP units to process one DP number, however you look at it). This only requires doubling up a relatively small part of the logic, since you mostly just split up your internal registers.
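    The fuse/split idea can be sketched in software: a wide multiply decomposes into narrow multiplies plus shifts and adds, so the same narrow multiplier arrays can serve either two SP operations or one DP operation. This is a simplified integer model (FP mantissas are 24 and 53 bits, and nothing here reflects Fermi's actual datapath):

    ```python
    MASK32 = (1 << 32) - 1

    def mul64_from_32(a, b):
        """Compose a 64x64-bit product from four 32x32-bit multiplies.
        Hardware built this way can share its 32-bit multiplier arrays
        between 'two narrow ops' mode and 'one wide op' mode."""
        a_lo, a_hi = a & MASK32, a >> 32
        b_lo, b_hi = b & MASK32, b >> 32
        # four 32x32 -> 64-bit partial products
        lo_lo = a_lo * b_lo
        lo_hi = a_lo * b_hi
        hi_lo = a_hi * b_lo
        hi_hi = a_hi * b_hi
        # shift-and-add them back into one 128-bit result
        return lo_lo + ((lo_hi + hi_lo) << 32) + (hi_hi << 64)

    a, b = 0x1234_5678_9ABC_DEF0, 0x0FED_CBA9_8765_4321
    assert mul64_from_32(a, b) == a * b
    ```

    The extra cost of wide mode is mostly the shift-and-add recombination network, which is small next to the multiplier arrays themselves; that is what makes the "double up only a small part of the logic" approach plausible.
    
    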
  • Zool - Wednesday, October 14, 2009 - link

    Maybe, but you forget one thing: ATI could easily pull out a 5890 (with faster clocks and maybe a 384-bit memory bus) in Q1 2010, or a whole new chip somewhere in Q2 2010.
    So it doesn't change the fact that nVidia is late. In this position it will be hard for nVidia if ATI can always make the first move.
  • Scali - Wednesday, October 14, 2009 - link

    A 5890 doesn't necessarily have to be faster than Fermi. AMD's current architecture isn't THAT strong. It's the fastest GPU on the market, but then again, it's also the only high-end GPU that leverages 40 nm and GDDR5, so that's not all that surprising.
    Fermi will not only leverage 40 nm and GDDR5, but also aim at a scale above AMD's architecture.

    AMD may make the first move, but it doesn't have to be the better move.
    Assuming Fermi's performance is in order, I very much believe that nVidia made the right move. Where AMD just patched up their DX10.1 architecture to support DX11 features, nVidia goes way beyond DX11 with an entirely new architecture.
    The only thing that could go wrong with Fermi is that it doesn't perform well enough, but it's too early to say anything about that now. Other than that, Fermi will mark a considerable technological lead for nVidia over AMD.
  • tamalero - Sunday, October 18, 2009 - link

    And you know this... based on what facts?
    The "can of whoopass" from nVidia's marketing?
  • AnandThenMan - Wednesday, October 14, 2009 - link

    "The only thing that could go wrong with Fermi is that it doesn't perform well enough"

    Really? You really believe that? So if it has a monstrous power draw, is extremely expensive, arrives 6 months late (even longer for the scaled-down parts), has low yields, etc., that's a-okay? Not to mention a new architecture always has software challenges before you can make the most of it.
  • Finally - Wednesday, October 14, 2009 - link

    Yes, I agree. Covering 18% of the market is definitely more important than covering only a mere 82%..
  • Zingam - Wednesday, October 14, 2009 - link

    Yes, but you should consider the profit margins too. You could sell ten graphics cards at a profit of $1 each, or two Teslas at a profit of $5 each. What would you prefer to sell then?
  • JarredWalton - Wednesday, October 14, 2009 - link

    I'd prefer to sell hundreds of thousands of Teslas with $1000+ profit margins, personally. I'm just not sure who's going to buy them!
  • AnandThenMan - Wednesday, October 14, 2009 - link

    Nvidia claims that Fermi will scale down "easily" to all price points. I am highly skeptical of this myself; my sense is that the non-graphics-related portions of the chip will make it difficult to compete with AMD in the mainstream market.

    The compute sector they are going after is really a market waiting to happen. Does that mean Nvidia is counting on substantial revenue from a market that has yet to mature? Seems dubious.
  • haukionkannel - Wednesday, October 14, 2009 - link

    I am sure that Fermi will scale down easily to all price points. That just doesn't mean it will be competitive at all price points!
    The high-end Fermi can most probably beat the 5870, but it will be more expensive (it's bigger, so more expensive to produce). Either way, nVidia would have the fastest GPU around, so they could sell cut-down versions to all the people who don't read reviews but know that "Fermi" is the fastest GPU you can get...
    But I really hope that it will also be competitive. That way we will see cheaper GPUs from both companies!
  • Zingam - Wednesday, October 14, 2009 - link

    IBM has sold big machines with various architectures for decades. Maybe that's where NVIDIA is trying to compete next?
    That market is there, and it was there before PCs anyway. Perhaps the journalists just aren't presenting us the whole story properly?
  • Zingam - Wednesday, October 14, 2009 - link

    We actually do need NVIDIA to do well. I'll need a new GPU soon and I want it to be powerful and cheap! Competition does matter!

    :D What about VIA buying NVIDIA? :) What I say might be heresy, but if the x86 license is not transferable, can't they do it the other way around? That way we could have three complete competitors, each able to offer a full range of PC products.
  • Griswold - Wednesday, October 14, 2009 - link

    Considering that VIA belongs to the Formosa Plastics Group, there is easily enough money in the background to take over nvidia if they wanted to. Still unlikely to happen.
  • samspqr - Wednesday, October 14, 2009 - link

    crappy GPUs on crappy CPUs: way to go, INVVIDIAA!!
  • guywithopinion - Wednesday, October 14, 2009 - link

    CD and his campaign against Nvidia (i.e. "bumpgate") is just dumb. I seriously wonder at this point why they don't just cut their losses and put the legal screws to him, burying him in court for a few years. The cost would be well worth it, and the online community would be better for it. I so miss Mike M; he seemed to know how to publish the unflattering without it degenerating into mindless partisan garbage.
  • Zingam - Wednesday, October 14, 2009 - link

    "Blhaflhvfa"? This is the first sentence of the article. What the heck does it mean? :)
  • strikeback03 - Wednesday, October 14, 2009 - link

    No room for a long intro, so we got a short one.
  • Leopoldo - Wednesday, October 14, 2009 - link

    "Blhaflhvfa" is the term used to describe Nvidia's approach to confirming/refuting the rumors/articles that have been circulating recently about them.
  • yacoub - Wednesday, October 14, 2009 - link


    Built Like Hairy Arse Feathers, Ladybug Hearts, Vultures, Furry Animals.
  • overzealot - Thursday, October 15, 2009 - link

    I like yours more than mine.
  • overzealot - Wednesday, October 14, 2009 - link

    Bad luck, have a fun life. Havok vaporised forgettable Ageia.
  • Visual - Wednesday, October 14, 2009 - link


    My first reaction when reading your post was actually "WTF, what does this have to do with the thread or the previous post?" It appeared to me as just a random whine; I didn't notice it matched the acronym until a few moments later.

    Pure gold.
  • JarredWalton - Wednesday, October 14, 2009 - link

    LMAO!
  • AnandThenMan - Wednesday, October 14, 2009 - link

    Google comes up with one result, this article. 3^) So I have no idea.
  • Ben90 - Wednesday, October 14, 2009 - link

    Actually, now that you repeated it, it comes up with 2.
  • dan101rayzor - Wednesday, October 14, 2009 - link

    Anyone know when Fermi is coming out?
  • Zingam - Wednesday, October 14, 2009 - link

    Sometime in the future. It appears to me that it will be a great general-purpose processor, but whether it will be a great graphics processor is not quite clear.
    If you want a great GPU for graphics, you won't go wrong with ATI in the near future. If you want to do scientific computing, Fermi will probably be the king next year.
    And then comes Larrabee, and things will get quite interesting.

    I wonder who will release a great, next generation, mobile GPU first!
  • dragunover - Wednesday, October 14, 2009 - link

    I believe the 5770 could easily be transplanted into laptops with a different cooling solution and lower clocks, and called the 5870M.
  • dan101rayzor - Wednesday, October 14, 2009 - link

    So for gaming an ATI 5800 is the best bet? Will Fermi be very expensive? Is Fermi not for gaming?
  • Scali - Wednesday, October 14, 2009 - link

    We can't say yet.
    We don't know the prices of Fermi parts, their gaming performance, or when they'll be released.
    Which also means we don't know how 'good' the 5800 series is. All we know is that it's the best part on the market today.
  • swaaye - Monday, October 19, 2009 - link

    It's always been rather hard to pronounce a part as "best". There's always an unknown future part in the works, and even good parts sometimes have downsides. For example, I can see a reason or two for going NV30 back in the day instead of R300 (namely OpenGL apps/games).

    I think it's safe to say that you can't go wrong with 58x0 right now. It doesn't have any major issues, AFAIK, the price is decent, and the competition is nowhere in sight.
  • MojaMonkey - Wednesday, October 14, 2009 - link

    I have a simple brand preference for nVidia, and I use Badaboom, so I'd buy their products even if the price/performance doesn't quite stack up. There are limits to the performance difference I'm willing to accept, however.

    I really hope the gt300 is a great competitive product as it sounds interesting.

    One thought, given that PC gaming is now on the back burner compared to consoles maybe nVidia is being smart branching out into mobiles and GPU computing? Both of these could be real growth areas as PC gaming starts to fade.
  • neomatrix724 - Wednesday, October 14, 2009 - link

    People have been saying that PC gaming has been dying for years. Most of the GPU developments and technologies have been created for the PC first and then downstream improvements are what the console gets.

    I don't foresee the PC disappearing as a gaming medium for a while.
  • MonkeyPaw - Wednesday, October 14, 2009 - link

    I don't really see games disappearing from PCs either. However, consoles are very popular, and are a much more guaranteed market, since developers know console sales numbers, and that essentially every console owner is a gamer. Consequently, developers will target a lucrative console market (at $40-60 per game). This means their games have to run on the consoles well enough (speed and graphics quality) to be appealing. That doesn't necessarily kill PC gaming, but it does slow the progression (and the need for expensive hardware). This, along with the fact that many gamers don't likely play on 30" LCDs, means high-end graphics card sales are directed at a very limited audience. When I gamed on my PC, I had a 4670, since this would play most games just fine on a 19" LCD. I would never need a 4870, much less a 5870.

    So no, PC gaming isn't dead, but it's not the target market anymore.
  • atlmann10 - Thursday, October 15, 2009 - link

    The point made earlier, that "PC gaming is dying" has been an incorrect claim for decades, is true. Personally, I think we will see more of this, but nvidia's biggest room to gain is in things other than the gigantic PC graphics card.

    The online gaming business is much better than even the console market. Why, you may ask? Because of the revenue stream and the delivery. With an online PC game you can update it weekly, thereby changing anything you want to.

    Yes, the console market is appealing because publishers know everyone will go out and buy a 45-dollar game. But the subscription model is the killer. The gamer who plays an online game subscribes: they buy the 45-dollar game, give you another 15 at the end of the month, and when you have an upgrade you have a direct path to them as a distributor.

    So that customer who bought your 45-dollar game, and pays you 15 a month for access, logs onto their game and sees: there is a new update for 35 dollars, do you want it? The user clicks yes and downloads it. What have you done as a company? You have eliminated advertising completely for existing customers, and you have also eliminated packaging and distribution.

    And what did you have to do to produce that upgrade? Nothing but development. You have cut your production and operating costs in half, yet the customer bought your new game or upgrade at the same price, even though it cost 60 percent less to make, and pays you an automatic 15 a month for as long as they play it.
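    The revenue argument can be put in numbers. A sketch using the commenter's illustrative figures (the $45 box, $15/month fee, and $35 expansion are their examples, not market data):

    ```python
    # Per-customer revenue: subscription MMO vs. a single boxed console sale.
    # All dollar figures are the commenter's illustrations, not real data.

    def subscription_revenue(months, box_price=45, monthly_fee=15,
                             expansions=0, expansion_price=35):
        """Total revenue from one subscriber over a number of months."""
        return box_price + months * monthly_fee + expansions * expansion_price

    one_year_mmo = subscription_revenue(12, expansions=1)  # 45 + 180 + 35 = 260
    console_game = 45                                      # one boxed sale

    print(one_year_mmo / console_game)  # ~5.8x the revenue per customer
    ```
    
    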

  • Rindis - Wednesday, October 14, 2009 - link

    "People have been saying that PC gaming has been dying for years."

    Decades, actually.
  • thermaltake - Saturday, October 24, 2009 - link

    I too believe that many are overstating that PC gaming is dying. Compare the hardware alone: PC graphics cards get refreshed in less than a year, while the PS3 and Xbox 360 have been around for how long? Game consoles are far inferior to PC hardware, and that will always be the case. PC 3D graphics will always be better than console graphics, yet they claim that PC gaming is dying?
  • dragunover - Wednesday, October 14, 2009 - link

    Since it ever came out?
    Truth is, it's something like global warming: people use it to their advantage, blow it way out of proportion, and no one knows where the real data is. I know for a fact that one of the reasons is that people simply don't have the money to buy a good PC. That is why free-to-play games with a very low hardware bar to entry, like RuneScape, CrossFire, War Rock, Combat Arms, etc., exist in the first place.

    I for one know that many developers have used it just because they can't be assed to make a product; they won't be seeing any bigger paychecks because of it, so they'll FUD it up. *cough id Software, John Carmack, cough cough* Who, I'd add, doesn't seem to have worked on anything relevant to computer gaming in the past generation.
  • Zingam - Wednesday, October 14, 2009 - link

    The previews of Fermi so far describe it as the greatest GPGPU, but they don't even try to describe it as a good graphics processor. And if you take into account what NVIDIA has said in the past about GPUs, things get quite unclear and interesting.
  • dmv915 - Wednesday, October 14, 2009 - link

    Thank goodness for the significant barriers to entry into this market. Who knows where Nvidia would be now otherwise.
  • Unpas - Sunday, December 27, 2009 - link

    Thank god that we have a console market kept alive by artificial respiration ever since PC gaming came along. I'm so happy to see the world revert back to the middle ages, with a new console only every 4-5 years and the same shitty graphics in every released game from then until the next shitty console comes out. Of course this is the new way. It's the future of innovation and technology. Companies don't have to compete with each other. Perfectly socialistic, or fascistic, or just plain stupid.
