Let's talk precision (Stage 5 continued)

Historically, the vertex engines have been built from floating point units for much longer than the pixel pipes have; remember, we're thinking about these stages not in generic marketing terms but as what they really are - functional units.

With the transition to DX9 hardware came the move to floating point units in the shading/texturing stages. FPUs provide greater precision (try representing 1.04542 in an integer or fixed-point format while retaining full precision), but at the cost of much larger and more expensive hardware - primarily an increase in transistor count and die size. We've discussed the merits of FP precision in previous articles; for more information on what you can do with FP support, hop over here.
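The precision loss is easy to see by round-tripping a value through reduced-precision formats. The sketch below uses Python's struct module, which supports IEEE-754 half (16-bit) and single (32-bit) encodings; note that ATI's 24-bit format has no native software equivalent, so this is illustrative only:

```python
import struct

def roundtrip(value, fmt):
    # Pack a Python float into the given IEEE-754 format and unpack it,
    # showing what survives the reduced-precision representation.
    return struct.unpack(fmt, struct.pack(fmt, value))[0]

x = 1.04542
half = roundtrip(x, "e")    # 16-bit float (10-bit mantissa)
single = roundtrip(x, "f")  # 32-bit float (23-bit mantissa)

print(f"original: {x!r}")
print(f"fp16:     {half!r}  error: {abs(half - x):.2e}")
print(f"fp32:     {single!r}  error: {abs(single - x):.2e}")
```

The 16-bit representation is off by roughly five parts in ten thousand, while the 32-bit representation is accurate to about one part in ten million - exactly the kind of difference that shows up as banding in multi-pass shader math.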

Once again, here's where the marketing folks do the industry a severe injustice when talking about the "precision" of these FPUs. The precision of any floating point number is given as the number of bits allocated to representing that number. In the case of DirectX 9, two modes are supported - full and half precision.

Full precision, according to Microsoft's spec, calls for a 24-bit representation of each color component, for a total of 96 bits to represent each pixel. The DX9 specification also calls for partial precision support with 16-bit components, for a 64-bit representation of every pixel. ATI's R3xx hardware supports both of these modes and helped define them in the original specification.

NVIDIA does things a little differently; they support the 16-bit partial precision mode, but their NV3x hardware does not support the 24-bit full precision mode. Instead, NVIDIA implements the IEEE-754 single precision format, which calls for 32 bits of precision; this is not part of the DirectX 9 specification.
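The practical difference between these formats comes down to mantissa width. The sketch below assumes the commonly cited bit layouts (the DX9 spec mandates only the 24-bit total for full precision, so the 1 sign / 7 exponent / 16 mantissa split for FP24 is an assumption here):

```python
# Approximate relative precision of each shader format, derived from its
# mantissa width. The FP24 layout (s1/e7/m16) is the commonly cited
# breakdown for ATI's R3xx, assumed here; FP16 is s1/e5/m10 and FP32 is
# the IEEE-754 single layout, s1/e8/m23.
formats = {
    "FP16 (partial precision)": 10,
    "FP24 (DX9 full precision)": 16,
    "FP32 (NV3x / IEEE-754)": 23,
}

for name, mantissa_bits in formats.items():
    eps = 2.0 ** -mantissa_bits  # smallest representable step near 1.0
    digits = mantissa_bits * 0.30103  # log10(2) per mantissa bit
    print(f"{name}: step ~{eps:.2e} (~{digits:.1f} decimal digits)")
```

Roughly speaking, FP16 carries about 3 decimal digits, FP24 about 4.8, and FP32 about 6.9 - which is why long shader programs accumulate visible error in half precision far sooner than in either "full" mode.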

So when ATI mentions that NVIDIA does not support "full precision," they are, in some respects, correct: NVIDIA does not support the 24-bit precision mode as defined by Microsoft; they support a mode with greater precision. The question is, if a game doesn't request partial precision, what does NVIDIA's hardware do? According to NVIDIA, the NV3x will default to 32-bit precision unless asked to render in partial precision (16-bit) mode.

Now that we've got the precision issue squared away, let's talk about speed. Does executing in partial precision run any faster than executing in full precision? The design of both ATI's and NVIDIA's hardware dictates that there is only one set of FPUs for the shading/texturing stage, and all operations, regardless of their precision, go through this set of FPUs. On ATI's R3xx GPUs this means clusters of 4 x 24-bit FPUs, while on NVIDIA's NV3x GPUs there are clusters of 4 x 32-bit FPUs. All operations on these FPUs complete with the same latency, regardless of precision. So whether you're doing a 16-bit add or a 24/32-bit add, it occurs at the same rate.

The only penalty for using higher precision is, of course, with regard to memory bandwidth and memory size. So while it has been claimed in the past that full precision can only be had at the expense of speed, this isn't true from a computational standpoint - only from the perspective of the memory subsystem.
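To get a feel for that memory cost, here is a back-of-the-envelope calculation of a full floating point buffer at a sample resolution; the 1600x1200 figure and the 128-bit FP32 case (4 x 32-bit components) are illustrative assumptions, not measured numbers:

```python
# Illustrative framebuffer size per precision mode: bits per pixel
# divided into bytes, summed over every pixel at 1600x1200.
width, height = 1600, 1200
pixels = width * height

for mode, bits_per_pixel in [("FP16 (64-bit pixels)", 64),
                             ("FP24 (96-bit pixels)", 96),
                             ("FP32 (128-bit pixels)", 128)]:
    mbytes = pixels * bits_per_pixel / 8 / 2**20
    print(f"{mode}: {mbytes:.1f} MB per buffer")
```

At this resolution a 64-bit buffer needs about 14.6 MB, a 96-bit buffer about 22.0 MB, and a 128-bit buffer about 29.3 MB - the same shader work, but up to twice the data moving across the memory bus.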




  • Anonymous User - Thursday, October 16, 2003 - link

    After reading this article, how can I determine which GeForceFX 5600 card has the NV30 core or the NV35 core? I'm currently interested in purchasing one, but none of the retail boxes or manuals on the manufacturers' web sites say anything about the type of core used. Did NVIDIA correct themselves by using the NV35 core before releasing their 5600 cards to the market? Or are there NV30-based 5600 cards on the retail shelves too? Help is appreciated. Thanks.
  • Anonymous User - Saturday, September 6, 2003 - link

    You should be ashamed. The linking of words to completely unrelated MARKETING ADS is absolutely ridiculous...as if you don’t have ENOUGH ads already.

  • Shagga - Saturday, August 9, 2003 - link

    I certainly found the article informative. I read it with a view to deciding which card to purchase over the next week or so, and to be honest the article said enough to convince me to sit tight. I also felt there is more to come from both ATI and nVidia, and that the results presented are perhaps not entirely complete. This is pointed out by Anand, and at $499 I need to be making the right choice; Anand did succeed in convincing me to wait a tad longer.

    Good article I thought.
  • Anonymous User - Friday, August 1, 2003 - link

    Please stop using Flash graphics!
  • Pete - Tuesday, July 22, 2003 - link

    It's only fair that I praise the article, as well. As I said above, in the initial article comment thread, I congratulated Anand on what I thought was a well-written article. I appreciate his lengthy graphics pipeline summary, his extensive image quality investigation, and his usual even-handed commentary (though I had problems with the latter two).
  • Pete - Tuesday, July 22, 2003 - link

    I think this is a great article with a few significant flaws in its benchmarking.

    Firstly, the Doom 3 numbers. Anand acknowledged that he could not get the 9800P 256MB to run the tech demo properly, yet he includes the numbers anyway. This strikes me as not only incorrect but irresponsible. People will see 9800P 256MB numbers and note that its extra memory makes no difference over its 128MB sibling, yet only if they read the article carefully would they know that the driver Anand used limits the 9800P 256MB to only 128MB, essentially crippling the card.

    Also, note that the difference between Medium Quality and High Quality modes in Doom 3 is only anisotropic filtering (AF), which is enabled in HQ mode. Note that forcing AF in the video card's drivers, rather than via the application, will result in higher performance and potentially lower image quality! This was shown to be the case in a TechReport article on 3DM03 ("3DMurk"), in forum discussions at B3D, and in an editorial at THG. Hopefully this will be explored fully once a Doom 3 demo is released to the public, and we have more open benchmarking of this anticipated game.

    Secondly, Anand's initial Quake 3 5900U numbers seemed way off compared to other sites that tested the same card in similar systems at the same settings. At 1600x1200 with 4xAA 8xAF, Anand was scoring over 200fps, well higher than any other review. And yet, after weeks of protest in the forum thread on this article, all that happened was the benchmark results for 12x10 and 16x12 were removed. The text, which notes:

    "The GeForceFX 5900 Ultra does extremely well in Quake III Arena, to the point where it is CPU/platform bound at 1600x1200 with 4X AA/8X Anisotropic filtering enabled."

    was left unchanged, even though it was based on what many assumed were erroneous benchmark data. I can only conclude that the data were indeed erroneous, as they have been removed from the article. Sadly, the accompanying text has not been edited to reflect that.

    Thirdly, the article initially tested Splinter Cell with AA, though the game does not perform correctly with it. The problem was that NVIDIA's drivers automatically disable AA if it's selected, yielding non-AA scores for what an unsuspecting reviewer believes is an AA mode. ATi's drivers allow AA, warts and all, and thus produce appropriately diminished benchmark numbers, along with the corresponding AA errors. The first step at correcting this mistake was to remove all Splinter Cell graphs and place a blurb in the driver section of the review blaming ATi for not disabling AA. Apparently a second step has been taken, expunging Splinter Cell from the article text altogether. Strangely, Splinter Cell is still listed in the article's drop-down menu as p. 25; clicking it will bring you to the one last Quake 3 graph with the incorrect analysis noted above.

    Finally, a note on the conclusion:

    "What stood out the most about NVIDIA was how some of their best people could look us in the eye and say "we made a mistake" (in reference to NV30)."

    What stands out most to me is that NVIDIA still can't look people in the eye and say they made a mistake by cheating in 3DMark03. Recent articles have shown NVIDIA to be making questionable optimizations (that may be considered cheats in the context of a benchmark) in many games and benchmarks, yet I see only a handful of sites attempt to investigate these issues. ExtremeTech and B3D noted the 3DMark03 "optimizations." Digit-Life has noted CodeCreatures and UT2K3 benchmark "optimizations," and Beyond3D and AMDMB have presented pictorial evidence of what appears to be the reason for the benchmark gains. NVIDIA appears to currently foster a culture of cutting corners without the customer's (and, hopefully, reviewer's) knowledge, and they appear reticent to admit it at all.

    I realize this post comes off as harsh against both Anand and NVIDIA. In the initial comment thread on this article, I was gentler in my (IMO, constructive) criticism. As the thread wore on for weeks without a single change in the multiple errors perceived in the original article, I gradually became more curt in my requests for corrections. Anand elicits possibly the greatest benefit of the doubt of any online hardware reviewer I know, as I've read his site and enjoyed the mature and thoughtful personality he imbued it with for years. I'm sorry to say his response--rather, his lack of response, as it was only Evan and Kristopher, not Anand, that replied to the original article thread--was wholly unsatisfactory, and the much belated editing of the article into what you read today was unsatisfactory as well. I would have much preferred Anand(tech) left the original article intact and appended a cautionary note or corrected benchmarks and commentary, rather than simply cutting out some of the questionable figures and text.

    Consider this post a summation of the criticism posted in the original article thread. I thought it would be useful for putting this article in context, and I hope it is taken as constructive, not destructive, criticism. The 5900 is no doubt a superior card to its predecessor. I also believe this article, in its current form, presents an incomplete picture of both the 5900U and its direct competition, ATi's 9800P 256MB. Hopefully the long chain of revelations and commentary sparked by and after this article will result not in hard feelings, but in more educated, thorough, and informative reviews.

    I look forward to Anandtech's next review, which I believe has been too long in coming. :)