Compilation Integration

To maximize performance, the NV3x pipeline needs to be kept as full as possible at all times, which means special care must be taken in how instructions are issued to the hardware. One aspect of this is that the architecture benefits from interleaved pairs of different types of instructions (for instance: two texture instructions, followed by two math instructions, followed by two more texture instructions, and so on). This is in contrast to ATI's hardware, which prefers to see a large block of texture instructions followed by a large block of math instructions for optimal results.
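
A compiler pass targeting this preference might look roughly like the sketch below, which simply interleaves texture and math instructions into pairs. Everything here (the Instr type, the texture/math split, the pair size of two) is our own illustration of the idea, not NVIDIA's actual scheduler, and it assumes the instructions are independent so reordering is safe.

#include <iostream>
#include <string>
#include <vector>

// Hypothetical instruction record: just its text and a flag telling us
// whether it runs on the texture unit or the math (shader ALU) unit.
struct Instr {
    std::string text;
    bool isTexture;
};

// Naive final-pass scheduler: assuming the instructions are independent,
// emit them as alternating pairs (two texture, two math, two texture, ...)
// to keep both kinds of units fed, as described above.
std::vector<Instr> interleavePairs(const std::vector<Instr>& in) {
    std::vector<Instr> tex, math, out;
    for (const Instr& i : in) (i.isTexture ? tex : math).push_back(i);

    size_t t = 0, m = 0;
    while (t < tex.size() || m < math.size()) {
        for (int k = 0; k < 2 && t < tex.size(); ++k) out.push_back(tex[t++]);
        for (int k = 0; k < 2 && m < math.size(); ++k) out.push_back(math[m++]);
    }
    return out;
}

int main() {
    // Four independent texture fetches followed by four independent math ops,
    // as a game might naively submit them.
    std::vector<Instr> shader = {
        {"tex r0, t0", true},      {"tex r1, t1", true},
        {"tex r2, t2", true},      {"tex r3, t3", true},
        {"mul r4, c0, v0", false}, {"mul r5, c1, v1", false},
        {"add r6, c2, v0", false}, {"add r7, c3, v1", false},
    };
    for (const Instr& i : interleavePairs(shader)) std::cout << i.text << "\n";
}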

To illustrate the hardware's sensitivity to instruction order, consider a simple example: calculating a^2 * 2^b.

mul r0,a,a     // r0 = a^2
exp r1,b       // r1 = 2^b
mul r0,r0,r1   // r0 = a^2 * 2^b; depends on both results above

-takes 2 cycles on NV35

exp r1,b       // r1 = 2^b
mul r0,a,a     // r0 = a^2; independent of the exp above
mul r0,r0,r1   // r0 = a^2 * 2^b

-takes 1 cycle on NV35

This is a trivial example, but it gets the point across: simply reordering independent instructions, without changing what the code computes, cut the cycle count in half. Obviously, there are real benefits to be had from standard compiler optimizations that don't affect the output of the code at all. What kind of optimizations are we talking about here? Allow us to elaborate.

Aside from instruction reordering to maximize the parallelism of the hardware, reordering can also help reduce register pressure by minimizing the live ranges of registers that hold independent data. Consider this:

mul r0,a,a     // r0 = a^2
mul r1,b,b     // r1 = b^2; both r0 and r1 are now live at once
st r0
st r1

If we reorder the instructions, we can use only one register without affecting the outcome of the code:

mul r0,a,a     // r0 = a^2
st r0          // r0's value is dead after this store...
mul r0,b,b     // ...so r0 can be reused for b^2
st r0
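
How would a compiler find this kind of reuse automatically? One common approach is to compute each value's live range (from its definition to its last use) and hand a register back to the free pool as soon as the value in it is dead. The following is a minimal sketch of that idea over straight-line code; the tiny IR and the greedy allocator are our own invention for illustration, not NVIDIA's code.

#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

// Hypothetical straight-line IR: each instruction defines one value (or none)
// and uses some set of previously defined values.
struct Op {
    std::string name;          // e.g. "mul" or "st"
    int def;                   // value id defined, -1 if none
    std::vector<int> uses;     // value ids read
};

int main() {
    // v0 = a*a; st v0; v1 = b*b; st v1  (the reordered sequence above)
    std::vector<Op> code = {
        {"mul", 0, {}}, {"st", -1, {0}},
        {"mul", 1, {}}, {"st", -1, {1}},
    };

    // 1. Find the last instruction index at which each value is used.
    std::map<int, int> lastUse;
    for (int i = 0; i < (int)code.size(); ++i)
        for (int v : code[i].uses) lastUse[v] = i;

    // 2. Greedy allocation: grab the lowest free register for each new value,
    //    and return a register to the free pool once its value is dead.
    std::map<int, int> regOf;                 // value id -> register number
    std::set<int> freeRegs = {0, 1, 2, 3};
    for (int i = 0; i < (int)code.size(); ++i) {
        if (code[i].def >= 0) {
            regOf[code[i].def] = *freeRegs.begin();
            freeRegs.erase(freeRegs.begin());
        }
        for (int v : code[i].uses)
            if (lastUse[v] == i) freeRegs.insert(regOf[v]);   // value dies here
    }

    // Both values end up in r0, matching the hand-reordered example.
    for (auto& [v, r] : regOf)
        std::cout << "value v" << v << " -> r" << r << "\n";
}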

Register allocation is a substantial part of compiler optimization, but special care needs to be taken to do it both correctly and quickly for this application. A variety of graph coloring heuristics are commonly available to compiler designers. It seems NVIDIA is using an interference graph style of register allocation and is allocating registers per component (presumably the individual x, y, z, and w channels of the four-wide vector registers), though we are not entirely clear on what is meant by "component".
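
We can only guess at the details of NVIDIA's allocator, but the textbook interference-graph approach works roughly like this: build a graph with one node per value, connect two nodes whenever their live ranges overlap, then color the graph so that no two connected nodes share a register. Below is a minimal greedy-coloring sketch with invented live ranges, purely for illustration.

#include <iostream>
#include <vector>

// A value's live range, expressed as [start, end] instruction indices.
struct Range { int start, end; };

// Two values interfere if their live ranges overlap; interfering values
// must not share a register.
bool interferes(const Range& a, const Range& b) {
    return a.start <= b.end && b.start <= a.end;
}

int main() {
    // Hypothetical live ranges for five values in a shader.
    std::vector<Range> values = {
        {0, 3}, {1, 2}, {2, 5}, {4, 6}, {6, 7},
    };

    // Greedy graph coloring: give each value the lowest-numbered register
    // not already used by a value it interferes with.
    std::vector<int> reg(values.size(), -1);
    for (size_t i = 0; i < values.size(); ++i) {
        std::vector<bool> taken(values.size(), false);
        for (size_t j = 0; j < i; ++j)
            if (interferes(values[i], values[j])) taken[reg[j]] = true;
        int r = 0;
        while (taken[r]) ++r;
        reg[i] = r;
    }

    // Five values are packed into three hardware registers.
    for (size_t i = 0; i < values.size(); ++i)
        std::cout << "value " << i << " -> r" << reg[i] << "\n";
}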

Dead code elimination is a very common optimization: if the developer includes code whose results are never used (or that can never execute), the compiler can remove it from the program without changing the output. Such opportunities are often exposed by other optimizations working on the code, but it's still a useful feature for the occasional time a developer falls asleep at the screen.
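
The idea is easy to show in a sketch: walk backwards over a straight-line shader, keep only the instructions whose results feed an output register or a later live instruction, and drop the rest. The instruction format below is invented for illustration.

#include <iostream>
#include <set>
#include <string>
#include <vector>

// Hypothetical instruction: one destination register and some source registers.
struct Instr {
    std::string text;
    std::string dst;
    std::vector<std::string> srcs;
};

// Backwards mark-and-sweep dead code elimination for straight-line code:
// an instruction is live if it writes a needed register; its sources then
// become needed in turn.
std::vector<Instr> eliminateDeadCode(const std::vector<Instr>& code,
                                     const std::set<std::string>& outputs) {
    std::set<std::string> needed = outputs;
    std::vector<Instr> keep;
    for (auto it = code.rbegin(); it != code.rend(); ++it) {
        if (needed.count(it->dst)) {
            needed.erase(it->dst);                       // this write satisfies the need
            needed.insert(it->srcs.begin(), it->srcs.end());
            keep.push_back(*it);
        }
        // otherwise: the result is never read again, drop the instruction
    }
    return {keep.rbegin(), keep.rend()};                 // restore original order
}

int main() {
    std::vector<Instr> shader = {
        {"mul r0, a, a", "r0", {"a"}},
        {"mul r1, b, b", "r1", {"b"}},                   // r1 is never used: dead
        {"mov oC0, r0", "oC0", {"r0"}},
    };
    for (const Instr& i : eliminateDeadCode(shader, {"oC0"}))
        std::cout << i.text << "\n";
}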

There are a great many other optimizations that can be performed on code which have absolutely no effect on its output. This is a very important aspect of computing, and it only gets more complicated as hardware gets more powerful. Intel's Itanium processors, for example, are all but impossible to hand code efficiently; no IA64-based processor will run code well unless the compiler that generated it was able to specifically tailor that code to the explicitly parallel nature of the hardware. We are seeing the same type of thing here with NVIDIA's architecture.

Of course, NVIDIA has the added challenge of implementing a real-time compiler, much like the Java JIT or Transmeta's code morphing software. As such, there are other very interesting time-saving things they need to do with their compiler in order to squeeze a good approximation of the solution to an NP-complete problem into an extremely small amount of time.

A shader cache is implemented to store previously compiled shaders, which means a given shader shouldn't have to be compiled more than once. Directed Acyclic Graphs (DAGs) of the code are used to fingerprint compiled shaders. There is also a stock set of common, precompiled shaders that can be dropped in when NVIDIA detects what a developer is trying to accomplish. NVIDIA will need to take special care to make sure that this feature remains a feature and doesn't break anything, but we see this as a good thing as long as no one feels the power of the dark side.
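
We don't know the specifics of NVIDIA's implementation, but the caching idea itself is simple: fingerprint the incoming shader (NVIDIA reportedly derives the fingerprint from the code's DAG), look it up, and only run the expensive optimizing compile on a miss. The sketch below substitutes a plain hash of the shader text for a real DAG fingerprint; all names and types are our own and purely illustrative.

#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>

// Stand-in types for illustration only.
using ShaderSource   = std::string;   // shader as submitted by the game
using CompiledShader = std::string;   // optimized hardware code

// Placeholder for the expensive optimizing compile described above.
CompiledShader compileAndOptimize(const ShaderSource& src) {
    return "optimized(" + src + ")";
}

class ShaderCache {
public:
    const CompiledShader& get(const ShaderSource& src) {
        // A real driver would fingerprint the shader's DAG; hashing the
        // source text is a simplification that serves the same purpose here.
        size_t key = std::hash<ShaderSource>{}(src);
        auto it = cache_.find(key);
        if (it == cache_.end()) {
            std::cout << "cache miss, compiling\n";
            it = cache_.emplace(key, compileAndOptimize(src)).first;
        }
        return it->second;
    }
private:
    std::unordered_map<size_t, CompiledShader> cache_;
};

int main() {
    ShaderCache cache;
    cache.get("mul r0,a,a\nexp r1,b\nmul r0,r0,r1");   // compiled once
    cache.get("mul r0,a,a\nexp r1,b\nmul r0,r0,r1");   // served from the cache
}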

Also, until the most recent couple of driver releases from NVIDIA, the real-time compiler didn't implement all of these important optimizations on shader code sent to the card by a game. The frame rate increases of more than 50% with no image quality loss can be attributed to these enhancements to the real-time compiler. All of the performance we'd previously seen rested on how well NVIDIA and developers were able to hand code shaders and graphics subroutines.

Of course, writing "good code" (code that suits the hardware it's written for) will help the compiler be more efficient as well. We certainly won't be seeing the end of NVIDIA sitting down at the table with developers to help them adapt their code to NV3x hardware, but this Unified Compiler technology will definitely help us see better results from everyone's efforts.

Comments

  • Anonymous User - Thursday, October 23, 2003 - link

    it seems toms review puts into question ati's optimizations moreso than nvidia's image quality
  • Anonymous User - Thursday, October 23, 2003 - link

    In any case,.....it's another round of new card releases and hopefully cheaper prices around for the
    "older" models.
  • Anonymous User - Thursday, October 23, 2003 - link

    #19, I don't think it is a fanboy thing. It's an AT thing that's costing them their respect from other hardware sites and readers.
  • Anonymous User - Thursday, October 23, 2003 - link

    ati fanboys above dont look to happy :)
  • Anonymous User - Thursday, October 23, 2003 - link

    # 15, If someone writes a crappy review then he deserves all the problems and flak that come with it.
  • gordon151 - Thursday, October 23, 2003 - link

    #14, "The GeForce FX 5700 Ultra will be debuting at $199 after a mail in rebate. If $200 is your hard limit, and you need a midrange card right now, the 5700 Ultra is the way to go if you want ****solid frame rates****." Now you could say they dodged the image quality bullet on that comment, but that's really the only recommendation they made on the 5700 Ultra.

    When the new article comes out and they do an image quality analysis, if their findings are similar to that of HardOCP and TomsHardware the conclusion will be something similar to "5700 Ultra still for solid frame rates and 9600 XT for solid frame rates *AND* image quality".

    BTW Derek I don't believe was even at the press event, that was Anand. Derek is the sole author of this article it seems and unlike Toms and HardOCP he didn't have any direct aide from other staff.
  • Anonymous User - Thursday, October 23, 2003 - link

    #13,

    No, we don't need to bitch at every AT review. But when the conclusion CONTRADICTS the very data he supplies us, then something is seriously wrong. Wouldn't you say?
  • Anonymous User - Thursday, October 23, 2003 - link

    #14, if anyone buys an expensive video card based on 1 review from 1 tech site, they deserve the problems that could come with it.
  • Anonymous User - Thursday, October 23, 2003 - link

    The review crowned a new midrange segment winner without dealing with image quality. What are they going to do, retract that later after their image tests? What about the people that bought the cards based on their review - and then they find out the cards have image quality problems?

    Other sites in the past when they discovered issues waited until they had done further testing before coming out with any review. Perhaps anandtech should have followed hardocp's lead, and instead of partying it up and brown-nosing at nvidia press events they should have been doing their image tests so they could put out a full review.
  • gordon151 - Thursday, October 23, 2003 - link

    Do we seriously need the comments crying for the author's head with *EVERY* review? They already said they were working on an article which will do a study on the image quality tests and will be posted later. This review clearly stresses the numbers and that's where they draw conclusions. Damn, give them a frigging break.
