Under the Hood of G84

The quick and dirty summary of the changes is that G84 is a reduced-width G80 with a higher proportion of texture to shader hardware and a reworked PureVideo processing engine (dubbed VP2, as opposed to G80's VP1). Because there are fewer ROPs, fill rate and antialiasing capabilities are also reduced relative to G80. This is less of a concern on a mainstream card, where shader power won't be able to keep up with huge resolutions anyway.

We expect the target audience of the 8600 series to be running 1280x1024 panels. Of course, some people will be running larger panels, and we will test some higher resolutions to see what the hardware is capable of, but tests above 1600x1200 are somewhat academic. As 1080p TVs become more popular in the coming years, however, we may start putting pressure on graphics makers to target 1920x1200 as the standard resolution for mainstream parts, even if the average computer monitor weighs in with fewer pixels.

In order to achieve playable performance at 1280x1024 with good quality settings, NVIDIA has gone with 32 stream processors (SPs), 16 texture address units, and 8 ROPs. Here's the full breakdown:

GeForce 8600/8500 Hardware

|                              | GeForce 8600 GTS | GeForce 8600 GT  | GeForce 8500     |
|------------------------------|------------------|------------------|------------------|
| Stream Processors            | 32               | 32               | 16               |
| Texture Address / Filtering  | 16 / 16          | 16 / 16          | 8 / 8            |
| ROPs                         | 8                | 8                | 4                |
| Core Clock                   | 675 MHz          | 540 MHz          | 450 MHz          |
| Shader Clock                 | 1.45 GHz         | 1.19 GHz         | 900 MHz          |
| Memory Clock (Data Rate)     | 2 GHz            | 1.4 GHz          | 800 MHz          |
| Memory Bus Width             | 128-bit          | 128-bit          | 128-bit          |
| Frame Buffer                 | 256 MB           | 256 MB           | 256 MB / 512 MB  |
| Outputs                      | 2x dual-link DVI | 2x dual-link DVI | ?                |
| Transistor Count             | 289M             | 289M             | ?                |
| Price                        | $200 - $230      | $150 - $160      | $90 - $130       |

We'll tackle the 8500 in more depth when we have hardware; for now, we include its data as a reference. As for the 8600, right out of the gate, 32 SPs mean one third the clock-for-clock shader power of the 8800 GTS and its 96 SPs. At the same time, NVIDIA has increased the ratio of texture address units to SPs from 1:4 to 1:2, and we also see a 1:1 ratio of texture address to filter units. These changes prompted NVIDIA to further optimize their scheduling algorithms.
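For those who want to see the arithmetic, here is a quick back-of-the-envelope sketch of those ratios. The 8600 numbers come from the table above; the 8800 GTS figures (96 SPs, 24 texture address units) are the commonly quoted stock configuration:

```python
# Back-of-the-envelope unit ratios. The 8600 numbers come from the table
# above; the 8800 GTS figures (96 SPs, 24 texture address units) are the
# commonly quoted stock configuration.
sp_8600, ta_8600 = 32, 16
sp_8800gts, ta_8800gts = 96, 24

print(f"8600 SP count vs. 8800 GTS: {sp_8600 / sp_8800gts:.2f}x")   # ~0.33x, i.e. one third
print(f"8600 texture address : SP ratio = 1:{sp_8600 // ta_8600}")  # 1:2
print(f"8800 GTS texture address : SP ratio = 1:{sp_8800gts // ta_8800gts}")  # 1:4
```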

The combination of greater resource availability and improved scheduling allows for increased efficiency. In other words, clock for clock, G84 SPs are more efficient than G80 SPs, which makes it harder to compare performance based on specifications alone. Stencil culling performance has apparently been improved as well, which should help algorithms like the Doom 3 engine's shadowing technique. NVIDIA didn't give us any detail on how stencil culling was improved, only indicating that it was one of several things tweaked in the new hardware.

Top this off with the fact that G84 has also been tuned for higher clock speeds than G80, and we can expect much more work to be done by each SP per second than on 8800 hardware. Exactly how much more is something we don't have an easy way of measuring, as the efficiency gains will vary with the algorithms running on the hardware.
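We can at least put a rough bound on the raw numbers. The sketch below compares peak shader throughput (SP count times shader clock), assuming the 8800 GTS's stock 1.2 GHz shader clock; it deliberately ignores the per-clock efficiency gains just described, since those can't be reduced to a single number:

```python
# Peak shader throughput scales with SP count x shader clock.
# The per-clock efficiency improvements in G84 are NOT captured here.
raw_8600gts = 32 * 1.45   # 8600 GTS: 32 SPs at 1.45 GHz
raw_8800gts = 96 * 1.20   # 8800 GTS: 96 SPs at an assumed stock 1.2 GHz shader clock

print(f"8600 GTS peak shader throughput: {raw_8600gts / raw_8800gts:.0%} of an 8800 GTS")
# -> roughly 40%, before accounting for G84's efficiency tweaks
```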

With 256 MB of memory on a 128-bit bus, we can expect a little more memory pressure than on the 8800 series. The two 64-bit wide channels provide 40% of the bus width of an 8800 GTS. This isn't as cut down as the number of SPs; remember that the texture address units have only been reduced from 24 on the 8800 GTS to 16 on the 8600 series. Certainly the reduction from 20 ROPs to 8 will help cut down on memory traffic, but that extra texturing power won't be insignificant. While we don't have quantitative measurements, our impression is that memory bandwidth is more important in NVIDIA's more finely grained unified architecture than it was with the GeForce 7 series pipelined architecture. Sticking with a 128-bit memory interface for their mainstream part might work this time around, but depending on what we see from game developers over the next six months, this could easily change.
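To put rough numbers on the bandwidth gap, here is a minimal sketch. The 8600 GTS figures come from the table above, while the 8800 GTS numbers (320-bit bus, 1.6 GHz effective data rate) are the commonly listed stock specs:

```python
def bandwidth_gb_per_s(bus_width_bits: int, data_rate_ghz: float) -> float:
    """Peak memory bandwidth: bus width in bytes times effective data rate."""
    return (bus_width_bits / 8) * data_rate_ghz

bw_8600gts = bandwidth_gb_per_s(128, 2.0)  # 128-bit bus, 2 GHz effective (from the table)
bw_8800gts = bandwidth_gb_per_s(320, 1.6)  # 320-bit bus, 1.6 GHz effective (assumed stock)

print(f"8600 GTS: {bw_8600gts:.0f} GB/s vs. 8800 GTS: {bw_8800gts:.0f} GB/s "
      f"({bw_8600gts / bw_8800gts:.0%} of the bandwidth on 40% of the bus width)")
# -> 32 GB/s vs. 64 GB/s: the faster memory claws back some of the narrower bus
```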

Let's round out our architectural discussion with a nice block diagram for the 8600 series:

[Block diagram: GeForce 8600 series (G84) architecture]

We can see very clearly that this is a cut-down G80. As we have discussed, many of these blocks have been tweaked and enhanced to provide more efficient processing, but the fundamental function of each block remains the same, and the inside of each SP is unchanged as well. The supported features are also the same as on G80. For 8500 hardware, based on G86, we drop from two blocks each of shaders and ROPs down to one of each.

Two full dual-link DVI ports on a $150 card are a very nice addition. With the move from analog to digital displays, seeing a reduction in maximum resolution on budget parts because of single-link bandwidth limitations, while not devastating, isn't desirable. There are tradeoffs in moving from analog to digital display hardware, and now one more of those issues has a resolution. Now we just need to see display makers crank up pixel density and improve color space without sacrificing response time, and this old Sony GDM-F520 can finally rest in peace.
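As a rough illustration of where that single-link ceiling sits, here is a small sketch. The 165 MHz single-link TMDS limit comes from the DVI spec; the ~10% blanking overhead is an approximation of reduced-blanking timings, not an exact CVT calculation:

```python
# Rough check of when a resolution outgrows single-link DVI.
SINGLE_LINK_MHZ = 165        # single-link TMDS pixel clock ceiling (DVI spec)
BLANKING_OVERHEAD = 1.10     # approximate reduced-blanking overhead, not exact CVT math

def needs_dual_link(width: int, height: int, refresh_hz: int = 60) -> bool:
    pixel_clock_mhz = width * height * refresh_hz * BLANKING_OVERHEAD / 1e6
    return pixel_clock_mhz > SINGLE_LINK_MHZ

for width, height in [(1280, 1024), (1920, 1200), (2560, 1600)]:
    link = "dual-link" if needs_dual_link(width, height) else "single-link"
    print(f"{width}x{height} @ 60 Hz -> {link}")
# 1280x1024 and (reduced-blanking) 1920x1200 fit in single-link; 2560x1600 does not.
```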

On the video output front, G84 makes a major improvement over all other graphics cards on the market: G84 based hardware supporting HDCP will be capable of HDCP over dual-link connections. This is a major feature, as a handful of larger widescreen monitors, like Dell's 30", can only display 1920x1080 content over a dual-link connection. Unless both links are protected with HDCP, software players will refuse to play AACS protected HD content. NVIDIA has found a way around the problem by using one key ROM but sending the key over both links. The monitor is able to handle HDCP connections on both links and can display the video properly at the right resolution.

As for manufacturing, G84 is an 80 nm part, a half-node step down from G80's 90 nm process. While G80 is impressively huge at 681M transistors, G84 is "only" 289M transistors, putting it at nearly the same transistor count as G71 (7900 GTX). While performance of the 8600 series doesn't quite compare to the 7900 GTX, the 80 nm process makes smaller die sizes (and lower prices) possible.

In addition to all this, PureVideo has received a significant boost this time around.

Comments

  • JarredWalton - Tuesday, April 17, 2007 - link

    It's not surprising that G84 has some enhancements relative to G80. I mean, G80 was done six months ago. I'd expect VP2 is one of the areas they worked on improving a lot after comments post-8800 launch. Now, should they kill the current G80 and make a new G80 v1.1 with VP2? That's up for debate, but you can't whine that older hardware doesn't have newer features. "Why doesn't my Core 2 Duo support SSE4?" It's almost the same thing. I wouldn't be at all surprised to see a new high-end card from NVIDIA in the future with VP2, but when that will be... dunno.
  • harshw - Tuesday, April 17, 2007 - link

    quote:

    It is important to emphasize the fact that HDCP is supported over dual-link DVI, allowing 8600 and 8500 hardware to play HDCP protected content at its full resolution on any monitor capable of displaying 1920x1080


    So ... to confirm, the card *does* let you watch HDCP content on a Dell 3007WFP at 2560x1600 ? Of course, the card would probably scale the stream to the panel resolution ...
  • DerekWilson - Tuesday, April 17, 2007 - link

    The card will let you watch HDCP protected content at the content's native resolution -- 1920x1080 progressive at max ...

    Currently if you want to watch HDCP protected content on a Dell 30", you need to drop your screen resolution to 1280x800 and watch at that res -- the video is downscaled from 1920x1080. Higher resolutions on the panel require dual-link DVI, and now HDCP protected content over a dual-link connection is here.
  • AnnonymousCoward - Tuesday, April 17, 2007 - link

    Maybe I'm in the minority, but I don't care about this HDCP business. The players are still ultra expensive, and the resolution benefit doesn't really change how much I enjoy a movie. Also, a 30" screen is pretty small to be able to notice a difference between HD and DVD, if you're sitting at any typical movie-watching distance from the screen. Well, I would guess so at least.
  • Spoelie - Wednesday, April 18, 2007 - link

    We're talking about 30" LCD monitors with humongous resolutions, not old 30" LCD TVs with 1366x768 or something.

    Or do you really not see any difference between
    http://www.imagehosting.com/out.php/i433150_BasicR... and http://www.imagehosting.com/out.php/i433192_HDDVD....
    or
    http://www.imagehosting.com/out.php/i433157_BasicR... and http://www.imagehosting.com/out.php/i433198_HDDVD....
  • Myrandex - Tuesday, April 17, 2007 - link

    I loved how the two 8600 cards listed only 256MB of memory while the 8500 card showed 256MB / 512MB. Gotta love marketing, attempting to grab the masses' attention by throwing more RAM into a situation where it doesn't really help...
    Jason
  • KhoiFather - Tuesday, April 17, 2007 - link

    Horrible, horrible performance. I'm so disappointed it's not even funny! I'm so waiting for ATI to release their mid-range cards and blow Nvidia out of the water and into space.
  • jay401 - Tuesday, April 17, 2007 - link

    quote:

    We haven't done any Windows Vista testing this time around, as we still care about maximum performance and testing in the environment most people will be using their hardware. This is not to say that we are ignoring Vista: we will be looking into DX10 benchmarks in the very near future. Right now, there is just no reason to move our testing to a new platform.


    Very true, and not only because the vast majority of gamers are still running XP, but also because no games out to this point gain anything from DX10/Vista (aside from one or two that add a few graphical tweaks here and there in DX10).

    When there are enough popular, well-reviewed DX10/Vista focused games available that demonstrate appreciable performance improvement when running in that environment, such that you can create a test suite around those games, then it would be time to transition to that sort of test setup for GPUs.
  • Griswold - Tuesday, April 17, 2007 - link

    The real reason would be that nobody wants to go through the nightmare of dealing with NVIDIA drivers under Vista. ;)
  • jay401 - Tuesday, April 17, 2007 - link

    Derek you should add the specs of the 8800GTS 320MB to the spec chart on page 2, unless of course NVidia forbids you to do that because it would make it too obvious how they've cut too many stream processors and too much bus size from these new cards.

    Now what they'll do is end the production of the 7950GTs to ensure folks can't continue to pick them up cheaper and will be forced to move to the 8600GTS that doesn't yet offer superior performance.

    gg neutering these cards so much that they lose to your own previous generation hardware, NVidia.
