Integrated Graphics

Beginning now, all new NVIDIA chipsets will ship with integrated graphics (which NVIDIA now calls the mGPU), regardless of the market segment they target. It's a particularly bold move by NVIDIA, but a much appreciated one given that the mGPU in all of its chipsets will receive PureVideo HD and can thus fully accelerate H.264, MPEG-2, and VC-1 decode streams.

While it's unlikely that many would purchase a high-end motherboard based on the NVIDIA nForce 780a SLI chipset and simply use its integrated graphics, the mGPU in the 780a is the same GPU used in the 750a, 730a, 720a, and GeForce 8200 based motherboards, so the discussion here is far-reaching.

AMD 780G vs. NVIDIA 780a Graphics Architecture

AMD has built the superior integrated graphics part this time around, both from a technical standpoint and in terms of realized performance. It isn't that AMD went much further than NVIDIA in engineering something great: it simply selected a higher performance core to integrate into its chipset than NVIDIA did.

Neither AMD nor NVIDIA told us exactly how they built their interface to the system bus and system memory, but the lack of a local framebuffer does mean that communication with system memory must be as fast and as low latency as possible. In both cases, the discrete GPU from which the integrated part is derived uses a 64-bit connection to local memory, while system memory offers a 128-bit bus; these parts make use of that wider bus to help compensate for the increased latency of system memory. Increasing local (on-die) cache would also help here, but since IGP solutions are built as cheaply as possible, it doesn't seem likely that we've got loads more cache to play with.

We used 3DMark's single texture test to try to get an idea of memory bandwidth. The test largely removes computation overhead and ends up simply pulling in as much data as possible as fast as possible and throwing it up on the screen. The result in MTexels/s shows that NVIDIA has a bit of an advantage here, but the gap isn't huge. This means that performance differences will likely come down to compute power rather than bandwidth.

3DMark '06 Single Texture Fillrate
  AMD 780G: 910.6 MTexels/s
  NVIDIA nForce 780a: 983.4 MTexels/s


Past here, NVIDIA's and AMD's integrated hardware diverge. AMD's solution is based on the RV610 graphics core; in fact, it is an RV610 core shrunk to 55nm and integrated into the Northbridge. This means we get eight 5-wide blocks of shader processors (SPs -- 40 total). In the very worst case, we get 8 shader ops per clock (which isn't likely to happen in any real situation). Compare this to NVIDIA's G86-based 8 SP offering with a maximum of 8 shader ops per clock and we see quite a difference emerge. AMD's IGP can handle 8 vector instructions per clock and then some, while similar code could run at 2 instructions per clock on NVIDIA hardware.

Of course, this difference isn't as devastating to NVIDIA as one might think at first blush. We must remember that NVIDIA cranks its shader clock up to ridiculous speeds, while AMD's shaders all run at core clock speed. With AMD and NVIDIA core clocks both coming in at 500MHz, NVIDIA's shader core runs at 1200MHz. In spite of the fact that AMD's part can do more operations per clock (probably averaging out to somewhere between 3x and 4x; it heavily depends on the application), NVIDIA is able to do 2.4x as many clocks per second, which closes the gap a bit.
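To put rough numbers on the clock-versus-width tradeoff described above, here is a back-of-the-envelope sketch using the figures quoted in this article (500MHz AMD core clock with 40 SPs best case, 1200MHz NVIDIA shader clock with 8 SPs). These are peak per-clock instruction counts, not a formal FLOPS rating, and real-world throughput depends heavily on the instruction mix:

```python
# Back-of-the-envelope peak shader throughput, using the article's figures.

amd_core_clock_mhz = 500      # AMD 780G: shaders run at core clock
amd_ops_per_clock = 40        # eight 5-wide SP blocks, best case

nv_shader_clock_mhz = 1200    # NVIDIA 780a: separate, faster shader clock
nv_ops_per_clock = 8          # 8 scalar SPs

amd_peak = amd_core_clock_mhz * 1e6 * amd_ops_per_clock   # ops/sec
nv_peak = nv_shader_clock_mhz * 1e6 * nv_ops_per_clock    # ops/sec

print(f"AMD peak:    {amd_peak / 1e9:.1f} billion ops/sec")
print(f"NVIDIA peak: {nv_peak / 1e9:.1f} billion ops/sec")
print(f"NVIDIA clock advantage: {nv_shader_clock_mhz / amd_core_clock_mhz:.1f}x")
```

Even at 2.4x the clock, NVIDIA's peak is roughly half of AMD's; only when shader code averages well under the full 5-wide utilization does the gap narrow to the 20-50% range the article suggests.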

The only discrete part with 8 SPs is the GeForce 8300, which is OEM only. As of this writing, NVIDIA has not confirmed any details with us other than core and shader speeds and the number of SPs in the part. They have stated that their integrated hardware is similar to the 8400/8500 in order to maximize the benefit of Hybrid SLI, so it's possible the numbers of texture and ROP units are 8 each. Of course, if half the number of SPs is "similar" to the 8400 and 8500 parts, we can't really be sure until NVIDIA confirms the details. We do know that AMD's hardware has 4 texture and 4 render output units since it is RV610. With so few SPs, and the competition sticking with a 4/4 texture/render configuration, we suspect that this is what NVIDIA has done as well.

What is clear is that either way, AMD's hardware is more robust than NVIDIA's offering. Our performance tests reflect this, as we will soon show.

38 Comments

  • wjl - Wednesday, May 7, 2008 - link

    I tried a Wolfdale 2.6GHz (E8200) with Intel's G35, and it's an improvement already - though for "serious" HTPC usage, I would probably wait for the G45, which should be out this summer.

    Sure, Intel chipsets are not flawless, and neither are their drivers. But Intel and AMD are moving in the right direction, and I wish this would be honoured more when comparison tests like the one here are performed.

    The world isn't only Windows and only gamers - wake up, guys. Take the Phoronix test suite if you have to compare and show numbers. I think even this test suite is GPL'ed, so...

    Anyway: the ATI/AMD 690G (RS690) will now work with 3D using only open source drivers - and it's news like this that is really important for the rest of us, not which newest chipset has a few frames per second more or less, which is really ONLY interesting for first-person shooters.
  • Natfly - Tuesday, May 6, 2008 - link

    quote:

    HyperTransport 3.0 capability (5.2GT/s+ interface) is included and is important in getting the most out of the 780a graphics core. With a Phenom onboard, the 780a will perform post-processing on high-definition content and it makes a difference in image quality and fluidity during 1080p playback.


    How important is HT3 for the IGP? Is 1080p content watchable without it?

    Also, is there an equivalent to AMD's sideport memory that may show up in some 780a/8200 boards?
  • derek85 - Tuesday, May 6, 2008 - link

    HT3 is most important when you watch interlaced content (1080i), because the extra HD HQV features require a lot more bandwidth than normal 1080p. Theoretically 1080p should be watchable without HT3, but this largely depends on the K8 model you get.

    I'm not sure about a sideport equivalent from NVIDIA; I haven't heard anything related to it, and I highly doubt they will be able to come up with one, because that requires modification of their existing blocks, which they probably won't bother to spend the time on. If you really want that, just get an AMD board ;)
  • Natfly - Tuesday, May 6, 2008 - link

    Well I was planning on getting a 4850e and have been recently trying to decide between the 780G and 8200. I'd like to get the best IGP performance and also have RAID5 w/out using any extra cards, but that seems impossible at this point. Maybe a manufacturer will pair up 780G with SB750 when it gets released.
  • derek85 - Thursday, May 8, 2008 - link

    If you want to max out 3D performance, HT3 is the way to go. HT1 can provide a maximum of 8GB/s of bandwidth, while HT3 at 1800MHz can provide 14.4GB/s (dual channel DDR2-800 is 12.8GB/s). The actual improvement reflected in benchmarks such as 3DMark06 is quite significant (>20%), but nonetheless it is still an IGP, so whether you would like to invest more into it is totally up to you.
  • Von Matrices - Tuesday, May 6, 2008 - link

    Is my PC at fault or does anyone else notice the horrible compression of the charts on page 6?
  • JarredWalton - Tuesday, May 6, 2008 - link

    Fixed... Gary changed the chart sizes but didn't update the HTML (where a smaller width and height was hard-coded). Shame on him. I have had him flogged with a Cat-o-nine-SATA-cords.
  • Mgz - Tuesday, May 6, 2008 - link

    in page 4 you have a little typo "we can't really be sure until NVIDI confirms the details"
  • homerdog - Tuesday, May 6, 2008 - link

    I appreciate the effort by Nvidia to reduce idle power consumption, but I would much rather see a discrete GPU that doesn't draw so much power when idling in the first place. ATI has been making significant strides in this department lately with PowerPlay, and EVERY motherboard/configuration benefits. Having two GPUs with redundant framebuffers is going around your elbow to get to your ******* if you ask me.
  • ChrisRay - Tuesday, May 6, 2008 - link

    HomerDog, I'm not sure I entirely understand your problem with Hybrid Power. It's basically a technology that lets you shut off your discrete GPUs completely. No amount of power saving tech is going to have that measure of impact ((or system noise impact)).

    You're right that every motherboard benefits from power saving tech on discrete GPUs. But the difference in power saving by using a feature like Hybrid Power is huge compared to any idle technology existing on GPUs. Browsing from my desktop with Hybrid Power enabled and Quad SLI 9800GX2s, my average room temp went down 4-5C after 2 hours of web activity with Hybrid Power enabled. That's significant.

    SLIZONE Forum Admin.
    Nvidia User Group
