Power Consumption

A handful of execution engines in a $150 graphics card can be faster than even some of the most powerful desktop microprocessors because they use specialized logic designed specifically for the task at hand. NVIDIA took this approach to an even greater degree by effectively making its BSP engine useful for exactly one thing: CAVLC/CABAC bitstream decoding for H.264 encoded content. Needless to say, NVIDIA's approach is not only faster than the general purpose microprocessor approach, but it should also be more power efficient.

To measure the improvement in power efficiency, we outfitted our test bed with a GeForce 8600 GT and ran the Yozakura benchmark with hardware acceleration enabled and disabled. With it enabled, the 8600 GT handles 100% of the H.264 decode process; with it disabled, the host CPU (an Intel Core 2 Duo E6320) is responsible for decoding the video stream. We measured total system power consumption at the wall outlet and report the average and maximum values in watts.
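For those who want to reproduce this kind of measurement, the aggregation itself is trivial; here is a minimal sketch in Python, assuming one wall-outlet reading per second (the sample values and helper name are illustrative, not our actual tooling):

```python
# Hypothetical helper: summarize wall-outlet power readings (in watts)
# logged once per second over a benchmark run. Illustrative only; the
# readings below are made-up values, not measured data.

def summarize_power(samples):
    """Return (average, maximum) of a list of power readings in watts."""
    if not samples:
        raise ValueError("no samples recorded")
    return sum(samples) / len(samples), max(samples)

readings = [123.9, 124.5, 125.2, 124.8, 125.6, 124.1]  # example samples
avg_w, max_w = summarize_power(readings)
print(f"average: {avg_w:.1f}W, max: {max_w:.1f}W")
```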

[Chart: Power Consumption, average system power in watts]

At idle, our test bed consumed 112W; when decoding the most stressful H.264 encoded HD-DVD we've got, average power jumped to 124.8W. Relying on the CPU alone to handle the decoding required 8% more power, bringing the average system power usage up to 135.1W.
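The 8% figure follows directly from the two averages; a quick check of the arithmetic:

```python
# Sanity check on the quoted averages: CPU-only decode vs. GPU-assisted.
gpu_avg_w = 124.8  # average system power with the 8600 GT decoding
cpu_avg_w = 135.1  # average system power with the E6320 decoding

increase = (cpu_avg_w - gpu_avg_w) / gpu_avg_w * 100
print(f"CPU-only decode draws {increase:.1f}% more power")  # prints ~8.3%
```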

[Chart: Power Consumption, maximum system power in watts]

Surprisingly enough, the difference in power consumption isn't as great as we'd expect. System performance is obviously a completely different story, as the 8600's hardware acceleration makes multitasking while watching H.264 content actually feasible, but these numbers show the strength of Intel's 65nm manufacturing process. We do wonder what the power consumption difference would look like if a CPU manufacturer were able to produce a CPU and a GPU on the very same process. With AMD's acquisition of ATI, we may very well know the answer to that question in the coming years.

Comments

  • kilkennycat - Friday, April 27, 2007 - link

    Er, its successor family (89xx??, G9x??), which is now well into design, most likely will. Expected out before the end of 2007. Double-precision math etc. for the dual role of GPU and general-purpose parallel computation. Maybe with VC1 hardware decode for the little extra icing...
  • DigitalFreak - Friday, April 27, 2007 - link

    LOL. ...and you know this how?
  • Griswold - Saturday, April 28, 2007 - link

    He doesn't know, he's just guessing/wishful thinking. :p
  • Cascavel - Friday, April 27, 2007 - link

    Likewise, impressed. I think one of these will be going in my HTPC.

    And thanks for the review, guys; it's the first I have seen on these cards that covered this topic.
