The Test

As we previously indicated, we need at least a Core 2 Duo E6400 to avoid dropping frames while testing graphics card decode acceleration under X-Men: The Last Stand. Because we also wanted an accurate picture of how much GPU decode acceleration really helps, we needed a CPU powerful enough to avoid dropping frames even under the most stressful load without GPU assistance, so we chose the Core 2 Duo X6800 for our tests. Using this processor, we can more accurately see how each graphics card compares to the others and how much each is able to assist the CPU.

We measured CPU utilization by using perfmon to record data while we viewed a section of X-Men: The Last Stand. The bookmark feature really helped out, allowing us to jump straight to the specific scene we wanted to test in Chapter 18, in which the Golden Gate Bridge is torn apart while people run everywhere. This is one of the most stressful scenes in the movie, reaching a bitrate of over 41 Mbps at one point.

Unfortunately, we haven't found a feature in PowerDVD or any other utility that would let us count dropped frames. This means we can't really compare what happens to video quality when the CPU is running at 100%. Without a dropped-frame count, we will stick with CPU overhead as our performance metric.

For reference, we recorded average and maximum CPU overhead while playing back our benchmark clip with no GPU acceleration enabled.
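For readers who want to reproduce this kind of measurement, here is a minimal sketch of how a perfmon counter log can be reduced to average and maximum CPU utilization figures. It assumes the "% Processor Time" counter for _Total was exported to CSV (perfmon can log to this format) with the counter values in the second column; the file name and column position are illustrative assumptions, not a description of our actual test setup.

import csv

def summarize_cpu_log(path):
    # Parse a perfmon-style CSV export and collect the CPU utilization samples.
    # Assumption: column 0 is the timestamp, column 1 is "% Processor Time".
    samples = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row containing counter names
        for row in reader:
            try:
                samples.append(float(row[1]))
            except (IndexError, ValueError):
                continue  # skip blank or malformed rows
    if not samples:
        raise ValueError("no usable samples in " + path)
    return sum(samples) / len(samples), max(samples)

if __name__ == "__main__":
    # "cpu_log.csv" is a placeholder file name for the recorded counter log.
    avg, peak = summarize_cpu_log("cpu_log.csv")
    print("Average CPU utilization: %.1f%%" % avg)
    print("Maximum CPU utilization: %.1f%%" % peak)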

Here is the rest of our test system:

Performance Test Configuration
CPU: Intel Core 2 Duo X6800
Motherboard: ASUS P5B Deluxe
Chipset: Intel P965
Chipset Drivers: Intel 7.2.2.1007
Hard Disk: Seagate 7200.7 160GB SATA
Memory: Corsair XMS2 DDR2-800 4-4-4-12 (1GB x 2)
Video Cards: Various
Video Drivers: ATI Catalyst 6.11, NVIDIA ForceWare 93.71, NVIDIA ForceWare 97.02
Desktop Resolution: 1920x1080 32-bit @ 60Hz
OS: Windows XP Professional SP2


Comments

  • ss284 - Monday, December 11, 2006

    Should be, since memory bandwidth usually isn't a big factor in decode performance.
  • Eug - Monday, December 11, 2006

    Do you have a link for that?

    I was under the impression that for HD decoding with advanced video codecs, memory bandwidth was actually fairly important. However, I can't seem to find a link to support this (or to support the opposite).
  • Aikouka - Monday, December 11, 2006

    In the second trial, the 8800 GTX posts 10.9% higher CPU utilization than the 8800 GTS (the top performing card in this trial). Is there any reason for this? The article itself makes no mention of this anomaly.
  • Orbs - Monday, December 11, 2006

    I noticed this too. The GTS was better than the GTX and this was not explained.
  • DerekWilson - Monday, December 11, 2006

    The maximum CPU utilization is a little less consistent than average CPU utilization. That's one of the issues with using max CPU: these numbers are more for reference, and average should be used to determine the general performance of the hardware.