The Test

As we previously indicated, we need to use at least a Core 2 Duo E6400 to avoid dropping frames while testing graphics card decode acceleration under X-Men: The Last Stand. As we also wanted an accurate picture of how much GPU decode acceleration really helps, we needed a CPU powerful enough to avoid dropping frames even under the most stressful load without GPU assistance. Thus we chose the Core 2 Extreme X6800 for our tests. Using this processor, we can more accurately see how each graphics card compares to the others and how much each graphics card is able to assist the CPU.

We tested CPU utilization by using perfmon to record data while we viewed a section of X-Men: The Last Stand. The bookmark feature really helped out, allowing us to easily jump to the specific scene we wanted to test in Chapter 18. In this scene, the Golden Gate Bridge is being torn apart and people are running everywhere. This is one of the most stressful scenes in the movie, reaching a bitrate of over 41 Mbps at one point.
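
We relied on perfmon's counter logging for the raw data. As a rough sketch of the same idea, the Python script below samples total CPU utilization once per second during playback and writes it to a CSV file; psutil, the 120-second duration, and the output file name are illustrative choices for this sketch, not part of our actual test setup.

```python
# Rough stand-in for a perfmon counter log: sample total CPU
# utilization once per second while the benchmark clip plays.
import csv

import psutil  # third-party package: pip install psutil

SAMPLE_SECONDS = 120        # length of the benchmark scene (illustrative)
OUTPUT_FILE = "cpu_log.csv"  # hypothetical log file name

with open(OUTPUT_FILE, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["elapsed_s", "cpu_percent"])
    for elapsed in range(SAMPLE_SECONDS):
        # cpu_percent(interval=1) blocks for one second and returns the
        # average utilization across all cores over that second.
        writer.writerow([elapsed + 1, psutil.cpu_percent(interval=1)])
```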

Unfortunately, we haven't found a feature in PowerDVD or any other utility that will allow us to count dropped frames. This means we can't really compare what happens to video quality when the CPU is running at 100%. Without a dropped-frame count, we will stick with CPU overhead as our performance metric.

For reference, we recorded average and maximum CPU overhead while playing back our benchmark clip with no GPU acceleration enabled.
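
For readers who want to reduce such a log to the two numbers we report, a minimal sketch along these lines would do it (it assumes the CSV layout from the capture sketch above, which is our own convention rather than perfmon's):

```python
# Reduce a CPU utilization log to the average and peak values reported
# in the charts. Assumes a CSV with an "elapsed_s" column and a
# "cpu_percent" column, as written by the capture sketch above.
import csv

def summarize(path: str) -> tuple[float, float]:
    with open(path, newline="") as f:
        samples = [float(row["cpu_percent"]) for row in csv.DictReader(f)]
    return sum(samples) / len(samples), max(samples)

if __name__ == "__main__":
    average, peak = summarize("cpu_log.csv")
    print(f"Average CPU utilization: {average:.1f}%")
    print(f"Maximum CPU utilization: {peak:.1f}%")
```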

Here is the rest of our test system:

Performance Test Configuration
CPU: Intel Core 2 Extreme X6800
Motherboard(s): ASUS P5B Deluxe
Chipset(s): Intel P965
Chipset Drivers: Intel 7.2.2.1007 (Intel)
Hard Disk: Seagate 7200.7 160GB SATA
Memory: Corsair XMS2 DDR2-800 4-4-4-12 (1GB x 2)
Video Cards: Various
Video Drivers: ATI Catalyst 6.11
NVIDIA ForceWare 93.71
NVIDIA ForceWare 97.02
Desktop Resolution: 1920x1080 - 32-bit @ 60Hz
OS: Windows XP Professional SP2


Comments

  • DerekWilson - Monday, December 11, 2006 - link

    cool -- we'll have to investigate this.
  • liquidaim - Monday, December 11, 2006 - link

Did you use the 3D clocks for the ATI cards or the normal 2D clocks?

Just wondering if that was taken into account for the MPEG-2 tests previously and not here, which is why the ATI cards didn't perform as well.

    Not a fanboy, just asking for clarification.

  • DerekWilson - Monday, December 11, 2006 - link

    I don't believe you can specify what clock ATI uses when decoding video -- I think this is handled internally. It may be that the hardware that helps accelerate MPEG-2 the most is tied to clock, while the majority of what benefits H.264 is not. We'll have to dig further to really know.
  • pata2001 - Monday, December 11, 2006 - link

It was the same thing when MPEG-2 came out. Heck, even in the old days of 386s, PCs were too slow to decode MPEG-1 VCDs, to the point that we had separate MPEG-1 decoder cards. Remember when DVD came out, there was a big push for GPU-accelerated hardware iDCT. Today, most CPUs are powerful enough to decode MPEG-2 on their own. The same thing is happening again with MPEG-4. By the time 4-core/8-core CPUs become mainstream, we won't hear about the need for GPU acceleration as much anymore. And by that time, there will probably be a next next-gen HD format that is too demanding for the CPUs of that time; the cycle repeats.
  • DerekWilson - Monday, December 11, 2006 - link

MPEG-4 contains many advanced features not currently in use. We first saw MPEG-4 Part 2 in the form of DivX, but MPEG-4 Part 10 takes quite a bit more work. Some of the profiles and levels of H.264/AVC will be too much for quad-core CPUs to handle. These may not be adopted by studios for use on physical media, but the codec itself is very forward-looking.

    But in the end, you are correct -- the entire MPEG-4 spec will be a simple matter in a handful of years.

This is the case with everything though. Even if something will one day pose no trouble to computers, we can't ignore current performance. Studios must balance current performance with the flexibility to support the type of image quality they will want near the end of the life cycle of the BD and HDDVD formats.

    I always look forward to this kind of thing, and it's why I test hardware -- I want to know what my PC can currently do with what is out there.

    I suppose the "news" is that we've got something everyone wouldn't mind having that very few will be able to use for the time being.
  • Staples - Monday, December 11, 2006 - link

It is good news that MPEG-2 won't become the standard for BD. Until today, I figured all movies were in MPEG-2, and if that had become the standard and won the format war, we would be stuck with what could arguably give a worse picture than HDDVD using VC-1.

How do you know which movies are 50GB and/or H.264? Does it usually say on the box, or does the player tell you?
  • DerekWilson - Monday, December 11, 2006 - link

In our experience with Blu-ray, the format is listed on the box. HDDVDs have been a little more cryptic, and we have had to ask for help determining the format.

For our X-Men BD, the back of the case stated AVC @ 18 Mbps.

I don't think disc size has been listed on the case, and we've had to ask for this info from industry sources.
  • CrystalBay - Monday, December 11, 2006 - link

Are AMD X2s unable to work efficiently in these scenarios?
  • DerekWilson - Monday, December 11, 2006 - link

    AMD CPUs will very likely perform worse than Core 2 Duo CPUs.

    We are considering doing a CPU comparison.
  • Xajel - Monday, December 11, 2006 - link

It's logical that they'd be worse, but most users are using these processors, and they really want to know if their rigs can handle it...

It's not about AMD only; there are plenty of Pentium 4 and Pentium D chips in these rigs, and the Athlon XP still rocks in some...

What about a core scaling test? I mean:

    1- Single Core
    2- Single Core with Hyper Threading
    3- Two Cores
    4- Two Cores with Hyper Threading
    5- Four Cores

It will be hard to do this scaling test as the processors are not all from one architecture (1 to 4 are NetBurst, with the Pentium 4, Pentium D, and Pentium EE, while the last is the Core architecture).
