H.264 Encoded HD Content: A Good Thing

Almost anything can be done in a faster, more compact, or higher quality way. Sometimes there are tradeoffs to be made, and sometimes one way of doing things is simply better than another. It has been quite some time since studios began distributing movies encoded in MPEG-2 on DVDs. Now that new physical media are entering the market, more efficient codecs are entering the playing field as well.

H.264 is another name for a subset of MPEG-4 called MPEG-4 Part 10, or AVC (for Advanced Video Coding). This codec is a big step beyond MPEG-2 in terms of how heavily video of a given quality can be compressed. There are quite a few factors that make H.264 a better vehicle for video, but these are a little beyond the scope of this article. For now we will focus on the impact of H.264 and why it's a better option than MPEG-2.

The major benefit of H.264 over MPEG-2 is the smaller file size its heavier compression produces: high resolution video can be stored in much less space. This is very useful because, even though BDs can hold 25 or 50 GB, high quality, high resolution video is not small. The more efficient the compression, the higher the quality of video that can fit on a disc. Alternatively, with heavier compression we also have extra room for the all-important bonus features and extra content that we expect with any good DVD today.

Higher image quality is also inherent in H.264 thanks to several improved features of the codec. Variable block size motion compensation, better handling of interlaced video, in-loop deblocking, and finer subpixel accuracy all contribute to better overall image quality. Alternatively, studios can use these image quality advantages to lower the bitrate even further, as compression artifacts don't show up as readily.

With all these advantages, there is one downside to H.264: decoding the video takes much more work than with MPEG-2. High-powered, dedicated H.264 decoding hardware is required in standalone BD and HD DVD players, as a generic processor just isn't enough to handle the workload. This is understandable, as there is a tradeoff between file size/bitrate and the amount of work a CPU must do to reproduce the video, and H.264 produces very small files.

The large file size vs. heavy compression tradeoff is actually fairly intuitive. Imagine completely uncompressed video where every pixel of every frame is stored in memory. The only thing we need to do to display the video is send the data to the TV. This requires almost no processing, but a very large file and very high bandwidth from the storage media. As a reference point, uncompressed 24-bit 1080p content at 24fps (the standard frame rate for movies) would require a whopping 1.19 Gbps of bandwidth, and a 90 minute movie would need about 750GB of storage. Obviously, some form of compression is absolutely required.
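Those raw figures fall out of simple arithmetic; here is a quick back-of-the-envelope sketch (note that the ~750GB figure comes out when counting a GB as 2^30 bytes):

```python
# Bandwidth and storage for uncompressed 24-bit 1080p video at 24fps
width, height = 1920, 1080
bytes_per_pixel = 3          # 24-bit color
fps = 24                     # standard film frame rate

bytes_per_second = width * height * bytes_per_pixel * fps
gbps = bytes_per_second * 8 / 1e9          # bandwidth in gigabits per second
movie_bytes = bytes_per_second * 90 * 60   # a 90 minute movie
gib = movie_bytes / 2**30                  # storage, counting GB as 2^30 bytes

print(f"{gbps:.2f} Gbps, {gib:.0f} GB")    # -> 1.19 Gbps, 751 GB
```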

When storing less data through compression, the CPU must do work to fill in the blanks before sending the video out to a display. With our previous Blu-ray test movie Click (which used MPEG-2), we saw bitrates of 50-60 Mbps throughout our test (somewhere between a 20:1 and 24:1 compression ratio). Moving to X-Men: The Last Stand, most of our test runs at about 20 Mbps (somewhere around a 60:1 compression ratio), though we do see a very short spike that hits over 40 Mbps. We would need to compare the same section of one movie encoded in both MPEG-2 and H.264 in order to speak directly to the differences between the two, but in general we see at least half the bitrate with H.264 that we get with MPEG-2. We also see much lower CPU utilization with MPEG-2, because it doesn't compress the video as heavily as H.264 does.
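These compression ratios are simply the uncompressed 1080p24 bitrate divided by the encoded bitrate; a quick sketch using the approximate bitrates from our tests:

```python
# Approximate compression ratio: uncompressed bitrate / encoded bitrate
uncompressed_mbps = 1920 * 1080 * 24 * 24 / 1e6   # ~1194 Mbps for raw 1080p24

def ratio(encoded_mbps):
    return uncompressed_mbps / encoded_mbps

print(f"{ratio(60):.0f}:1")   # MPEG-2 (Click) at 60 Mbps -> 20:1
print(f"{ratio(50):.0f}:1")   # MPEG-2 (Click) at 50 Mbps -> 24:1
print(f"{ratio(20):.0f}:1")   # H.264 (X-Men) at 20 Mbps  -> 60:1
```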

If we focus on our high compression codec, we'll see that higher bitrates with H.264 mean more work for the CPU. When complex scenes occur, more data is required to generate a proper image. The CPU still needs to process all this data in the same way it would with a less complex scene, and we end up seeing higher processor utilization.

The encoding process takes more work as well, and we've been told that this is part of the reason we haven't seen many H.264 BD movies before now. When getting a movie ready for sale, studios encode it many times and have people view every frame of video to make sure nothing needs to be cleaned up. Every time a problem is found, the entire movie must be encoded again. This takes significantly more time with H.264 than with MPEG-2. Fortunately, it seems that studios are making the sacrifices they need to make in order to bring a better experience to the end user.

To sum up, while MPEG-2 is relatively easy to decode, H.264 enables smaller files with better image quality. On the down side, the time it takes to encode a movie using H.264 is much higher than required for MPEG-2, and the processing power needed to decode H.264 without dropping frames can be very large. Without GPU acceleration, not even an Intel Core 2 Duo E6600 can play X-Men: The Last Stand without dropping frames.

Before we get to the test, we'll leave you with a short list of H.264 Blu-ray titles. While we don't have the bitrate information for all of these, we chose X-Men: The Last Stand because it listed 18 Mbps video (higher than some of the others) and has some fairly complex special effects.

Blu-ray H.264 Movies:
Behind Enemy Lines
The League of Extraordinary Gentlemen
X-Men: The Last Stand
Speed
Glory Road
Gone in 60 Seconds
Eight Below
The Great Raid



  • Tujan - Monday, December 11, 2006 - link


    So here's a Sony notebook. It probably uses less than 40 or 50 watts. Has an HDMI connector on it. And runs on a battery. No less.

    http://www.learningcenter.sony.us/assets/itpd/note...

    So what is my question here. This is a Centrino Core Duo for a notebook. With graphics enough to run using only battery power.

    As well, the notebook has a Blu-ray drive which can be written to, AND can play Blu-ray titles.

    Is this mostly in the licensing? How can it be, when the processors and graphics cards used are such absolute 'top notch' for the desktop, and the notebook puts the works of them to shame.

    Blu-ray, and HDMI, on battery power.

    This was one of AnandTech's ads. Incidentally - Hi AnandTech (ad-click), hi Sony.
  • cmdrdredd - Monday, December 11, 2006 - link

    I too wonder how a laptop can play Blu-ray fine but a $400+ video card with a CPU probably 2x+ more powerful and more memory...can't.
  • fanbanlo - Monday, December 11, 2006 - link

    most efficient software decoder! Maybe we don't need Core 2 Duo after all!

    http://www.coreavc.com/
  • DerekWilson - Monday, December 11, 2006 - link

    my understanding is that CoreAVC doesn't work in conjunction with HD DVD/BD -- that it doesn't support AACS.
  • totalcommand - Monday, December 11, 2006 - link

    Blu-ray support will be added to CoreAVC soon.
  • KashGarinn - Tuesday, December 12, 2006 - link

    When CoreAVC supports HD DVD and Blu-ray H.264, I'd be very interested in seeing this article updated with the comparison.

    Regarding the article itself, I thought it wasn't up to normal AnandTech standards.. skimping on the details of H.264 that make it better and giving the reason as "but these are a little beyond the scope of this article." - What is AnandTech coming to? That's like saying "we're going to compare graphics cards with DirectX 9 capabilities, but explaining what DirectX is, is a little beyond the scope of this article"

    Also, not comparing amd cpus? What's up with that?

    And I find it odd that you didn't comment on the strangeness that NVIDIA has better acceleration across the board than the ATI cards, especially as the ATI cards have better shader throughput, so they're most likely hampered by software rather than hardware.. so this: "ATI hardware is very consistent, but just doesn't improve performance as much as NVIDIA hardware." - only paints the incorrect picture.

    I would give this article a 2 out of 5.. 1 for at least covering the basics (H.264 is a better codec than MPEG-2) and 1 for showing that ATI needs to improve its decoder.. even though you don't point it out.

    K.
  • ninjit - Monday, December 11, 2006 - link

    I had a question about why you chose the golden-gate bridge scene to stress test the decoding capabilities of the various setups.

    You said that you chose that point in the movie because it had the highest bitrate (41Mbps), indicating a more complex scene.

    To me though that would indicate LESS encoding done by H.264, and subsequently LESS decoding work needed to be done for playback of that particular scene.

    I justify that by thinking with a very complex scene the codec cannot compress the stream as much because it would introduce too many artifacts, so the compression rate is dropped and the data rate increased to compensate for that particular section in time.

    Is my reasoning correct? If not, can someone explain to me why?

    I don't think choice of scene should change the graphs in terms of relative performance between setups, but it would affect absolute numbers - an easy way to check whether my thinking is wrong or not is to see if there are more dropped frames in the Golden Gate scene on the software-decoded E6600 vs. other less busy scenes.
  • DerekWilson - Monday, December 11, 2006 - link

    we tried to explain this a little bit, so I'm sorry if we didn't get it across well enough.

    I'm not an expert on H.264 by any means, but I can talk about other types of decoding as they relate to still images.

    The issue isn't really less compression -- when using H.264, we are always using H.264 complexity to encode the bitstream. We don't fall back to just saving raw pixel data if a scene is overly complex -- we encode more detailed information about the scene.

    For instance, with still images, run length encoding can achieve huge compression, especially in images with large blocks of identical colors (like logos or images on a solid background color). Basically, the idea is to list a color and then the number of pixels that use that color. For an image that is a single solid color, you could list the color and then the number of pixels in the image. This is a very small file with little processing requirement that represents a full image. If, on the other hand, we have a checkerboard pattern where every other pixel is a different color, we have to list the color of every pixel, BUT we also have to process every color to see how many consecutive pixels it represents (even if it only represents one). Thus, we end up doing more processing than we would on a smaller (lower "bitrate") file.

    This example is very fabricated, as sophisticated run length encoding can handle more complex patterns, but it serves to illustrate the point: when using a specific type of encoding, higher bitrates can (and usually do) mean more complexity and processing.
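    The solid-vs-checkerboard example above can be sketched in a few lines of Python (a toy illustration only; real codecs are far more sophisticated):

```python
# Toy run-length encoder: produces (value, run_length) pairs
def rle_encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1            # extend the current run
        else:
            runs.append([p, 1])         # start a new run
    return runs

solid = ["red"] * 8                     # large block of one color
checker = ["black", "white"] * 4        # worst case: every pixel differs

print(rle_encode(solid))                # a single run: [['red', 8]]
print(len(rle_encode(checker)))         # 8 runs -> more data AND more work
```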

    As we mentioned, using no encoding requires zero processing. MPEG-2 can compress the data to lower the bitrate while increasing computational complexity. But higher bitrate MPEG-2 means more data to process per frame -- which means more CPU overhead for higher bitrates under MPEG-2. The same is true with H.264 -- bitrates are generally lower than MPEG-2 and require more processing power, but as H.264 encoded movies use more bitrate (more data per frame), more processing is required.

    I hope this helps.

    Also, to clarify -- the spot in the video that reaches 41Mbps corresponds to the highest CPU utilization (we can see this on the perfmon timeline).
  • ninjit - Monday, December 11, 2006 - link

    Thanks for the explanation Derek. That was very helpful.
  • jeffbui - Monday, December 11, 2006 - link

    The PS3 is able to play Blu-Ray back at 50% over normal speed without dropping frames. That gives an idea of how much power these consoles are capable of.

    Some interesting tidbits from a translation of an article interviewing PS3 developers.

    -H.264 decoding itself was not very difficult for Cell with moderate optimization, and they could play a movie in realtime on the first try, unlike the very difficult SACD optimization. However, because they began development without knowing the final Blu-ray standard, they set the goal very high: decoding 2 full HD H.264 streams at 40Mbps simultaneously. Besides, the clockspeed of the devkit was lower than the final product, which made development difficult. The current decoder can decode full HD H.264 with 3 SPEs.

    -An SCE developer recommends trying 1.5x fast-forward playback in the PS3 BD player to see the power of Cell. When it's connected to a display via 1080/60p, it becomes very smooth, as Cell has enough margin for video decoding. In 1.5x fast-forward playback it decodes all frames, then inserts them into 60fps with sped up audio.
