HD HQV Image Quality Analysis

We have already explored Silicon Optix HD HQV in detail. The tests and what we are looking for in them have not changed since our first round. Fortunately, the ability of NVIDIA and AMD hardware to actually perform the tasks required of HD HQV has changed quite a bit.

Both AMD and NVIDIA told us to expect scores of 100 out of 100 using their latest drivers and hardware. We spent quite a bit of time and effort in fully evaluating this test. We feel that we have judged the performance of these solutions fairly and accurately despite the fact that some subjectivity is involved. Here's what we've come up with.

Silicon Optix HD HQV Scores

Card                     Noise Reduction  Video Res Loss  Jaggies  Film Res Loss  Stadium  Total
AMD Radeon HD 2900 XT                 15              20       20             25       10     90
AMD Radeon HD 2600 XT                 15              20       20             25       10     90
AMD Radeon HD 2600 Pro                15              20       20             25       10     90
AMD Radeon HD 2400 XT                  0              20        0             25       10     55
NVIDIA GeForce 8800 GTX               25              20       20             25       10    100
NVIDIA GeForce 8600 GTS               25              20       20             25       10    100
NVIDIA GeForce 8600 GT                25              20       20             25       10    100
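As a quick sanity check, each Total in the table is simply the sum of the five category scores (Silicon Optix weights the categories at 25, 20, 20, 25, and 10 points, for a 100-point maximum). A minimal sketch recomputing the totals:

```python
# HD HQV category scores per card, in table order:
# (Noise Reduction /25, Video Res Loss /20, Jaggies /20, Film Res Loss /25, Stadium /10)
scores = {
    "AMD Radeon HD 2900 XT":   (15, 20, 20, 25, 10),
    "AMD Radeon HD 2600 XT":   (15, 20, 20, 25, 10),
    "AMD Radeon HD 2600 Pro":  (15, 20, 20, 25, 10),
    "AMD Radeon HD 2400 XT":   (0, 20, 0, 25, 10),
    "NVIDIA GeForce 8800 GTX": (25, 20, 20, 25, 10),
    "NVIDIA GeForce 8600 GTS": (25, 20, 20, 25, 10),
    "NVIDIA GeForce 8600 GT":  (25, 20, 20, 25, 10),
}

# Total is just the sum of the five category scores.
totals = {card: sum(parts) for card, parts in scores.items()}

for card, total in totals.items():
    print(f"{card}: {total}/100")
```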


The bottom line is that NVIDIA comes out on top in terms of quality. We've seen arguments for scoring these cards differently, but we feel that this is the most accurate representation of the capabilities offered by each camp.

On the low end, both AMD and NVIDIA hardware begin to stumble in terms of quality. The HD 2400 XT posts quite a lackluster performance, failing both noise reduction and HD deinterlacing (jaggies). At least it deinterlaces video at full resolution, even if it does so poorly. We excluded tests of NVIDIA's 8500 series, as NVIDIA's video drivers have not yet been optimized for its low end hardware. Even so, we have been given indications not to expect the level of performance we see from the 8600 series. We would guess that the 8500 series will perform on par with the AMD HD 2400 series, though we will really have to wait and see until NVIDIA releases a driver for it.

With video decode handled by a separate block of dedicated logic and post processing handled by the shader hardware, it's clear that the horrendous 3D performance of low end parts bleeds through into their video processing capability as well. This is quite disturbing, as it removes quite a bit of potential value from low cost cards that include video decode hardware.

Both AMD and NVIDIA perform flawlessly and identically in every test but the noise reduction test. AMD uses an adaptive noise reduction algorithm that the user cannot disable or adjust in any way. NVIDIA, on the other hand, provides an adjustable noise reduction filter. In general, we prefer having the ability to adjust and tweak our settings, but the mere presence of adjustability is irrelevant to HQV scores; only the resulting image quality counts.

The major issue that resulted in our scoring AMD down in noise reduction was that noise was not reduced significantly enough to match what we expected. In addition to the tests, Silicon Optix provides a visual explanation of the features tested, including noise reduction. They show a side by side video of a yellow flower (a different flower than the one presented in the actual noise reduction test). The comparison shows a noisy video on the left and a video with proper noise reduction applied on the right. The bottom line is that there is almost no noise at all in the video on the right.

During the test, although noise is reduced using AMD hardware, it is not reduced to the level of expectation set by the visual explanation of the test. Based on this assessment, we feel that AMD noise reduction deserves a score of 15 out of 25. Silicon Optix explains a score of 15 as: "The level of noise is reduced somewhat and detail is preserved." In order to achieve a higher score, we expect the noise to be reduced to the point where we do not notice any "sparkling" effect in the background of the image at all.

By contrast, with NVIDIA, setting the noise reduction slider anywhere between 51% and 75% gave us a higher degree of noise reduction than AMD with zero quality loss. At 75% and above we noticed no noise in the image at all, and detail loss only appeared when the slider was pushed very high. Tests done with the slider at 100% show some detail loss, but there is no reason to crank it up that high unless your HD source is incredibly noisy (which will not likely be the case). In addition, at such high levels of noise reduction, we noticed banding and artifacts in some cases. This was especially apparent in the giant space battle near the end of Serenity; computer generated special effects seemed to suffer from this issue more than other aspects of the video.

While we would ideally like to see artifacts avoided at all costs, NVIDIA has provided a solution that offers much more flexibility than its competition. With a little experimentation, a higher quality experience can be delivered on NVIDIA hardware than on AMD hardware. In fact, because NVIDIA leaves noise reduction off by default, we feel that the overall experience provided to consumers will be higher.


63 Comments


  • TA152H - Monday, July 23, 2007 - link

    Just my opinion, but I would save money on PowerDVD if you are buying ATI and just use theirs. PowerDVD is not cheap, and I personally do not like it as much, but I am sure others do. He has to use it, of course, because how else would he be able to test NVIDIA and ATI on the same software? But it's not a trivial expense, and the ATI software works well enough that it seems, to me, an unnecessary expense. You might be happier spending that money on hardware instead of PowerDVD. Again, all this assumes an ATI card purchase.
  • phusg - Monday, July 23, 2007 - link

    Good questions. From what I've seen the 2600 Pro is the least power hungry card at under 50W. Any chance you could shed some light, Derek?
  • TA152H - Monday, July 23, 2007 - link

    Choosing a Pentium 4 560 is a really strange choice; do you think there are a lot of them out there with PCI-E waiting to upgrade to one of these cards? It's a minor point, but I think a Pentium D 805 would have been an excellent choice, since a lot of people bought those, many on PCI-E based motherboards, and it would be a much more interesting data point.

    My next point is the expectation of the 2900 XT. I totally disagree that this is something they needed to add, because what they are saying is absolutely true. Someone who buys this item will almost certainly pair it with a very capable CPU. Since high end processors are dual cores, it is not as if you can't do something else while the CPU is assisting with decode. It's not free: you pay for it in cost, in power use, and in heat, and it's going to be wasted the vast majority of the time. Considering the power use of the 2900 is appalling already, adding to it is highly undesirable given the very questionable usefulness.

    I think they should be congratulated for using intelligent feature targeting for their products, rather than bloating a product with useless features and making people pay for it.
  • johnsonx - Tuesday, July 24, 2007 - link

    Clearly, the point was to get a single-core point of reference. While admittedly that exact CPU would be a slightly rare case, it's a simple matter to benchmark it since it fits the same 775 mainboard as the two Core2 chips. A PD805 wouldn't be much use to compare, as it would simply be a bit slower than the E4300... so what? The P4 560 makes a reasonable proxy for the variety of good performing single-core P4's and Athlon64's out there, while the E4300 stands in for all the X2's.
  • TA152H - Tuesday, July 24, 2007 - link

    Are you crazy?

    The Pentium D 805 is a very popular chip and widely used, and represents an entirely different architecture. It would be an extremely valid data point because it's a popular item. It's not "a little slower", it has completely different performance characteristics.

    A Pentium 560 owner will probably never buy this card, and many of these owners are not even on a PCI-E platform. I wouldn't even have had a problem if they had used a single-core Sempron, but a Pentium 560 makes no sense at all. People are still buying the 805, in fact; don't you think the idea of popping one of these cards in with an 805, while waiting for Penryn to come out, is something people consider? Or a similar Pentium D? Except they'll not know how it performs. Luckily, though, they'll know how the Pentium 560 performs, because, I'm sure, that is their next choice.

  • 100proof - Monday, July 23, 2007 - link

    Derek,

    Seeing as this is an article concerning media decoding with an emphasis towards HD media playback, shouldn't Anandtech be applying some pressure on Nvidia to support open drivers for linux? mythTV and XBMC are promising HTPC options, perfectly suited towards this test scenario.

    Why should h.264 offloading be exclusive to users of Microsoft operating systems?
  • 100proof - Monday, July 23, 2007 - link

    This complaint applies to ATI/AMD as well.
  • erwos - Monday, July 23, 2007 - link

    Linux doesn't have a framework to support H.264 or VC-1 acceleration yet. When that happens, I would expect the binary drivers to catch up fairly quickly.
  • smitty3268 - Monday, July 23, 2007 - link

    Actually, it does. The problem is that it is open source, while the MS equivalent is closed. ATI/NVIDIA don't want to share their specs in an open manner and never came up with a suitable API to make public.
  • wien - Monday, July 23, 2007 - link

    Well, gstreamer allows for closed source plug-ins since it's licensed under LGPL. Fluendo has already implemented a lot of proprietary (patented) codecs in gstreamer. With the required features exposed through the driver, it shouldn't be too hard for the IHVs to do the same with hardware accelerated H.264/VC-1.

    It's probably not worth their time yet though...
