New Features You Say? UVD and DirectX 10.1

As we mentioned, new to RV670 are UVD, PowerPlay, and DX10.1 hardware. We've covered UVD quite a bit in the past, and we are happy to learn that UVD is now part of AMD's top to bottom product line. To recap, UVD is AMD's video decode engine, which handles decode, deinterlacing, and post-processing for video playback. The key features of UVD are full decode support for both VC-1 and H.264. MPEG-2 decode is also supported, but the entropy decode step for MPEG-2 video is not performed in hardware. The advantage over NVIDIA hardware is the inclusion of entropy decode support for VC-1 video, but this tends to be overplayed by AMD. VC-1 is lighter weight than H.264, and the entropy decode step for VC-1 doesn't make or break playability even on lower end CPUs.

DirectX 10.1 is basically a point release of DirectX that clarifies some functionality and adds a few features. Both AMD's and NVIDIA's DX10 hardware support some of the DX10.1 requirements, but since neither supports everything, neither can claim DX10.1 as a feature. And because there are no capability bits, game developers can't rely on any of the DX10.1 features being implemented in DX10 hardware.

It's good to see AMD embracing DX10.1 so quickly, as it will eventually be the way of the world. The new capabilities DX10.1 enables include enhanced developer control of AA sample patterns and pixel coverage, blend modes that can be unique per render target, doubled vertex shader inputs, required fp32 filtering, and support for cube map arrays, which can help make global illumination algorithms faster. These features might not make it into games very quickly, as we're still waiting for games that really push DX10 as it is now. But AMD is absolutely leading NVIDIA in this area.

Better Power Management

As for PowerPlay, which is usually found in mobile GPUs, AMD has opted to include broader power management support in its desktop GPUs as well. While they aren't able to wholly turn off parts of the chip, clock gating is used, along with dynamic adjustment of core and memory clock speeds and voltages. The command buffer is monitored to determine when power saving features need to be applied. This means that when applications need the power of the GPU it will run at full speed, but when less is going on (or even when something is CPU limited) we should see power, noise, and heat characteristics improve.
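Conceptually, this kind of command-buffer-driven power management works like a simple governor loop: watch how full the GPU's command queue is and pick clocks and voltages accordingly. A minimal sketch in Python follows; the thresholds, clock speeds, and voltage states here are purely illustrative assumptions, not AMD's actual PowerPlay values.

```python
# Illustrative sketch of command-buffer-driven power management (DVFS).
# All performance states and thresholds below are hypothetical examples,
# not AMD's real PowerPlay tables.

PERF_STATES = [
    {"name": "low",  "core_mhz": 300, "mem_mhz": 400,  "volts": 0.90},
    {"name": "mid",  "core_mhz": 500, "mem_mhz": 700,  "volts": 1.05},
    {"name": "high", "core_mhz": 775, "mem_mhz": 1125, "volts": 1.20},
]

def select_state(queue_occupancy):
    """Pick a performance state from command-queue occupancy (0.0 to 1.0).

    A busy queue means the GPU is the bottleneck, so run at full clocks;
    a near-empty queue means idle or CPU-limited, so clock down.
    """
    if queue_occupancy > 0.75:      # GPU-bound workload
        return PERF_STATES[2]
    elif queue_occupancy > 0.25:    # moderate load
        return PERF_STATES[1]
    else:                           # idle desktop or CPU-limited app
        return PERF_STATES[0]
```

The key point this illustrates is that the decision depends only on how much work is queued, not on whether an application is fullscreen, which is why windowed games and GPGPU apps that keep the queue full would get full clocks under such a scheme.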

One of the cool side effects of PowerPlay is that clock speeds are no longer determined by application state. On previous hardware, 3D clock speeds were only enabled when a fullscreen 3D application started, which meant GPU computing software (like Folding@home) only ran at 2D clock speeds. Since these programs will no doubt fill the command queue, they will now get full performance from the GPU. This also means that games run in a window will perform better, which should be good news to MMO players everywhere.

But like we said, shipping 55nm parts less than a year after the first 65nm hardware is a fairly aggressive schedule, one of the major benefits of the 3800 series, and an enabler of the kind of performance this hardware is able to deliver. We asked AMD about their experience with the transition from 65nm to 55nm, and their reply was something along the lines of: "we hate to use the word flawless... but we're running on first silicon." Moving this fast seems to have surprised even AMD, but it's great when things fall in line. This terrific execution has put AMD back on even footing with NVIDIA in terms of release schedule and performance segment. Coming back from the R600 delay to hit the market in time to compete with the 8800 GT is huge, and we can't stress it enough. To spoil the surprise a bit, AMD did not outperform the 8800 GT, but this schedule puts AMD back in the game. Top performance is secondary at this point to solid execution, great pricing, and high availability. Good price/performance and a higher level of competition with NVIDIA than R600 delivered will go a long way toward reestablishing AMD's position in the graphics market.

Keeping in mind that this is an RV GPU, we can expect AMD to have been working on a new R series part in conjunction with this. It remains to be seen what this part will actually be (and whether it will materialize at all), but hopefully we can expect something that will put AMD back in the fight for a high end graphics part.

Right now, all that AMD has confirmed is a single slot dual GPU 3800 series part slated for next year, which makes us a little nervous about the prospect of a solid high end single GPU product. But we'll have to wait and see what's in store for us when we get there.


  • yacoub - Thursday, November 15, 2007 - link

    I really really like the new style to the charts and graphs. Everything is very easy to read and understand! Much improved over some older review designs! =)

    Also, lol @ how pathetic the 8600GT performs! :D
  • Iger - Thursday, November 15, 2007 - link

    Actually, in terms of power consumption I would call this round a win for AMD. My home PC is on 24/7, but I really get to play on it for maybe a couple of hours a day at best (actually, probably, much less). AMD leads idle consumption by 40W, while losing the load power by 5W. I think for pretty much everyone the 3870 will turn out cheaper than the 8800GT. And I think it's important enough to be mentioned in the article (no offence - just trying to be helpful).

    About prices - currently on overclocker.co.uk 8800GT 512 is preorderable for 350$, 8800GT 256 - for 290$, 3870 - for 320$ and 3850 - for 235$ (and AMD cards actually are listed in stock(!!) - impressive).
    With such disposition I would be close to buying a 3850 atm, btw... But, anyway, europe's prices are terrible :(

    Thanks very much for the article - it'll serve to satisfy at least some hunger before Phenom's ;)

    Ilya.
  • Leadthorns - Thursday, November 15, 2007 - link

    Some review sites suggest that the IQ is marginally better on the 3870. Would be interested to know your take on this.
  • lux4424 - Thursday, November 15, 2007 - link

    In 2006 there were number of articles and presentations about benefits of new WDDM (Windows Vista Driver Model). These also mentioned WDDM 2.1, coming with DX10.1, and the benefits it should bring. Couple of examples:
    quote:

    WinHEC 2006, "Future Directions In Graphics" (http://download.microsoft.com/download/5/b/9/5b970...):
    *) Move to preemptive context switching and page-level memory management
    *) Video, Glitch-resilience: Preemptive context switching in WDDM 2.1 is key
    *) WDDM 2.1 – efficient GPU virtualization


    quote:

    WinHEC 2006, "Desktop And Presentation Impact On Hardware Design" (http://download.microsoft.com/download/5/b/9/5b970...):
    *) Advanced Scheduling with page level context switching
    *) Direct impact on desktop scenarios



    Since then it's absolute silence on the matter. It would be really great if Anandtech would cover the promises made WRT WDDM 2.1 (DX10.1) or even WDDM 2.0 (DX10) after SP1 for Vista is released.

    Regards
  • GTMan - Thursday, November 15, 2007 - link

    Sentence with no ending...

    "Hopefully with DX11 Microsoft will be a little more used to the"

    Thanks for the article, interesting reading.
  • Anand Lal Shimpi - Thursday, November 15, 2007 - link

    eep, thanks :)

    Take care,
    Anand
  • NullSubroutine - Thursday, November 15, 2007 - link

    I am extremely disappointed in the review of the product.

    1) Only Vista was used, though XP has a lot larger user base.

    2) Limited variety of games.

    3) Limited variation of AF/AA

    4) No UVD tests.

    All could be forgiven if the title would have included First Look: DX10. I understand there is a limited time to do tests and it seems you had trouble getting your samples so this could lead to the problem. I usually look to anand for the most complete review of products (rather than having to look at many different incomplete ones sites use), but I believe this review to be incomplete and not what I expect from Anandtech.

    I await follow up reviews to reinstate my faith in this site. (And yes, I am sure I will be modded down, as I will probably be seen as a 'hater' rather than someone trying to give constructive criticism.)

  • Locut0s - Thursday, November 15, 2007 - link

    1) Only Vista was used, though XP has a lot larger user base.

    You answered your own question there. Remember this card is aimed at the midrange not the enthusiast and even more of these consumers are using XP.

    2) Limited variety of games.

    The games covered though are all the important big names that actually stress these cards and show what they are made of.

    3) Limited variation of AF/AA

    See Anand's reply above

    4) No UVD tests.

    You can see previous reviews to see UVD performance. I doubt this has changed at all since the hardware is identical.
  • NullSubroutine - Thursday, November 15, 2007 - link

    I was saying XP should have been benchmarked because it is the largest userbase and most people especially at this price range will be using XP.

    When you limit the number of games benchmark you do not show an accurate performance of a video card, it has been shown that certain games play better on certain cards. Some sites only do reviews with games that are biased towards a certain brand or GPU; I expect that Anandtech is not one of those sites and expect a variety of games that show the true performance of the cards.
  • Locut0s - Thursday, November 15, 2007 - link

    Sorry, misread your question about XP/Vista. Yes, they could test on XP. However, it has been shown that the performance difference between the two is fairly small now and is in XP's favour, meaning that games should run as well as or better than what they show here.
