New Features You Say? UVD and DirectX 10.1

As we mentioned, new to RV670 are UVD, PowerPlay, and DX10.1 hardware. We've covered UVD quite a bit before now, and we are happy to see that UVD is now part of AMD's top-to-bottom product line. To recap, UVD is AMD's video decode engine, which handles decode, deinterlacing, and post processing for video playback. The key feature of UVD is full hardware decode support for both VC-1 and H.264. MPEG-2 decode is also supported, but the entropy decode step for MPEG-2 video is not performed in hardware. The advantage over NVIDIA hardware is the inclusion of entropy decode support for VC-1 video, but this tends to be overplayed by AMD: VC-1 is lighter weight than H.264, and offloading its entropy decode step doesn't make or break playability even on lower end CPUs.
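
To make the division of labor concrete, here's a minimal C++ sketch of which decode stages land on the GPU for each codec, per the description above. The struct and table names are our own illustration, not anything from AMD's UVD driver:

```cpp
#include <cstdio>

// Which decode stages run on the GPU, per codec -- an illustrative
// summary of UVD's coverage as described above, not AMD driver code.
struct CodecOffload {
    const char* codec;
    bool entropyDecode;  // CABAC/CAVLC or VLD bitstream processing
    bool transformAndMC; // inverse transform + motion compensation
};

static const CodecOffload kUvdTable[] = {
    {"H.264",  true,  true},  // full hardware decode
    {"VC-1",   true,  true},  // full decode; NVIDIA leaves VC-1 entropy on the CPU
    {"MPEG-2", false, true},  // entropy decode stays on the CPU
};

int main() {
    for (const auto& c : kUvdTable)
        std::printf("%-6s  entropy: %s  transform/MC: %s\n",
                    c.codec,
                    c.entropyDecode  ? "GPU" : "CPU",
                    c.transformAndMC ? "GPU" : "CPU");
}
```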

DirectX 10.1 is basically a point release of DirectX that clarifies some existing functionality and adds a handful of features. Both AMD's and NVIDIA's current DX10 hardware support some of the DX10.1 requirements, but since neither supports everything, neither can claim DX10.1 as a feature. And because DX10 did away with capability bits, game developers can't rely on any individual DX10.1 feature being implemented in DX10 hardware: it's all or nothing.
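
Since it's all or nothing, an application has to ask for a 10.1 device explicitly and fall back to plain 10.0 if the hardware can't deliver. A minimal sketch of that pattern; D3D10CreateDevice1 and the feature level enums are the real d3d10_1.h API, while the wrapper function is our own:

```cpp
#include <d3d10_1.h>

// Try for a full DX10.1 device first; fall back to the DX10.0 feature
// level if the hardware or driver can't claim 10.1 compliance.
ID3D10Device1* CreateBestDevice() {
    const D3D10_FEATURE_LEVEL1 levels[] = {
        D3D10_FEATURE_LEVEL_10_1,
        D3D10_FEATURE_LEVEL_10_0,
    };
    ID3D10Device1* device = nullptr;
    for (D3D10_FEATURE_LEVEL1 level : levels) {
        if (SUCCEEDED(D3D10CreateDevice1(
                nullptr, D3D10_DRIVER_TYPE_HARDWARE, nullptr, 0,
                level, D3D10_1_SDK_VERSION, &device)))
            return device;  // caller checks GetFeatureLevel() before
                            // touching any 10.1-only functionality
    }
    return nullptr;  // no DX10-class hardware at all
}
```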

It's good to see AMD embracing DX10.1 so quickly, as it will eventually be the way of the world. The new capabilities DX10.1 enables: enhanced developer control over AA sample patterns and pixel coverage, blend modes that can be set per render target rather than shared across all targets, doubled vertex shader inputs, required FP32 filtering, and support for cube map arrays, which can help make global illumination algorithms faster. These features might not make it into games very quickly, as we're still waiting for games that really push DX10 as it is now. But AMD is absolutely leading NVIDIA in this area.
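
To make one of those features concrete: under DX10 the blend enable bit was per render target but the blend ops themselves were shared across all targets, while DX10.1's D3D10_BLEND_DESC1 gives each of the eight render targets its own full blend configuration. A hedged sketch using the real 10.1 structures (the function and the particular blend modes are arbitrary choices of ours):

```cpp
#include <d3d10_1.h>

// DX10.1 per-render-target blending: RT0 gets standard alpha blending
// while RT1 blends additively in the same pass, which DX10 could not
// express with its shared blend ops.
void SetupPerTargetBlending(ID3D10Device1* device, ID3D10BlendState1** state) {
    D3D10_BLEND_DESC1 desc = {};
    desc.IndependentBlendEnable = TRUE;  // the 10.1 capability in question
    for (auto& rt : desc.RenderTarget)   // allow writes on every target
        rt.RenderTargetWriteMask = D3D10_COLOR_WRITE_ENABLE_ALL;

    // RT0: classic src-alpha / inv-src-alpha blending
    desc.RenderTarget[0].BlendEnable    = TRUE;
    desc.RenderTarget[0].SrcBlend       = D3D10_BLEND_SRC_ALPHA;
    desc.RenderTarget[0].DestBlend      = D3D10_BLEND_INV_SRC_ALPHA;
    desc.RenderTarget[0].BlendOp        = D3D10_BLEND_OP_ADD;
    desc.RenderTarget[0].SrcBlendAlpha  = D3D10_BLEND_ONE;
    desc.RenderTarget[0].DestBlendAlpha = D3D10_BLEND_ZERO;
    desc.RenderTarget[0].BlendOpAlpha   = D3D10_BLEND_OP_ADD;

    // RT1: additive blending, a different mode in the same pass
    desc.RenderTarget[1].BlendEnable    = TRUE;
    desc.RenderTarget[1].SrcBlend       = D3D10_BLEND_ONE;
    desc.RenderTarget[1].DestBlend      = D3D10_BLEND_ONE;
    desc.RenderTarget[1].BlendOp        = D3D10_BLEND_OP_ADD;
    desc.RenderTarget[1].SrcBlendAlpha  = D3D10_BLEND_ONE;
    desc.RenderTarget[1].DestBlendAlpha = D3D10_BLEND_ONE;
    desc.RenderTarget[1].BlendOpAlpha   = D3D10_BLEND_OP_ADD;

    device->CreateBlendState1(&desc, state);
}
```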

Better Power Management

As for PowerPlay, which is usually found in mobile GPUs, AMD has opted to include broader power management support in its desktop GPUs as well. While they aren't able to turn off parts of the chip entirely, clock gating is used, as well as dynamic adjustment of core and memory clock speeds and voltages. The command buffer is monitored to determine when power saving features need to be applied. This means that when applications need the power of the GPU it will run at full speed, but when less is going on (or even when something is CPU limited) we should see power, noise, and heat characteristics improve.
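
AMD hasn't published the algorithm, but the idea amounts to a simple feedback loop: sample how full the command buffer stays and step clocks and voltage up or down in response. A rough sketch of that kind of governor; the thresholds and state machine are invented, though the top state's clocks echo the 3870's stock 775MHz core / 1125MHz memory:

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical power states stepping clocks and voltage together.
// Values are for illustration only; PowerPlay's real tables and
// heuristics are not public.
struct PowerState { int coreMHz; int memMHz; int milliVolts; };

static const PowerState kStates[] = {
    {300,  400,  900},  // idle / 2D
    {500,  700, 1050},  // light 3D or video playback
    {775, 1125, 1200},  // full 3D clocks
};

class Governor {
    int level_ = 0;
public:
    // busyFraction: how full the command buffer has been over the last
    // sampling window, 0.0 (idle) to 1.0 (saturated).
    const PowerState& update(double busyFraction) {
        if (busyFraction > 0.80)        // GPU is the bottleneck: clock up
            level_ = std::min(level_ + 1, 2);
        else if (busyFraction < 0.30)   // idle or CPU-limited: clock down
            level_ = std::max(level_ - 1, 0);
        return kStates[level_];
    }
};

int main() {
    Governor g;
    for (double busy : {0.05, 0.95, 0.95, 0.10}) {
        const PowerState& s = g.update(busy);
        std::printf("busy %.2f -> core %dMHz, mem %dMHz, %dmV\n",
                    busy, s.coreMHz, s.memMHz, s.milliVolts);
    }
}
```

The CPU-limited case falls out naturally: if the command buffer can't be kept full, clocks step down and power drops, just as described above.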

One of the cool side effects of PowerPlay is that clock speeds are no longer determined by application state. On previous hardware, 3D clock speeds were only enabled when a fullscreen 3D application started, which meant GPU computing software (like Folding@home) ran only at 2D clock speeds. Since these programs will no doubt fill the command queue, they will now get full performance from the GPU. This also means that games run in a window will perform better, which should be good news to MMO players everywhere.

But like we said, shipping 55nm parts less than a year after the first 65nm hardware is a fairly aggressive schedule; it is one of the major benefits of the 3800 series and an enabler of the kind of performance this hardware is able to deliver. We asked AMD about their experience with the transition from 65nm to 55nm, and their reply was something along the lines of: "we hate to use the word flawless... but we're running on first silicon." Moving this fast seems to have surprised even AMD, but it's great when things fall into line. This terrific execution has put AMD back on a level playing field with NVIDIA in terms of release schedule and performance segment. Coming back from the R600 delay to hit the market in time to compete with the 8800 GT is huge, and we can't stress that enough. To spoil the surprise a bit, AMD did not outperform the 8800 GT, but this schedule puts AMD back in the game. Top performance is secondary at this point to solid execution, great pricing, and high availability. Good price/performance and a higher level of competition with NVIDIA than R600 delivered will go a long way toward reestablishing AMD's position in the graphics market.

Keeping in mind that this is an RV GPU, we can expect AMD to have been working on a new R series part in conjunction with this. What that part will be (and whether it will materialize at all) remains to be seen, but hopefully it's something that will put AMD back in the fight at the high end.

Right now, all that AMD has confirmed is a single-slot, dual-GPU 3800 series part slated for next year, which makes us a little nervous about the prospect of a solid high end single-GPU product. But we'll have to wait and see what's in store for us when we get there.

117 Comments

  • Locut0s - Thursday, November 15, 2007 - link

    Well yes I know but the "cores" that they are using are extremely simplified, more so than I was thinking of. Instead I was thinking of each "core" as being able to perform most if not all of the steps in the rendering pipeline.
  • Guuts - Thursday, November 15, 2007 - link

    I think the simple answer is that in the CPU world, they hit a clockspeed wall due to thermal issues and had to change their design strategy to offer greater performance, which was to go to multiple cores.

    The GPU makers haven't reached this same wall yet, and it must be cheaper and/or easier to make one high-performing chip than to redesign for multi-GPU boards... though there are some boards that have two GPUs on them and act like SLI/Crossfire, but in a single-board package.

    I'm sure when the GPUs start suffering the same issues, we'll start seeing multi-core graphic cards, and I would assume that nvidia and AMD are already researching and planning for that.
  • dustinfrazier - Thursday, November 15, 2007 - link

    Going on a year of Nvidia dominance, and boy does it feel good. I bought my 8800GTX pair the first day they were available last year and never expected them to dominate this long. God, I can't wait to see what comes out next for the enthusiasts. I get the feeling it is gonna rock! I really wanna see what both companies have up their sleeves, as I am ready to retire my 8800s.

    I understand that these latest cards are great for the finances and good energy savers, but what does it matter if they already have a hard time keeping up with current next gen games at reasonable frame rates, 1920x1200 and above? What good does saving money do if all the games you purchase in 08 end up as nothing but a slide show? I guess I just want AMD to release a card that doesn't act like playing Crysis is equivalent to solving the meaning of life. Get on with it. The enthusiasts are ready to buy!
  • Gholam - Thursday, November 15, 2007 - link

    For reference, over here in Israel, the 8800GT is promised to arrive next week - for approximately $380 + VAT (11.5%). For comparison, the 8800GTS 640MB costs a bit over $400 + VAT; the 8800GTS 320MB used to cost in the low to mid 300s, but they're no longer available. I wonder when the 38xx will get here, and at what price...
  • abhaxus - Thursday, November 15, 2007 - link

    Let me just say that I love my 8800 GTS. However, as a person stuck with a 939 Athlon X2 @ 2.5GHz and wanting to upgrade to a quad core setup, I've been freaking out lately about what motherboard to buy, and the lack of new video cards has made that very difficult. If the 320MB GTS dropped in price in relation to the new GT, I'd buy a 650i/680i board in a heartbeat and just SLI it up. But the fact that no innovation is going on has kept prices too high for too long. I've had this card since March, and prices are actually higher now than when I bought it originally.

    At least Intel isn't resting on their laurels the way NVIDIA has been. I want new cards... so the old ones get cheaper!

    also if anyone wants to go really OT with a reply and tell me whether an Asus P5N32 SLI Plus would be a good choice to O/C a Q6600 to about 3.2 ghz and run 2 8800 GTS 320mb cards in SLI... let me know :)
  • wolfman3k5 - Thursday, November 15, 2007 - link

    No, the P5N32SLI wouldn't be a good choice to overclock a quad. Neither would the Striker. The fact of the matter is that both these ASUS boards have a hard time putting out high FSB clocks and sustaining them with quad cores. Either go EVGA 680i (LT) if you want to retain SLI capability, or I would suggest a P35 or X38 based motherboard.
    Just my 0.02C.
  • abhaxus - Thursday, November 15, 2007 - link

    I've read that... but then I've also read on AT and elsewhere that with current BIOS releases the ASUS boards are fine to around 360-400 FSB. I haven't O/C'ed an Intel chip since the Celeron 300A, so I am pulling my hair out trying to decide if it's worth it to plan for going SLI or just get a P35 board and stay with a single card.
  • Anand Lal Shimpi - Thursday, November 15, 2007 - link

    <font color=black>
  • abhaxus - Thursday, November 15, 2007 - link

    I apologize for breaking the comments... silly me for mentioning another site :)
  • bupkus - Thursday, November 15, 2007 - link

    Just highlight the blank areas with your mouse.
    Click and drag.
