New Features, You Say? UVD and DirectX 10.1

As we mentioned, new to RV670 are UVD, PowerPlay, and DX10.1 hardware. We've covered UVD quite a bit before now, and we are happy to see that it is now part of AMD's top-to-bottom product line. To recap, UVD is AMD's video decode engine, which handles decode, deinterlacing, and post processing for video playback. Its key feature is full hardware decode support for both VC-1 and H.264. MPEG-2 decode is also supported, but the entropy decode step is not performed in hardware for MPEG-2 video. The advantage over NVIDIA hardware is the inclusion of entropy decode support for VC-1 video, but AMD tends to overplay this: VC-1 is lighter weight than H.264, and its entropy decode step doesn't make or break playability even on lower end CPUs.
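
For an at-a-glance view of that breakdown, here is a small, purely illustrative C++ summary of which decode stages land on the GPU for each codec, per the description above. The struct and field names are ours for the example, not any AMD API.

// Illustrative only: which decode stages UVD offloads per codec, as described
// in the text. These type and field names are invented for this example.
#include <cstdio>

struct CodecOffload {
    const char* codec;
    bool entropyDecodeOnGPU;   // bitstream/entropy decode stage
    bool remainingStagesOnGPU; // inverse transform, motion comp, deblocking, post processing
};

int main()
{
    const CodecOffload table[] = {
        { "H.264",  true,  true },  // full hardware decode
        { "VC-1",   true,  true },  // full hardware decode, including entropy decode
        { "MPEG-2", false, true },  // entropy decode stays on the CPU
    };

    for (const CodecOffload& c : table) {
        std::printf("%-7s entropy decode on GPU: %-3s remaining stages on GPU: %s\n",
                    c.codec,
                    c.entropyDecodeOnGPU ? "yes" : "no",
                    c.remainingStagesOnGPU ? "yes" : "no");
    }
    return 0;
}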

DirectX 10.1 is essentially an incremental release of DirectX that clarifies some functionality and adds a few features. Both AMD's and NVIDIA's DX10 hardware support some of the DX10.1 requirements, but since neither supports everything, neither can claim DX10.1 as a feature. And because there are no capability bits, game developers can't rely on any of the DX10.1 features being implemented in DX10 hardware.

It's good to see AMD embracing DX10.1 so quickly, as it will eventually be the way of the world. The new capabilities DX10.1 enables include enhanced developer control of AA sample patterns and pixel coverage, blend modes that can be set per render target rather than shared across all of them, doubled vertex shader inputs, required fp32 filtering, and support for cube map arrays, which can help make global illumination algorithms faster. These features might not make it into games very quickly, as we're still waiting for games that really push DX10 as it is now, but AMD is absolutely leading NVIDIA in this area.
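
To make the per-render-target blending change concrete, here is a minimal Direct3D 10.1 sketch that sets two different blend modes on two render targets for a single pass. It assumes an existing ID3D10Device1 created at feature level 10.1 (the pDevice1 parameter) and omits error handling; it illustrates the API addition rather than any particular game's code.

// Sketch only: independent per-render-target blending, new in D3D 10.1.
// Assumes a valid ID3D10Device1* and skips HRESULT checking.
#include <d3d10_1.h>

ID3D10BlendState1* CreatePerTargetBlendState(ID3D10Device1* pDevice1)
{
    D3D10_BLEND_DESC1 desc = {};
    desc.IndependentBlendEnable = TRUE; // new in 10.1: each target gets its own blend mode

    // Render target 0: standard alpha blending.
    desc.RenderTarget[0].BlendEnable           = TRUE;
    desc.RenderTarget[0].SrcBlend              = D3D10_BLEND_SRC_ALPHA;
    desc.RenderTarget[0].DestBlend             = D3D10_BLEND_INV_SRC_ALPHA;
    desc.RenderTarget[0].BlendOp               = D3D10_BLEND_OP_ADD;
    desc.RenderTarget[0].SrcBlendAlpha         = D3D10_BLEND_ONE;
    desc.RenderTarget[0].DestBlendAlpha        = D3D10_BLEND_ZERO;
    desc.RenderTarget[0].BlendOpAlpha          = D3D10_BLEND_OP_ADD;
    desc.RenderTarget[0].RenderTargetWriteMask = D3D10_COLOR_WRITE_ENABLE_ALL;

    // Render target 1: additive blending in the same pass, which plain DX10
    // could not express because it shared one blend mode across all targets.
    desc.RenderTarget[1].BlendEnable           = TRUE;
    desc.RenderTarget[1].SrcBlend              = D3D10_BLEND_ONE;
    desc.RenderTarget[1].DestBlend             = D3D10_BLEND_ONE;
    desc.RenderTarget[1].BlendOp               = D3D10_BLEND_OP_ADD;
    desc.RenderTarget[1].SrcBlendAlpha         = D3D10_BLEND_ONE;
    desc.RenderTarget[1].DestBlendAlpha        = D3D10_BLEND_ONE;
    desc.RenderTarget[1].BlendOpAlpha          = D3D10_BLEND_OP_ADD;
    desc.RenderTarget[1].RenderTargetWriteMask = D3D10_COLOR_WRITE_ENABLE_ALL;

    ID3D10BlendState1* pState = nullptr;
    pDevice1->CreateBlendState1(&desc, &pState);
    return pState;
}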

Better Power Management

As for PowerPlay, which is usually found in mobile GPUs, AMD has opted to include broader power management support in its desktop GPUs as well. While the hardware isn't able to wholly turn off parts of the chip, clock gating is used, along with dynamic adjustment of core and memory clock speeds and voltages. The command buffer is monitored to determine when power saving features need to be applied. This means that when applications need the power of the GPU it will run at full speed, but when less is going on (or even when something is CPU limited) we should see power, noise, and heat characteristics improve.

One of the cool side effects of PowerPlay is that clock speeds are no longer determined by application state. On previous hardware, 3D clock speeds were only enabled when a fullscreen 3D application started, which meant that GPU computing software (like Folding@Home) ran at 2D clock speeds. Since these programs will no doubt fill the command queue, they will now get full performance from the GPU. This also means that games run in a window will perform better, which should be good news to MMO players everywhere.
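
To make the idea concrete, here is a heavily simplified C++ sketch of a clock/voltage selector driven by command buffer occupancy. The thresholds, state table, and numbers are hypothetical stand-ins to illustrate the concept; this is not AMD's actual PowerPlay algorithm or clock table.

// Conceptual sketch only: pick a power state based on how full the GPU
// command buffer has been. All numbers below are made up for illustration.
#include <cstdio>
#include <initializer_list>

struct PowerState {
    int coreMHz;
    int memMHz;
    int coreMillivolts;
};

// Hypothetical idle / intermediate / full-3D tiers.
const PowerState kStates[] = {
    { 300,  600,  900 },  // low: desktop work
    { 450,  900, 1000 },  // mid: video playback, light windowed load
    { 775, 1125, 1200 },  // high: command buffer staying full (games, GPGPU)
};

// occupancy: fraction of the command buffer in use, averaged over a sample window.
const PowerState& SelectState(double occupancy)
{
    if (occupancy > 0.75) return kStates[2];
    if (occupancy > 0.25) return kStates[1];
    return kStates[0];
}

int main()
{
    for (double load : { 0.05, 0.40, 0.95 }) {
        const PowerState& s = SelectState(load);
        std::printf("load %.2f -> core %d MHz, mem %d MHz, %d mV\n",
                    load, s.coreMHz, s.memMHz, s.coreMillivolts);
    }
    return 0;
}

The point is simply that the trigger is GPU workload rather than whether a fullscreen 3D application is running, which is why windowed games and GPGPU clients should now see full 3D clocks.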

But as we said, shipping 55nm parts less than a year after the first 65nm hardware is a fairly aggressive schedule; it is one of the major benefits of the 3800 series and an enabler of the kind of performance this hardware is able to deliver. We asked AMD about its experience with the transition from 65nm to 55nm, and the reply was something along the lines of: "we hate to use the word flawless... but we're running on first silicon." Moving this fast seems to have surprised even AMD, but it's great when things fall into line. This terrific execution has put AMD back on level footing with NVIDIA in terms of release schedule and performance segment. Coming back from the R600 delay to hit the market in time to compete with the 8800 GT is a huge thing, and we can't stress it enough. To spoil the surprise a bit, AMD did not outperform the 8800 GT, but this schedule puts AMD back in the game. Top performance is secondary at this point to solid execution, great pricing, and high availability. Good price/performance and a higher level of competition with NVIDIA than R600 delivered will go a long way toward reestablishing AMD's position in the graphics market.

Keeping in mind that this is an RV GPU, we can expect that AMD has been working on a new R series part in conjunction with it. It remains to be seen what this part will actually be (or whether it will materialize at all), but hopefully we can expect something that puts AMD back in the fight for a high end graphics part.

Right now, all AMD has confirmed is a single slot, dual GPU 3800 series part slated for next year, which makes us a little nervous about the prospects for a solid high end single GPU product. But we'll have to wait and see what's in store for us when we get there.

Comments

  • bbqchickenrobot - Wednesday, May 7, 2008 - link


    But now new Catalyst drivers have been released, so an updated benchmark needs to be completed, as the drivers provide better support for the hardware and thus better performance.

    Also, you used a non-AMD MoBo and chipset... if you went with XFire + an AMD 790 chipset + a Phenom X3/X4 processor (the Spider platform) you would have seen better performance as well. There are other benchmarks that are/were done with these components (Spider) and the results weren't nearly as mediocre. Just a little tip...
  • Adamseye - Tuesday, February 12, 2008 - link

    I can't see how every review I have read differs from your charts; the 2900 XT can't be faster than the 3850. I mean, I spent a month researching cards and the winner was the 3850, overclocking it to 3870 speeds. To think that AMD spent all that time to make a new 2900 XT and name it the 3850-70 is just foolish. From the benchmarks you provided, only an idiot would buy the new gen cards for 60-100 bucks more when the 2900 XT is on par. Could you please explain to me how this happened? I feel like ordering a 3850 was a waste of money because the old 2900 is better anyway.
  • aznboi123 - Saturday, February 2, 2008 - link

    Welll dang that bothers me...666...>,<
  • spaa33 - Monday, December 3, 2007 - link

    It looked to me that the biggest complaint on the HD Video Decode article was that the 2600/2900 options did not provide an off switch for the Noise Reduction. Did you notice if this option appeared to be present in the newer drivers of this card (3850)?

    Regards,
    Dan
  • emilyek - Tuesday, November 27, 2007 - link

    So AMD/ATI is still getting stomped by year-old hardware?

    That's what I read.
  • jpierce55 - Saturday, November 24, 2007 - link

    This is really a good review; some others are very Nvidia biased. I would like to see you do an update with the new drivers in the near future if possible.
  • gochichi - Friday, November 23, 2007 - link

    Anand,

    First Nvidia with its 8800GT... I clearly recall seeing those at about $200; now they're $300 or more. At least these may come bundled with a game... they also "hold the crown".

    Now the HD 3870 has gone up to $269.99 (at newegg) and availability is every bit as bad as the 8800GT.

    This review assumes that AMD/ATI was going to deliver in volume at a fixed price, and they haven't delivered on either. It would be really nice if you could slap their wrists... as individual consumers we are being tossed about and we don't have the "pull" to do anything other than "take it".

    Shouldn't AMD be accountable to deliver on their promises?
  • SmoulikNezbeda - Thursday, November 22, 2007 - link

    Dear Anand,

    I would like to ask what exactly the results for individual games represent. Are they average FPS, or something like (min + max + ave)/3 FPS? On one Czech website there were results similar to what was presented here, but they were showing (min + max + ave)/3 FPS, which is complete nonsense, as it favors cards with more volatile results. Where they compared average FPS, the Radeon had the same results as the GT card. I would also like to ask whether you used the same demo for both cards, or whether you were playing the game and therefore testing each card in different situations?

    Thanks in advance

    Petr
  • Sectoid - Sunday, November 18, 2007 - link

    If I'm not mistaken the 8800GT is DX10 only, right? Is DX10.1 so insignificant as to not count in favor of the 3800s over the GTs? Don't get me wrong, I'm not trying to defend AMD; I just want to know if it's a good idea to sell my 8800GTS 320MB now while it still fetches a good price (I live in Brazil and they're still pricey here) and buy a 3870 or a 512MB 8800GT. I recently bought a 22" monitor and the GTS is somewhat disappointing at 1600x1050. Nah, it's just that crappy game World in Conflict. It runs similar to the Crysis demo at max! I have to play at medium and the textures are really crappy for an 8-month-old high-end PC :(
    Who knows, maybe I'm already CPU or memory bound with a Core 2 Duo 6400 @ 24xxMHz and dual OCZ Platinum 2 1GB 800MHz (2GB total)...
    Thanks in advance for any more input on the qualities of DX10.1 :)
