Original Link: http://www.anandtech.com/show/2376



Finally. We're finally getting somewhere interesting in the graphics industry. Although they're sure to return, the days of reviewing $600 graphics card after $600 graphics card are on hiatus, and instead we're reviewing a new class of mainstream cards with earth-shattering performance.

NVIDIA's GeForce 8800 GT kicked off the trend, in one fell swoop making almost all of NVIDIA's product line obsolete thanks to the high performance and low price tag (we'll talk about that last part shortly). But what we saw there wasn't a fluke, it was a preemptive strike against AMD, who have been hard at work on an affordable GPU of their own.

This new product, like the 8800 GT, would be aimed squarely at the $150 - $250 market segment, something both AMD and NVIDIA did a horrible job of addressing with their mainstream releases earlier this year (the 2600 and 8600 both sucked, guys).

Introducing the RV670

AMD's two new graphics cards launching today are both based on a new GPU, referred to internally as the RV670. The basic architecture of the hardware is largely unchanged from R600; some additional functionality has been added and a great deal of internal bandwidth removed, but other than that this is very much an R600-based part.

The biggest news of this part is that it is fabbed on a 55nm TSMC process. This is a half-node process based on 65nm technology, giving AMD an advantage in die size (cost) and potentially clock speed and/or power.

Historically, AMD's RV series has been a cost-cut version of their R series designed for lower end volume parts, and that's where RV670 started. Right off the bat, half of R600's external and internal memory bandwidth was cut out. The external memory bus dropped from 512-bit to 256-bit, but AMD stuck with 8 memory channels (each dropped from 64-bit to 32-bit).

Internally, the ring bus dropped from 1024-bit to 512-bit. This cut in bandwidth contributed to a significant drop in transistor count from R600's ~720M. RV670 is made up of 666M transistors, and this includes the addition of UVD hardware, some power saving features, the necessary additions for DX 10.1 and the normal performance tuning we would expect from another iteration of the architecture.

Processing power remains unchanged from the R600; the RV670 features 320 stream processors, 16 texture units and 16 render back-ends. Clock speeds have gone up slightly and memory speeds have increased tremendously to make up for the narrower memory bus.

The RV670 GPU is also fully PCI Express 2.0 compliant like NVIDIA's G92, the heart and soul of the GeForce 8800 GT.



New Features you Say? UVD and DirectX 10.1

As we mentioned, new to RV670 are UVD, PowerPlay, and DX10.1 hardware. We've covered UVD quite a bit before now, and we are happy to learn that UVD is now part of AMD's top to bottom product line. To recap, UVD is AMD's video decode engine which supports decode, deinterlacing, and post processing for video playback. The key features of UVD are full decode support for both VC-1 and H.264. MPEG-2 decode is also supported, but the entropy decode step is not performed for MPEG-2 video in hardware. The advantage over NVIDIA hardware is the inclusion of entropy decode support for VC-1 video, but this tends to be overplayed by AMD. VC-1 is lighter weight than H.264, and the entropy decode step for VC-1 doesn't make or break playability even on lower end CPUs.

DirectX 10.1 is basically a release of DirectX that clarifies some functionality and adds a few features. Both AMD and NVIDIA's DX10 hardware support some of the DX10.1 requirements, but since they don't support everything they can't claim DX10.1 as a feature. Because there are no capability bits, game developers can't rely on any of the DX10.1 features to be implemented in DX10 hardware.

It's good to see AMD embracing DX10.1 so quickly, as it will eventually be the way of the world. The new capabilities that DX10.1 enables are enhanced developer control of AA sample patterns and pixel coverage; blend modes that can be unique per render target rather than global; doubled vertex shader inputs; required fp32 filtering; and support for cube map arrays, which can help make global illumination algorithms faster. These features might not make it into games very quickly, as we're still waiting for games that really push DX10 as it is now. But AMD is absolutely leading NVIDIA in this area.

Better Power Management

As for PowerPlay, which is usually found in mobile GPUs, AMD has opted to include broader power management support in their desktop GPUs as well. While they aren't able to wholly turn off parts of the chip, clock gating is used, as well as dynamic adjustment of core and memory clock speeds and voltages. The command buffer is monitored to determine when power saving features need to be applied. This means that when applications need the power of the GPU it will run at full speed, but when less is going on (or even when something is CPU limited) we should see power, noise, and heat characteristics improve.

One of the cool side effects of PowerPlay is that clock speeds are no longer determined by application state. On previous hardware, 3D clock speeds were only enabled when a fullscreen 3D application started, which meant that GPU computing software (like Folding@Home) would only run at 2D clock speeds. Since these programs will no doubt fill the command queue, they will now get full performance from the GPU. This also means that games run in a window will perform better, which should be good news to MMO players everywhere.

But like we said, dropping 55nm parts less than a year after the first 65nm hardware is a fairly aggressive schedule; it's one of the major benefits of the 3800 series and an enabler of the kind of performance this hardware is able to deliver. We asked AMD about their experience with the transition from 65nm to 55nm, and their reply was something along the lines of: "we hate to use the word flawless... but we're running on first silicon." Moving this fast even surprised AMD it seems, but it's great when things fall in line. This terrific execution has put AMD back on level competition with NVIDIA in terms of release schedule and performance segment. Coming back from the R600 delay to hit the market in time to compete with the 8800 GT is a huge thing, and we can't stress it enough. To spoil the surprise a bit, AMD did not outperform the 8800 GT, but this schedule puts AMD back in the game. Top performance is secondary at this point to solid execution, great pricing, and high availability. Good price/performance and a higher level of competition with NVIDIA than the R600 delivered will go a long way toward reestablishing AMD's position in the graphics market.

Keeping in mind that this is an RV GPU, we can expect AMD to have been working on a new R series part in conjunction with this. It remains to be seen what (and if) this part will actually be, but hopefully we can expect something that will put AMD back in the fight for a high end graphics part.

Right now, all that AMD has confirmed is a single slot dual GPU 3800 series part slated for next year, which makes us a little nervous about the prospect of a solid high end single GPU product. But we'll have to wait and see what's in store for us when we get there.



Sensible Naming and the Cards

It looks like we may just be seeing some of the fruits of the ATI acquisition here today; no, we're not talking about the Radeon HD 3800 series, but rather the naming of the cards. AMD is releasing two cards today, the Radeon HD 3870 and the 3850, both based on the new RV670 GPU. Notice anything missing from the GPU names? That's right, gone are the annoying suffixes. AMD is committed to getting rid of suffixes on its GPU products, so you won't see any XT, LE, PE, FUFME, SE etc... versions of these graphics cards. Can we just say now that we think this is a great idea?

Even though the name ATI Radeon HD 3870 is a little long for our tastes, it's better than having confusing suffixes. As long as AMD sticks to the higher-numbers-mean-better-cards methodology, we're happy.

There is a method to the nomenclature madness, which the image below should explain:

The first digit is the product generation, the second digit is the family, and then the last two digits refer to performance within that family. This should sound a lot like AMD's new CPU naming system or Intel's current Core 2 family. Note that with today's launch we're already pretty high in the 3800 series, whether or not that means we'll be looking forward to a 3900 or 4000 soon is another matter entirely.
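To illustrate the scheme, here's how the decoding could be sketched in a few lines of Python (`decode_radeon_name` is our own hypothetical helper, purely illustrative; it isn't anything AMD ships):

```python
def decode_radeon_name(model: str) -> dict:
    """Decode a four-digit Radeon HD model number per AMD's stated scheme:
    first digit = generation, second digit = family,
    last two digits = relative performance within that family."""
    digits = model.strip()
    assert len(digits) == 4 and digits.isdigit(), "expected a four-digit model"
    return {
        "generation": int(digits[0]),    # e.g. 3 -> the HD 3000 generation
        "family": int(digits[1]),        # e.g. 8 -> the 3800 family
        "performance": int(digits[2:]),  # higher = faster within the family
    }

# The two cards launching today:
print(decode_radeon_name("3870"))  # {'generation': 3, 'family': 8, 'performance': 70}
print(decode_radeon_name("3850"))  # {'generation': 3, 'family': 8, 'performance': 50}
```

Under this scheme, comparing the last two digits within a family is all a buyer needs to do, which is exactly the simplicity the suffix soup never offered.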

Specifics on the two cards are as follows:

The 3870 is a two-slot solution; it runs its core at a minimum of 775MHz and comes with 2.25GHz data rate memory. Despite the two-slot cooler, the 3870 is actually quieter than the 3850, which itself is much quieter than the 2900 XT.


The Radeon HD 3870

The 3850 is a single slot card, with a 670MHz core clock and a 1.66GHz memory clock. The cards are priced at $219 and $179, respectively (more on pricing later). Like the 3870, the Radeon HD 3850 is actually quiet.


The Radeon HD 3850



2, 3 or 4 GPUs: Introducing CrossFire X

DirectX 9 didn't support rendering more than 3 frames ahead, and we saw this manifest in less than optimal scaling on NVIDIA's quad SLI solutions. Now that Vista and DirectX 10 are around, it's possible to render 4 or more frames ahead, and quad-GPU solutions have higher potential. AMD is taking advantage of this via CrossFire X, which currently enables up to 4 GPUs to be connected in the same system with three CrossFire bridges. It's not a pretty solution: you'll need a non-NVIDIA chipset motherboard with 4 physical x16 PCIe slots.

Aside from potential performance scalability, there is also the capability to support up to 8 monitors from one system with 4 graphics cards installed. While this isn't as universally desired, it could be something fun to play with. We don't currently have a platform we can use to test this, but we certainly will when we're able.



Pricing and Availability

It wasn't too long ago that every time we reviewed an ATI video card we had to complain about pricing and availability, not to mention that anytime either company released a new graphics card we'd get a friendly reminder email from NVIDIA letting us know how highly it values hard launches and immediate availability. This time around, the tables are turned, and while we still love NVIDIA's GeForce 8800 GT, the fact of the matter is that the pricing and availability of those cards are just not what NVIDIA promised.

Leading up to the day the 8800 GT NDA lifted, you could actually purchase the 8800 GT for as little as $220 from a variety of online vendors. Once the embargo was lifted, the story changed considerably. Prices went from the expected $199 - $249 to a completely unexpected $250 - $300 range. Looking at our own price search engine we see that only Amazon is listing a card available at $249, but it's not in stock, nor are any of the other more expensive 8800 GTs listed.

The cheapest 8800 GT we can find at Newegg.com is $269 for either an XFX or a PNY card, but neither is in stock, not to mention that the listed price is still $20 over what NVIDIA told us the maximum would be.

AMD would have you believe that NVIDIA simply can't make the 8800 GT cheap enough, citing die sizes and bill of materials costs. Without access to that sort of information, it's tough for us to verify, and NVIDIA isn't really willing to let us know exactly how much it costs to build one of these things. It's more likely, however, that NVIDIA didn't produce enough 8800 GTs to meet demand, which is understandable given how fast the part is. As we mentioned in our review of the card, it basically makes NVIDIA's entire product lineup obsolete. We've heard that during launch week, hundreds of thousands of boards were shipped out to add-in board vendors, which should start appearing soon, but at who knows what price.

It simply doesn't matter how good the 8800 GT is if you can't buy it, and right now it's just not available. NVIDIA is promising that in the next two weeks we will see an influx of 256MB 8800 GT cards, and more 512MB cards are coming. NVIDIA's recommendation is to hop on a pre-order list if you want one, as new cards are coming in regularly and pre-orders are filled first. We don't know how the 256MB variants will perform, but NVIDIA claims that they will arrive at $179 - $199. Whether or not they will stay that way is another issue entirely.

All this brings us to AMD, and its proposed pricing/availability of the Radeon HD 3870 and 3850. The 3870 is supposed to retail for $219, while the 3850 will carry a $179 price tag. We've already mentioned that neither card is faster than the 8800 GT (we'll get to the numbers momentarily), but if AMD is actually able to hit these price points then the cards are still quite competitive.

We've gotten a lot of information about quantities of boards shipped from various manufacturers and vendors, and here's what we've been able to piece together. While there will be quantities of the 3870 and 3850 available at launch, it doesn't look like there will be any more of these two than there were of 8800 GTs at launch. Production will continue to ramp up, and we expect to see multiple hundreds of thousands of cards from both AMD and NVIDIA by the end of this year; whether or not that will be enough to satisfy demand is a different question entirely.

If the supply satiates the demand, then AMD shouldn't have a problem hitting its price points, meaning that the Radeon HD 3870 would actually be a viable alternative to the 8800 GT. You'd have less performance, but it'd be met with a lower price.

Now if AMD can't hit its price points, then none of this matters; we'll be stuck with two GPUs from two different companies that we can't buy. Great.

The Test

For this test, we are using a high end CPU configured with 4GB of DDR2 in an NVIDIA 680i motherboard. While we are unable to make full use of the 4GB of RAM due to the fact that we're running 32-bit Vista, we will be switching to 64-bit within the next few months for graphics testing. Before we do so, we'll have a final article on how performance stacks up between the 32-bit and 64-bit versions of Vista, as well as a final look at Windows XP performance.

Our test platform for this article is as follows:

Test Setup
CPU: Intel Core 2 Extreme X6800
Motherboards: NVIDIA 680i SLI; ASUS P5K-E (CrossFire)
Video Cards: AMD Radeon HD 3870, AMD Radeon HD 3850, AMD Radeon HD 2900 XT, AMD Radeon X1950 XTX, NVIDIA GeForce 8800 GTX, NVIDIA GeForce 8800 GT, NVIDIA GeForce 8600 GTS, NVIDIA GeForce 7950 GT
Video Drivers: AMD Catalyst 7.10; NVIDIA 169.01
Hard Drive: Seagate 7200.9 300GB 8MB 7200RPM
RAM: 4x1GB Corsair XMS2 PC2-6400 4-4-4-12
Operating System: Windows Vista Ultimate 32-bit




Let's Get It Out of the Way: Radeon HD 3870 vs. GeForce 8800 GT

The question on everyone's mind is how well the 3870 stacks up to the recently launched GeForce 8800 GT. If you haven't noticed our hints throughout the review, AMD doesn't win this one, but since the 3870 is supposed to be cheaper, a performance disadvantage is fine so long as it is justified by the price.

Does the 3870 deliver competitive performance given its price point? Let's find out.

Honestly, the Radeon HD 3870 stays very close to the 8800 GT, much closer than AMD's previous attempts to touch the 8800 series. But is the price low enough to justify the performance difference? For that we must do a little numerical analysis; the table below shows you what percentage of the 8800 GT's performance the Radeon HD 3870 delivers:

 3870: % of GeForce 8800 GT Performance 1280 x 1024 1600 x 1200 1920 x 1200 2560 x 1600
Bioshock 84.4% 82.4% 87.9% 93.9%
Unreal Tournament 3 87.8% 85.8% 89.6% 91.6%
ET: Quake Wars 80.5% 95.9% 96.8% 103%
Oblivion 66.7% 74.1% 74.4% 71.5%
Oblivion (4X AA) 70.5% 77.7% 80.2% 82.6%
Half Life 2: Episode 2 101% 95% 91% 86.7%

World in Conflict 81.5% 85.7% 84.9% 89.2%
Call of Duty 4 103% 98.3% 92.3% 82.1%
Crysis 72.4% 73.3% 75.5% -
Average 83.1% 85.3% 85.8% 87.6%

Here's what's really interesting: on average the Radeon HD 3870 offers around 85% of the performance of the 8800 GT, and if we assume that you can purchase an 8800 GT 512MB at $250, the 3870 does so at roughly 87% of the price. The Radeon HD 3870 becomes even more attractive the more expensive the 8800 GT gets, and the opposite is true the cheaper it gets; if the 8800 GT 512MB were available at $219, then the 3870 wouldn't stand a chance.
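To make that value comparison concrete, here's the back-of-the-envelope math as a quick sketch (the 0.85 figure is the average from our table, and the $250 figure is an assumed 8800 GT street price, not a guaranteed one):

```python
def relative_value(perf_fraction, price_a, price_b):
    """Performance-per-dollar of card A relative to card B, where
    perf_fraction is A's performance as a fraction of B's.
    1.0 means identical value; below 1.0 means A is the worse deal."""
    return perf_fraction * (price_b / price_a)

# 3870 at $219 vs. an 8800 GT assumed at $250, with ~85% of its performance:
print(round(relative_value(0.85, 219, 250), 2))  # -> 0.97: near parity in perf/$

# ...but if the 8800 GT really sold at its $219 target, the picture changes:
print(round(relative_value(0.85, 219, 219), 2))  # -> 0.85: the 3870 loses clearly
```

The takeaway matches the prose: the 3870's case rests almost entirely on the 8800 GT's inflated street price.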

If AMD can actually meet its price expectations then it looks like the 3870 is actually competitive. It's slower than the 8800 GT, but the price compensates.



Obsoleting Products: Radeon HD 3870 vs. 2900 XT

There must be something in the water these days: first NVIDIA made most of its product line obsolete, and now with the Radeon HD 3870 AMD gets rid of any reason to own the 2900 XT.

Our benchmarks show that the cheaper, cooler, quieter Radeon HD 3870 is, at worst, the same speed as the poorly received Radeon HD 2900 XT. Granted, there are a few areas where the 2900 XT does better, but for the most part it simply can't hold its own against the 3870.

These next two tables summarize things a little better for those of you that are more interested in raw numbers. What you're looking at here is the percentage of 2900 XT performance each one of these cards delivers, first off is the Radeon HD 3870 vs. the 2900 XT:

 3870: % of Radeon HD 2900 XT Performance 1280 x 1024 1600 x 1200 1920 x 1200 2560 x 1600
Bioshock 107% 106% 107% 110%
Unreal Tournament 3 98.8% 96.2% 93.3% 93.8%
ET: Quake Wars 108% 117% 118% 111%
Oblivion 101% 103% 101% 100%
Oblivion (4X AA) 104% 103% 105% 105%
Half Life 2: Episode 2 100% 97.7% 96.3% 97.8%

World in Conflict 118% 120% 115% 118%
Call of Duty 4 136% 130% 118% 102%
Crysis 104% 104% 103% -
Average 110% 110% 108% 106%

On average, the Radeon HD 3870 gives us a 6 - 10% increase in performance over the more expensive, less feature-rich, louder Radeon HD 2900 XT. Not bad for six months of improvement.


 3850: % of Radeon HD 2900 XT Performance 1280 x 1024 1600 x 1200 1920 x 1200 2560 x 1600
Bioshock 90.7% 91% 92.9% 60.1%
Unreal Tournament 3 92.1% 86.1% 80.8% 77.2%
ET: Quake Wars 107% 104% 99.3% 81.7%
Oblivion 91.1% 86.4% 85.8% 85.4%
Oblivion (4X AA) 92.5% 89.3% 89.1% 83.5%
Half Life 2: Episode 2 97.4% 90% 87.1% 86.1%

World in Conflict 109% 108% 97.4% 92.9%
Call of Duty 4 108% 93.6% 88.3% 75.8%
Crysis 93.7% 91.4% 89.7% -
Average 97.9% 93.2% 90.1% 80.3%

The Radeon HD 3850 comes close in performance to the 2900 XT, especially at lower resolutions, but at ultra high resolutions it delivers only about 80% of the performance of its older brother.



Mid-Range Battle: Radeon HD 3850 vs. GeForce 8600 GTS

Until NVIDIA actually releases an 8800 GT 256MB at $179, the closest competitor to the Radeon HD 3850 is actually the GeForce 8600 GTS. We don't need to show you too many numbers for you to understand the magnitude of this massacre:

The Radeon HD 3850 is dramatically faster than the 8600 GTS, at a very competitive price.



Out with the Old, in with the Mid-Range

We did this comparison in our 8800 GT review and decided to port the numbers over here too. To make the comparison a little more dramatic, we're pitting the lowly Radeon HD 3850 against some of the previous kings of the hill: the GeForce 7950 GT and the Radeon X1950 XTX.

At only $179, the Radeon HD 3850 manages to outperform both cards. You might view Bioshock as an exception, but keep in mind that the X1950 XTX is running the DX9 codepath while the Radeon HD 3850 is running a more GPU-intensive DX10 path; the new midrange card still wins.



Multi-GPU Scaling: Two 3850s = One 8800 GTX?

AMD only sent us a pair of Radeon HD 3850s for this review (believe it or not, we had to beg to get a single 3870), so our only CrossFire numbers come from this setup. That being said, the performance is quite respectable:

Believe it or not, a pair of these $179 Radeon HD 3850s actually gives you the same performance as a single GeForce 8800 GTX.

 Multi-GPU Scaling (2560 x 1600) Radeon HD 3850 CF GeForce 8800 GT SLI
Oblivion 1.7x 1.87x
Unreal Tournament 3 1.48x 1.66x


Scaling looks pretty good for the Radeon HD 3850; however, it's still not as good as what NVIDIA is able to achieve with the 8800 GT. NVIDIA consistently achieves about 11% better scaling from one to two GPUs than AMD.
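For clarity, the scaling factors in the table are simply the two-card frame rate divided by the single-card frame rate. A minimal sketch (the frame rates here are hypothetical, chosen only to reproduce the Oblivion CrossFire figure):

```python
def scaling_factor(fps_single, fps_dual):
    """Multi-GPU scaling: dual-card frame rate over single-card frame rate.
    2.0x would be perfect scaling; below 1.0x means the second GPU hurts."""
    return fps_dual / fps_single

# Hypothetical example: a single card averaging 30 fps, a pair at 51 fps:
print(scaling_factor(30.0, 51.0))  # -> 1.7, matching the Oblivion CF entry
```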

The other problem with CrossFire is that it simply doesn't always work, so a pair of 3850s is not necessarily a better option than a single 8800 GT or GTX. Case in point: the two other games we wanted to include here, Quake Wars and Call of Duty 4, both gave us lower frame rates with CF enabled than without. AMD's release notes for the Radeon HD 3800 drivers inform us that some applications may show a performance decrease with CF enabled, so we're not too surprised.

While it'd be nice to be able to purchase two cheap cards and get better performance than the best out there, there are simply too many caveats to really embrace the idea.



Power Consumption

The Radeon HD 3800 series' power consumption is dramatically improved over its predecessor's, for obvious reasons. Despite being built on a smaller manufacturing process, the Radeon HD 3870 consumes slightly more power than the GeForce 8800 GT under load. At idle, however, the 3800's smaller manufacturing process and new PowerPlay features give it the definite advantage. HTPC users will really appreciate that aspect of the Radeon HD 3800 series' power consumption.

Idle Power

Load Power

Moving to load power use, the difference between the new NVIDIA and AMD offerings is negligible. When you factor in that the 8800 GT is faster, the Radeon HD 3870 actually has worse performance-per-watt than the competition. Under normal usage (e.g. not in a super hot case), both the 3870 and 3850 ran very quietly. We didn't have time to test how the coolers behave under high heat conditions.



Final Words

Without a doubt, AMD is back in the graphics game. When the Radeon HD 2900 XT launched, we couldn't have been more surprised at how poorly the product did. The lack of competition allowed NVIDIA to sit back and relax as the orders for more 8800-based product kept on flowing in. While the Radeon HD 3870 isn't faster than the GeForce 8800 GT, if AMD can hit its price point, it is a viable alternative if you're looking to save money.

AMD is in a lot of trouble, however, if the 8800 GT pricing/availability problem does get worked out; the 8800 GT offers better performance-per-watt and better performance in general, so at the same price the decision is clear. Luckily for AMD, the two don't appear to be selling at the same price.

The Radeon HD 3850 is a bit slower than its more expensive sibling and as such ends up being tremendous competition for current mid-range cards like the GeForce 8600 GTS or Radeon HD 2600 XT. We only compared it to the 8600 GTS in this review, but the 3850 similarly obsoletes the 2600 XT.

Both cards from AMD are quite competitive today, but the balance of competition could easily shift depending on pricing and availability of either these cards or their competition. If AMD can't deliver on the prices it is so adamant about meeting, it loses serious cool points. Similarly, if NVIDIA can get enough 8800 GTs in the market, or if the 256MB version actually hits at $179 - $199, AMD would be in a lot of trouble.

Today the Radeon 3870 seems like a nice, albeit slower, alternative to the 8800 GT. But it's difficult to make a thorough recommendation without knowing how the 256MB 8800 GT will stack up and where it'll be priced. Given how the 8800 GTs sold out, if you're truly interested in the 3870 pick one up now, but if you're like us and want to carefully weigh all options - wait a couple of weeks and see what happens with the 8800 GT 256MB.

There is one more point to discuss: what happens to the high end GPU market? AMD is talking about sticking two 3800 GPUs on a single card, and NVIDIA has been very quiet about its next-generation high end GPU plans, but with games like Crysis and Gears of War out on the PC, it'd be nice to see advances in peak performance as well as affordable performance. What we do like about these new affordable GPUs is that they finally leave us feeling that you're getting something for your money, whereas mid-range GPUs of recent history seemed to just give you mediocre performance while lightening your wallet a lot more than they should.

While this may seem like a blip in an otherwise very profit-centric product lineup, we'd love to see similar performance revolutions at other price points in the graphics market. Give us a $100 graphics card that's actually worth something, and maybe we'll end up seeing a resurgence in PC gaming after all.
