Original Link: http://www.anandtech.com/show/2261
More Mainstream DX10: AMD's 2400 and 2600 Series
by Derek Wilson on June 28, 2007 8:35 AM EST
We've known about the basic architecture of AMD's lower end DX10 hardware ever since mid May, but retail product hasn't made its way out the door until now. Finally launching today, and available within the next two weeks (says AMD), the Radeon HD 2400 XT and Pro and the Radeon HD 2600 XT and Pro will serve to bring competition to the $50 - $150 DX10 graphics card market. These are the cards that most people will actually end up purchasing, so both AMD and NVIDIA would like to come out on top in this market.
But even before we begin, we have to go back to the 8800 GTS 320 and talk about what a terrific value it is for people who want great performance and don't need ultra high resolutions with AA cranked up. If $300 is in the budget for graphics, this is the way to spend it. We would really love to offer more flexibility in our recommendation, but both NVIDIA and AMD have seen fit to leave a huge gap in performance between their lowest high end and highest low end parts. We saw this with the 8600 GTS falling way short of the 8800 series, and we will see it again with the HD 2600 XT not even getting close to the 2900 XT.
AMD's price gap will be even larger than NVIDIA's, leaving a hole between $150 and $400 with nothing to fill it. This seems quite excessive with no other product lines hinted at until we see a refresh down the line. When the 8600 series launched, we were quite disappointed with the performance of the part and hoped that AMD would step up to the plate and offer a real challenger that could fill the needs of midrange graphics hardware buyers everywhere. Now we are left with a sense of desolation and a feeling that neither AMD nor NVIDIA knows how to properly target the $200 - $300 price range. We would go so far as to say that neither camp offers top-to-bottom DX10, but something more along the lines of top and bottom end solutions.
But regardless of what is lacking in their lineup, the new Radeon HD cards are aimed at filling a specific need. We will talk about what they bring to the table and how they manage to do the job AMD has designed them to perform. First up is a brief look back at what's actually inside these GPUs.
UPDATE: In going back to add power tests, we discovered that the GeForce 8600 GTS we used had a slight overclock over the stock version. We have gone back and rerun our tests with the GeForce 8600 GTS at stock clock speeds, and our current graphs reflect the new data. The changes, generally on the order of 5%, did not have a significant impact on the overall outcome of the article. There are a couple of cases where the performance gap narrows, but the fact remains that the 8600 GTS is underpowered and the 2600 XT generally more so.
We do apologize for the initial testing error, and we will certainly do everything we can to avoid such problems in the future.
A Closer Look at RV610 and RV630
The RV6xx parts are similar to the R600 hardware we've already covered in detail. There are a few major differences between the two classes of hardware. First and foremost, the RV6xx GPUs include full video decode acceleration for MPEG-2, VC-1, and H.264 encoded content through AMD's UVD hardware. There was some confusion over this when R600 first launched, but AMD has since confirmed that UVD hardware is not at all present in their high end part.
We also have a difference in manufacturing process. R600 uses an 80nm TSMC process aimed at high speed transistors, while their RV610 and RV630 GPU based cards are fabbed on a 65nm TSMC process aimed at lower power consumption. The end result is that these GPUs will run much cooler and require much less power than their big brother the R600.
Transistor speed between these two processes ends up being similar in spite of the focus on power over performance at 65nm. RV610 is built with 180M transistors, while RV630 contains 390M. This is certainly down from the huge transistor count of R600, but nearly 400M is nothing to sneeze at.
Aside from the obvious differences in transistor count and the number of different units (shaders, texture units, etc.), the only other major difference is in memory bus width. All RV610 GPU based hardware will have a 64-bit memory bus, while RV630 based parts will feature a 128-bit connection to memory. Here's the layout of each GPU:
One of the first things that jumps out is that both RV6xx based designs feature only one render back end block. This part of the chip is responsible for alpha (transparency) and fog, dealing with final z/stencil buffer operations, sending MSAA samples back up to the shader to be resolved, and ultimately blending fragments and writing out final pixel color. Maximum pixel fill rate is limited by the number of render back ends.
In the case of both current RV6xx GPUs, we can only draw out a maximum of 4 pixels per clock (or we can do 8 z/stencil-only ops per clock). While we don't expect extreme resolutions to be run on these parts (at least not in games), we could run into issues with effects that make heavy use of MRTs (multiple render targets), z/stencil buffers, and antialiasing. With the move to DX10, we expect developers to make use of the additional MRTs they have available, and lower resolutions benefit from AA more than high resolutions as well. We would really like to see higher pixel draw power here. Our performance tests will reflect the fact that AA is not kind to AMD's new parts, because of the lack of hardware resolve as well as the use of only one render back end.
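To put the single render back end in perspective, a quick back-of-the-envelope calculation shows what these limits mean for peak pixel throughput. This is just a sketch using the pixels-per-clock and clock figures quoted in this article:

```python
def pixel_fill_rate(pixels_per_clock, core_clock_mhz):
    """Theoretical peak pixel fill rate in Gpixels/s.

    Fill rate is bounded by how many pixels the render back ends
    can write per clock, multiplied by the core clock.
    """
    return pixels_per_clock * core_clock_mhz * 1e6 / 1e9

# Both RV610 and RV630 have one render back end: 4 pixels per clock.
hd2600xt = pixel_fill_rate(4, 800)   # top of the 2600 XT clock range
hd2900xt = pixel_fill_rate(16, 740)  # R600 (2900 XT) for comparison

print(hd2600xt)  # 3.2 Gpixels/s
print(hd2900xt)  # 11.84 Gpixels/s
```

Even at its highest clock, the 2600 XT can write out less than a third of the pixels per second of the 2900 XT, which helps explain why AA and MRT-heavy workloads hit these parts so hard.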
Among the notable features here are tessellation, which could have an even larger impact on low end hardware by enabling detailed and realistic geometry, and CFAA filtering options. Unfortunately, we might not see much initial use made of the tessellation hardware, and with the reduced pixel draw and shading power of the RV6xx series, we are a little skeptical of the benefits of CFAA.
From here, let's move on and take a look at what we actually get in retail products.
Just a day before publication, we were called up and told of revised pricing for different RV6xx based solutions. Our request to have the information emailed to us was declined, as AMD only wanted this information discussed over the phone. While there is nothing wrong with that, we did find it a little odd and at least worth mentioning.
We were told that price would be broken down as follows:
AMD Radeon HD 2600 XT: $120 - $150
AMD Radeon HD 2600 Pro: $90 - $100
AMD Radeon HD 2400 XT: $75 - $85
AMD Radeon HD 2400 Pro: $50 - $55
This means we can expect high priced 2600 XT cards to be priced just below 8600 GTS parts (which are currently available at around $170 online), and will also compete with some overclocked 8600 GT hardware. The 2600 Pro will compete with the cheaper 8600 GT cards. The 2400 XT and Pro will compete with different flavors of the 8500 GT. While we didn't include 8500 GT tests in this article, we will be including the low end NVIDIA part in future reviews.
As for the cards themselves, here are some images of what we are testing today:
AMD Radeon HD 2600 XT
AMD Radeon HD 2600 Pro
AMD Radeon HD 2400 XT
AMD Radeon HD 2400 Pro
AMD R6xx Hardware

| |SPs|PPC|Core Clock|TMUs|DDR Rate|Bus Width|Memory Size|Price|
|---|---|---|---|---|---|---|---|---|
|HD 2900 XT|320|16|740MHz|16|825MHz|512-bit|512MB|$399|
|HD 2600|120|4|600 - 800MHz|8|400 - 1100MHz|128-bit|256MB|$90 - $150|
|HD 2400|40|4|525 - 700MHz|4|400 - 800MHz|64-bit|128MB / 256MB|$50 - $85|
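The bus width and DDR rate figures above translate directly into peak memory bandwidth, which is where the narrow 64-bit and 128-bit buses hurt most. A simple sketch, using the standard double-data-rate formula and the clocks from the table:

```python
def memory_bandwidth(ddr_clock_mhz, bus_width_bits):
    """Theoretical peak memory bandwidth in GB/s.

    DDR memory transfers data twice per clock, and the bus moves
    (bus_width / 8) bytes per transfer.
    """
    return ddr_clock_mhz * 1e6 * 2 * (bus_width_bits / 8) / 1e9

hd2900xt = memory_bandwidth(825, 512)   # 105.6 GB/s
hd2600xt = memory_bandwidth(1100, 128)  # 35.2 GB/s at the top DDR rate
hd2400xt = memory_bandwidth(800, 64)    # 12.8 GB/s at the top DDR rate

print(hd2900xt, hd2600xt, hd2400xt)
```

Even a maximally clocked 2600 XT has about a third of the 2900 XT's bandwidth, and the 2400 series has roughly an eighth, which goes a long way toward explaining the AA scaling we see later.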
The higher end cards will come with an HDMI converter that includes sound, but AMD has given board partners the ability to choose whether or not to include this with lower end parts (even though all the boards will support the feature).
The Test and Power
We will only be looking at DX9 performance under Windows XP today. This is still the platform of choice for gamers, and thus very important to examine. That doesn't mean we are ignoring DX10: we have a follow-up article on DX10 performance coming down the pipe next week. There we'll take a look at how these cards stack up in the currently available DX10 games and demos.
We are also planning to look at UVD vs. PureVideo in a follow up article. Video decode is an important feature of these cards and we are interested in seeing how NVIDIA and AMD hardware stacks up against each other. Please stay tuned for this article as well.
For this series of tests, we used the following setup:
Performance Test Configuration:

|Component|Configuration|
|---|---|
|CPU|Intel Core 2 Extreme X6800 (2.93GHz/4MB)|
|Chipset Drivers|Intel 184.108.40.2064|
|Hard Disk|Seagate 7200.7 160GB SATA|
|Memory|Corsair XMS2 DDR2-800 4-4-4-12 (1GB x 2)|
|Video Drivers|ATI Catalyst 220.127.116.11-rc2, NVIDIA ForceWare 158.22|
|Desktop Resolution|1280 x 800 - 32-bit @ 60Hz|
|OS|Windows XP Professional SP2|
As for power, the 65nm AMD hardware shows rather unimpressive results. At idle, the 8600 GTS and 8600 GT draw less power than the 2600 XT and 2600 Pro respectively. Under load, the AMD parts become more competitive, but not even 65nm can push the 2600 XT below the 8600 GTS in terms of power draw.
As for our game tests, first we'll take a look at how only the new AMD HD series parts stack up against NVIDIA's 8 series competitors. Following that we'll break down the tests by game and show performance versus previous and current generation hardware.
Up Close and Personal: 8600 vs. 2600/2400
These graphs really do a terrific job of speaking for themselves. The only test we performed where the AMD Radeon HD 2000 series could honestly keep up with its competitors from NVIDIA was Rainbow Six: Vegas. The 2600 XT did give a good showing against the 8600 GTS under Oblivion and Prey as well, but that's about as much as we can say there.
The numbers were so disappointing that we actually went back and retested everything from the beginning a second time to make sure we didn't have something wrong. Especially striking is how poorly the new cards perform under Battlefield 2, both with and without 4xAA enabled.
Battlefield 2 Performance
The Elder Scrolls IV: Oblivion Performance
Rainbow Six: Vegas Performance
Supreme Commander Performance
We had no problems expressing our disappointment with NVIDIA over the lackluster performance of their 8600 series. After AMD's introduction of the 2900 XT, we held some hope that perhaps they would capitalize on the huge gap NVIDIA left between their sub $200 parts and the higher end hardware. Unfortunately, that has not happened.
In fact, AMD went the other way and released hardware that performs consistently worse than NVIDIA's competing offerings. The only game that shows AMD hardware leading NVIDIA is Rainbow Six: Vegas. Beyond that, our 4xAA tests show that the mainstream Radeon HD lineup, which already lags in performance, scales even worse than NVIDIA's. Not that we really expect most people with this level of hardware to enable 4xAA, but it's still a disappointment.
Usually it's easier to review hardware that is clearly better or worse than its competitors under the tests we ran, but this case is difficult. We want to paint an accurate picture here, but it has become nearly impossible to speak negatively enough about the AMD Radeon HD 2000 series without sounding comically absurd.
Even with day-before-launch price adjustments, there is just no question that, in the applications the majority of people will be running, AMD has created a series of products that are even more unimpressive than the already less than stellar 8600 lineup.
While we will certainly concede that video decode capability may be a saving grace in some applications, the majority of end users are not saving their money for a DX10 class video card in order to play movies on their PC. For those who really are interested in this, stay tuned for an article comparing UVD and PureVideo coming next week.
We also won't have data on the performance of these cards under DX10 until next week. Maybe DX10 could make a difference, but we still won't have the full picture. These first DX10 games are more like DX9 titles running on a different API. Of course, this is a valid way to use DX10, but we will probably see more intense and demanding uses of DX10 when developers start targeting the new features as a baseline.
All we can do at this point is lament the sad state of affordable next generation graphics cards and wait until someone at NVIDIA and AMD gets the memo that their customers would actually like to see better performance that at least consistently matches previous generation hardware. For now, midrange DX10 remains MIA.