Original Link: http://www.anandtech.com/show/2453
Bringing Competition to Midrange: The GeForce 9600 GT Raises NVIDIA's Sub-$200 Bar
by Derek Wilson on February 21, 2008 9:00 AM EST
As the G9x series of GPUs slowly trickles into the mainstream, we are very happy to report that NVIDIA has executed a solid post-8800 GT launch: the G94 is very competitive at its price point in the form of the GeForce 9600 GT. A major complaint we had with previous midrange launches was that the current generation couldn't outpace the previous one. Hopefully NVIDIA and AMD will be able to keep up the competition for all the new introductions we see this year.
The Radeon HD 3850 has been doing fairly well, and we are glad that, for a change, AMD has been able to put the pressure on NVIDIA. The 8800 GT has done a good job above $200, but now we'll be taking a look at what happens when the technology creeps below the $200 threshold, making it far more attractive to the average gamer.
The GeForce 9600 GT, in addition to finally encroaching on ATI's naming scheme, is fabbed on a 65nm process by TSMC and sports a 256-bit memory bus. The differences between G9x and G8x are small, but even so, details are light. NVIDIA's compression technology has evolved to provide higher effective bandwidth between the GPU and framebuffer. We would love to provide more details on this and the other changes, but NVIDIA is still being a bit tight-lipped.
The only other major difference is in PureVideo. The G92 and the G94 both support new PureVideo features that should enable a better, more flexible experience when video players roll out software support for these additions. The changes include performance improvements in some situations, as well as potential quality improvements in others. We have yet to test out these changes as none of the players currently support them, but we will certainly talk a little bit about what to expect.
Here's a look at exactly what we get under the hood of a stock GeForce 9600 GT as compared to the rest of the NVIDIA lineup.
|Form Factor|8800 GTS 512|8800 GT 256MB|8800 GT|9600 GT|8600 GTS|
|---|---|---|---|---|---|
|Texture Address / Filtering|64 / 64|56 / 56|56 / 56|32 / 32|16 / 16|
|Memory Clock|1.94GHz|1.4GHz - 1.6GHz|1.8GHz|1.8GHz| |
|Memory Bus Width|256-bit|256-bit|256-bit|256-bit|128-bit|
|Manufacturing Process|TSMC 65nm|TSMC 65nm|TSMC 65nm|TSMC 65nm|TSMC 80nm|
|Price Point|$279 - $349|$199 - $219|$209 - $279|$169 - $189|$140 - $199|
PureVideo HD Enhancements
NVIDIA introduced two new PureVideo HD features with the 9600 GT that will also be enabled on G92-based GPUs (the GeForce 8800 GT and 8800 GTS 512): Dynamic Contrast Enhancement and Automatic Green, Blue and Skin Tone Enhancements.
Dynamic Contrast Enhancement simply takes the contrast histogram of a scene, on a frame-by-frame basis, and stretches it out - resulting in artificially increased contrast. NVIDIA indicated that Dynamic Contrast Enhancement is most useful in scenes that already have relatively high contrast, as it is specifically programmed to ignore certain low-contrast scenes to avoid completely corrupting the intent of a frame.
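NVIDIA hasn't published the actual algorithm, but the description above maps onto a classic per-frame histogram stretch. Here's a minimal sketch of that idea; the percentile bounds and the low-contrast skip threshold are our own illustrative guesses, not NVIDIA's values:

```python
import numpy as np

def dynamic_contrast(frame, low_pct=1.0, high_pct=99.0, min_range=64):
    """Per-frame contrast stretch, loosely modeling the description above.

    `frame` is an 8-bit luma plane (H x W, uint8). If the frame's usable
    range is already narrow (a deliberately flat, low-contrast scene),
    we leave it untouched rather than corrupt the frame's intent.
    """
    lo, hi = np.percentile(frame, [low_pct, high_pct])
    if hi - lo < min_range:  # low-contrast scene: skip, per NVIDIA's note
        return frame
    # Remap [lo, hi] onto the full [0, 255] range and clip the outliers
    stretched = (frame.astype(np.float32) - lo) * 255.0 / (hi - lo)
    return np.clip(stretched, 0, 255).astype(np.uint8)
```

A real implementation would run on the GPU's video engine and likely smooth the stretch parameters across frames to avoid flicker, but the principle is the same.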
Automatic Green, Blue and Skin Tone Enhancements is a longer way of saying automatic color saturation adjustment. When enabled, this feature looks at the midtones of most colors and boosts their values so that these colors appear brighter and more vibrant. The higher a color's starting value, the smaller the boost it receives - in other words, this isn't a linear function. Because the curve tapers off at the top, you don't end up crushing the colors; you simply get more vibrant, brighter colors overall. Like the Dynamic Contrast Enhancement feature, the Green, Blue and Skin Tone Enhancements are evaluated on a frame-by-frame basis.
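To make the "tapering" behavior concrete, here's one simple curve with that shape - boost proportional to how far a value is from full saturation. The 0.3 gain and the exact formula are our own illustration; NVIDIA's actual curve is not public:

```python
def saturate(s, gain=0.3):
    """Non-linear saturation boost with a taper.

    `s` is a saturation value in [0, 1]. Low and midtone values get the
    largest relative lift; the boost shrinks to zero as s approaches 1,
    so fully saturated colors are never pushed past the limit (no
    clipping/crushing).
    """
    return s * (1.0 + gain * (1.0 - s))
```

For example, a midtone at 0.3 gets a 21% lift while a nearly saturated 0.8 gets only 6%, which is exactly the "higher starting value, smaller boost" relationship described above.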
Video purists will hate these features, as they don't accurately reproduce the image that was originally recorded; instead, you're getting the Best Buy-ification of your computer monitor: oversaturated colors and overboosted contrast galore. However, it turns out that most users prefer oversaturated colors and overboosted contrast, which is why most TV makers ship their sets far from calibrated. Most PC monitors lack the configuration options needed to achieve the same effect as an improperly, but appealingly, calibrated TV. NVIDIA hopes that its PureVideo HD Enhancements will bridge the gap between how things look on your PC monitor and how they look on your TV.
If you spend a lot of time properly calibrating your TV, chances are you won't want to use these features. Thankfully they can be disabled. However, if you do like similar functions on your TV, then you may just be pleased by what the 9600 GT has to offer.
The Card and The Test
Palit provided us with a rather amazing little 9600 GT, but we received it a little later than our EVGA parts and we didn't have two of the Palit cards to test SLI either. We've done our testing here with the parts EVGA sent us, but there are some very interesting 9600 GT parts coming out.
The Palit part absolutely deserves a mention, and we will be testing it out as soon as we get a chance. Among the notable features is the fact that Palit has provided not only two dual-link DVI ports, but interfaces for both HDMI and DisplayPort. There is also an optical S/PDIF input on the back, enabling audio to be sent over HDMI.
But let's get back to the hardware at hand. NVIDIA reports that the GeForce 9600 GT will draw about 95W in real world apps. This means it does require a PCIe power connector to provide the added juice over the 75W available from the motherboard via the PCIe x16 slot.
The reference design used by EVGA is single slot and makes use of a fan shroud that covers the entire front face of the card. The Palit card we received is a two slot solution, but the main reason for this seems to be the inclusion of all the added I/O. The EVGA part isn't very loud, but we will be interested in comparing the noise levels between the reference design and Palit's larger solution to see if there is any real advantage from going with the wider model.
Our Test Setup
All of today's tests were performed on the 64-bit version of Windows Vista running on a monster of a system. We test all of our graphics cards on high end hardware in order to eliminate the bottlenecks associated with anything but graphics. This means it isn't likely that our numbers will reflect what our readers will see when actually playing a game, but what it does show is which video card is actually capable of providing a better experience when graphics processing is the bottleneck.
Isolating the graphics subsystem is important for a few reasons. We can't know what is in your system and we haven't (yet) been able to test every graphics card with every CPU and system memory configuration out there. If you run a system slower than our test bed, you may run into CPU or system memory limited situations, in which case average frame rate won't be governed only by the GPU.
So why is graphics card selection still important? Because the graphics card has the most impact on graphics quality and performance of any single component in the system, and because even in highly CPU limited situations we can still see slow frames come along and throw the GPU a curveball. Having a more powerful GPU in your system will provide a smoother experience even in CPU limited situations that show less difference between two competing solutions. In a system-limited case, dropping in a higher performance GPU will also enable you to turn on more features. We can't decide what eye candy is "better", as every gamer is different and will make their own trade-offs. And for new GPU launches, we don't have the time to benchmark every permutation of every setting in every game we test.
The bottom line is that better performance from a GPU in a high end system will translate to more flexibility with options and smoother performance in a lower end system.
Here is our test configuration:
|CPU|2x Intel Core 2 Extreme QX9775 @ 3.20GHz|
|---|---|
|Motherboard|Intel D5400XS (Skulltrail)|
|Video Cards|ATI Radeon HD 3870, ATI Radeon HD 3850 256MB, ATI Radeon X1950 XTX, NVIDIA GeForce 8800 GT 256MB, NVIDIA GeForce 9600 GT 512MB, NVIDIA GeForce 8600 GTS|
|Video Drivers|Catalyst 8.2, ForceWare 174.12 (9600 GT only)|
|Hard Drive|Seagate 7200.9 120GB 8MB 7200RPM|
|RAM|2x Micron 2GB FB-DIMM DDR2-8800|
|Operating System|Windows Vista Ultimate 64-bit|
We did run some power tests, but keep in mind that they will be a little high due to the fact that this is, after all, Skulltrail we are running.
AMD's hardware shines at idle power, with CrossFire even coming in below the 9600 GT. NVIDIA absolutely remains competitive in terms of power consumption under load, which is good to see.
Crysis Performance
Settings: All medium quality settings.
Crysis is the go-to game for performance hungry graphics testing today. With beautiful graphics around every corner, even on lower quality settings, this is definitely a game we need to consider when looking at hardware. Unfortunately, the class of card exemplified by the GeForce 9600 GT (in spite of the fact that it offers good performance for the money) is unable to handle more than medium quality settings at 1600x1200 without performance hiccups.
For this test, we recorded our own demo using the record and demo console commands. Each test was run three times, and we took the highest score of the three (usually the second and third runs were the same or very nearly so). Our recorded demo consisted of a 20 second run through the woods in the level "rescue" and we verified the performance of our timedemo using FRAPS. The run was near the beginning of the level and we stayed clear of enemies in order to reduce the impact of AI on our graphics benchmark.
For the GeForce 9600 GT and the Radeon HD 3850, the timedemo and FRAPS results were within 0.5 fps of each other. This is less than our standard 3% margin of error under nominal conditions. The fact that we saw performance this similar between our timedemo and FRAPS is a good indication that the Crysis demo playback feature is fairly indicative of graphics hardware performance in this particular situation. Keep in mind that our numbers will be higher than what readers see in gameplay situations, as physics, AI, and other overhead will come into play.
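The agreement check above is simple enough to spell out. A small sketch of the comparison we apply (relative difference against a 3% margin; the helper name is ours, not part of any benchmarking tool):

```python
def within_margin(a_fps, b_fps, margin=0.03):
    """True if two frame-rate measurements agree within a relative
    margin of error (3% by default), measured against the higher of
    the two readings."""
    return abs(a_fps - b_fps) <= margin * max(a_fps, b_fps)
```

A 0.5 fps gap on a pair of ~40 fps runs passes easily, while a 5 fps gap at the same level would not, which is why we treat the timedemo and FRAPS numbers as equivalent here.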
The GeForce 9600 GT comes out swinging with performance between AMD's Radeon HD 3850 and 3870. In addition, it leads the performance of the $200 version of the 8800 GT (the 256MB version). SLI and CrossFire scaling looks to taper off at lower resolutions, so it is likely that we could enable some higher detail settings in those cases without incurring a huge performance hit.
The Elder Scrolls IV: Oblivion Performance
Version: 1.2.0416 Shivering Isles
Settings: Ultra High Quality settings defaults with vsync disabled. No AA or AF.
Our Oblivion test takes place in the south of the Shivering Isles, running through woods over rolling hills toward a lake. This is a straight line run that lasts around 20 seconds and uses FRAPS to record framerate. This benchmark is very repeatable, but the first run is the most consistent between cards, so we only run the benchmark once through and take that number.
The only thing we change from the default Ultra High Quality settings is that we disable vsync and change the resolution. With higher performance cards, we might want to look into some of the user mods out there or hand tuning some of the quality values in the oblivion.ini file in order to push systems, but this generation of midrange cards seems to be able to handle Ultra High fairly well at the resolution games will likely run on these cards. Of course, we disable sound and don't run into any enemies in this test, so actual game play experience will likely be a bit lower. But cards that perform better in our test will be able to handle more under those conditions as well.
The AMD Radeon HD 3850 does much better with respect to the GeForce 9600 GT in this test than in others. The parts are still in the same class, but AMD has come out on top in this benchmark. Interestingly, this is also one of the only benchmarks where the 256MB framebuffer on the 8800 GT doesn't totally trash its performance, and thus it also performs much better than the 9600 GT.
Enemy Territory: Quake Wars Performance
Settings: Everything maxed out without AA. Soft particles enabled on DX10 class hardware.
For this benchmark, we created a new timedemo based on multiplayer action in the island level, as our old timedemo no longer works after the 1.4 update. This timedemo is about 10,000 frames long and covers a lot of ground, so many aspects of gameplay are incorporated. We run it with the timenetdemo command and take the output. This is our only OpenGL benchmark.
The NVIDIA GeForce 9600 GT outpaces its competition in this benchmark. OpenGL has long been a strong suit for NVIDIA, and while AMD has made some gains in this area with its current generation of cards, it isn't enough this time around: the NVIDIA hardware clearly leads. Of course, it's also hard to miss the abysmal performance of the 8800 GT 256MB card here. Clearly, in a situation where memory size is not an issue it can shine (1280x1024), but performance drops to half or less that of the 9600 GT when pushed higher.
Either NVIDIA really needs to start optimizing for lower memory situations, or it should just not make parts like this. Hopefully we won't see 256MB 9600 GT parts with similar characteristics.
S.T.A.L.K.E.R. Performance
Settings: full dynamic lighting, everything maxed without AA, and no grass shadows.
With the graphics setting turned as far up as we could get them, video memory does seem to be a very important factor in performance. Our 256MB parts simply tanked this benchmark. Getting playability out of this game involves turning down the lighting distance at least (as it doesn't have a huge visual impact) and possibly turning off or down some of the shadow settings.
For this test, we walk in a straight line for about 30 seconds and use FRAPS to measure performance. We use the same save game every time and the path doesn't change. Our performance measurements are very consistent between runs. We do two runs and take the second.
The 256MB Radeon HD 3850 and GeForce 8800 GT clearly suffer from lack of memory in this case. Of course, the 9600 GT actually outperforms the 3870, so we know it isn't all about the framebuffer, but we would absolutely expect the 512MB 8800 GT to outperform the 9600 GT here as well. CrossFire doesn't seem to help out ATI much here, but SLI provides as close to linear scaling as is possible, which is a nice thing if S.T.A.L.K.E.R. is your game of choice (and with Clear Sky on the horizon).
World in Conflict Performance
Settings: Medium quality plus Heat Haze, Debris Physics, and DX10 (where available)
This game, like Crysis, is a resource hog, and only incredibly powerful systems can run this game with all the settings cranked all the way up. The sub $200 market is not going to tackle this game with high settings, but in our play tests medium seemed a little too low on the detail (or left more performance head room, which ever perspective you prefer). We added a few features to the list set by the medium quality defaults and enabled DX10 for cards that supported it. It is important to note that the X1950 XTX doesn't run with DX10 here so its performance is more of a reference for previous generation cards.
We tested this game using the built in benchmark feature of the game. In our experience this does a good job of testing the different graphical scenarios that can be encountered in the game.
The 3850 would not make it through a benchmark run above 1280x1024. We would always get a hard lock and need to power down the system in order to deal with it. This also happened a couple times with the 8800 GT, but not at all with the 3850s in CrossFire.
The 9600 GT more than doubles the performance of the 8600 GTS here, and also leads the 256MB 8800 GT. No matter how it's sliced, the GeForce 9600 GT is the best thing we tested at this price point under World in Conflict. The Radeon HD 3870 does match its performance (and pulls ahead at high resolution), and (in addition to S.T.A.L.K.E.R.) we would like to follow up and see how the 512MB versions of the 8800 GT and the 3850 perform.
Final Words
Thankfully, those who spend less than $200 on their new graphics hardware will finally have a reason to upgrade. AMD's introduction of the Radeon HD 3850 handled that nicely on their end a few months back, and NVIDIA has now followed suit with a part that brings competition back to another market segment - something we haven't had a good amount of for a very long time, and we are certainly thankful for its return.
We have heard murmurs that AMD will be lowering prices on their HD 3000 series, but we don't have any firm details as of yet. If this is the case, then we may see stronger competition at the lower end of the spectrum when looking at the high end Radeon HD 3850 parts. We will be doing a follow up next week looking at the 512MB versions of the 3850 and the GeForce 8800 GT in order to answer some questions we have following these tests.
From what we have seen, price, clock speed, memory size, and features are going to be the selling points here rather than which company designed the GPU. Of course, if AMD does drop its price, it could very likely have a winner on its hands, especially if we find out that the 512MB part helps smooth over some of the rough spots we've seen with the 3850 so far. Before we can wrap this up with a neat little bow, we simply have to answer a couple more questions and wait and see what happens with price.
Rather than seeing the fact that we need more info as a bad thing, we are very grateful that we have this problem: the competition is hot enough to push both NVIDIA and AMD to do all they can to provide the best value for the consumer. And the real winner in that situation is everyone in the market for a graphics card under $200.
Update: AMD has cut prices on its Radeon HD 3800 series; to see how this changes things, take a look at our price-performance comparison here.