Introduction

For three months now, NVIDIA's 8800 series has enjoyed the distinction of being the only DX10 graphics hardware on the market, and the GTX remains the fastest option available, letting gamers run at very high resolutions and frame rates with all the eye candy enabled. The downside is that those features and that performance come at a price: the top of the line runs at least $550, and even the 8800 GTS weighs in at about $400.

While we would love to have a top-to-bottom lineup from NVIDIA based on their new architecture, we will have to be content with a gradual introduction of parts. It does make sense to introduce the high-end parts first: keeping high-profit-margin cards on the market for as long as possible helps recoup development expenses. Also, lower-performing chips can be binned and saved for later use in lower end parts. When the rest of the lineup is eventually introduced, the combination of lower-performing G80 silicon with chips designed specifically for cheaper products will provide high enough volumes to meet the increased demand the market places on less expensive hardware.

Today, NVIDIA is introducing the next part in its GeForce 8 Series lineup, the GeForce 8800 GTS 320MB. As the name implies, this is a lower-memory part, and thus less expensive than the current 640MB GTS. NVIDIA expects the new GTS to sell for between $300 and $330. We certainly hope the $300 mark sticks, and we will try to track pricing as cards start to show up for sale. The $300 price point is particularly interesting, as more than just hardcore gamers will start to take a look at the new 8800 GTS 320MB as a good fit for their rigs.

There are other very important factors at play here as well. The first half of this year should be very exciting in terms of the competition NVIDIA will have to face. While we don't know any of the specifics of AMD's next part, we are very excited to see what AMD has in store to compete with NVIDIA in the first round of DX10 class hardware. In the meantime, NVIDIA will certainly want to ship as many 8 Series parts as possible before it has a true competitor with a matching feature set on the market.

The games scheduled to come out over the next few months look quite impressive as well, which should inspire more people to upgrade their hardware for that must-have title. Among the most anticipated software headed our way are Crysis and Unreal Tournament 3. Both of these games come from developers who have produced groundbreaking titles in the past, and the screenshots and videos on the web have us drooling. It is almost certain that, in order to experience the incredible graphics that go along with the (hopefully) amazing gameplay, graphics hardware will need to pack a punch.

While we can't test the next generation of games yet, we are very interested in how the new GeForce 8800 GTS 320MB stacks up against the competition in currently available games. First, we'll take a look at the hardware and see just how much cutting the memory down on the new GTS affects performance.

Comments

  • tacoburrito - Monday, February 12, 2007 - link

    With all the eye candy turned on, the 320mb card seems to be only on par with the previous gen 79xx cards, but costs almost twice as much. I'd much rather cough up the extra $200 and get the full GTS version.
  • DerekWilson - Monday, February 12, 2007 - link

    Actually, the 320MB card blows away the 7 series in our tests. Why would you say that it's only on par? At 16x12, the 8800 GTS 320MB is 60% faster, and the difference in performance only gets larger from there.
  • tacoburrito - Monday, February 12, 2007 - link

    With the exception of Half Life 2, at 4x AA, wouldn't you say that the 8800 GTS 320 is only marginally better than the 7950 GT, but costs twice as much?
  • tacoburrito - Monday, February 12, 2007 - link

    Whoops, I meant to say 7900 GTX
  • DerekWilson - Monday, February 12, 2007 - link

    From the context of the thread, I assumed you were talking about Oblivion.

    Without AA, the 8800 320MB is much better than the 7900 GTX. With AA, there is an argument to be made, but the price of the 7900 GTX (as Jarred pointed out) is higher.

  • JarredWalton - Monday, February 12, 2007 - link

    I'd be very curious to find out where you're seeing 7900 GTX cards for "half the price". I don't see any in stock when taking a quick look at major resellers, and our Pricing Engine (http://labs.anandtech.com/products.php?sfilter=462) confirms that. I'm pretty sure the 7900 GTX is discontinued now, and prices never got below $400.
  • Wwhat - Monday, February 12, 2007 - link

    It still remains to be seen how DX10 games (or future OpenGL games that use geometry shaders?) run on the various incarnations of the new cards; you should have put that in the conclusion as a caveat. It's not just textures anymore, you know.

    I don't think there's anything at all currently that uses geometry shaders. You wonder why some developer doesn't throw together a quick test utility; billions of people on the planet and nobody can make that little effort? Geez.
    Surely someone at Crytek or Id or something can write a small looping thing with a framecounter? Anand should send out some mails, get someone on his feet.
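
    (For reference: the "small looping thing with a framecounter" described above really just amounts to counting completed frames against a wall clock. Below is a minimal C++ sketch of that loop; fake_render_frame is a placeholder of our own standing in for an actual geometry shader draw call, so this only illustrates the frame-counting part, not a real DX10 workload.)

        #include <chrono>
        #include <cmath>
        #include <cstdio>

        // Placeholder for one frame's worth of rendering work. In a real test
        // this would be a draw call exercising the geometry shader path.
        static double fake_render_frame(int frame)
        {
            double acc = 0.0;
            for (int i = 0; i < 100000; ++i)
                acc += std::sin(frame * 0.001 + i * 0.0001);
            return acc;
        }

        int main()
        {
            auto window_start = std::chrono::steady_clock::now();
            int frames_in_window = 0;
            volatile double sink = 0.0;  // keeps the fake work from being optimized away

            for (int frame = 0; frame < 100000; ++frame) {
                sink += fake_render_frame(frame);
                ++frames_in_window;

                auto now = std::chrono::steady_clock::now();
                double elapsed = std::chrono::duration<double>(now - window_start).count();
                if (elapsed >= 1.0) {
                    // Report frames completed over the last ~1 second window.
                    std::printf("%.1f fps\n", frames_in_window / elapsed);
                    frames_in_window = 0;
                    window_start = now;
                }
            }
            return 0;
        }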

  • DerekWilson - Monday, February 12, 2007 - link

    There are some dx10 sample apps that make use of geometry shaders ... I've been working on testing these, but it is more difficult than it may seem as FRAPS has trouble with DX10 apps.

    You do have a point though -- DX10 performance will be important. The problem is that we can't really make a recommendation based on DX10 performance.

    The 8 series parts do have more value than the 7 series and x1k series parts in that they support DX10. But this is as far as we can take it. Performance in the games we have does matter, and it is much more prudent to make a purchase only based on the information we know.

    Sure, if the cost and performance of an 8 series part is the same or very near some DX9 class hardware, the features and DX10 support are there to recommend it over the competition. But it's hard to really use this information in any other capacity without knowing how good their DX10 support really is.
  • Awax - Monday, February 12, 2007 - link

    The main point for me is the low impact of memory size on modern games.

    On previous generation games, like Quake 4, developers had to use a lot of high-resolution textures/bump maps/lookup maps to achieve advanced effects with the limited raw performance and flexibility of the cards available.

    With DX9, and even more so with DX10, the new way is to _CALCULATE_ things completely instead of approximating them with tricks that use intermediate results or precomputed lookup tables stored in textures.
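
    (To make that tradeoff concrete, here is a small CPU-side C++ sketch; the gamma-style curve is just an arbitrary example function chosen for illustration, not anything a particular game uses. The older approach bakes values into a table up front, the way a game would bake them into a texture, and fetches from it at run time; the newer approach evaluates the function directly per sample, trading memory footprint and bandwidth for ALU work.)

        #include <cmath>
        #include <cstdio>
        #include <vector>

        // Lookup-table approach: precompute the function once (on the GPU this
        // table would be stored in a texture) and read it back per sample,
        // spending memory and bandwidth instead of ALU cycles.
        std::vector<float> build_lookup_table(int entries)
        {
            std::vector<float> table(entries);
            for (int i = 0; i < entries; ++i) {
                float x = static_cast<float>(i) / (entries - 1);  // x in [0, 1]
                table[i] = std::pow(x, 2.2f);                     // example gamma-style curve
            }
            return table;
        }

        float from_table(const std::vector<float>& table, float x)
        {
            int idx = static_cast<int>(x * (table.size() - 1));   // nearest-entry fetch
            return table[idx];
        }

        // Direct-computation approach: evaluate the same function per sample,
        // spending ALU work instead of memory.
        float computed(float x)
        {
            return std::pow(x, 2.2f);
        }

        int main()
        {
            std::vector<float> table = build_lookup_table(256);
            for (float x = 0.0f; x <= 1.0f; x += 0.25f)
                std::printf("x=%.2f  table=%.4f  computed=%.4f\n",
                            x, from_table(table, x), computed(x));
            return 0;
        }
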
  • DerekWilson - Monday, February 12, 2007 - link

    But new ways to calculate things will also benefit from having huge amounts of data to calculate things from.

    It's really hard to speculate on the direction DX10 games will take at this point. Certainly we will see more use of programmable features and a heavier impact on processing power. But memory usage will also increase. We'll just have to wait and see what happens.
