Introduction

Incredibly high-priced and high-powered graphics cards are exciting. For geeks like us, learning about the newest and best hardware out there is like watching a street race between a Lambo and a Ferrari. We get to see what the highest achievable performance really offers when put to the test. Standing on the bleeding edge of technology and looking out from what is currently possible toward the next great advancement inspires us. The volume and depth of knowledge required to build a GPU humbles us, even as the finished product physically demonstrates the potential of the human race.

Unfortunately, this hardware, while attainable for a price, is often out of reach for most of its admirers. Sometimes it isn't owning the object of our desire that gives us joy, but knowing of its existence and relishing the fact that, some time in the not too distant future, that kind of performance will be available at a reasonable price as a mainstream staple.

While the performance of the 8800 GTX is still a ways off from making it to the mainstream, we finally have the feature set (and then some) of the GeForce 8 series in a very affordable package. We only have the fastest of the new parts from NVIDIA in hand today, but the announcement includes these new additions to the lineup:

GeForce 8600 GTS
GeForce 8600 GT
GeForce 8500 GT
GeForce 8400 (OEM only)
GeForce 8300 (OEM only)

The OEM-only parts will not be available as add-in cards; they will only be included in pre-built boxes from various system builders. While the 8600 GTS should be available immediately, there is a little lag time between now and when the 8600 GT and 8500 GT will be available (though we are assured both will be on shelves before May 1). NVIDIA has been very good about sticking to a hard launch strategy for quite a while now, but we were recently informed that this policy will be changing.

Rather than coordinate hardware availability with their announcements, NVIDIA will move to announcing a product with availability to follow. Our understanding is that availability will come within a couple of weeks of each announcement. There are reasons NVIDIA would prefer to do things this way, and they shared a few with us. It's difficult for all the various card manufacturers to meet the same schedule for availability, and giving them more time to get their hardware ready for shipment levels the playing field. It's also hard to keep information from leaking when parts are already moving into the channel for distribution. With some sites able to get their hands on this information without talking to NVIDIA (and thus without abiding by embargo dates on publication), taking measures to slow or stop leaks helps NVIDIA control the flow of information to the public and appease publishers who don't like getting scooped.

The bottom line, as we understand it, is that hard launches are difficult. It is our stance that anything worth doing right is worth the trouble it takes. NVIDIA stated that, as long as the information is accurate, there is no issue with delayed launches (or early announcements, depending on how you look at it). On the surface this is true, but the necessity of a hard launch has reared its ugly head time and time again. We wouldn't need hard launches if we had infinite trust in hardware makers. The most blatant example in recent memory is the X700 XT from ATI. This product was announced, tested, and reviewed (quite positively), but it never saw the light of day. This type of misinformation can lead people to put off upgrading while waiting for hardware that never arrives or, even worse, trick people into buying hardware that does not match the performance of the products we review.

Many people are confused by the fact that we still love hard launches even if only a handful of parts are available from a couple of retailers. Sure, high availability at launch is a nice pipe dream, but the real meat of a hard launch is in the simple fact that we know the hardware is available, we know the hardware has the specs the company says it will, and we know the street price of the product. Trust is terrific, but this is business. NVIDIA, AMD, Intel, and everyone else are fighting an information war. On top of that, the pace of our industry is incredible and can cause plans to change at the drop of a hat. Even if a company is completely trustworthy, no one can predict the future, and sometimes the plug needs to be pulled at the very last second.

In spite of all this, NVIDIA will do what they will do and we will continue to publish information on hardware as soon as we have it and are able. Just expect us to be very unforgiving when hardware specs don't match up exactly with what we are given to review.

For now, we have new hardware at hand, some available now and some not. While the basic architecture is the same as the 8800, there have been some tweaks and modifications. Before we get to performance testing, let's take a look at what we're working with.

Under the Hood of G84

60 Comments


  • shabby - Tuesday, April 17, 2007 - link

    3DMark Vista edition of course!
  • munky - Tuesday, April 17, 2007 - link

    Why did you not include the X1950 XT in the test lineup? It can also be had for about $200 now, like the 7950 GT. You didn't want to make the 8600 series look worse than they already do, or what?
  • DerekWilson - Tuesday, April 17, 2007 - link

    Wow, sorry for the omission -- I was trying to include specific comparison points -- 3 from AMD and 3 from NVIDIA, but this one just slipped through the cracks. Sorry. It will be included in my performance update.
  • Elwe - Tuesday, April 17, 2007 - link

    Now, now guys. True, these cards are not going to be what many of you want (there are some good reasons to stay with what you have considering the performance differential of several of the last-generation cards). And it is clear that these cards will not touch the 8800 cards (from what I can tell, the only thing these do better is 100% Pure HD on the card, which I guess is because these might be paired with not-so-great CPUs).

    But for some of us, they might work. I recently bought a Dell 390 workstation. I packed it with fast drives, a QX6700 CPU, and 4GB of RAM. There were very few BTO graphics choices, and most centered around the Pro market (this is a "workstation" after all). This is a new machine, and quite powerful! I want to work and play on this box. Because of the relatively weak power supply (rated at 375 watts or something like that) and because I need both available non-graphics PCIe slots (if you put in an 8800 GTS, even if you changed the power supply, this type of dual-slot card will cover one of those slots), I have been waiting for something reasonably powerful to come along (again, I am not going to just work on this box; I would like to play UT2k7, too:). Since I run Linux, I was trying to stick with the Nvidia line (my experience is that they have better drivers for this platform, but perhaps ATI has stepped up in the last half year or so). I could have gone with the 79xx line (single slot), but I wanted to see what the new generation would bring. Depending on what you need/want, I think either a slightly used 7950 GT OC or an 8600 GTS would work just fine for me. It does not seem unreasonable to me that in some things the older higher-end card is faster than the newer midrange card, and vice versa. But I did not see any benchmark where the 79xx line whooped the 8600 GTS thoroughly (like what happened with several benchmarks comparing the 8800 and 8600).

    I would say that the only immediate problem I might have with using the 8600 GTS is for gaming at high resolutions. I have a Dell 2407, and Anandtech's benchmarks make it clear I should not be gaming at that high a resolution. Bummer. The 7950 GT OC might very well be the better option here.

    In an ideal world, I really would like the power of an 8800 (and, fortunately, I can pay for it). But I really need the PCIe slot more, and changing out the power supply would add even more cost. I could have gotten another Dell model (like the XPS 710 or the Precision 490)--and I am thinking about just that. But I got the 390 for what I considered good reasons (a damned sight cheaper than the 490, and I have no need of another CPU socket when I can have 4 cores in one socket), and the XPS 710 did not have the BTO storage options that I wanted (not sure why they could not design that thing to have more than two internal drives--the thing is big enough; maybe most games do not need it, as that is what the machine was designed for). I bet I am not the only one.
  • GhandiInstinct - Tuesday, April 17, 2007 - link

    Masses would include AGP cards...

    I see no AGP DX10 cards...
  • aka1nas - Tuesday, April 17, 2007 - link

    The "masses" don't build their own computers, and thus have long since stopped purchasing machines with AGP slots. Reply
  • GhandiInstinct - Tuesday, April 17, 2007 - link

    The "masses" also don't go hunting for DX10 cards to add FPS to their hardcore Dell and Gateway gaming rigs.

    Be honest with yourself, the people going for these cards are custom riggers.

    AGP DX10 please; there are hundreds of thousands of people with Pentium 3.4 Northwoods who know their processors will run BioShock well, but they need DX10 without paying for a new motherboard, DDR2, and everything else, including Vista!!!
  • JarredWalton - Tuesday, April 17, 2007 - link

    Actually, I don't think anyone "knows" whether any current system will run BioShock well. Let's wait for the game to appear at least. We're still at least four months away (assuming they hit the current release date).

    While I can understand people complaining about the lack of AGP cards, let's be honest: why should either company invest a lot of money in an old platform? It takes time to make the AGP cards and more time to make sure the drivers all work right. At some point, the old tech has to be left behind. The cost to transition from an AGP setup to a PCIe setup is often under $100, so if the AGP cards had a $50 price premium you'd only save yourself $50 and still be stuck with the older platform.

    I figure AMD/ATI and NVIDIA basically ignored the complaints with X1900/7900 class hardware (the best was several notches below what was available on PCIe), and at this point I think they're done. I'd even go so far as to say we're probably now at the point where an AGP platform would start to be a bottleneck with current hardware - maybe not midrange stuff, but certainly the high-end offerings.

    Let's put it another way: why can't I get something faster than a single-core 2.4GHz 1MB cache Athlon 64 3700+ for socket 754? Why can't I get a Pentium D or Core 2 Duo for socket 478? Why do we need new motherboards for Core 2 Duo when socket 775 has been in use since the 915/925 chipsets? Intel and AMD have forced transitions on users for years, and after a long run it's tough to say that AGP hasn't fulfilled its purpose. Such is the life of PC hardware.
  • GhandiInstinct - Tuesday, April 17, 2007 - link

    Good points, I agree with them all.

    Basically, I feel that my 3.2GHz Northwood with 2GB of RAM is worth salvaging for BioShock and Hellgate, obviously not Crysis, but it's convenient that it will be released in '08.

    I figure I can hold out 8 more months, save up during this time, and switch to quad and DDR3.

    I service hundreds of clients a week in tech support who have AGP setups, and I don't think Nvidia and ATI will abandon AGP with DX10, especially since there is speculation that they will be releasing these cards in the future: http://www.theinquirer.net/default.aspx?article=37...

    :)
  • LoneWolf15 - Tuesday, April 17, 2007 - link

    "While it would be nice to have this hardware in NVIDIA's higher end offerings, this technology arguably makes more sense in mainstream parts. High end, expensive graphics cards are usually paired with high end expensive CPUs and lots of RAM. The decode assistance that these higher end cards offer is more than enough to enable a high end CPU to handle the hardest hitting HD videos. With mainstream graphics hardware providing a huge amount of decode assistance, the lower end CPUs that people pair with this hardware will benefit greatly."

    IMO, this is absolute bollocks.

    If I'm paying for nVidia's high-end stuff, I expect high-end everything. And this is at least the second time nVidia has only improved video on their second-round or midrange parts (anybody remember NV40/45?).

    I game some, and I want good performance for that. But I also have a 1920x1200 display, and I want the best video playback experience I can get on it. I also want the lower CPU usage so I can play back video while my system is left to do other processor-intensive tasks in the background.

    Once again, nVidia has really disappointed me in this area. In comparison, ATI seems to be much better at making sure their full range of cards supports their best video technologies. This (along with nVidia's driver development) continues to make the G80 seem like a "rushed-out-the-door" product.
