Final Words

On a technical level, we really like TurboCache. The design solves price/performance problems that have been around for quite a while, and making real use of the bandwidth offered by the PCI Express bus is a promising move that we didn't expect to see this early on, or in this fashion.

At launch, NVIDIA's marketing position could have been a bit misleading. Since our review originally hit the web, the wording that will appear on the packaging of TurboCache products has gone through a bit of a change. The updated positioning is centered around a setup along these lines:

NVIDIA has defined a strict set of packaging standards around which the GeForce 6200 with TurboCache supporting 128MB will be marketed. The boxes must carry text indicating that a minimum of 512MB of system RAM is necessary for the full 128MB of graphics RAM support. A disclosure of the actual amount of onboard RAM must be displayed as well, which is something that we strongly support. It is understandable that board vendors are nervous about how this marketing will go over, no matter what wording or information is included on the package. We feel that it's to vendors' advantage to have faith in the intelligence of their customers to understand the information given to them. The official names of the TurboCache boards will be:

GeForce 6200 w/ TurboCache supporting 128MB, including 16MB of local TurboCache
GeForce 6200 w/ TurboCache supporting 128MB, including 32MB of local TurboCache
GeForce 6200 w/ TurboCache supporting 256MB, including 64MB of local TurboCache

It is conceivable that game developers could want to use the bandwidth of PCI Express for their own concoctions. Depending on how smart the driver is and how tricky the developer is, this may prove somewhat problematic on a TurboCache part. Reading from and writing to the framebuffer with the CPU hasn't been a practical option in the past, but as systems and processors become faster, there are some interesting things that can be done with this type of processing. We'll have to see if anyone comes up with a game that uses technology like this, because TurboCache could either help or hurt it.
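
For the curious, here is a minimal sketch of what CPU readback of a rendered frame looks like under Direct3D 9. The function and its error handling are our own illustration, and nothing in it is specific to TurboCache; it simply shows the kind of traffic that would ride the PCI Express bus in both directions.

    #include <d3d9.h>

    // Copy the current render target into system memory so the CPU can
    // read (or modify) the pixels. Assumes a non-multisampled target.
    HRESULT ReadBackFramebuffer(IDirect3DDevice9* device)
    {
        IDirect3DSurface9* renderTarget = NULL;
        HRESULT hr = device->GetRenderTarget(0, &renderTarget);
        if (FAILED(hr)) return hr;

        D3DSURFACE_DESC desc;
        renderTarget->GetDesc(&desc);

        // System-memory surface that will receive the frame.
        IDirect3DSurface9* sysmem = NULL;
        hr = device->CreateOffscreenPlainSurface(desc.Width, desc.Height,
                                                 desc.Format, D3DPOOL_SYSTEMMEM,
                                                 &sysmem, NULL);
        if (SUCCEEDED(hr))
        {
            // The GPU-to-system-memory copy rides the bus upstream.
            hr = device->GetRenderTargetData(renderTarget, sysmem);
            if (SUCCEEDED(hr))
            {
                D3DLOCKED_RECT rect;
                if (SUCCEEDED(sysmem->LockRect(&rect, NULL, D3DLOCK_READONLY)))
                {
                    // rect.pBits now points at the pixels; rect.Pitch is
                    // the byte width of each row. CPU work happens here.
                    sysmem->UnlockRect();
                }
            }
            sysmem->Release();
        }
        renderTarget->Release();
        return hr;
    }

Upstream copies like this were prohibitively slow over AGP, which is part of why the symmetric bandwidth of PCI Express makes the technique worth revisiting.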

The final topic that we need to address with the new 6200 TurboCache part is price. We are seeing GeForce 6200 128-bit parts with 400MHz data rate RAM going for about $110 on Newegg. With NVIDIA talking about bringing the new 32MB 64-bit TurboCache part out at $99 and the 16MB 32-bit part out at $79, we see them right on target for price/performance. There will also be a 64MB 64-bit TC part (supporting 256MB) available for $129 somewhere down the pipeline, though we don't have that part in our labs just yet.

When Anand initially reviewed the GeForce 6200, he mentioned that pricing would need to fall closer to $100 for it to be competitive. Now that it has, and with these admittedly lower-performance parts coming out, we are glad to see NVIDIA pricing them to match.

We will have to wait and see where street prices end up, but at this point, the 32-bit memory bus version of the 6200 with TurboCache is the obvious choice over the X300 SE, and the 32MB 64-bit 6200 TC part is the clear winner over the standard X300. When we get our hands on the 64MB version of the TurboCache part, we'll take another look at how the 128-bit 6200 stacks up at its current street price.

The GeForce 6200 with TurboCache supporting 128MB will be available in OEM systems in January, and won't be available for retail purchase online until sometime in February. Since this isn't really an "upgrade part", but rather a PCI Express-only card, it will likely sell mostly to OEM customers at first anyway. As budget PCI Express components become more readily available to the average consumer, we may see these parts move off of store shelves, but the majority of sales are likely to remain OEM.

AGP versions of 6600 cards are coming down the pipe, but the 6200 is still slated to remain PCI Express-only. As TurboCache parts require less in the way of local memory, and thus less power, heat, and board space, it isn't hard to guess where they will end up in the near future.

At the end of the day, with NVIDIA's revised marketing position, its performance leadership over ATI in this segment, and full support of the GeForce 6 series feature set, the 6200 with TurboCache is a very nice fit for the value segment. We are very impressed with what has been done with this edgy idea. The next thing that we're waiting to see is a working implementation of virtual memory for the graphics subsystem; the entire graphics industry has been chomping at the bit for that one for years now.

Comments (43)

  • sphinx - Wednesday, December 15, 2004 - link

    I think this is a good offering from NVIDIA. Passively cooled is a VERY good solution in my line of work; one less thing I have to worry about silencing, as I use my PC to make money, not for playing games. Don't get me wrong, I like to play an occasional game from time to time, but I use my XBOX for gaming. When this card comes out, I'll get one.
  • DerekWilson - Wednesday, December 15, 2004 - link

    #9, It'll only use 128MB if a full 128 is needed at the same time -- which isn't usually the case, but we haven't done an in-depth study on this yet. Also, keep in mind that we still tested at the absolute highest quality settings with no AA/AF (except Doom 3, which even used 8x AF). We were not seeing slide show framerates. The FX5200 doesn't even support all the features of the FX5900, let alone the 6200TC. Nor does the FX5200 perform as well at equivalent settings.

    IGP is something I talked to NVIDIA about. This solution really could be an Intel Extreme Graphics killer (in the integrated market). In fact, with the developments in the marketplace, Intel may finally get up and start moving to create a graphics solution that actually works. There are other markets where TurboCache solutions could show up as well.

    #11 ... The packaging issue is touchy. We'll see how vendors pull it off when it happens. The cards do run as if they had a full 128MB of RAM, so that's very important to get across. We do feel that talking about the physical layout of the card and the method of support is important as well.

    #8, 1600x1200x32 only requires that 7.5MB be stored locally. As was mentioned in the article, only the FRONT buffer needs to be local to the graphics card. This means that the depth buffer, back buffer, and other render surfaces can all be in system memory. I know it's kind of hard to believe, but this card can actually draw everything directly into system RAM from the pixel pipes and ROPs. When the buffers are swapped to display the back buffer, what's in system memory is copied into graphics memory.
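
    To make that concrete, here's a rough conceptual sketch of the buffer placement -- my own pseudocode, not the actual driver logic:

        #include <cstddef>
        #include <cstring>
        #include <vector>

        // Illustrative sketch of TurboCache buffer placement, not
        // NVIDIA's driver implementation.
        struct Surface {
            std::vector<unsigned char> pixels;
            explicit Surface(std::size_t bytes) : pixels(bytes) {}
        };

        // 1600x1200 at 4 bytes per pixel, as in the example above.
        static const std::size_t kFrameBytes = 1600 * 1200 * 4;

        Surface frontBuffer(kFrameBytes); // local RAM: display scans out of it
        Surface backBuffer(kFrameBytes);  // system RAM: pipes/ROPs write here over PCIe
        Surface depthBuffer(kFrameBytes); // system RAM as well

        // On a swap, the finished frame in system memory is copied into
        // the local front buffer so the display engine can show it.
        void present()
        {
            std::memcpy(&frontBuffer.pixels[0], &backBuffer.pixels[0],
                        backBuffer.pixels.size());
        }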

    It really is very cool for a low performance budget part.

    And we might see higher-performance versions of TurboCache in the future ... though NVIDIA isn't talking about them yet. It might be nice to have the possibility of an expanded framebuffer with more system RAM if the user wanted to enable that feature.

    TurboCache is actually a performance-enhancing feature. It's just that it's enhancing the performance of a card with either 16MB or 32MB of onboard RAM and either a 32-bit or 64-bit memory bus ... :-)
  • DAPUNISHER - Wednesday, December 15, 2004 - link

    "NVIDIA has defined a strict set of packaging standards around which the GeForce 6200 with TurboCache supporting 128MB will be marketed. The boxes must have text, which indicates that a minimum of 512MB of system RAM is necessary for the full 128MB of graphics RAM support. It doesn't seem to require that a discloser of the actual amount of onboard RAM be displayed, which is not something that we support. It is understandable that board vendors are nervous about how this marketing will go over, no matter what wording or information is included on the package."

    More bullsh!t deceptive advertising to bilk uninformed consumers out of their money.
  • MAValpha - Wednesday, December 15, 2004 - link

    #7, I was thinking the same thing. This concept seems absolutely perfect for nForce5 IGP, should NVidia decide to go that route. And, once again, NVidia's approach to budget seems superior to ATI's, at least from an initial glance. A heavily-castrated 6200TC running off SHARED RAM STILL manages to outperform a full X300? Come on, ATI, get with it!
    I gotta wonder, though: this solution seems unbelievably dependent on "proper implementation of the PCIe architecture." This means that the card can never be coupled with HSI for older systems, and transitional boards will have trouble running the card (Gigabyte's PT880 with converted PEG, for example, since the PT880 natively supports AGP). Does this mean that a budget card on a budget motherboard will suffer significantly?
  • mindless1 - Wednesday, December 15, 2004 - link

    IMO, even (as low as) $79 is too expensive. Taking 128MB of system memory away from a system budgeted to include one of these would typically leave 384MB, robbing the system of memory to pay nVidia et al. for a part without (much) memory of its own.

    I tend to disagree with the slant of the article too; it's not necessarily a good thing to push modern gaming eye candy at the expense of performance. What looks good isn't a crisp, anti-aliased slideshow, but a playable game. Even someone just beginning at gaming can discern the lag when fragging it out.

    We're only looking at current games now; the bar for performance will be raised, but the cards are memory bandwidth limited due to the architecture. These might look like a good alternative for someone who went and paid $90 for an FX5200 from Best Buy last year, but in a budget system it's going to be tough to justify ~$80-100 when a few bucks more won't rob one of system memory or as much performance.

    Even so, historically we've seen that initial price points do fall, and it's better to see modern feature support than a rehash of the FX5xxx.
  • PrinceGaz - Wednesday, December 15, 2004 - link

    nVidia's marketing department must be really pleased with coming up with the name "TurboCache". It makes it sound like it's faster than a normal card without TurboCache, whereas in reality the opposite is true. Uninformed customers would probably choose a TurboCache version over a normal version, even if they were priced the same!
    ----

    Derek, does the 16MB 6200 have limitations on what resolutions can be used in games? I know you wouldn't want to run it at 1600x1200x32 in Far Cry, for instance, but in older games like Quake 3 it should be fast enough.

    Thing is that the frame-buffer at 1600x1200x32 requires 7.3MB, so with double-buffering you're using up a total of 14.65MB leaving just 1.35MB for the Z-buffer and anything else it needs to keep in local memory, which might not be enough. I'm assuming the frame the card is currently displaying must be held in local memory, as well as the frame being worked on.

    The situation is even worse with anti-aliasing as the frame-buffer size of the frame being worked on is multiplied in size by the level of AA. At 1280x960x32 with 4xAA, the single frame-buffer alone is 18.75MB meaning it won't fit in the 16MB 6200. It might not even manage 1024x768 with 4xAA as the two frame buffers would total 15MB (12MB for the one being worked on, 3MB for the one being displayed).

    It will be interesting to know what the resolution limits for the 16MB (and 32MB) cards are, with and without anti-aliasing.
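
    For anyone who wants to verify those figures, here's a quick sketch of the arithmetic (assuming 4 bytes per pixel and AA scaling the working buffer by the sample count, which is my assumption about how the card handles it):

        #include <cstdio>

        // Quick check of the numbers above. Assumes 32-bit color (4 bytes
        // per pixel) and that AA multiplies the working buffer by the
        // sample count; both are assumptions for illustration.
        double bufferMB(int width, int height, int aaSamples)
        {
            return width * height * 4.0 * aaSamples / (1024.0 * 1024.0);
        }

        int main()
        {
            std::printf("1600x1200, one buffer:      %.2f MB\n", bufferMB(1600, 1200, 1));      // 7.32
            std::printf("1600x1200, double-buffered: %.2f MB\n", 2 * bufferMB(1600, 1200, 1));  // 14.65
            std::printf("1280x960, 4xAA back buffer: %.2f MB\n", bufferMB(1280, 960, 4));       // 18.75
            std::printf("1024x768, 4xAA + front:     %.2f MB\n",
                        bufferMB(1024, 768, 4) + bufferMB(1024, 768, 1));                       // 15.00
            return 0;
        }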
  • Spacecomber - Wednesday, December 15, 2004 - link

    I may be way off base with this question, but would this sort of GPU lend itself well to some sort of integrated, onboard graphics solution? Even if it isn't integrated directly into the main chipset (or chip for Nvidia), could it simply be soldered to the motherboard somewhere?

    Somehow this seems to make more sense to me as a use for this technology than putting it on a dedicated video card, especially if the price point is not that much less than a regular 6200.
  • bamacre - Wednesday, December 15, 2004 - link

    Great review.

    Wow, almost 50 fps on HL2 at 10x7, that is pretty good for a budget card.

    I'd like to see MS, ATI, and Nvidia get more people into PC gaming; that would make for better and cheaper games for those of us who are already loving it.
  • DerekWilson - Wednesday, December 15, 2004 - link

    Actually, nForce 4 + AMD systems are looking better than Intel non-925XE based systems for TurboCache parts. We haven't looked at the 925XE yet, though ... that could be interesting. But overhead hurts utilization a lot on a serial bus, and having more than 6.4GB/s from memory might not be that useful.

    The efficiency of getting bandwidth across the PCI Express bus will still be the main bottleneck in systems, though. Chipsets need to implement PCI Express properly and well; that's really the important part. The 915 chipset is an example of what not to do.
  • jenand - Wednesday, December 15, 2004 - link

    TurboCache and HyperMemory cards should do better on Intel-based systems, as they do not need to go via the HTT to get to the memory. So I agree with #3: show us some i925X(E) tests. I'm not expecting higher scores on the Intel systems, however, just a larger gain from this type of technology.
