Final Words

On a technical level, we really like TurboCache. The design solves price/performance problems that have been around for quite a while. Making real use of the bandwidth offered by the PCI Express bus is a promising move that we didn't expect to see happen this early on or in this fashion.

At launch, NVIDIA's marketing position could have been a little misleading. Since our review originally hit the web, the wording that will appear on the packaging for TurboCache products has gone through a bit of a change from its original positioning.

NVIDIA has defined a strict set of packaging standards around which the GeForce 6200 with TurboCache supporting 128MB will be marketed. Boxes must carry text indicating that a minimum of 512MB of system RAM is necessary for the full 128MB of graphics RAM support. A disclosure of the actual amount of onboard RAM must be displayed as well, which is something that we strongly support. It is understandable that board vendors are nervous about how this marketing will go over, no matter what wording or information is included on the package. We feel that it's in vendors' interest to have faith in their customers' ability to understand the information given to them. The official names of the TurboCache boards will be:

GeForce 6200 w/ TurboCache supporting 128MB, including 16MB of local TurboCache
GeForce 6200 w/ TurboCache supporting 128MB, including 32MB of local TurboCache
GeForce 6200 w/ TurboCache supporting 256MB, including 64MB of local TurboCache

It is conceivable that game developers could want to use the bandwidth of PCI Express for their own purposes. Depending on how smart the driver is and how tricky the developer is, this may prove somewhat at odds with a TurboCache chip. Reading from and writing to the framebuffer from the CPU hasn't been a practical option in the past, but as systems and processors become faster, there are some interesting things that can be done with this type of processing. We'll have to see whether anyone ships a game that uses techniques like these, because for them TurboCache could either hurt or help.
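
For readers who want a concrete picture of what "reading the framebuffer from the CPU" involves, here is a minimal Direct3D 9 sketch. It is not taken from any shipping game or from NVIDIA's drivers; the function name, the assumed A8R8G8B8 format, and the pre-initialized device pointer are illustrative assumptions, and error handling is omitted.

    #include <d3d9.h>

    // Copy the current render target into system memory so the CPU can read it.
    // "device" is assumed to be an already-initialized IDirect3DDevice9*, and the
    // surface format is assumed to match the render target.
    void ReadBackFramebuffer(IDirect3DDevice9* device, UINT width, UINT height)
    {
        IDirect3DSurface9* renderTarget = NULL;
        IDirect3DSurface9* sysmemCopy   = NULL;

        // Grab the surface currently being rendered to.
        device->GetRenderTarget(0, &renderTarget);

        // Create a matching surface in system memory to receive the copy.
        device->CreateOffscreenPlainSurface(width, height, D3DFMT_A8R8G8B8,
                                            D3DPOOL_SYSTEMMEM, &sysmemCopy, NULL);

        // GPU render target -> system memory. On a TurboCache card, part (or all)
        // of the "local" framebuffer may already live in system RAM, so the cost
        // of transfers like this depends heavily on PCI Express bandwidth.
        device->GetRenderTargetData(renderTarget, sysmemCopy);

        // Lock the copy so the CPU can read the pixels directly.
        D3DLOCKED_RECT locked;
        sysmemCopy->LockRect(&locked, NULL, D3DLOCK_READONLY);
        // ... CPU-side processing of locked.pBits would go here ...
        sysmemCopy->UnlockRect();

        sysmemCopy->Release();
        renderTarget->Release();
    }

Whether a round trip like this ends up cheaper or more expensive on a TurboCache card than on a board with a full local framebuffer is exactly the open question raised above.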

The final topic that we need to address with the new 6200 TurboCache part is price. We are seeing GeForce 6200 128-bit parts with 400MHz data rate RAM going for about $110 on Newegg. With NVIDIA talking about bringing the new 32MB 64-bit TurboCache part out at $99 and the 16MB 32-bit part out at $79, we see them right on target with price/performance. A 64MB 64-bit TC part (supporting 256MB) will also come down the pipeline at some point for $129, though we don't have that part in our labs just yet.

When Anand initially reviewed the GeForce 6200, he mentioned that pricing would need to fall closer to $100 for it to be competitive. Now that it has, we are glad to see NVIDIA pricing these admittedly lower-performance new parts to match.

We will have to wait and see where street prices end up, but at this point the 32-bit memory bus version of the 6200 with TurboCache is the obvious choice over the X300 SE, and the 32MB 64-bit 6200 TC part is the clear winner over the standard X300. When we get our hands on the 64MB version of the TurboCache part, we'll take another look at how the 128-bit 6200, at its current street price, stacks up against it.

The GeForce 6200 with TurboCache supporting 128MB will be available in OEM systems in January, and it won't be available for retail purchase online until sometime in February. Since this isn't really an "upgrade part" but rather a PCI Express-only card, it will likely sell mostly to OEM customers first anyway. As budget PCI Express components become more readily available to the average consumer, we may see these parts move off of store shelves, but the majority of sales are likely to remain OEM.

AGP versions of the 6600 are coming down the pipe, but the 6200 is still slated to remain PCI Express only. Since TurboCache parts require less local memory, and thus less power, heat, and board space, it's not hard to guess where they will end up in the near future.

At the end of the day, with NVIDIA's revised position on marketing, a performance lead over ATI, and full support of the GeForce 6 series feature set, the 6200 with TurboCache is a very nice fit for the value segment. We are very impressed with what has been done with this edgy idea. The next thing that we're waiting to see is a working implementation of virtual memory for the graphics subsystem; the entire graphics industry has been champing at the bit for that one for years now.

Comments

  • paulsiu - Tuesday, March 1, 2005 - link

I am not sure I like this product at this price point. If it were $50, then it would make sense, but as another poster pointed out, the older and faster 6200 with real memory is about $10 more.

The marketing is also deceptive. "6200 TurboCache" sounds like it would be faster than the 6200.

In addition, this so-called innovative use of system memory sounds like nothing more than integrated video. OK, it's faster, but aren't you increasing CPU load?

The review also uses an Athlon 64 4000+; I am doubtful that users who buy an A64 4000+ are going to skimp on the video card.

    Paul
  • guarana - Thursday, January 27, 2005 - link

I was forced to go for a 6200TC 64MB (up to 256MB) solution about a week ago. I had to upgrade my mobo to a PCI Express version and had to get the cheapest card that I could find in the PCIe flavour.

I must say it's a lot better than the FX5200 card I used to have... I am running it with only 256MB of system RAM so it's not running at optimal performance, but I can run UT2003 with everything set to HIGH at 1280x1024 :)
A few stutters when the game actually starts, but after about 10 seconds the game runs smoothly and without any issues... don't know the exact FPS though :)

I score about 12000 points in 3DMark2001 at stock clocks (yeah, 3DM2001 is old, but it's all I could download overnight).

Will let you know what happens when I finally get another 256MB in the damn thing.
  • Jeff7181 - Wednesday, December 22, 2004 - link

I don't like this... why would I want the one that costs over $100 when I can get the 6200 for $110-210 that has its own dedicated memory and performs better? It's stupid to replace the current 6200 with this pile. It would be fine as a $50-75 card, or for use in a laptop or an HTPC... but don't replace the current 6200 with this.
  • icarus4586 - Friday, December 17, 2004 - link

I have a laptop with a 64MB Mobility Radeon 9600 (350MHz GPU, 466MHz DDR, 128-bit RAM), and I can run Far Cry at 1280x800 on high settings, Doom 3 at 1024x768 high, Halo at 1024x768 high, and Half-Life 2 at 1280x800 high, all at around 30fps.
    This is, obviously, an AGP solution. I don't really know how it does it. I was very surprised at what it could pull off, especially the high resolutions, with only 64MB onboard.
    What's going on?
  • Rand - Friday, December 17, 2004 - link

Have you heard whether the limited PCI-E x16 bandwidth of the i915 is true for the i925X/925XE chipsets also?

Also, I'm curious whether you've done any testing on the nForce4 with only one DIMM, so as to limit system bandwidth and get some indication of how the GeForce 6200 TC scales in performance with more or less system memory bandwidth available?
  • Rand - Friday, December 17, 2004 - link

    DerekWilson-

    "As far as I understand Hypermemory, it is not capable of rendering directly to system memory."

In the past, ATI has indicated that all of the R300-derived cores are capable of writing directly to a texture in system memory.
At the very least, the HyperMemory implementation on the Radeon Xpress 200G chipset must be able to do so; since ATI supports implementations without any local RAM, those have to be capable of rendering to system memory to operate.

The only difference I've noticed in the respective implementations thus far is that the lowest local bus size for nVidia's TurboCache is 32-bit, whereas ATI's implementation only supports as low as 64-bit, so the smallest local RAM they can use is 32MB. (Well, they can also use no local RAM at all, though that would obviously be considerably slower.)
  • DerekWilson - Thursday, December 16, 2004 - link

And you can bet that NVIDIA's Intel chipset will have a nice, speedy PCIe implementation, optimized for SLI and TurboCache, as well.
  • PrinceGaz - Thursday, December 16, 2004 - link

Yeah, this does all seem to make some sort of sense now. But not much sense, as I can't see why Intel would deliberately limit the bandwidth of the PCIe bus they were pushing so heavily. Unless the 925 chipset has full bi-directional 4GB/s, and the 3 down/1 up split is something they decided to impose on the cheaper 915 to differentiate it from the high-end 925.

I guess it's safe to assume nVidia implemented bi-directional 4GB/s in the nForce4, given that they were also working on graphics cards that would be dependent on PCIe bandwidth. And unless there was a good reason for VIA, ATI, and SiS not to do so, I would imagine the K8T890, RX480/RS480, and SiS756 will also be full 4GB/s both ways.
  • DerekWilson - Thursday, December 16, 2004 - link

NVIDIA tells us it's a limitation of the 915. Looking back, they also heavily indicated that "some key chipsets" would support the same bandwidth as NVIDIA's own bridge solution at the 6 series launch. If you remember, their solution was really a 4GB/s total bandwidth solution (overclocked AGP 8x to "16x", giving half the PCIe bandwidth)... their diagrams all showed a 3 down / 1 up memory flow. But they didn't explicitly name the 915 at the time.
