Unreal Tournament 2004 Performance

Here, we see the GeForce 6200, the 6200 TurboCache 64-bit, and the X300 all doing well at the top end in UT2K4. The 32-bit TurboCache holds a good lead over the X300 SE on the low end of this performance test.

[Graph: Unreal Tournament 2004 Performance]
[Graph: Unreal Tournament 2004 Resolution Scaling]

The three high-end parts run a very tight race across the board. Interestingly, the 32-bit and 64-bit TurboCache parts don't scale the same way from 640x480 to 800x600.
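
To put some rough numbers behind that observation, here is a quick back-of-the-envelope sketch of what each resolution asks of the two TurboCache boards. The render-target arithmetic is straightforward; the 700MHz effective memory clock is an assumption here rather than a measured spec, and real drivers will juggle buffers and textures more cleverly than this.

    # Back-of-the-envelope sketch: render-target footprint per resolution and
    # peak local memory bandwidth for the two TurboCache configurations.
    # The 700MHz effective DDR clock is an assumption, not a measured spec.

    def render_target_mb(width, height, bytes_per_pixel=4, surfaces=3):
        # front buffer + back buffer + Z/stencil, all 32-bit
        return width * height * bytes_per_pixel * surfaces / (1024 ** 2)

    def local_bandwidth_gbs(bus_width_bits, effective_mhz):
        return bus_width_bits / 8 * effective_mhz * 1e6 / 1e9

    for w, h in [(640, 480), (800, 600), (1024, 768)]:
        print(f"{w}x{h}: ~{render_target_mb(w, h):.1f} MB of render targets")

    print(f"32-bit TurboCache: ~{local_bandwidth_gbs(32, 700):.1f} GB/s local bandwidth")
    print(f"64-bit TurboCache: ~{local_bandwidth_gbs(64, 700):.1f} GB/s local bandwidth")
    # Anything that doesn't fit locally (textures especially) has to come over
    # PCI Express from system memory, which is where the scaling differences
    # between the two boards can show up.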


43 Comments


  • paulsiu - Tuesday, March 1, 2005 - link

    I am not sure I like this product at this price point. If it were $50, then it would make sense, but as another poster pointed out, the older and faster 6200 with real memory is only about $10 more.

    The marketing is also deceptive: 6200 TurboCache sounds like it would be faster than the 6200.

    In addition, this so-called innovative use of system memory sounds like nothing more than integrated video. OK, it's faster, but aren't you increasing CPU load?

    The review also uses an Athlon 64 4000+; I am doubtful that users who buy an A64 4000+ are going to skimp on the video card.

    Paul
  • guarana - Thursday, January 27, 2005 - link

    I was forced to go for a 6200TC 64MB (up to 256MB) solution about a week ago. I had to upgrade my mobo to a PCIe version and had to get the cheapest card I could find in the PCIe flavour.

    I must say it's a lot better than the FX5200 card I used to have... I am running it with only 256MB of system RAM, so it's not running at optimal performance, but I can run UT2003 with everything set to HIGH at 1280x1024 :)
    There are a few stutters when the game actually starts, but after about 10 seconds the game runs smoothly and without any issues... don't know the exact FPS though :)

    I score about 12000 points on 3DMark2001 with stock clocks (yeah, 3DMark2001 is old, but it's all I could download overnight).

    Will let you know what happens when I finally get another 256MB in the damn thing.
  • Jeff7181 - Wednesday, December 22, 2004 - link

    I don't like this... why would I want the one that costs over $100 when I can get the 6200 for $110-210, which has its own dedicated memory and performs better? It's stupid to replace the current 6200 with this pile. It would be fine as a $50-75 card, or for use in a laptop or an HTPC... but don't replace the current 6200 with this.
  • icarus4586 - Friday, December 17, 2004 - link

    I have a laptop with a 64MB Mobility Radeon 9600 (350MHz GPU, 466MHz DDR, 128-bit RAM), and I can run Far Cry at 1280x800 on high settings, Doom 3 at 1024x768 on high settings, Halo at 1024x768 on high settings, and Half-Life 2 at 1280x800 on high settings, all at around 30fps.
    This is, obviously, an AGP solution. I don't really know how it does it. I was very surprised at what it could pull off, especially the high resolutions, with only 64MB onboard.
    What's going on? (See the memory math sketched after the comments.)
  • Rand - Friday, December 17, 2004 - link

    Have you heard whether the limited PCI-E x16 bandwidth of the 915 holds true for the 925X/925XE chipsets as well?

    Also, I'm curious whether you've done any testing on the nForce4 with only one DIMM, so as to limit system bandwidth and get some indication of how the GeForce 6200TC scales in performance with more or less system memory bandwidth available.
  • Rand - Friday, December 17, 2004 - link

    DerekWilson-

    "As far as I understand Hypermemory, it is not capable of rendering directly to system memory."

    In the past, ATI has indicated that all of the R300-derived cores are capable of writing directly to a texture in system memory.
    At the very least, the HyperMemory implementation on the Radeon Xpress 200G chipset must be able to do so: since ATI supports implementations without any local RAM, those have to be capable of rendering to system memory in order to operate at all.

    The only difference I've noticed in the respective implementations thus far is that NVIDIA's TurboCache goes as low as a 32-bit local bus, whereas ATI's implementation only supports down to 64-bit, so the smallest amount of local RAM they can use is 32MB. (Well, they can also use no local RAM at all, though that would obviously be considerably slower.)
  • DerekWilson - Thursday, December 16, 2004 - link

    And you can bet that NVIDIA's Intel chipset will have a nice, speedy PCIe implementation, optimized for SLI and TurboCache, as well.
  • PrinceGaz - Thursday, December 16, 2004 - link

    Yeah, this does all seem to make some sort of sense now. But not much sense, as I can't see why Intel would deliberately limit the bandwidth of the PCIe bus they were pushing so heavily. Unless the 925 chipset has full bi-directional 4GB/s, and the 3 down / 1 up split is something they decided to impose on the cheaper 915 to differentiate it from the high-end 925.

    I guess it's safe to assume NVIDIA implemented bi-directional 4GB/s in the nForce4, given that they were also working on graphics cards that would be dependent on PCIe bandwidth. And unless there was a good reason for VIA, ATI, and SiS not to do so, I would imagine the K8T890, RX480/RS480, and SiS756 will also be full 4GB/s both ways.
  • DerekWilson - Thursday, December 16, 2004 - link

    NVIDIA tells us it's a limitation of the 915. Looking back, they also heavily indicated at the 6-series launch that "some key chipsets" would support the same bandwidth as NVIDIA's own bridge solution. If you remember, their solution was really a 4GB/s total bandwidth solution (AGP 8x overclocked to "16x", giving half the PCIe bandwidth)... their diagrams all showed a 3 down / 1 up memory flow. But they didn't explicitly name the 915 at the time.
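
For anyone following the bandwidth figures in the last few comments, the arithmetic is worth spelling out. A first-generation PCI Express lane signals at 2.5GT/s with 8b/10b encoding, which works out to 250MB/s per direction per lane; the sketch below simply runs those numbers for a full x16 slot and restates the 4GB/s-total, 3-down/1-up arrangement described above.

    # First-generation PCI Express bandwidth arithmetic for an x16 graphics slot.
    LANE_RATE_GTS = 2.5        # gigatransfers per second, per lane
    ENCODING = 8 / 10          # 8b/10b line coding overhead
    LANES = 16

    per_lane_mbs = LANE_RATE_GTS * 1e9 * ENCODING / 8 / 1e6    # = 250 MB/s per direction
    per_direction_gbs = per_lane_mbs * LANES / 1000            # = 4 GB/s per direction

    print(f"Full-speed x16 link: ~{per_direction_gbs:.0f} GB/s down + "
          f"~{per_direction_gbs:.0f} GB/s up (8 GB/s aggregate)")

    # The bridged/limited arrangement discussed above tops out at 4 GB/s total,
    # split asymmetrically in favor of traffic headed to the card:
    print("3 down / 1 up arrangement: ~3 GB/s down + ~1 GB/s up (4 GB/s aggregate)")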
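
Separately, on icarus4586's question about the 64MB Mobility Radeon 9600: rough arithmetic suggests there is less mystery than it seems. The sketch below takes the 466MHz figure from that comment as the effective DDR rate (which may not be exact) and assumes plain double buffering with a 32-bit Z/stencil buffer; the point is simply that the render targets at 1280x800 occupy only a small slice of 64MB, leaving most of the card's memory for textures, with AGP texturing available as overflow.

    # Rough memory math for a 64MB Mobility Radeon 9600 at 1280x800.
    # The 466MHz effective DDR rate is taken from the comment above and may
    # not be exact; the buffer layout is a simplifying assumption.

    width, height = 1280, 800
    bytes_per_pixel = 4                 # 32-bit color
    surfaces = 3                        # front + back + Z/stencil
    render_targets_mb = width * height * bytes_per_pixel * surfaces / (1024 ** 2)
    print(f"Render targets: ~{render_targets_mb:.1f} MB of the 64 MB on board")

    bus_width_bits, effective_mhz = 128, 466
    bandwidth_gbs = bus_width_bits / 8 * effective_mhz * 1e6 / 1e9
    print(f"Local memory bandwidth: ~{bandwidth_gbs:.1f} GB/s")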
