Linux 3D AGP GPU Roundup: More Cutting Edge Penguin Performance
by Kristopher Kubicki on October 4, 2004 12:05 AM EST
The Test

Below, you can see our test rig configuration.
|Performance Test Configuration|
|Processor(s):||AMD Athlon 64 3800+ (130nm, 2.4GHz, 512KB L2 Cache)|
|RAM:||2 x 512MB Mushkin PC-3200 CL2 (400MHz)|
|Motherboard(s):||MSI K8T Neo2 (Socket 939)|
|Hard Drives:||Seagate 7200.7 120GB SATA|
|Video Cards:||GeForce 6800 Ultra 256MB, GeForce 6800 128MB, GeForceFX 5950 Ultra 256MB, GeForceFX 5900 Ultra 128MB, GeForce 5900 128MB, GeForceFX 5700 Ultra, Radeon X800 Pro 256MB, Radeon 9800XT 256MB, Radeon 9700 Pro 128MB, Radeon 9600XT 128MB|
|Operating System(s):||SuSE 9.1 Professional|
|Driver:||(ATI) SuSE 9.1 Supplement fglrx 3.12.0|
You may have noticed that we are running an extremely new version of the Linux kernel, along with very new ATI and NVIDIA drivers. For all intents and purposes, we are running a completely default SuSE 9.1 Professional install with the SuSE 9.2-RC3 kernel and brand new drivers. This was not an easy accomplishment, but it was unfortunately the only way we could set up a platform compatible with both ATI and NVIDIA video cards on the Socket 939 architecture.
Our testing procedure is very simple. We take our various video cards and run their respective timedemos while using our AnandTech FrameGetter (FG) tool. We also rely on in-game benchmarks for some of our tests, since FG will not run on games played under Wine. We post the average frames per second calculated by the utility. Remember, FG samples the frame rate once every second, but it also records how long the demo ran and how many frames it rendered. This average is posted for most benchmarks.
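To make the relationship between the per-second samples and the reported average concrete, here is a minimal sketch of that calculation. FrameGetter is our own tool and its internals are not shown in this article, so the function names and sample values below are illustrative assumptions rather than FG's actual code.

```python
# Minimal sketch (assumed, not FG's actual code): two ways of arriving at
# an average FPS figure for a timedemo run.

def average_from_totals(total_frames, demo_seconds):
    # Average FPS as total frames rendered divided by total demo run time.
    return total_frames / demo_seconds

def average_from_samples(per_second_fps):
    # Mean of the once-per-second FPS samples logged during the run.
    return sum(per_second_fps) / len(per_second_fps)

samples = [58.0, 61.5, 60.2, 44.7, 63.1]  # one illustrative reading per second
print(average_from_totals(sum(samples), len(samples)))  # 57.5
print(average_from_samples(samples))                    # 57.5
```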
However, when testing our games, we find that some interesting patterns sometimes occur. For these instances, we have FG record the timedemo by sampling the frame rate every second and dumping that data into a text file, as we explained in our initial FG announcement. Some graphs, notably Wolfenstein and Unreal Tournament, show particularly fascinating trends, which we explore further in the evaluation.
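As an illustration of what we do with those per-second dumps, the short sketch below reads a hypothetical one-value-per-line text file and summarizes the trend; the file name and layout are assumptions, not FG's actual output format.

```python
# Sketch (assumed format): parse a per-second FPS dump, one value per line,
# so the frame-rate trend over the timedemo can be graphed or summarized.

def load_fps_log(path):
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

fps = load_fps_log("ut2004_timedemo_fps.txt")  # hypothetical file name
print("seconds: %d  min: %.1f  max: %.1f  avg: %.1f"
      % (len(fps), min(fps), max(fps), sum(fps) / len(fps)))
```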
All of our benchmarks are run three times and the highest score obtained is taken; as a general trend, the highest score usually comes on the second or third pass of the timedemo. Why don't we report median values and standard deviations instead? For one, I/O bottlenecks from the hard drive and memory still occur, even though every run should "theoretically" behave the same. Memory hogs like UT2004, which also tend to load a lot of data off the hard drive, are notorious for behaving strangely on the first few passes.
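In script form, the "run it three times, keep the best" logic looks roughly like the sketch below; the canned numbers stand in for real runs and are not measured results.

```python
# Sketch: run a timedemo three times and keep the best average FPS, since the
# first pass is often dragged down by disk and memory warm-up effects.

def best_of_three(run_once):
    # run_once() launches the game, plays the demo, and returns the average FPS.
    scores = [run_once() for _ in range(3)]
    return max(scores)  # later passes usually win once caches are warm

# Example with canned, illustrative numbers standing in for real runs:
canned = iter([55.2, 61.8, 61.5])
print(best_of_three(lambda: next(canned)))  # 61.8
```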
Since we had issues running anisotropic filtering with the ATI driver, we did not run any tests with AF enabled. However, many of our games have sets of benchmarks with 4X anti-aliasing both disabled and enabled. At the end of this analysis, we also have a small section showing some of the differences with the various AA and anisotropic filtering modes enabled.