CPU Scaling of Graphics Cards
To find out how dependent various GPUs are on fast CPUs, we took our Athlon 64 test bed and, keeping all other variables the same, adjusted the clock speed from 1GHz all the way up to 2.6GHz. We ran two groups of cards across the entire spectrum of clock speeds in order to get CPU scaling curves on a per-card basis.
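To make the setup concrete, here is a minimal sketch of how such a sweep could be tabulated into one scaling curve per card. The 200MHz steps, the two card names and the run_timedemo helper are purely illustrative placeholders, not our actual test harness.

```python
# Illustrative sketch of the clock-speed sweep described above; not our
# actual test harness. run_timedemo() is a hypothetical stand-in.

CLOCK_SPEEDS_GHZ = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4, 2.6]  # assumed 200MHz steps
CARDS = ["Radeon X850 XT PE", "GeForce 6800 Ultra"]  # example subset of the cards tested

def run_timedemo(card, clock_ghz):
    """Stand-in for one benchmark run: a real harness would launch the
    Half Life 2 timedemo on the given card at the given CPU clock and
    return the reported average frame rate."""
    return 0.0  # placeholder value, not a measurement

# One (clock speed, frame rate) point per CPU speed gives each card's
# CPU scaling curve.
scaling_curves = {
    card: [(clock, run_timedemo(card, clock)) for clock in CLOCK_SPEEDS_GHZ]
    for card in CARDS
}
```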
The first group of cards was classified as "High End" and was run at 1600 x 1200. All of the cards are plotted on the same graph to help users determine at what CPU speed it makes sense to purchase a faster card.
The second group of cards was classified as "Mid Range" and was run at 1280 x 1024; these cards are likewise plotted on a single graph for the same reason.
By no means were these charts meant to be all-inclusive, but they should serve as a good guide for weighing CPU vs. GPU purchases. We also included a benchmark of Half Life 2 deathmatch in the results here, using our own custom demo - at_mp_3.
All cards were run at the same high detail settings under which we performed our Half Life 2 GPU comparison.
High End Graphics Card CPU Scaling
For our High End cards, we're looking at the Radeon X850 XT Platinum Edition, Radeon X800 XT and Radeon X800 Pro from ATI, and the GeForce 6800 Ultra and GeForce 6800GT from NVIDIA. We realize that this isn't an all-encompassing list of GPUs, but you should be able to extrapolate the scaling of similar high-end cards based on their performance relative to these five.
The most important graph is the first one, as it averages all six of our Half Life 2 timedemos and gives the best overall indication of how these GPUs scale with CPU speed. We have, however, included the individual charts for each timedemo for reference.
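For reference, that overall number is conceptually just a per-card, per-clock-speed mean of the six timedemo results; a minimal sketch of the calculation is below, with fps_by_demo standing in for a hypothetical mapping of timedemo name to measured FPS.

```python
def overall_fps(fps_by_demo):
    """Average one card's frame rates across the individual Half Life 2
    timedemos to get the single overall figure used in the first graph.
    fps_by_demo maps a timedemo name to its measured average FPS."""
    return sum(fps_by_demo.values()) / len(fps_by_demo)
```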
Interestingly enough, at 1600 x 1200, all of these high-end GPUs scale quite similarly with CPU speed. At lower clock speeds, the X850 XT PE performs identically to the X800 XT; it is only above 2.2GHz on the Athlon 64 that the two even begin to separate. The same is not true for the GeForce 6800GT and Ultra, which begin to separate much earlier in the clock speed range.
68 Comments
dderidex - Wednesday, February 2, 2005 - link
Quick question... On the 'cache comparison' on page 5, where they compare an A64 with 1MB cache to an A64 with 512KB cache...
What CPUs are they comparing?
512KB Socket 754 (single channel)
vs
1MB Socket 754 (single channel)
or
512KB Socket 939 (dual channel)
vs
1MB Socket 939 (dual channel)
or
512KB Socket 754 (single channel)
vs
1MB Socket 939 (dual channel)
etc.
No info is provided, so it's hard to really say what the numbers are showing.
doughtree - Tuesday, September 6, 2005 - link
great article, the next game you should do is Battlefield 2!
dsorrent - Monday, January 31, 2005 - link
How come the AMD FX-53 is left out of all of the CPU comparisons?
PsharkJF - Monday, January 31, 2005 - link
That has no bearing on Half Life 2. Nice job, fanboy.
levicki - Saturday, January 29, 2005 - link
Btw, I have a Pentium 4 520 and a 6600 GT card and I prefer that combo over AMD+ATI any time. I had a chance to work on AMD and I didn't like it -- no hyperthreading = bad feeling when working with a few things at once. With my P4 I can compress a DVD to DivX and play Need For Speed Underground 2 without a hitch. I had ATI (Sapphire 9600 Pro) and didn't like that crap either, especially where OpenGL and drivers are concerned = too much crashing.
Intel vs. AMD -- people can argue for ages about that, but my 2 cents are that musicians using a Pentium 4 with HT get 0.67 ms latency with the latest beta kX drivers for Creative cards, while AMD owners get close to 5.8 ms. From a developer point of view, Intel is a much better choice too due to great support, compiler and documentation. So my next CPU will be LGA775 with EM64T (I already have a compatible mainboard) and not AMD, which by the way has trouble with Winchester cores failing Prime95 at stock speed.
Carfax - Saturday, January 29, 2005 - link
Yeah, developers are so lazy that they will still use x87 for FP rather than SSE2, knowing that the latter will give better performance.
That's why the new 64-bit OS from MSoft will be a good thing. It will force developers to use SSE2/SSE3, because they have access to twice as many registers and the OS itself won't recognize x87 for 64-bit operations.
Barneyk - Saturday, January 29, 2005 - link
I would've liked to see some benchmarks on older CPUs too, kinda disappointed...
levicki - Friday, January 28, 2005 - link
I just wonder how this test would have looked if it had been done with a 6800 Ultra instead of the ATI X850 XT.
Disabling SSE/SSE2 on the Athlon and getting the same results as if they were enabled means that the game is NOT OPTIMIZED. Using FPU math instead of SSE/SSE2 today is a sin. It could have been 3-4 times faster if they cared about optimizing the code.
Phantronius - Friday, January 28, 2005 - link
#53 It's because the Prescotts weren't better than the Northwoods to begin with, hence why you don't see squat for performance differences between them.
maestroH - Friday, January 28, 2005 - link
Thx for your reply #56. Apologies for false '@9700pro' statement. Meant to say 'soft-modded with Omega driver to 9700pro'. Cheers.