It has now been two months since Half Life 2's release, and much to everyone's surprise, the game is far from a GPU hog. The more powerful your graphics card, the better everything looks and the smoother everything runs, but even people with GeForce4 MX cards are able to enjoy Valve's long-awaited masterpiece.

Immediately upon its release, we looked closely at the impact of GPUs on Half Life 2 performance in Parts I and II of our Half Life 2 coverage. Part I focused on the performance of high-end DirectX 9 class GPUs, while Part II focused on mid-range GPUs as well as the previous generation of DirectX 8 class GPUs.

The one area we had not covered up to this point was the impact of the CPU on Half Life 2 performance. In a 3D game, the CPU is mainly responsible for the physics of the environment and the artificial intelligence of the game's NPCs. There is also a good deal of graphics driver overhead that taxes the CPU, so the more complicated the game, the greater its dependence on a fast CPU.

Half Life 2 was an intriguing case in itself simply because the game boasted the most sophisticated physics engine seen in a game to date. Elements of the game such as the gravity gun prove to be extremely taxing on the CPU. In fact, we found that even the fastest $500+ video cards can still be CPU bound in Half Life 2 at normally GPU-limited resolutions.

Although much delayed, today we are able to bring you the third and final part of our Half Life 2 coverage focusing entirely on CPU performance as it relates to graphics performance in Half Life 2. After all, a $500 graphics card is worthless if it is bound by a slow CPU.

All of the tests in this article use the same test beds and testing methodology as our first two Half Life 2 articles. You can download all of the demos used in this article here.

We apologize for the delay in the publication of this article, but as is often the case, we got busy and the article was postponed again and again. Rather than shelve it, we decided to publish it - better late than never. Now on to the benchmarks...

AMD vs. Intel Performance

68 Comments


  • dderidex - Wednesday, February 02, 2005 - link

    Quick question...

    On the 'cache comparison' on page 5, where they compare an A64 with 1mb cache to an A64 with 512k cache...

    What CPUs are they comparing?

    512k Socket 754 (single channel)
    vs
    1mb Socket 754 (single channel)

    or

    512k Socket 939 (dual channel)
    vs
    1mb Socket 939 (dual channel)

    or

    512k Socket 754 (single channel)
    vs
    1mb Socket 939 (dual channel)

    etc.

    No info is provided, so it's hard to really say what the numbers are showing.
  • doughtree - Tuesday, September 06, 2005 - link

    great article, next game you should do is battlefield 2!
  • dsorrent - Monday, January 31, 2005 - link

    How come the AMD FX-53 is left out of all of the CPU comparisons?
  • PsharkJF - Monday, January 31, 2005 - link

    That has no bearing on Half-Life. Nice job, fanboy.
  • levicki - Saturday, January 29, 2005 - link

    Btw, I have a Pentium 4 520 and a 6600 GT card and I prefer that combo over AMD+ATI anytime. I had a chance to work on AMD and I didn't like it -- no hyperthreading = bad feeling when working with a few things at once. With my P4 I can compress a DVD to DivX and play Need For Speed Underground 2 without a hitch. I had an ATI (Sapphire 9600 Pro) and didn't like that either, especially where OpenGL and drivers are concerned = too much crashing.
    Intel vs. AMD -- people can argue for ages about that, but my 2 cents are that musicians using a Pentium 4 with HT get 0.67 ms latency with the latest beta kX drivers for Creative cards, while AMD owners get close to 5.8 ms. From a developer's point of view Intel is a much better choice too, due to great support, compiler and documentation. So my next CPU will be LGA775 with EM64T (I already have a compatible mainboard) and not AMD, which by the way has trouble with Winchester cores failing Prime95 at stock speed.
  • Carfax - Saturday, January 29, 2005 - link

    Yeah, developers are so lazy that they will still use x87 for FP rather than SSE2, knowing that the latter will give better performance.

    That's why the new 64-bit OS from MSoft will be a good thing. It will force developers to use SSE2/SSE3, because they have access to twice as many registers and the OS itself won't recognize x87 for 64-bit operations.
  • Barneyk - Saturday, January 29, 2005 - link

    I would've liked to see some benchmarks on older CPUs too, kinda disappointed...
  • levicki - Friday, January 28, 2005 - link

    I just wonder how this test would look if it had been done with a 6800 Ultra instead of the ATI X850 XT.

    Disabling SSE/SSE2 on the Athlon and getting the same results as with them enabled means that the game is NOT OPTIMIZED. Using FPU math instead of SSE/SSE2 today is a sin. It could have been 3-4 times faster if they cared about optimizing the code.
  • Phantronius - Friday, January 28, 2005 - link

    #53

    It's because the Prescotts weren't better than the Northwoods to begin with, hence why you don't see squat for performance differences between them.
  • maestroH - Friday, January 28, 2005 - link

    Thx for your reply #56. Apologies for the false '@9700pro' statement. Meant to say 'soft-modded with the Omega driver to 9700pro'. Cheers.
