Multitasking-

The vast majority of our benchmarks are single-task events that utilize anywhere from 23MB up to 1.4GB of memory over the course of a run. Obviously, this is not enough to fully stress our 6GB or 8GB memory configurations, so we devised a benchmark that simulates a typical home workstation and consumes as much of the 6GB/8GB as possible without crashing the machine.

We start by opening two instances of Internet Explorer 8.0, each with six tabs open to Flash-intensive websites, followed by Adobe Reader 9.1 with a rather large PDF document open and iTunes 8 blaring the music selection of the day. We then open two instances of Lightwave 3D 9.6 with our standard animation, Cinema 4D R11 with the benchmark scene, Microsoft Excel and Word 2007 with large documents, and finally Photoshop CS4 x64 with our test image.



Before we start the benchmark process, our idle state memory usage is 4.05GB. Sa-weet!



We wait two minutes for system activities to idle and then start playing Pinball Wizard via iTunes, start the render scene process in Cinema 4D R11, start a resize of our Photoshop image, and finally the render frame benchmark in Lightwave 3D. Our maximum memory usage during the benchmark is 5.62GB with 100% CPU utilization across all four or eight threads.
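For those who want to reproduce the monitoring side of this test, the sketch below shows one way to log peak memory usage and CPU utilization while the workload runs. It is only an illustration, not the tooling we actually used, and it assumes the third-party psutil package is installed.

```python
# peak_usage.py - sample system-wide memory use and CPU load during a benchmark run.
# Illustrative sketch only; assumes the third-party psutil package (pip install psutil).
import time
import psutil

def track_peaks(duration_s=600, interval_s=1.0):
    """Poll memory and CPU every interval_s seconds and report the peaks seen."""
    peak_mem_gb = 0.0
    peak_cpu_pct = 0.0
    end = time.time() + duration_s
    while time.time() < end:
        mem = psutil.virtual_memory()
        used_gb = (mem.total - mem.available) / (1024 ** 3)
        cpu_pct = psutil.cpu_percent(interval=interval_s)  # averaged over the interval
        peak_mem_gb = max(peak_mem_gb, used_gb)
        peak_cpu_pct = max(peak_cpu_pct, cpu_pct)
    print(f"Peak memory in use: {peak_mem_gb:.2f}GB")
    print(f"Peak CPU utilization: {peak_cpu_pct:.0f}%")

if __name__ == "__main__":
    track_peaks()
```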

[Chart: Application Performance - MultiTask Test - Lightwave 3D]

[Chart: Application Performance - MultiTask Test - Cinema 4D]

[Chart: Application Performance - MultiTask Test - Total Time]

So far, our results have pretty much been a shampoo, rinse, and repeat event. I believe multitasking is what separates the good systems from the not-so-good systems. I spend very little time using my system for gaming, and when I do game, everything else is shut down to maximize frame rates. Otherwise, I usually have a dozen or so browser windows open, music playing, several IM programs open and in use, Office apps, and various video/audio applications running in the background.

One or two of those primary applications are normally doing something simultaneously, especially when working. As such, I usually find this scenario to be one of the most demanding on a computer that is actually utilized for something besides trying to get a few benchmark sprints run before the LN2 pot goes dry.

The i5/750 results actually surprised me. The system never once felt “slow”, but the results do not lie. The i5/750 had its head served on a platter at stock speeds, primarily due to its lack of Hyper-Threading compared to the other choices. The 965 BE put up very respectable numbers and scaled almost linearly with clock speed: an 11% increase in clock speed resulted in a 10% improvement in its total benchmark score. You cannot ask for much more than that.
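To put a number on that scaling claim, the arithmetic is trivial; the short snippet below works it out (the 11% and 10% figures come from our results, everything else is purely illustrative).

```python
# Scaling efficiency: what fraction of the extra clock speed shows up as extra performance.
# The 11%/10% figures come from the 965 BE results above; the calculation itself is generic.
clock_gain = 0.11   # 11% higher clock speed
score_gain = 0.10   # 10% better total benchmark result
efficiency = score_gain / clock_gain
print(f"Scaling efficiency: {efficiency:.0%}")  # ~91% of the clock gain realized as performance
```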

At 3.8GHz, it is once again a toss-up between the 920 and 860 processors with HT enabled. The 920 did hold a slight advantage over the 860 at stock clock settings, attributable to slightly better data throughput under load. Otherwise, on the Intel side the i7/870 provided excellent results thanks to its aggressive Turbo mode, although at a price.

Gaming-

We utilize the Ranch Small demo file along with the Far Cry 2 benchmark utility. This particular demo offers a good balance of GPU and CPU load.

[Chart: Gaming Performance - Far Cry 2]


We utilize FRAPS to capture our results in a very repeatable section of the game and report the median score of our five benchmark runs. H.A.W.X. responds well to memory bandwidth improvements and scales linearly with CPU and GPU clock increases.
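For anyone duplicating our process, reporting the median rather than the mean of the five passes keeps a single outlier run from skewing the number; a minimal sketch with made-up frame rates is below.

```python
# Report the median of five benchmark passes; the frame rates here are made up for illustration.
from statistics import median

runs_fps = [61.4, 62.0, 60.8, 61.7, 61.2]  # hypothetical average FPS from five FRAPS captures
print(f"Reported result: {median(runs_fps):.1f} fps (median of {len(runs_fps)} runs)")
```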

[Chart: Gaming Performance - H.A.W.X.]


Your eyes are not deceiving you. After 100+ clean OS installs, countless video card, motherboard, memory and driver combinations, we have results that are not only repeatable, but appear to be valid. We also tracked in-game performance with FRAPS and had similar results. Put simply, unless we have something odd going on with driver optimizations, a BIOS bug, or a glitch in the OS, our NV cards perform better on the AMD platform than they do on the Intel platform. The pattern reverses itself when we utilize the AMD video cards.

It is items like this that make you lose hair and delay articles, neither of which I can afford to have happen. However, we have several suppliers assisting us with the problem (if it is a problem) and hope to have an answer shortly. These results also repeat themselves in other games like H.A.W.X. and Left 4 Dead, but not in Crysis Warhead or Dawn of War II. Besides the gaming situation, we also see a similar pattern in AutoCAD 2010 and other 3D rendering applications where GPU acceleration is utilized; it is just not as pronounced.

77 Comments

  • GeorgeH - Tuesday, September 15, 2009 - link

    *Puts on tinfoil hat*

    Intel won't let Nvidia make chipsets for 1156/1366.

    Nvidia GPUs perform conspicuously poorly only on 1156/1366.

    Coincidence???? You decide!!!!!!!

    *Takes off tinfoil hat*


    Random question: Does the Nvidia+Intel performance thing correlate at all to how multithreaded a game is?
  • CB434 - Tuesday, September 15, 2009 - link

    You know it's an honest review when unexpected results like this pop up.

    Purely coincidentally, I have emailed a few reviewers in the last few weeks about this...

    All the video card reviews for the last 12 months show i7s as the test rig for all the video cards. It's so stupid that no one has ever thought to use an Nvidia card on a Phenom. Until now! Good work. It's always either AMD + ATI, or i7 + ATI or Nvidia. All of the perceptions and ideas of performance for the current gen of Nvidia video cards are based on how they work on an Intel system, not overall.

    It's just a shame the Phenom doesn't have SLI/CF on the same board. I'd be using a Phenom II with 275GTX SLI, and it looks like it would be a kickass solution for gaming.

    I don't think it's necessarily a bug or a problem. Maybe it's just the lottery/fluke of when you combine different parts that have different ways of working. ATI and Nvidia go two separate ways to reach the same goal. Different memory bus bandwidth, shader clocks, etc., and maybe something in there gels with the way the Phenom II works.
  • FlanK3r - Tuesday, September 15, 2009 - link

    AMD X4 965 with 2200 MHz uncore

    2200MHz uncore is worse than the default 2000MHz! That's one bad point of the review :( ... I wrote a review about Phenom overclocking and made some comparisons of uncore vs. CPU clocks. For overall performance, a 2200MHz uncore is a very bad choice. 2000MHz is better, and of course 2400MHz or 2600MHz (and higher, for CPU clocks of 3700MHz and up).
  • daydr3am3r - Tuesday, September 15, 2009 - link

    I have to ask, and pardon me for the triviality and/or ignorance, but why is the article titled

    Topic: Motherboard
    Manufacturer: ASRock

    ?

    The accompanying picture also displays an ASRock mobo...
  • Ben90 - Tuesday, September 15, 2009 - link

    uhh, so AMD pretty much wiped the floor BIG time with the GTX 275. I've heard several reports that AMD chips perform better at higher resolutions than i7s, but after research I never found the results to be that drastic.

    I hope to hear the answer; maybe Nvidia released a driver that absolutely loves the Deneb architecture, or possibly AMD just got lucky with the two games that were benched.

    If it's something like the first one, and Phenom can do this consistently game to game, we might see some big changes soon.
  • MadMan007 - Tuesday, September 15, 2009 - link

    Thanks for the exploration of overclocks, but unfortunately this article is worthless to me, and perhaps many others, without any Core 2 CPUs. Unless I want to try to extrapolate back to older articles, and that's a guessing game plus I'd need to find overclocked results, I am unable to tell how much benefit a new system would bring.
  • ggathagan - Wednesday, September 16, 2009 - link

    As Gary clearly stated on the first page of the article and reiterated in the third comment, he had not yet finished with the P45/Q9550 testing at the time of his update.
    He also clearly stated that he would be adding those results at a later point.
  • coconutboy - Tuesday, September 15, 2009 - link

    Good update Gary, this is the kinda article/update I like to see. Comparisons are tough, but using dollar-for-dollar or clock-rate comparisons helps me as a consumer.

    I notice that for walk-in customers, Microcenter (at least the one near me here in SoCal) has the following prices:

    $230 i7 860
    $200 i7 920
    $160 i5 750

    I can get a 920 for less than an 860, then combine it with the ASRock X58 extreme that was recommended back in the July article ($170 at Newegg and it is getting high marks in the comments) and do a moderate overclock to 3.2-3.8GHz to achieve amazing performance for the price. Alternatively I can pick up the i5 750 and a ~$100-130 mobo and have a very low-cost outlay for a great gaming box. Hopefully AMD will also drop their CPU prices soon to give us yet another option.

    With the new AMD and Nvidia cards coming out in the near future, all these choices are very inexpensive for the performance and will save $$ to be spent on a brand-spanking new vid card. In the meantime, an ATI 4850 or Nvidia 9600GT can be had for under $100 to conserve $$ and tide gamers over till the new hardware hits. I'm building two new systems in the next few weeks, and the above is my gameplan.
  • MadMan007 - Tuesday, September 15, 2009 - link

    Anyone who is a gamer/enthusiast who doesn't have at *least* an HD4850 or 9600GT shouldn't be buying either one right now. Only if they're building a completely new system and are too desperate to wait.
  • cactusdog - Tuesday, September 15, 2009 - link

    Not worth running 3.8GHz if your temps are 90 degrees with the best air cooler money can buy. The 1366/1156 chips are great, but we are hitting a temperature ceiling now. People should be made aware of this because they will buy it, take it home, and realize they can't really run at those settings even with the best air cooler. If you can't run at those settings, the gap versus the 920 widens.

    This issue is being completely ignored or glossed over.
