Quad Core with Hyperthreading versus Quad Core

Back in April we published our first set of benchmarks examining which CPU to choose for gaming.  To that list we now add results from several Intel CPUs, including the vital data point of the quad-core i5-4670K, some other Haswell CPUs, the new extreme i7-4960X, and some vintage Nehalem CPUs we could not get hold of for the first round of results.

Many thanks go to GIGABYTE for the loan of the Haswell+Nehalem CPUs for this update and for use of an X58A-UD9.

The i5-4670K provides a salient data point in our testing – the question is always asked whether having more cores makes a difference.  Hyperthreading allows the processor to present extra logical cores, though sometimes at the expense of the single-thread speed of the threads sharing a physical core.  The i5-4670K also lands on the budget side of the equation when it comes to pricing, currently retailing for $240 compared to $340 for the i7-4770K.  It is often suggested that the overclockable i5 should offer similar performance to its i7 equivalent, and our inquisitive minds at AnandTech always want to settle the important questions through testing.
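
As a quick aside, one way to see whether a chip exposes more logical processors than physical cores (i.e. whether Hyperthreading is enabled) is simply to compare the two counts.  A minimal sketch using the third-party psutil module:

```python
import psutil  # third-party: pip install psutil

physical = psutil.cpu_count(logical=False)  # real cores
logical = psutil.cpu_count(logical=True)    # hardware threads

print(f"{physical} physical cores, {logical} logical processors")
if physical and logical and logical > physical:
    print("Hyperthreading/SMT appears to be enabled")
```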

This Update

Alongside the i5-4670K, this update also tests an i5-4430, which at the time of testing is Intel’s slowest quad-core part from the initial Haswell release.  We are still waiting for the dual-core parts to reach our testbeds so we can run them through the same tests.  We are also testing the ultimate high-end processor, the newly released i7-4960X, offering six hyperthreaded cores at a 4.0 GHz turbo frequency.  On the back of our Crystalwell testing, the CPU results from the i7-4750HQ are included, and at the request of some of our readers I was also able to source a pair of Haswell Xeons for testing – the E3-1280 v3 and the E3-1285 v3.  The difference between these two chips is solely the presence of the IGP on the 1285, which raises the official TDP by two watts.  For users who need neither overclocking nor an IGP, the E3-1280 v3 is a potential choice, offering a slightly higher clock speed and all the benefits of a Xeon at a $50 price difference.

Due to the time it takes to test any CPU for this article, it was all but impossible to go through every previous generation of processors from both AMD and Intel, let alone a wide enough variety to show where clock speeds and cache levels matter.  However, for this Intel update, three LGA 1366 CPUs managed to pass my way for a few weeks.  The top-selling i7-920 is part of this trio, along with the i7-950, which acted as a slightly more expensive upgrade, and the full-fat i7-990X, which in its day held the wallet-busting position the i7-4960X occupies now.  The first two in that list are quad cores with hyperthreading, whereas the i7-990X sits as a hexa-core.  Clearly Nehalem (and Westmere) suffer an IPC disadvantage compared to Sandy Bridge, Ivy Bridge and Haswell, but it is important to test where such a ‘performance platform’ sits in the grand scheme of things.

WHERE IS THE AMD?!?

Next update!  I currently have several AMD CPUs in for testing (Richland, Trinity, even a Sempron or two and a Llano) and have requested at least a half-dozen more from various sources (dual- and quad-module Piledriver, Athlon II X4), as well as a CPU or two from AM2/AM2+.  The Intel chips landed in my office first, and it made sense to split the results into two separate articles.  But rest assured, I hope that FX-6xxx, FX-4xxx and A10-6xxx numbers will be on their way soon.  Of course, the FX-9590 and its counterpart are also on my list, as and when we can get hold of a media sample.

Your Games are Old and do not Consider Multiplayer!

This is not an uncommon criticism of this article and the format it takes.  In order to be honest with my results, I have chosen titles that are no longer boosted by regular driver updates.  Due to the level of testing involved (one CPU can take 20+ hours including setup, CPU tests and GPU tests), we need a stable platform for comparison.  I go into detail on our testing procedure on the next page, but one important aspect of our testing is consistency and repeatability.  Almost no multiplayer scenario can offer this while still maintaining enough testing throughput to remain even partially relevant.
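
To illustrate what consistency and repeatability mean in practice, here is a minimal sketch – the game binary and its benchmark flags are hypothetical – of how a scripted run can be repeated and its spread checked before the numbers are trusted:

```python
import statistics
import subprocess
import time

RUNS = 4  # each CPU/GPU configuration gets several passes

def timed_pass(cmd):
    """Run one scripted benchmark pass and return its wall-clock time."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# Hypothetical command line for a game's built-in timedemo.
cmd = ["game.exe", "-benchmark", "-timedemo", "demo01"]

times = [timed_pass(cmd) for _ in range(RUNS)]
mean = statistics.mean(times)
spread = statistics.stdev(times)

# Only trust the result if run-to-run variation stays small,
# e.g. within a couple of percent of the mean.
print(f"mean {mean:.2f}s, stdev {spread:.2f}s ({100 * spread / mean:.1f}%)")
```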

My next big update for games and drivers will be in 2014, hopefully alongside a GPU update.  I hope this will entail more thorough testing (minimum FPS as well as average FPS), along with a move from our GTX 580s to something more powerful with PCIe 3.0 on the NVIDIA side.  We are currently looking at BioShock Infinite and Tomb Raider as possible avenues, and a couple of other titles look interesting.
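
For the curious, minimum and average FPS are straightforward to derive from a frame-time log.  A rough sketch, assuming a FRAPS-style list of cumulative per-frame timestamps in milliseconds:

```python
def fps_stats(timestamps_ms):
    """Average and minimum FPS from cumulative per-frame timestamps (ms)."""
    deltas = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    total_s = (timestamps_ms[-1] - timestamps_ms[0]) / 1000.0
    avg_fps = len(deltas) / total_s  # frames rendered per second overall
    min_fps = 1000.0 / max(deltas)   # instantaneous rate of the slowest frame
    return avg_fps, min_fps

# Example: nine frames at ~60 FPS with one 50 ms stutter frame.
ts = [0, 16.7, 33.4, 50.1, 100.1, 116.8, 133.5, 150.2, 166.9]
avg_fps, min_fps = fps_stats(ts)
print(f"avg {avg_fps:.1f} FPS, min {min_fps:.1f} FPS")
```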

Format Of This Article

On the next couple of pages, I will start by going through the reasons for this article.  Many of the reasons are the same as in the previous Haswell update, but for consistency and clarity it makes sense to repeat them for new readers coming to the results.

I will also list in detail our hardware for this review, including CPUs, motherboards, GPUs and memory.  Then we will move to the actual hardware setups, with CPU speeds and memory timings detailed. 

Also important to note are the motherboards being used – for completeness I have tested several CPUs in two different motherboards because of differing GPU lane allocations.  We are living in an age where PCIe switches and additional chips are used to expand GPU lane layouts, so much so that there are up to 20 different configurations for Z77/Z87 motherboards alone.  Sometimes the lane allocation makes a difference, and it can make a large one when using three or more GPUs (x8/x4/x4 vs. x16/x8/x8 with a PLX switch), even with the added latency sometimes associated with PCIe switches.  Our testing over time will cover the majority of PCIe lane allocations on modern setups – for this first article we are looking at the major ones we are likely to come across.
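
For readers who want to verify what their own board has negotiated, a minimal Linux-only sketch follows; it reads the link width and speed each GPU actually trained at via sysfs (attribute names as exposed by current kernels, so treat it as illustrative):

```python
import glob

def sysfs_read(path):
    """Read and strip one sysfs attribute."""
    with open(path) as f:
        return f.read().strip()

# Every PCIe device exposes its negotiated link parameters in sysfs.
# (On non-PCIe devices these attributes may be absent.)
for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    if not sysfs_read(dev + "/class").startswith("0x0300"):
        continue  # keep only VGA-class devices, i.e. the GPUs
    print(dev.split("/")[-1],
          "x" + sysfs_read(dev + "/current_link_width"),
          "@", sysfs_read(dev + "/current_link_speed"))
```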

The results pages will start with a basic CPU analysis, running through my regular motherboard test suite on each CPU.  This should give us a feel for how much power each CPU has for mathematical and real-world tests, both for integer operations (important on Bulldozer/Piledriver/Radeon) and floating point operations (where Intel/NVIDIA seem to perform best).
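
As a toy illustration of the integer/floating point split those tests probe – interpreter overhead dominates in Python, so this shows the shape of such a test rather than real hardware throughput:

```python
import time

N = 5_000_000

def time_loop(op, x):
    """Time N dependent applications of op (a crude throughput proxy)."""
    start = time.perf_counter()
    for _ in range(N):
        x = op(x)
    return time.perf_counter() - start

int_time = time_loop(lambda v: (v * 3 + 1) & 0xFFFFFFFF, 12345)  # integer ALU work
fp_time = time_loop(lambda v: v * 1.000001 + 0.5, 1.2345)        # floating point work

print(f"integer loop: {int_time:.2f}s, float loop: {fp_time:.2f}s")
```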

We will then move through each of our four gaming titles in turn, across our six different GPU configurations.  As mentioned on the next page, in GPU-limited scenarios it may seem odd when a sub-$100 CPU scores higher than one north of $300, but we hope to explain the tide of results as we go.

This will be an ongoing project here at AnandTech, and over time we can add more CPUs and more in-depth testing, and perhaps even show an extreme four-way setup should one become available to us.  The only danger is that each driver or game change takes another chunk of time to regenerate the data!  Any suggestions are of course greatly appreciated – drop me an email at ian@anandtech.com

Comments

  • BrightCandle - Thursday, October 3, 2013

    So again we see tests with games that are known not to scale with more CPU cores. There are games, however, that show clear benefits; your site simply doesn't test them. It's not universally true that more cores or HT make a difference, but maybe it would be a good idea to focus on the games we know do benefit, like Metro: Last Light, Hitman: Absolution, Medal of Honor: Warfighter and some areas of Crysis 3.

    The problem here is that it's these games that support more multithreading, so to give a true impression you need to test a wider and more modern set of games. To do otherwise is pretty misleading.
  • althaz - Thursday, October 3, 2013

    To test only those games would be more misleading, as the vast majority of games are barely multithreaded at all.
  • erple2 - Thursday, October 3, 2013

    Honestly, if the stats for single GPUs weren't all at about the same level, this would be an issue. It isn't until you get to multiple GPUs that you start to see some differentiation. But that level becomes very expensive very quickly. I'd posit that if you're already into multiple high-end video cards, the price difference between dual and quad core is relatively insignificant anyway, so the point is moot.
  • Pheesh - Thursday, October 3, 2013

    Appreciate the review, but it seems like the choice of games and settings makes the results primarily reflect a GPU-constrained situation (1440p max settings for a CPU test?). It would be nice to see some of the newer engines which utilize more cores, as most people will be buying a CPU for future titles. I'm personally more interested in the delta between the CPUs in CPU-bound situations. Early benchmarks of next-gen engines have shown larger differences between 8 threads and 4 threads.
  • cbrownx88 - Friday, October 4, 2013

    amen
  • TheJian - Sunday, October 6, 2013

    Precisely. Also, only 2% of us even own 1440p monitors, and I'm guessing the super small % of us in a terrible economy that have, say, $550 to blow on a PC (the price of the FIRST 1440p monitor you'd actually recognize the name of on Newegg – an Asus model with 122 reviews, and the only one with more than 12 reviews) would buy anything BUT a monitor that would probably require 2 vid cards to fully utilize anyway. Raise your hand if you're planning on buying a $550 monitor instead of, say, a near top end Maxwell next year? I see no hands. Since 98% of us are on 1920x1200 or LESS (and more to the point a good 60% are less than 1920x1080), I'm guessing we are all planning on buying either a top vid card, or if you're in the 60% or so that have UNDER 1080p, you'll buy a $100-200 monitor (1080p upgrade to 22in-24in) and a $350-450 vid card to max out your game play.

    Translation: These results affect less than 2% of us and are pointless for another few years at the very least. I'm planning on buying a 1440p monitor, but LONG after I get my Maxwell. The vid card improves almost everything I'll do in games; the monitor only works well if I ALREADY have the VID CARD muscle. Most of the super small 2% running 1440p or up have two vid cards to push their monitors (whatever they have). I don't want to buy a monitor and say "oh crap, all my games got super slow" for the next few years (1440p for me is a purchase once a name brand is $400 at 27in – only $150 away…LOL). I refuse to run anything but native and won't turn stuff off. I don't see the point in buying a beautiful monitor if I have to turn it into crap to get higher fps anyway... :)

    Who is this article for? Start writing articles for 98% of your readers, not 2%. You'll also find that CPUs are far more important where that 98% is running, as fewer games are GPU bound there. I find it almost stupid to recommend AMD these days for CPUs, and that stupidity grows even more as vid cards get faster. So basically, if you are running 1080p and plan to for a while, look at the CPU separation on the triple cards and consider that what you'll see as CPU results. If you want a good indication of what I mean, see the first or second 1440p article here and CTRL-F my nick. I previously listed all the games that part like the Red Sea, leaving AMD CPUs in the dust (it's quite a bit longer than Civ 5...ROFL). I gave links to the benchmarks showing all those games.
    http://www.anandtech.com/comments/6985/choosing-a-...
    There's the comments section on the 2nd 1440p article for the lazy people :)

    Note that even here in this article, TWO of the games aren't playable on single cards...LOL. 34fps avg in Metro 2033 means you'll be hitting low 20's or worse MINIMUM. Sleeping Dogs is already under 30fps AVG, so not even playable at avg fps, let alone the MIN fps you will hit (again, TEENS probably). So if you buy that fancy new $550+ monitor (because only a retard or a gambler buys a $350 korean job from ebay etc...LOL), get used to slide shows and stutter gaming even on SINGLE 7970s in a lot of games, never mind everything below sucking even more. Raise your hand if you have money for a $550 monitor AND a second vid card...ROFL. And these imaginary people this article is for apparently should buy a $110 CPU from AMD to pair with this setup...ROFLMAO.

    REALISTIC Recommendations:
    Buy a GPU first.
    Buy a great CPU second (and don't bother with AMD unless you're absolutely broke).
    Buy that 1440p monitor if your single card is already above a 7970, or if you're planning shortly on Maxwell or some such card. As we move to Unreal Engine 4, CryEngine 3.5 (or 4th-gen CryEngine…whatever) etc. next year, get ready to feel the pain of that 1440p monitor even more if you're not above a 7970. So again, this article should be considered largely irrelevant for most people unless you can fork over for the top end cards AND the monitor they test here. On top of that, as soon as you tell me you have the cash for both of those, what the heck are you doing talking about $100 AMD CPUs?...LOL.

    And for AMD gpu lovers like the whole anandtech team it seems...Where's the NV portal site? (I love the gpus, just not their drivers):
    http://hothardware.com/News/Origin-PC-Ditching-AMD...
    Origin AND Valve have abandoned AMD even in the face of new gpus. Origin spells it right out exactly as we already know:
    "Wasielewski offered a further clarifying statement from Alvaro Masis, one of Origin’s technical support managers, who said, “Primarily the overall issues have been stability of the cards, overheating, performance, scaling, and the amount of time to receive new drivers on both desktop and mobile GPUs.”

    http://www.pcworld.com/article/2052184/whats-behin...
    More data: nearly twice the failure rate at another vendor, confirming why the first probably dropped AMD. I'd call a 5% rate bad, never mind AMD's 1-yr rate of nearly 8% failure (and nearly 9% over 3 yrs). Cutting RMAs nearly in half certainly saves a company some money, never mind all the driver issues AMD still has and has had for 2 yrs. A person adds a monitor and calls tech support about AMD Eyefinity, right? If they add a gpu they call about Crossfire next?...ROFL. I hope AMD starts putting more effort into drivers; otherwise the hardware sucks no matter how good the silicon is. As a boutique vendor at the high end, surely the Crossfire and multi-monitor situation affects them more than most vendors, who don't really ship high end stuff anyway (read: overly expensive...heh)

    Note that one of the games here performs worse with 3 cards than with 2. So I guess even anandtech accidentally shows AMD's drivers still suck for triples. 12 CPUs in Civ 5 post above 107fps with two 7970s, but only 5 can do over 107fps with three cards...talk about going backwards. These tests, while wasted on 98% of us, should at the very least have been done with the GPU maker whose drivers function properly WITH 2 or 3 CARDS :)
  • CrispySilicon - Thursday, October 3, 2013

    What gives Anand?

    I may be a little biased here since I'm still rocking a Q6600 (albeit fairly overclocked). But with all the other high-end platforms you used, why not use a DDR3 X48/P45 board for S775?

    I say this because NOBODY who reads this article would still be running a mobo that old with PCIe 1.1, especially in a multi-GPU configuration.
  • dishayu - Friday, October 4, 2013

    I share your opinion on the matter, although I'm myself still running a Q6600 on an MSI P965 Platinum with an AMD HD 6670. :P
  • ThomasS31 - Thursday, October 3, 2013

    Please also add Battlefield 4 to the game tests in the next update(s) in 2014.

    I think it will be very relevant, based on the beta experience.
  • tackle70 - Thursday, October 3, 2013

    I know you dealt with this criticism in the intro, and I understand the reasoning (consistency, repeatability, etc.), but I'm going to criticize anyway...

    These CPU results are to me fairly insignificant and not worth the many hours of testing, given that the majority of cases where CPU muscle matters are multiplayer (BF3/Crysis 3/BF4/etc.). As you can see even from your own benchmark data, these single player scenarios just don't care about the CPU all that much - even in multi-GPU. That's COMPLETELY different in the multiplayer games above.

    Pretty much the only single player game I'm aware of that will eat up CPU power is Crysis 3. That game should at least be added to this test suite, in my opinion. I know it has no built-in benchmark, but it would at least serve as a point of contact between the world of single-player, CPU-agnostic, GPU-bound tests like these and the world of CPU-hungry multiplayer gaming.
