Comparing the Quad Cores: CPU Tests

As a straight-up comparison of what Intel offered in terms of quad cores, here’s an analysis of all the results for the 2600K, the 2600K overclocked, and Intel’s final quad-core-with-HyperThreading desktop chip, the 7700K.

On our CPU tests, the Core i7-2600K overclocked to a 4.7 GHz all-core frequency (and paired with DDR3-2400 memory) offers anywhere from a 10-24% increase in performance over stock settings with Intel’s maximum supported memory frequency. Users liked the 2600K for this reason – there were sizable gains to be had, and Intel’s immediate successors to the 2600K didn’t offer the same level of headroom or difference in performance.

However, when compared to the Core i7-7700K, Intel’s final quad-core with HyperThreading processor, users were able to get another 8-29% performance on top of that. Depending on the CPU workload, it is easy to see how a user could justify buying the latest quad-core processor and feel the benefits in more modern workloads, such as rendering or encoding, especially given how the gaming market has shifted toward a streaming culture. For more traditional workflows, such as PCMark or our legacy tests, the gains are only 5-12%, which is to be expected given that these older workloads don’t stress the areas where the newer chips improved.
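Because the 7700K’s 8-29% comes on top of the overclocked 2600K’s 10-24%, the two ranges stack multiplicatively when measured against a stock 2600K. A quick back-of-the-envelope check (my arithmetic on the article’s figures, not additional test data):

```python
# Combined uplift of a 7700K over a *stock* 2600K, assuming the two
# percentage ranges from the text compound multiplicatively.
low = 1.10 * 1.08   # lower ends of both ranges
high = 1.24 * 1.29  # upper ends of both ranges

print(f"Stock 2600K -> 7700K: +{low - 1:.0%} to +{high - 1:.0%}")
# → Stock 2600K -> 7700K: +19% to +60%
```

The upper end of that combined range is consistent with the later observation that the eight-core 9700K nearly doubles a stock 2600K.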

As for the Core i7-9700K, which has eight full cores and now sits in the spot of Intel’s best Core i7 processor, the performance gains are far more tangible: almost double in many cases against an overclocked Core i7-2600K, and more than double against one at stock.

The CPU case is clear: Intel’s last quad core with HyperThreading is an obvious upgrade for a 2600K user, even before overclocking, and the 9700K, which launched at almost the same price, is an easy sell. The gaming side of the equation isn’t so rosy, though.

Comparing the Quad Cores: GPU Tests

Modern games run at higher resolutions and quality settings than those around when the Core i7-2600K first launched, and bring new physics features, new APIs, and new gaming engines that can take advantage of the latest advances in CPU instructions as well as CPU-to-GPU connectivity. For our gaming benchmarks, we test four settings on each game (720p, 1080p, 1440p-4K, and 4K+) using a GTX 1080, one of last generation’s high-end gaming cards, and something that a number of Core i7 users might own for high-end gaming.

When the Core i7-2600K was launched, 1080p gaming was all the rage. I don’t think I purchased a monitor bigger than 1080p until 2012, and before then I was clan gaming on screens that could have been as low as 1366x768. The point here is that with modern games at older resolutions like 1080p, we do see a sizeable gain when the 2600K is overclocked. A 22% gain in frame rates from a 34% overclock sounds more than reasonable to any high-end focused gamer. Intel only managed to improve on that by 12% over the next few years to the Core i7-7700K, relying mostly on frequency gains. It’s not until the Core i7-9700K, with more cores and running games that actually know what to do with them, that we see another jump in performance.
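To put a number on how well that overclock translates into frames, we can divide the measured gain by the clock uplift (both figures from the paragraph above; the efficiency metric itself is my framing, not the article’s):

```python
# Per-clock scaling efficiency of the 2600K overclock in 1080p gaming:
# a 22% frame-rate gain from a 34% clock-speed increase.
clock_uplift = 0.34
fps_gain = 0.22

efficiency = fps_gain / clock_uplift
print(f"Per-clock scaling efficiency: {efficiency:.0%}")
# → Per-clock scaling efficiency: 65%
```

Anything under 100% simply reflects that memory, uncore, and GPU limits don’t speed up in step with the core clock.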

However, all those gains are muted at higher resolution settings, such as 1440p. Going from an overclocked 2600K to a brand new 9700K only gives a 9% increase in frame rates in modern games. At an enthusiast 4K setting, the results across the board are almost equal. As resolutions climb, even with modern physics and instructions and APIs, the bulk of the workload still falls on the GPU, and even the Core i7-2600K is powerful enough to feed it. There is the odd title where having the newer chip helps a lot more, but it’s in the minority.

That is, at least on average frame rates. Modern games and modern testing methods now test percentile frame rates, and the results are a little different.
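A percentile metric reports the frame rate at one of the slowest frames rather than the mean, which exposes stutter that averages hide. A minimal sketch of the idea (a common nearest-rank approach, assumed here rather than AnandTech’s exact pipeline):

```python
# 95th-percentile frame rate: sort per-frame times, take the frame time
# at the 95th percentile, and convert it back to frames per second.
def percentile_fps(frame_times_ms, pct=95):
    times = sorted(frame_times_ms)
    # nearest-rank index for the pct-th percentile frame time
    idx = min(len(times) - 1, int(len(times) * pct / 100))
    return 1000 / times[idx]

# Two runs with similar averages but very different consistency:
smooth = [16.7] * 95 + [20.0] * 5   # mild worst-case frames
stutter = [15.0] * 95 + [50.0] * 5  # occasional heavy hitches

print(round(percentile_fps(smooth)), round(percentile_fps(stutter)))  # → 50 20
```

Both runs average roughly 60 FPS, yet the percentile figure shows the second one hitching badly, which is exactly the behavior this metric is designed to catch.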

Here the results look a little worse for the Core i7-2600K and a bit better for the Core i7-9700K, but on the whole the broad picture for percentiles is the same as for average frame rates. The individual results show some odd outliers – Ashes of the Singularity, for example, was 15% down on percentiles at 4K for a stock 2600K, yet the 9700K was only 6% ahead of an overclocked 2600K – but as with the averages, it is really title dependent.

207 Comments

  • just4U - Sunday, May 12, 2019 - link

    An interesting read there, Ian. I started to notice a slowdown on 2600K-class systems a few years ago when I worked on them (I hadn't used one since 2014). For me, if I can notice those slowdowns in real time, then it's time to move away from that CPU. The 4790K appears to still be holding up ok, but the older 3000/2000 chips not so well.
  • crotach - Sunday, May 12, 2019 - link

    Still running 3930k Sandy Bridge.

    Maybe Ryzen 3000 will give me a reason to upgrade.
  • AndrewJacksonZA - Sunday, May 12, 2019 - link

    Best quote out of the entire article:
    "In 2019, the landscape has changed: gamers gonna stream, designers gonna design, scientists gonna simulate, and emulators gonna emulate" :-)

    But seriously though, for me, when I upgraded from a Core2Duo E6750 with 4GB of RAM to an i7-6700 (non-K) with 16GB of RAM, it was simply amazing. I was fully expecting that going from an i7-2600K to an i7-9700K would be similar - and it is for things like compiling but not for things like gaming.

    Thanks for the article, Ian! Dig the LAN setup. :-)
  • Targon - Sunday, May 12, 2019 - link

    Why would you test a CPU using a framerate test from Civilization 6, rather than the turn-length benchmark, which is a true test of the CPU rather than the GPU? Turn-based games SHOULD be there as CPU tests, and only caring about framerates seems wrong.
  • Oxford Guy - Sunday, May 12, 2019 - link

    When your overclock fails in one test you're unstable.

    When it fails in four, as in this article, you're both unstable and laughable.

    "Had issues". "For whatever reason". I will assume this is all intended to be humor.
  • DeltaIO - Monday, May 13, 2019 - link

    Interesting article to read. I've only recently upgraded from my 2600K to the 9700K, and even that was begrudging, as the 2600K itself still worked fine; however, the motherboard simply decided to give up on me.

    I've got to say though, the difference in the subsystems (NVMe vs SSD makes for some great load times for pretty much everything) as well as other tangible benefits (gaming at higher frame rates) is quite apparent now that I have upgraded.

    I would have upgraded far sooner had Intel not chosen to keep changing the sockets, swapping out just a CPU is far simpler than rebuilding the entire system.
  • Tedaz - Monday, May 13, 2019 - link

    Expecting the i9-9900K to join the article.
  • Badelhas - Monday, May 13, 2019 - link

    I am still on a 2500K overclocked to 4.8 GHz, with 8GB of DDR3-1600 RAM, an 850 Evo SSD, and an Nvidia 1070. I honestly see no reason to upgrade.
    IAN: All your testing basically demonstrated that there is no real reason that justifies spending 400 bucks for a new CPU, 200 bucks for a new motherboard, and 100 bucks for new DDR4 RAM – a total of 700 dollars. But your conclusion is that we should upgrade?! I don't get it.
  • tmanini - Monday, May 13, 2019 - link

    Go ahead and re-read his "Bottom Line" conclusion: it gives a few specific recommendations on where it may and may not be to your advantage. And if you aren't desiring/needing all of the other new bells and whistles that go along with newer boards and architecture, then you are set (he says).
    Seems pretty clear.
  • Midwayman - Monday, May 13, 2019 - link

    I think the biggest thing I noticed moving to an 8700K from a 2600K was the same thing I noticed moving from a Core 2 Duo to a 2600K: fewer weird pauses. The 2600K would get weird hitches in games. System processes would pop up and tank the frame rate for an instant, or an explosion would trigger a physics event that would make it stutter. I see that a lot less with a couple of extra cores and some performance overhead.
