Core i3-7350K vs Core i7-2600K

When I started testing for this review, I anticipated it would be interesting to find out how close the Core i3-7350K comes to the venerable Core i7-2600K. Several years of IPC increases in favor of the Core i3, plus extra frequency and overclockability, were to be put against brute force in the form of more, but older, cores. The Core i7-2600K is certainly no slouch: it is a good overclocking chip in its own right, hitting almost 5.0 GHz on a good sample, which has made it a very difficult processor for Intel to get users to upgrade from in this recent era of minimal IPC increases per generation.

That’s not to say that the lack of IPC increases is entirely Intel’s fault – at some point there are only so many knobs you can turn and twist in a given CPU microarchitecture. But the discussion of IPC gains leads to bigger questions: if we have GPUs at 250-300W, why not CPUs? Is Intel’s process focused on performance, power, or die size? Speak to a few analysts and you get a wide variety of reasoning as to how Intel does things, but money is usually the key factor. This is perhaps a topic to discuss on a podcast!

But to the matter at hand: there will come a time when a Core i3-based CPU matches the performance of the older Core i7-2600K. This is what the testing was about: at what point will we have enough frequency and IPC gain to match the older chip, potentially saving a good deal of power in the process? It’s not necessarily a question of upgrading – high-performance enthusiasts rarely move to a system that gives equal performance at lower power – but it does suggest that perhaps the Core i7-2600K is showing its age.

At a high level, the two CPUs have the following:

Kaby Lake i3 vs Sandy Bridge i7 Comparison

  CPU             Year   Cores/Threads   Freq      Turbo     L3    IGP       DRAM        TDP   Launch Price
  Core i3-7350K   2017   2/4             4.2 GHz   -         4MB   HD 630    DDR4-2400   60W   $168
  Core i7-2600K   2011   4/8             3.4 GHz   3.8 GHz   8MB   HD 3000   DDR3-1333   95W   $317

The Core i7-2600K has more cores and more cache. The Core i3-7350K has a higher frequency, a better memory controller, better integrated graphics, is more efficient, and was released at almost half the cost. The big question is whether frequency plus IPC can match the brute force of more cores.

Throughout the results, each benchmark with data for both processors has commentary on the Kaby Lake i3 vs Sandy Bridge i7 performance. Here’s a brief summary:

Kaby Lake i3 vs Sandy Bridge i7 Performance Difference

                               Average   Peak    Minimum
  CPU Single Thread            +25%      +44%    +4%
  CPU Multi-Thread             -18%      -40%    +3%
  Low GPU (R7 240)             +2%       +5%     +1%
  Mid GPU (R9 285/GTX 770)     +10%      +24%    0%
  High GPU (R9 290X/GTX 980)   +2%       +16%    -5%
  Integrated Graphics          +200%     +296%   +120%

As expected, the Core i3-7350K takes the single-threaded performance win. In every benchmark the Core i3 was ahead, by anywhere from +4% to +44%, with an average of +25%.

With the multithreaded CPU tests, the extra cores and threads of the older Core i7-2600K were often more than enough to overcome the newer part. The Core i3-7350K does, on average, -18% less work than the Core i7-2600K, which manifests as anything from near parity to a -40% loss in throughput (or taking almost 1.8x as long to finish the same task). Benchmarks in the latter category are mostly tasks with large threads competing for cache space, meaning the contest essentially becomes two cores vs four cores. Even with the frequency and IPC advantages, that’s a tough mark to jump over.

For the gaming tests, putting aside the obvious win on integrated graphics for Kaby Lake, we saw parity at the high end. Only one serious high-end GPU test saw a win for Kaby Lake, a +16% performance gain, but the interesting results occur on mid-range cards. There, the newer Core i3 outperformed the older Core i7 by an average of +10%, ranging anywhere from equal to +24% (Mordor at 1080p Ultra).

Ultimately the Core i3 parts are marketed towards the mid-range GPU gaming community, and the addition of overclocking may be enough to push a user to get a Core i3-K instead of a Core i5, migrating money over to a different part of the PC. But for our gaming tests, the Core i3-7350K certainly matches the Core i7-2600K.

Ultimately, however, the users who fully stretch the legs of the Core i7 through other means, such as compute-heavy work, already need the power of all the cores. As shown in our multithreaded tests, while the Core i3 is hot on the heels of the Core i7-2600K, the deficit of having two fewer cores is very difficult to overcome. That becomes more difficult as we overclock both CPUs into the 4.8 GHz region, where the 400-600 MHz frequency advantage of the Core i3 is lost and the contest is purely on IPC, cores and stability.

The Core i3-7350K, as a CPU, is not enough to encourage users who already push the Core i7-2600K to the absolute limit and want to stay there. For everyone else though, where responsiveness is more important, or in casual gaming, the Core i3-7350K becomes an adequate side step from the i7-2600K to decrease power consumption and move onto a new platform (and have the joy of building a new PC again). Windows 10 users also get the benefit of Speed Shift, affording more ‘free’ performance for responsive tasks.

It is worth noting that moving to the Core i3-7350K affords non-CPU related benefits. Moving from a P67/Z68 motherboard to a Z270 motherboard means:

  • GPU slots operate at PCIe 3.0, rather than PCIe 2.0 (on Z68)
  • More chipset PCIe lanes (24 lanes at PCIe 3.0 rather than 8 lanes at PCIe 2.0)
  • Support for bootable NVMe storage and PCIe storage
  • Newer audio codecs and networking controllers
  • Updated Intel RST (RAID/Caching)
  • A move from DDR3-1333 to DDR4-2400
  • Native USB 3.0 Ports from the chipset rather than from controllers
  • USB 3.1 (10 Gbps) on board via controllers rather than an add-in PCIe card
  • Potential Thunderbolt 3 support (depending on motherboard)
  • USB Type-C
  • This RGB fad that apparently sells like hot cakes
  • More SATA 6 Gbps ports (six on Z270 vs two on Z68)

At What Point Will the Performance of a Core i3 Beat the Core i7-2600K?

If we’re only speaking performance (I’m sure Intel would rather speak efficiency), then judging by our benchmark results, we’re almost there already. For all but the most strenuous tasks that require large caches, the Core i3 already wins out. Unfortunately, those are the tasks where the Core i3 needs a clear 2x performance gain per core to match the Core i7-2600K, and the -30% to -40% deficit we measured there can be a deciding factor in a purchase.

Let’s take a score-based test, such as POV-Ray. Here the Core i3-7350K scored 1009 points, compared to 1323 points for the Core i7-2600K (+31.1%). Based on the overclocking tests, each 100 MHz on the Core i3 was worth +25 points, and that scaling was essentially linear from 4.0 GHz to 4.8 GHz.

With extrapolation, the 314 point deficit requires an extra ~1250 MHz for the Core i3 to match that result.

So, all we need is a 5.45 GHz dual-core processor with Kaby Lake IPC.
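The extrapolation above can be written out as a quick back-of-the-envelope calculation; the scores and the +25 points per 100 MHz slope are the measured values from our testing, and the linearity beyond 4.8 GHz is an assumption:

```python
# Linear extrapolation of POV-Ray score with Core i3-7350K frequency.
# Assumes the measured +25 points per 100 MHz slope holds past 4.8 GHz.
i3_score = 1009          # Core i3-7350K at stock 4.2 GHz
i7_score = 1323          # Core i7-2600K at stock
points_per_100mhz = 25   # measured slope, linear from 4.0 to 4.8 GHz

deficit = i7_score - i3_score                  # 314 points
extra_mhz = deficit / points_per_100mhz * 100  # ~1250 MHz
target_ghz = 4.2 + extra_mhz / 1000            # ~5.45 GHz

print(f"Deficit: {deficit} points, "
      f"needs +{extra_mhz:.0f} MHz -> {target_ghz:.2f} GHz")
```

The raw arithmetic gives 1256 MHz and 5.46 GHz; the figures in the text are rounded.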

Or, think of it another way. If we assume Intel creates a 5% IPC increase year-on-year, a 4.2 GHz Core i3 will score 1059 next generation, then 1112, then 1168, and so on, until we hit a score of 1352 in six more generations. Roll on 2025?
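That generation count can be checked by compounding the hypothetical 5% yearly IPC gain at a fixed 4.2 GHz, starting from the Core i3-7350K's measured score:

```python
# Compound a hypothetical +5% IPC gain per generation at a fixed 4.2 GHz,
# starting from the Core i3-7350K's POV-Ray score, until it beats the i7-2600K.
score = 1009.0   # Core i3-7350K, POV-Ray
target = 1323    # Core i7-2600K, POV-Ray

generations = 0
while score < target:
    score *= 1.05
    generations += 1

print(f"{generations} generations -> score {score:.0f}")  # 6 generations -> 1352
```

Six compounded 5% steps take 1009 to ~1352, matching the numbers above.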

(As an aside, the best Kaby Lake CPUs tested by the world’s best overclockers can reach a semi-stable 5.3-5.4 GHz with water cooling and an AVX offset, but going beyond that, and achieving 100% stability, requires sub-zero cooling.)

186 Comments

  • Michael Bay - Saturday, February 4, 2017 - link

    >competition
    >AMD
  • Ranger1065 - Sunday, February 5, 2017 - link

    You are such a twat.
  • Meteor2 - Sunday, February 5, 2017 - link

    Ignore him. Don't feed trolls.
  • jeremynsl - Friday, February 3, 2017 - link

    Please consider abandoning the extreme focus on average framerates. It's old-school and doesn't really reflect the performance differences between CPUs anymore. Frame-time variance and minimum framerates are what is needed for these CPU reviews.
  • Danvelopment - Friday, February 3, 2017 - link

    Would be a good choice for a new build if the user needs the latest tech, but I upgraded my 2500K to a 3770 for <$100USD.

    I run an 850 for boot, a 950 for high speed storage on an adapter (thought it was a good idea at the time but it's not noticeable vs the 850) and an RX480.

    I don't feel like I'm missing anything.
  • Barilla - Friday, February 3, 2017 - link

    "if we have GPUs at 250-300W, why not CPUs?"

    I'm very eager to read a full piece discussing this.
  • fanofanand - Sunday, February 5, 2017 - link

    Those CPUs exist but don't make sense for home usage. Have you noticed how hard it is to cool 150 watts? Imagine double that. There are some extremely high powered server chips but what would you do with 32 cores?
  • abufrejoval - Friday, February 3, 2017 - link

    I read the part wasn't going to be available until later, did a search to confirm and found two offers: One slightly more expensive had "shipping date unknown", another slightly cheaper read "ready to ship", so that's what I got mid-January, together with a Z170 based board offering DDR3 sockets, because it was to replace an A10-7850K APU based system and I wanted to recycle 32GB of DDR3 RAM.

    Of course it wouldn't boot, because 3 out of 3 mainboards didn't have Kaby Lake support in the BIOS. Got myself a Skylake Pentium part to update the BIOS and returned it afterwards: an inexcusable hassle for me, for the dealer, and hopefully for the manufacturers, which had advertised "Kaby Lake" compatibility for months but shipped outdated BIOS versions.

    After that, this chip runs at 4.2 GHz out of the box and overclocks to 4.5 GHz without playing with voltage. It stays cool and draws modest Watts (never reaching 50W according to the onboard sensors, which you can't really trust, I gather).

    Use case is a 24/7 home-lab server running quite a mix of physical and virtual workloads on Win 2008R2 and VMware Workstation, mostly idle but with some serious remote desktop power, Plex video recoding oomph if required, and even a game now and then at 1080p.

    I want it to rev high on sprints, because I tend to be impatient, but there is a 12/24-core Xeon E5 at 3 GHz and a 4/8 Xeon E3 at 4 GHz sitting next to it for when I need heavy lifting and torque: those beasts are suspended when not in use.

    Sure enough, it is noticeably snappier than the big 12-core Xeon on desktop things and still much quieter than the quad, while of course any synthetic multi-core benchmark or server load leaves this chip in the dust.

    I run it with an Nvidia GTX 1050 Ti, which ensures a seamless experience with the Windows 7-generation Server 2008R2 on all operating systems, including CentOS 7 virtual or physical, which is starting to grey a little at the temples, yet adds close to zero power at idle.

    At 4.2 GHz the Intel i3-7350K HT dual is about twice as fast as the A10-7850K integer quad at the same clock speed (it typically turbos to 4.2 GHz without any BIOS OC pressure) for all synthetic workloads I could throw at it, which I consider rather sad (been running AMD and Intel side by side for decades).

    I overclocked mine easily to 4.8 GHz and even to 5 GHz with about 1.4V and leaving the uncore at 3.8 GHz. It was Prime95 stable, but my simple slow and quiet Noctua NH-L9x65 couldn't keep temperatures at safe levels so I stopped a little early and went back to an easy and cool 4.6 GHz at 1.24V for "production".

    I'm most impressed running x265 video recodes on terabytes of video material at 800-1200FPS on this i3-7350K/GTX 1050ti combo, which seems to leave both CPU and GPU oddly bored and able to run desktop and even gaming workloads in parallel with very little heat and noise.

    The Xeon monsters with their respective GTX 1070 and GTX 980 Ti GPUs would actually do that same job slower while burning more power, and yet video recoding has been such a big sales argument for the big Intel chips.

    Actually Handbrake x265 software encodes struggle to reach double digits on 24 threads on the "big" machine: Simply can't beat ASIC power with general purpose compute.

    I guess the Pentium HT variants are better value, but so is a 500cc scooter vs. a Turbo-Hayabusa. And here the difference is less than a set of home delivered pizzas for the family, while this chip will last me a couple of years and the pizza is gone in minutes.
  • Meteor2 - Sunday, February 5, 2017 - link

    Interesting that x265 doesn't scale well with cores. The developers claim to be experts in that area!
  • abufrejoval - Sunday, February 12, 2017 - link

    Sure, the Handbrake x265 code will scale with CPU cores, but the video processing unit (VPU) within the GTX 10-series provides several orders of magnitude better performance at much lower energy budgets. You'd probably need downright silly numbers of CPU cores (hundreds) with Handbrake to draw even in performance, and by then you'd be using several orders of magnitude more energy to get it done.

    AFAIK the VPU is the same on all (consumer?) Pascal GPUs and not related to GPU core count, so a 1080 or even a Titan X may not be any faster than a 1050.

    When I play around with benchmarks I typically have HWinfo running on a separate monitor and it reports the utilization and power budget from all the distinct function blocks in today's CPUs and GPUs.

    Not only does the GTX 1050ti on this system deliver 800-1200FPS when transcoding 1080p material from x264 to x265, but it also leaves CPU and GPU cores rather idle so I actually felt it had relatively little impact on my ability to game or do production work, while it is transcoding at this incredible speed.

    Intel CPUs since at least Sandy Bridge have also sported VPUs, and I have tried to use them similarly for MPEG to x264 transcodes, but in my experience the compression factor, compression quality, and speed have fallen short of Handbrake, so I didn't use them. AFAIK x265 encoding support is still missing on Kaby Lake.

    It just highlights the "identity" crisis of general purpose compute, where even the beefiest CPUs suck on any specific job compared to a fully optimized hardware solution.

    Any specific compute problem shared by a sufficiently high number of users tends to be moved into hardware. That's how GPUs and DSPs came to be and that's how VPUs are now making CPU and GPU based video transcoding obsolete via dedicated function blocks.

    And that explains why my smallest system really feels fastest with just 2 cores.

    The only type of workload where I can still see a significant benefit for the big Xeon cores are things like a full Linux kernel compile. But if the software eco-system there wasn't as bad as it is, incremental compiles would do the job and any CPU since my first 1MHz 8-Bit Z80 has been able to compile faster than I was able to write code (especially with Turbo Pascal).
