Test Bed and Setup

As with every CPU launch, there are a number of different directions to take the review. We have dedicated articles comparing the IPC of the new Kaby Lake line of CPUs, as well as a look into overclocking performance as a whole. We have had almost every desktop-class CPU family since Sandy Bridge tested in our benchmark suite, although only the latest have been retested. Due to timing, we were able to test all three of the new Kaby Lake-K processors and retest several Skylake processors, and we also have comparison data for Haswell, Ivy Bridge, and Sandy Bridge. It will be interesting to see how out-of-the-box CPU performance has changed over the last five generations.

As per our testing policy, we take each CPU and place it in a suitable high-end motherboard, and equip the system with a suitable amount of memory running at the processor's maximum supported frequency. This is also typically run at JEDEC sub-timings where possible. It is noted that some users are not keen on this policy, stating that sometimes the maximum supported frequency is quite low, that faster memory is available at a similar price, or that JEDEC speeds can be prohibitive for performance. While these comments make sense, ultimately very few users apply memory profiles (XMP or otherwise), as they require interaction with the BIOS, and most users will fall back on JEDEC-supported speeds - this includes home users as well as industry, who might want to shave off a cent or two from the cost or stay within the margins set by the manufacturer. Where possible, we will extend our testing to include faster memory modules, either at the same time as the review or at a later date.

Test Setup
Processor Intel Core i3-7350K (ES, Retail Stepping), 60W, $157
2 Cores, 4 Threads, 4.2 GHz
Motherboards MSI Z270 Gaming M7
Cooling Cooler Master Nepton 140XL
Power Supply OCZ 1250W Gold ZX Series
Corsair AX1200i Platinum PSU
Memory G.Skill Ripjaws 4 DDR4-2400 C15 2x16 GB 1.2V
Memory Settings DDR4-2400 C15
Video Cards ASUS GTX 980 Strix 4GB
MSI R9 290X Gaming 4GB
ASUS R7 240 2GB
Hard Drive Crucial MX200 1TB
Optical Drive LG GH22NS50
Case Open Test Bed
Operating System Windows 7 64-bit SP1

Readers of our reviews will have noted the trend in modern motherboards to implement a form of MultiCore Enhancement / Acceleration / Turbo (read our report here). This does several things, including giving better benchmark results at stock settings (not entirely needed if overclocking is an end-user goal) at the expense of extra heat and power. It also applies an automatic overclock which may be against what the user wants. Our testing methodology is 'out-of-the-box', with the latest public BIOS installed and XMP enabled, and thus subject to the whims of this feature. It is ultimately up to the motherboard manufacturer to take this risk - and manufacturers take risks in their setup on every product (think C-state settings, USB priority, DPC latency / monitoring priority, overriding memory sub-timings at JEDEC). Processor speed is part of that risk, and ultimately, even if no overclocking is planned, the choice of motherboard can affect how fast that shiny new processor runs, making it an important factor in the system build.
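
As a quick way to see whether a given board is applying such an enhancement, the reported core frequency can be logged while every thread is loaded and compared against the processor's rated speed. Below is a minimal sketch using the third-party psutil package; the 4.2 GHz figure matches the chip in this test, and psutil's frequency reporting is OS-dependent and can be coarse, so treat the result as indicative only.

```python
# Rough check for MultiCore Enhancement: load every logical core, then see
# whether the reported clock sits above the processor's rated frequency.
# psutil's cpu_freq() reporting is OS-dependent, so treat this as indicative.
import multiprocessing as mp
import time

import psutil

SPEC_ALL_CORE_MHZ = 4200  # i3-7350K runs at 4.2 GHz out of the box


def spin(seconds: float) -> None:
    """Busy-loop to keep one logical core fully loaded."""
    end = time.time() + seconds
    while time.time() < end:
        pass


if __name__ == "__main__":
    workers = [mp.Process(target=spin, args=(20.0,)) for _ in range(mp.cpu_count())]
    for w in workers:
        w.start()

    time.sleep(5)  # let the clocks settle under load
    samples = []
    for _ in range(10):
        samples.append(psutil.cpu_freq().current)  # current frequency in MHz
        time.sleep(0.5)

    for w in workers:
        w.join()

    avg_mhz = sum(samples) / len(samples)
    verdict = "above" if avg_mhz > SPEC_ALL_CORE_MHZ else "at or below"
    print(f"Average loaded frequency: {avg_mhz:.0f} MHz ({verdict} the rated speed)")
```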

Many thanks to...

We must thank the following companies for kindly providing hardware for our multiple test beds. Some of this hardware is not in this test bed specifically, but is used in other testing.

Thank you to AMD for providing us with the R9 290X 4GB GPUs. These are MSI branded 'Gaming' models, featuring MSI's Twin Frozr IV dual-fan cooler design and military class components. Bundled with the cards is MSI Afterburner for additional overclocking, as well as MSI's Gaming App for easy frequency tuning.

The R9 290X is a second-generation GCN card from AMD, under the Hawaii XT codename, and uses their largest Sea Islands GPU: 6.2 billion transistors in a 438mm2 die built at TSMC on a 28nm process. For the R9 290X, that means 2816 streaming processors with 64 ROPs using a 512-bit memory bus to GDDR5 (4GB in this case). The official power rating for the R9 290X is 250W.

The MSI R9 290X Gaming 4G runs the core at 1000 MHz to 1040 MHz depending on what mode it is in (Silent, Gaming or OC), and the memory at 5 GHz. Displays supported include one DisplayPort, one HDMI 1.4a, and two dual-link DVI-D connectors.

Further Reading: AnandTech's AMD R9 290X Review

Thank you to ASUS for providing us with GTX 980 Strix GPUs. At the time of release, the STRIX brand from ASUS was aimed at silent running, or to use the marketing term: '0dB Silent Gaming'. This enables the card to disable the fans when the GPU is dealing with low loads well within temperature specifications. These cards equip the GTX 980 silicon with ASUS' Direct CU II cooler and 10-phase digital VRMs, aimed at high-efficiency conversion. Along with the card, ASUS bundles GPU Tweak software for overclocking and streaming assistance.

The GTX 980 uses NVIDIA's GM204 silicon die, built upon their Maxwell architecture. This die has 5.2 billion transistors for a die size of 398 mm2, built on TSMC's 28nm process. A GTX 980 uses the full GM204 core, with 2048 CUDA cores and 64 ROPs on a 256-bit memory bus to GDDR5. The official power rating for the GTX 980 is 165W.

The ASUS GTX 980 Strix 4GB (or the full name of STRIX-GTX980-DC2OC-4GD5) runs a reasonable overclock over a reference GTX 980 card, with frequencies in the range of 1178-1279 MHz. The memory runs at stock, in this case 7010 MHz. Video outputs include three DisplayPort connectors, one HDMI 2.0 connector and a DVI-I.

Further Reading: AnandTech's NVIDIA GTX 980 Review

Thank you to Cooler Master for providing us with Nepton 140XL CLCs. The Nepton 140XL is Cooler Master's largest 'single' space radiator liquid cooler, and combines with dual 140mm 'JetFlo' fans designed for high performance, from 0.7-3.5mm H2O static pressure. The pump is also designed to be faster, more efficient, and uses thicker pipes to assist cooling with a rated pump noise below 25 dBA. The Nepton 140XL comes with mounting support for all major sockets, as far back as FM1, AM2 and 775.

Further Reading: AnandTech's Cooler Master Nepton 140XL Review

Thank you to Corsair for providing us with an AX1200i PSU. The AX1200i was the first power supply to offer digital control and management via Corsair's Link system, but under the hood it commands a 1200W rating at 50C with 80 PLUS Platinum certification. This allows for a minimum of 89-92% efficiency at 115V and 90-94% at 230V. The AX1200i is completely modular, running the larger 200mm design, with a dual ball bearing 140mm fan to assist high-performance use. The AX1200i is designed to be a workhorse, with up to 8 PCIe connectors suitable for four-way GPU setups. The AX1200i also comes with a Zero RPM mode for the fan, which due to the design allows the fan to be switched off when the power supply is under 30% load.

Further Reading: AnandTech's Corsair AX1500i Power Supply Review
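
To put those efficiency figures in perspective, wall draw is simply the DC load divided by the efficiency at that load. A quick illustrative calculation follows; the load points and per-load efficiencies are assumptions picked from within the quoted 89-92% range at 115V, not measured values.

```python
# Wall power = DC load / efficiency; the difference is dissipated as heat.
# Efficiency values are assumptions from within the quoted 89-92% (115V) range.
load_points = [
    ("20% load", 240, 0.89),
    ("50% load", 600, 0.92),
    ("100% load", 1200, 0.89),
]

for label, dc_watts, efficiency in load_points:
    wall_watts = dc_watts / efficiency
    print(f"{label}: {dc_watts}W DC -> ~{wall_watts:.0f}W at the wall, "
          f"~{wall_watts - dc_watts:.0f}W lost as heat")
```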

Thank you to Crucial for providing us with MX200 SSDs. Crucial stepped up to the plate as our benchmark list grows larger with newer benchmarks and titles, and the 1TB MX200 units are strong performers. Based on Marvell's 88SS9189 controller and using Micron's 16nm 128Gbit MLC flash, these are 7mm high, 2.5-inch drives rated for 100K random read IOPs and 555/500 MB/s sequential read and write speeds. The 1TB models we are using here support TCG Opal 2.0 and IEEE-1667 (eDrive) encryption and have a 320TB rated endurance with a three-year warranty.

Further Reading: AnandTech's Crucial MX200 (250 GB, 500 GB & 1TB) Review

Thank you to G.Skill for providing us with memory. G.Skill has been a long-time supporter of AnandTech over the years, supplying memory for testing well beyond our CPU and motherboard reviews. We've reported on their high-capacity and high-frequency kits, and every year at Computex G.Skill holds a world overclocking tournament with liquid nitrogen right on the show floor. One of the most recent deliveries from G.Skill was their 4x16 GB DDR4-3200 C14 kit, which we are planning to use in an upcoming review.

Further Reading: AnandTech's Memory Scaling on Haswell Review, with G.Skill DDR3-3000

Thank you to Corsair for providing us with memory. Similarly, Corsair (along with its PSUs) is also a long-time supporter of AnandTech. Being one of the first vendors with 16GB DDR4 modules was a big deal, and now Corsair is bringing LEDs back to its memory after a long hiatus, along with supporting specific projects such as ASUS ROG versions of the Dominator Platinum range. We're currently looking at our review pipeline to see when our next DRAM round-up will be, and Corsair is poised to participate.

Further Reading: AnandTech's Memory Scaling on Haswell-E Review

Comments

  • Michael Bay - Saturday, February 4, 2017 - link

    >competition
    >AMD
  • Ranger1065 - Sunday, February 5, 2017 - link

    You are such a twat.
  • Meteor2 - Sunday, February 5, 2017 - link

    Ignore him. Don't feed trolls.
  • jeremynsl - Friday, February 3, 2017 - link

    Please consider abandoning the extreme focus on average framerates. It's old-school and doesn't really reflect the performance differences between CPUs anymore. Frame-time variance and minimum framerates are what is needed for these CPU reviews.
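
    For reference, those metrics come from the distribution of per-frame render times rather than a single average; the sketch below shows the calculation on synthetic data (real captures would come from a tool such as PresentMon or FRAPS logs).

    ```python
    # Derive average FPS, 99th-percentile frame time, and "1% low" FPS from a
    # list of per-frame render times in milliseconds. Synthetic data for illustration.
    from statistics import mean


    def frame_metrics(frame_times_ms):
        ordered = sorted(frame_times_ms)
        n = len(ordered)
        avg_fps = 1000.0 / mean(ordered)
        p99_ms = ordered[min(n - 1, int(n * 0.99))]           # 99th-percentile frame time
        worst_1pct = ordered[int(n * 0.99):] or ordered[-1:]  # slowest 1% of frames
        low_1pct_fps = 1000.0 / mean(worst_1pct)              # "1% low" framerate
        return avg_fps, p99_ms, low_1pct_fps


    # Mostly 16.7 ms frames (60 FPS) with a few 40 ms stutters mixed in.
    times = [16.7] * 97 + [40.0] * 3
    avg, p99, low = frame_metrics(times)
    print(f"avg {avg:.1f} FPS, 99th pct frame time {p99:.1f} ms, 1% low {low:.1f} FPS")
    ```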
  • Danvelopment - Friday, February 3, 2017 - link

    Would be a good choice for a new build if the user needs the latest tech, but I upgraded my 2500K to a 3770 for <$100USD.

    I run an 850 for boot, a 950 for high speed storage on an adapter (thought it was a good idea at the time but it's not noticeable vs the 850) and an RX480.

    I don't feel like I'm missing anything.
  • Barilla - Friday, February 3, 2017 - link

    "if we have GPUs at 250-300W, why not CPUs?"

    I'm very eager to read a full piece discussing this.
  • fanofanand - Sunday, February 5, 2017 - link

    Those CPUs exist but don't make sense for home usage. Have you noticed how hard it is to cool 150 watts? Imagine double that. There are some extremely high powered server chips but what would you do with 32 cores?
  • abufrejoval - Friday, February 3, 2017 - link

    I read the part wasn't going to be available until later, did a search to confirm, and found two offers: one slightly more expensive with "shipping date unknown", another slightly cheaper marked "ready to ship", so that's what I got mid-January, together with a Z170-based board offering DDR3 slots, because it was to replace an A10-7850K APU-based system and I wanted to recycle 32GB of DDR3 RAM.

    Of course it wouldn't boot, because 3 out of 3 mainboards didn't have Kaby Lake support in the BIOS. Got myself a Skylake Pentium part to update the BIOS and returned it: an inexcusable hassle for me, the dealer, and hopefully for the manufacturers, which had advertised "Kaby Lake" compatibility for months but shipped outdated BIOS versions.

    After that, this chip runs at 4.2 GHz out of the box and overclocks to 4.5 GHz without playing with voltage. Stays cool and draws modest wattage (never reaching 50W according to the onboard sensors, which you can't really trust, I gather).

    Use case is a 24/7 home-lab server running quite a mix of physical and virtual workloads on Win 2008R2 and VMware Workstation, mostly idle but with some serious remote desktop power, Plex video recoding oomph if required, and even a game now and then at 1080p.

    I want it to rev high on sprints, because I tend to be impatient, but there is a 12/24 core Xeon E5 at 3 GHz and a 4/8 Xeon E3 at 4GHz sitting next to it, when I need heavy lifting and torque: Those beasts are suspended when not in use.

    Sure enough, it is noticeably snappier than the big 12-core Xeon on desktop things and still much quieter than the quad, while of course any synthetic multi-core benchmark or server load leaves this chip in the dust.

    I run it with an Nvidia GTX 1050ti, which ensures a seamless experience with the Windows 7-generation Server 2008R2 and every other operating system, including CentOS 7 (virtual or physical), which is starting to grey a little at the temples, yet adds close to zero power at idle.

    At 4.2 GHz the Intel i3-7350K HT dual is about twice as fast as the A10-7850K integer quad at the same clock speed (it typically turbos to 4.2 GHz without any BIOS OC pressure) for all synthetic workloads I could throw at it, which I consider rather sad (been running AMD and Intel side by side for decades).

    I overclocked mine easily to 4.8 GHz and even to 5 GHz with about 1.4V and leaving the uncore at 3.8 GHz. It was Prime95 stable, but my simple slow and quiet Noctua NH-L9x65 couldn't keep temperatures at safe levels so I stopped a little early and went back to an easy and cool 4.6 GHz at 1.24V for "production".

    I'm most impressed running x265 video recodes on terabytes of video material at 800-1200FPS on this i3-7350K/GTX 1050ti combo, which seems to leave both CPU and GPU oddly bored and able to run desktop and even gaming workloads in parallel with very little heat and noise.

    The Xeon monsters with their respective GTX 1070 and GTX 980ti GPUs would actually do that same job more slowly while putting out more heat, and yet video recoding has been such a big sales argument for the big Intel chips.

    Actually, Handbrake x265 software encodes struggle to reach double digits on 24 threads on the "big" machine: you simply can't beat ASIC power with general-purpose compute.

    I guess the Pentium HT variants are better value, but so is a 500cc scooter vs. a Turbo-Hayabusa. And here the difference is less than a set of home delivered pizzas for the family, while this chip will last me a couple of years and the pizza is gone in minutes.
  • Meteor2 - Sunday, February 5, 2017 - link

    Interesting that x265 doesn't scale well with cores. The developers claim to be experts in that area!
  • abufrejoval - Sunday, February 12, 2017 - link

    Sure, the Handbrake x265 code will scale with CPU cores, but the video processing unit (VPU) within the GTX 10-series provides several orders of magnitude better performance at a much lower energy budget. You'd probably need downright silly numbers of CPU cores (hundreds) with Handbrake to draw even in performance, and by then you'd be using several orders of magnitude more energy to get it done.

    AFAIK the VPU is the same on all (consumer?) Pascal GPUs and not related to the GPU cores, so a 1080 or even a Titan X may not be any faster than a 1050.

    When I play around with benchmarks I typically have HWinfo running on a separate monitor and it reports the utilization and power budget from all the distinct function blocks in today's CPUs and GPUs.

    Not only does the GTX 1050ti on this system deliver 800-1200FPS when transcoding 1080p material from x264 to x265, but it also leaves CPU and GPU cores rather idle so I actually felt it had relatively little impact on my ability to game or do production work, while it is transcoding at this incredible speed.
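
    For anyone wanting to try that kind of fixed-function transcode, one route is ffmpeg's NVENC HEVC encoder, which runs on the GPU's video block rather than its CUDA cores. A minimal sketch follows; the file names and quality settings are placeholders, and ffmpeg must be built with NVENC support for this to work.

    ```python
    # Offload an H.264 -> HEVC transcode to the GPU's fixed-function encoder via
    # ffmpeg's hevc_nvenc. File names and quality settings are placeholders.
    import subprocess

    cmd = [
        "ffmpeg",
        "-hwaccel", "cuda",       # decode on the GPU too, where supported
        "-i", "input_h264.mkv",
        "-c:v", "hevc_nvenc",     # NVENC HEVC encoder (the video block, not CUDA)
        "-rc", "vbr",             # variable-bitrate rate control
        "-cq", "28",              # quality target; tune to taste
        "-c:a", "copy",           # pass the audio through untouched
        "output_hevc.mkv",
    ]
    subprocess.run(cmd, check=True)
    ```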

    Intel CPUs at least since Sandy Bridge have also sported VPUs, and I have tried to use them similarly for MPEG to x264 transitions, but in my experience the compression factor, compression quality, and speed fell short of Handbrake, so I didn't use them. AFAIK x265 encoding support is still missing on Kaby Lake.

    It just highlights the "identity" crisis of general purpose compute, where even the beefiest CPUs suck on any specific job compared to a fully optimized hardware solution.

    Any specific compute problem shared by a sufficiently high number of users tends to be moved into hardware. That's how GPUs and DSPs came to be and that's how VPUs are now making CPU and GPU based video transcoding obsolete via dedicated function blocks.

    And that explains why my smallest system really feels fastest with just 2 cores.

    The only type of workload where I can still see a significant benefit from the big Xeon cores is things like a full Linux kernel compile. But if the software ecosystem there weren't as bad as it is, incremental compiles would do the job, and any CPU since my first 1MHz 8-bit Z80 has been able to compile faster than I was able to write code (especially with Turbo Pascal).
