Power Consumption

For our regular readers, the topic of power consumption has been an interesting one as of late. For the most part, Intel's consumer processors have come in under their expected power consumption, but the recent Skylake-X processors have upended that notion, with numbers almost 20% above expectations at full load.

The Thermal Design Power (TDP) of a processor is the cooling capability required to keep that processor adequately cooled - while it is not the exact power consumption, it is a rough indication of how much power a processor is likely to consume. Higher cooling requirements translate to a higher TDP, which naturally fits a chip that consumes more power. Our last review of consumer processors, the Kaby Lake 7th Generation chips, showed that the Core i7-7700K consumed almost exactly the TDP of the chip, while the Core i5 processors came in under their TDP rating by a large margin. The Coffee Lake processors follow this trend.

Power: Total Package (Full Load)

The Core i7-8700K has a TDP of 95W, but consumes 86.2W at full load, of which the cores account for 78.6W. The rest of the power is consumed mostly by the uncore and the memory controller.

The Core i5-8400 is rated at 65W, and consumes only 49.3W at full load, of which 41.7W is from the cores. That leaves 7.6W on the table for the uncore and memory controller, which is almost identical to that of the Core i7-8700K, showing the similarity in design.
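
For readers who want to make a similar core/uncore split on their own systems, the sketch below is one hedged way to do it on Linux using Intel's RAPL energy counters exposed through the powercap interface. The domain paths and sub-domain layout are assumptions that vary from machine to machine, and this is not the instrumentation used for the figures in this review.

```python
import time

# Typical (but not guaranteed) RAPL domain paths; check /sys/class/powercap
# on the target machine. Counters report cumulative energy in microjoules
# and wrap around, so keep the sampling window short.
PKG_PATH  = "/sys/class/powercap/intel-rapl:0/energy_uj"    # whole package
CORE_PATH = "/sys/class/powercap/intel-rapl:0:0/energy_uj"  # core sub-domain

def read_uj(path: str) -> int:
    with open(path) as f:
        return int(f.read().strip())

def measure(seconds: float = 10.0):
    """Average package and core power (watts) over one sampling window."""
    pkg0, core0 = read_uj(PKG_PATH), read_uj(CORE_PATH)
    time.sleep(seconds)
    pkg1, core1 = read_uj(PKG_PATH), read_uj(CORE_PATH)
    pkg_w  = (pkg1 - pkg0) / 1e6 / seconds   # microjoules -> joules -> watts
    core_w = (core1 - core0) / 1e6 / seconds
    return pkg_w, core_w

if __name__ == "__main__":
    pkg_w, core_w = measure()
    # The remainder is a rough stand-in for uncore + memory controller power.
    print(f"Package: {pkg_w:.1f} W, cores: {core_w:.1f} W, rest: {pkg_w - core_w:.1f} W")
```

Run under a sustained all-core load, the package-minus-cores difference gives a rough proxy for the uncore and memory controller figure discussed above.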

Test Bed and Setup

As per our processor testing policy, we take a premium category motherboard suitable for the socket, and equip the system with a suitable amount of memory running at the manufacturer's maximum supported frequency. This is also typically run at JEDEC subtimings where possible. It is noted that some users are not keen on this policy, stating that sometimes the maximum supported frequency is quite low, that faster memory is available at a similar price, or that the JEDEC speeds can be prohibitive for performance. While these comments make sense, ultimately very few users apply memory profiles (either XMP or other) as they require interaction with the BIOS, and most users will fall back on JEDEC supported speeds - this includes home users as well as industry, who might want to shave off a cent or two from the cost or stay within the margins set by the manufacturer. Where possible, we will extend our testing to include faster memory modules, either at the same time as the review or at a later date.
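
As a quick sanity check that the modules in a test bed really are running at the intended speed (a JEDEC fallback versus an applied XMP profile), something along the lines of the sketch below can be used on a Windows system; the wmic query and the interpretation of its fields are assumptions for illustration, not part of our test procedure.

```python
import csv
import io
import subprocess

# Hypothetical check: compare each DIMM's rated speed with the speed it is
# currently configured to run at. If the configured value is lower than the
# rated one, the module is likely running at a JEDEC fallback rather than its
# XMP rating. wmic is present on Windows 10 but deprecated on newer builds.
raw = subprocess.check_output(
    ["wmic", "memorychip", "get", "Speed,ConfiguredClockSpeed", "/format:csv"],
    text=True,
)

# wmic pads its output with blank lines and stray carriage returns; tidy first.
clean = "\n".join(line.strip() for line in raw.splitlines() if line.strip())

for row in csv.DictReader(io.StringIO(clean)):
    rated = row.get("Speed")
    configured = row.get("ConfiguredClockSpeed")
    if rated and configured:
        state = "at rated speed" if configured == rated else "JEDEC / fallback"
        print(f"Rated {rated} MT/s, running at {configured} MT/s ({state})")
```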

Test Setup
Processor         Intel Core i7-8700K (6C/12T, 95W, 3.8 GHz)
                  Intel Core i5-8400 (6C/6T, 65W, 2.8 GHz)
Motherboard       GIGABYTE Z370 Gaming 7
Cooling           Silverstone Argon AR10-115XS
Power Supply      Corsair AX760i PSU
Memory            Corsair Vengeance Pro DDR4-2666 4x8 GB
Video Card        MSI GTX 1080 Gaming 8GB
Hard Drive        Crucial MX200 1TB
Optical Drive     LG GH22NS50
Case              Open Test Bed
Operating System  Windows 10 Pro 64-bit

Many thanks to...

We must thank the following companies for kindly providing hardware for our multiple test beds. Some of this hardware is not in this test bed specifically, but is used in other testing.

Thank you to Sapphire for providing us with several of their AMD GPUs. We met with Sapphire back at Computex 2016 and discussed a platform for our future testing on AMD GPUs with their hardware for several upcoming projects. As a result, they were able to sample us the latest silicon that AMD has to offer. At the top of the list was a pair of Sapphire Nitro R9 Fury 4GB GPUs, based on the first generation of HBM technology and AMD's Fiji platform. As the first consumer GPU to use HBM, the R9 Fury marks a key moment in graphics history, and these Nitro cards come with 3584 SPs running at 1050 MHz on the GPU with 4GB of 4096-bit HBM memory at 1000 MHz.

Further Reading: AnandTech’s Sapphire Nitro R9 Fury Review

Following the Fury, Sapphire also supplied a pair of their latest Nitro RX 480 8GB cards to represent AMD's current performance silicon on 14nm (as of March 2017). The move to 14nm yielded significant power consumption improvements for AMD, which combined with the latest version of GCN helped bring the target of a VR-ready graphics card as close to $200 as possible. The Sapphire Nitro RX 480 8GB OC graphics card is designed to be a premium member of the RX 480 family, having a full set of 8GB of GDDR5 memory at 8 Gbps with 2304 SPs at 1208/1342 MHz engine clocks.

Further Reading: AnandTech’s AMD RX 480 Review

With the R9 Fury and RX 480 assigned to our gaming tests, Sapphire also passed on a pair of RX 460s to be used as our CPU testing cards. The amount of GPU power available can have a direct effect on CPU results, especially if the CPU ends up spending its time waiting on the GPU. The RX 460 is a nice card to have here, as it is capable yet low on power consumption and does not require any additional power connectors. The Sapphire Nitro RX 460 2GB follows the Nitro philosophy, and in this case is designed to deliver performance at a low price point. Its 896 SPs run at 1090/1216 MHz frequencies, and it is paired with 2GB of GDDR5 at an effective 7000 MHz.

We must also say thank you to MSI for providing us with their GTX 1080 Gaming X 8GB GPUs. Despite the size of AnandTech, securing high-end graphics cards for CPU gaming tests is rather difficult. MSI stepped up to the plate in good fashion and high spirits with a pair of their high-end graphics cards. The MSI GTX 1080 Gaming X 8GB graphics card is their premium air-cooled product, sitting below the water-cooled Seahawk but above the Aero and Armor versions. The card is large with twin Torx fans, a custom PCB design, Zero-Frozr technology, enhanced PWM and a big backplate to assist with cooling. The card uses a GP104-400 silicon die from a 16nm TSMC process, contains 2560 CUDA cores, and can run up to 1847 MHz in OC mode (or 1607-1733 MHz in Silent mode). The memory interface is 8GB of GDDR5X, running at 10010 MHz. For a good amount of time, the GTX 1080 was king of the hill.

Further Reading: AnandTech’s NVIDIA GTX 1080 Founders Edition Review

Thank you to ASUS for providing us with their GTX 1060 6GB Strix GPU. To complete the high/low cases for both AMD and NVIDIA GPUs, we looked towards the GTX 1060 6GB cards to balance price and performance while taking a hefty crack at 1080p gaming and above with a single graphics card. ASUS lent a hand here, supplying a Strix variant of the GTX 1060. This card is even longer than our GTX 1080, with three fans and LEDs crammed under the hood. Strix is now ASUS' lower-cost gaming brand behind ROG, and the Strix 1060 sits at nearly half of a GTX 1080, with 1280 CUDA cores but running at 1506 MHz base frequency up to 1746 MHz in OC mode. The 6 GB of GDDR5 runs at a healthy 8008 MHz across a 192-bit memory interface.

Further Reading: AnandTech’s ASUS GTX 1060 6GB STRIX Review
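
To put the memory numbers quoted for these cards into perspective, peak bandwidth is simply the effective data rate multiplied by the bus width. Below is a small worked sketch using the GTX 1060 figures quoted above; the 256-bit width used for the GTX 1080 line is the reference spec, not something stated in this piece.

```python
def peak_bandwidth_gbs(effective_mhz: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s from effective data rate and bus width."""
    # Effective MHz is treated as mega-transfers per second; each transfer
    # moves bus_width_bits of data.
    return effective_mhz * 1e6 * bus_width_bits / 8 / 1e9

# Figures quoted above for the ASUS Strix GTX 1060 6GB (8008 MHz, 192-bit):
print(f"GTX 1060: {peak_bandwidth_gbs(8008, 192):.0f} GB/s")   # ~192 GB/s

# The MSI GTX 1080's 10010 MHz GDDR5X on a 256-bit bus (reference spec):
print(f"GTX 1080: {peak_bandwidth_gbs(10010, 256):.0f} GB/s")  # ~320 GB/s
```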

Thank you to Crucial for providing us with MX200 SSDs. Crucial stepped up to the plate as our benchmark list grows larger with newer benchmarks and titles, and the 1TB MX200 units are strong performers. Based on Marvell's 88SS9189 controller and using Micron's 16nm 128Gbit MLC flash, these are 7mm-high, 2.5-inch drives rated for 100K random read IOPS and 555/500 MB/s sequential read and write speeds. The 1TB models we are using here support TCG Opal 2.0 and IEEE-1667 (eDrive) encryption and have a 320TB rated endurance with a three-year warranty.

Further Reading: AnandTech's Crucial MX200 (250 GB, 500 GB & 1TB) Review

Thank you to Corsair for providing us with an AX860i PSU. The AX860i carries an 860W rating at 50°C with 80 PLUS Platinum certification, which allows for a minimum of 89-92% efficiency at 115V and 90-94% at 230V. The AX860i is completely modular, running the larger 200mm design, with a dual ball-bearing 140mm fan to assist high-performance use. The AX860i is designed to be a workhorse, with plenty of PCIe connectors for suitable GPU setups. The AX860i also comes with a Zero RPM mode for the fan, which due to the design allows the fan to be switched off when the power supply is under 30% load.

Further Reading: AnandTech's Corsair AX1500i Power Supply Review

Thank you to G.Skill for providing us with memory. G.Skill has been a long-time supporter of AnandTech over the years, supplying memory for testing beyond our CPU and motherboard reviews. We've reported on their high-capacity and high-frequency kits, and every year at Computex G.Skill holds a world overclocking tournament with liquid nitrogen right on the show floor.

Further Reading: AnandTech's Memory Scaling on Haswell Review, with G.Skill DDR3-3000

222 Comments

  • mkaibear - Saturday, October 7, 2017 - link

    Well, I'd broadly agree with that!

    There are latency issues with that kind of approach but I'm sure they'd be solvable. It'll be interesting to see what happens with Intel's Mesh when it inevitably trickles down to the lower end / AMD's Infinity Fabric when they launch their APUs.
  • mapesdhs - Tuesday, October 10, 2017 - link

    Such an idea is kinda similar to SGI's shared memory designs. Problem is, scalable systems are expensive, and these days the issue of compatibility is so strong that making anything new and unique is very difficult; companies just don't want to try out anything different. SGI got burned with this re their VW line of PCs.
  • boeush - Saturday, October 7, 2017 - link

    I think it's a **VERY** safe bet that most systems selling with an i7 8700/k will also include some sort of a discrete GPU. It's almost unimaginable that anyone would buy/build a system with such a CPU but no better GPU than integrated graphics.

    Which makes the iGPU a total waste of space and a piece of useless silicon that consumers are needlessly paying for (because every extra square inch of die area costs $$$).

    For high-end CPUs like the i7s, it would make much more sense to ditch the iGPU and instead spend that extra silicon to add an extra couple of cores, and a ton more cache. Then it would be a far better CPU for the same price.

    So I'm totally with the OP on this one.
  • mkaibear - Sunday, October 8, 2017 - link

    You need a better imagination!

    Of the many hundreds of computers I've bought or been responsible for speccing for corporate and educational entities, about half have been "performance" oriented (I'd always spec a decent i5 or i7 if there's a chance that someone might be doing something CPU limited - hardware is cheap but people are expensive...) Of those maybe 10% had a discrete GPU (the ones for games developers and the occasional higher-up's PC). All the rest didn't.

    From chatting to my fellow managers at other institutions this is basically true across the board. They're avidly waiting for the Ryzen APUs to be announced because it will allow them to actually have competition in the areas they need it!
  • boeush - Sunday, October 8, 2017 - link

    It's not surprising to see business customers largely not caring about graphics performance - or about the hit to CPU performance that results from splitting the TDP budget with the iGPU...

    In my experience, business IT people tend to be either penny-wise and pound-foolish, or obsessed with minimizing their departmental TCO while utterly ignoring company performance as a whole. If you could get a much better-performing CPU for the same money, and spend an extra $40 for a discrete GPU that matches or exceeds the iGPU's capabilities - would you care? Probably not. Then again, that's why you'd stick with an i5 - or a lower-grade i7. Save a hundred bucks on hardware per person per year; lose a few thousand over the same period in wasted time and decreased productivity... I've seen this sort of penny-pinching miscalculation too many times to count. (But yeah, it's much easier to quantify the tangible costs of hardware, than to assess/project the intangibles of sub-par performance...)

    But when it comes specifically to the high-end i7 range - these are CPUs targeted specifically at consumers, not businesses. Penny-pinching IT will go for i5s or lower-grade i7s; large-company IT will go for Xeons and skip the Core line altogether.

    Consumer builds with high-end i7s will always go with a discrete GPU (and often more than one at a time.)
  • mkaibear - Monday, October 9, 2017 - link

    That's just not true dude. There are a bunch of use cases which spec high end CPUs but don't need anything more than integrated graphics. In my last but-one place, for example, they were using a ridiculous Excel spreadsheet to handle the manufacturing and shipping orders which would bring anything less than an i7 with 16Gb of RAM to its knees. Didn't need anything better than integrated graphics but the CPU requirements were ridiculous.

    Similarly in a previous job the developers had ludicrous i7 machines with chunks of RAM but only using integrated graphics.

    Yes, some IT managers are penny wise and pound foolish, but the decent ones who know what they're doing spend the money on the right CPU for the job - and as I say a serious number of use cases don't need a discrete GPU.

    ...besides it's irrelevant because the integrated GPU has zero impact on performance for modern Intel chips, as I said the limit is thermal not package size.

    If Intel whack an extra 2 cores on and clock them at the same rate their power budget is going up by 33% minimum - so in exchange for dropping the integrated GPU you get a chip which can no longer be cooled by a standard air cooler and has to have something special on there, adding cost and complexity.

    Sticking with integrated GPUs is a no-brainer for Intel. It preserves their market share in that environment and has zero impact for the consumer, even gaming consumers.
  • boeush - Monday, October 9, 2017 - link

    Adding 2 cores to a 6-core CPU drives the power budget up by 33% if and **ONLY IF** all cores are actually getting fully utilized. If that is the case, then the extra performance from those extra 2 cores would be indeed actually needed! (at least on those occasions, and would be, therefore, sorely missed in a 6-core chip.). Otherwise, any extra cores would be mostly idle, not significantly impacting power utilization, cooling requirements, or maximum single-thread performance.

    Equally important to the number of cores is the amount of cache. Cache takes up a lot of space, doesn't generate all that much heat (compared to the actual CPU pipeline components), but can boost performance hugely, especially on some tasks that are memory-constrained. Having more L1/L2/L3 cache would provide a much better bang for the buck when you need the CPU grunt (and therefore a high-end i7), than the waste of an iGPU (eating up ~50% of die area) ever could.

    Again, when you're already spending top dollar on an i7 8700/k (presumable because you actually need high CPU performance), it makes little sense that you go, "well, I'd rather have **LOWER** CPU performance, than be forced to spend an extra $40 on a discrete GPU (that I could then reuse on subsequent system builds/upgrades for many years to come)"...
  • mkaibear - Tuesday, October 10, 2017 - link

    Again, that's not true. Adding 2 cores to a 6 core CPU means that unless you find some way to prevent your OS from scheduling threads on them, all those cores are going to end up used somewhat - which means that you have to plan for your worst case TDP not your best case TDP - which means you have to engineer a cooling solution which will work for the full 8 core CPU, increasing costs to the integrator and the end user. Why do you think Intel's worked so hard to keep the 6-core CPU within a few watts of the old 4-core CPU?

    In contrast an iGPU can be switched on or off and remain that way, the OS isn't going to assign cores to it and result in it suddenly dissipating more power.

    And again you're focussing on the extremely limited gamer side of things - in the real world you don't "reuse the graphics card for many years to come", you buy a machine which does what you need it to and what you project you'll need it to, then replace it at the end of whatever period you're amortising the purchase over. Adding a $40 GPU and paying the additional electricity costs to run that GPU over time means your TCO is significantly increased for zero benefits, except in a very small number of edge cases in which case you're probably better off just getting a HEDT system anyway.

    The argument about cache might be a better one to go down, but the amount of cache in desktop systems doesn't have as big an impact on normal workflow tasks as you might expect - otherwise we'd see greater segmentation in the marketplace anyway.

    In short, Intel introducing desktop processors without iGPUs makes no sense for them at all. It would benefit a small number of enthusiasts at a cost of winding up a large number of system integrators and OEMs, to say nothing of a huge stack of IT Managers across the industry who would suddenly have to start fitting and supporting discrete GPUs across their normal desktop systems. Just not a good idea, economically, statistically or in terms of customer service.
  • boeush - Tuesday, October 10, 2017 - link

    The TDP argument as you are trying to formulate it is just silly. Either the iGPU is going to be in fact used on a particular build, or it's going to be disabled in favor of headless operation or a discrete GPU. If the iGPU is disabled, then it is the very definition of all-around WASTE - a waste of performance potential for the money, conversely/accordingly a waste of money, and a waste in terms of manufacturing/materials efficiency. On the other hand, if the iGPU is enabled, it is actually more power-dense than the CPU cores - meaning you'll have to budget even more heavily for its heat and power dissipation, than you'd have for any extra CPU cores. So in either case, your argument makes no sense.

    Remember, we are talking about the high end of the Core line. If your build is power-constrained, then it is not high-performance and you have no business using a high-end i7 in it. Stick to i5/i3, or the mobile variants, in that case. Otherwise, all these CPUs come with a TDP. Whether the TDP is shared with an iGPU or wholly allocated to CPU is irrelevant: you still have to budget/design for the respective stated TDP.

    As far as "real-world", I've seen everything from companies throwing away perfectly good hardware after a year of use, to people scavenging parts from old boxes to jury-rig a new one in a pinch.

    And again, large companies with big IT organizations will tend to forego the Core line altogether, since the Xeons provide better TCO economy due to their exclusive RAS features. The top-end i7 really is not a standard 'business' CPU, and Intel really is making a mistake pushing it with the iGPU in tow. That's where they've left themselves wide-open to attack from AMD, and AMD has attacked them precisely along those lines (among others.)

    Lastly, don't confuse Intel's near-monopolistic market segmentation engineering with actual consumer demand distribution. Just because Intel has chosen to push an all-iGPU lineup at any price bracket short of exorbitant (i.e. barring the so-called "enthusiast" SKUs), doesn't mean the market isn't clamoring for a more rational and effective alternative.
  • mkaibear - Wednesday, October 11, 2017 - link

    Sheesh. Where to start?

    1) Yes, you're right, if the iGPU isn't being used then it will be disabled, and therefore you don't need to cool it. Conversely, if you have additional cores then your OS *will* use them, and therefore you *do* need to cool them.

    iGPU doesn't draw very much power at all. HD2000 drew 3W. The iGPU in the 7700K apparently draws 6W so I assume the 8700K with a virtually identical iGPU draws just as much (figures available via your friendly neighbourhood google). Claiming the iGPU has a higher power budget than the CPU cores is frankly ridiculous. (in fact it also draws less than .2W when it's shut down which means that having it in there is far outweighed by the additional thermal sink available, but anyway)

    2) Large companies with big IT organisations don't actually forego the Core line altogether and go with Xeons. They could if they wanted to, but in general they still use off-the shelf Dells and HPs for everything except extremely bespoke setups - because, as I previously mentioned, "hardware is cheap, people are expensive" - getting an IT department to build and maintain bespoke computers is hilariously expensive. No-one is arguing that for an enthusiast building their own computer that the option of the extra cores would be nice, but my point all along has been that Intel isn't going to risk sacrificing their huge market share in the biggest market to gain a slice of a much smaller market. That would be extremely bad business.

    3) The market isn't "clamoring for a more rational and effective alternative" because if it was then Ryzen would have flown off the shelves much faster than it did.

    Bottom line: business IT wants simple solutions, the fewer parts the better. iGPUs on everything fulfil far more needs than dGPUs for some and iGPUs for others. iGPUs make designing systems easier, they make swapouts easier, they make maintenance easier, they reduce TCO, they reduce RMAs and they just make IT staff's lives easier. I've run IT for a university, a school and a manufacturing company, and for each of them the number of computers which needed a fast CPU outweighed the number of computers which needed a dGPU by a factor of at least 10:1 - and the university I worked for had a world-leading art/media/design dept and a computer game design course which all had dGPUs. The average big business has even less use for dGPUs than the places I've worked.

    If you want to keep trying to argue this then can you please answer one simple question: why do you think it makes sense for Intel to prioritise a very small area in which they don't have much market share over a very large area in which they do? That seems the opposite of what a successful business should do.
