Core i9-9980XE Conclusion: A Generational Upgrade

Users (and investors) always want to see a year-on-year increase in performance from the products being offered. For those on the leading edge, where every little bit counts, dropping $2k on a new processor is nothing if it can be used to its fullest. For those on a longer upgrade cycle, a constant 10% annual improvement means that over 3 years they can expect a 33% increase in performance. Broadly speaking, there are several ways to increase performance: increase core count, increase frequency/power, increase efficiency, or increase IPC. That list runs from easy to difficult: adding cores is usually trivial (until memory access becomes a bottleneck), while increasing efficiency and instructions per clock (IPC) is difficult but the best generational upgrade for everyone concerned.
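That compounding arithmetic is easy to verify; a minimal sketch, using the 10%-per-year and 3-year figures from the paragraph above:

```python
# Compound a constant annual performance gain over several years.
def cumulative_gain(annual_gain: float, years: int) -> float:
    """Total fractional gain after compounding, e.g. 0.10 = 10%/year."""
    return (1.0 + annual_gain) ** years - 1.0

# A constant 10% annual improvement over 3 years:
print(f"{cumulative_gain(0.10, 3):.1%}")  # → 33.1%
```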

For Intel’s latest Core i9-9980XE, its flagship high-end desktop processor, we have a mix of improved frequency and improved efficiency. An updated manufacturing process has helped increase the core clocks compared to the previous generation, with a 15% increase in the base clock, but the chip also draws around 5-6% more power at full load. In real-world terms, the Core i9-9980XE delivers anywhere from a 0-10% performance increase in our benchmarks.
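Taken together, those numbers imply the chip's efficiency barely moves; a rough sketch using the review's figures (the exact gain varies per benchmark):

```python
# Perf-per-watt change when performance and power both rise.
def perf_per_watt_change(perf_gain: float, power_gain: float) -> float:
    """Fractional change in performance per watt."""
    return (1.0 + perf_gain) / (1.0 + power_gain) - 1.0

# Best case from our numbers: +10% performance for +6% power.
best = perf_per_watt_change(0.10, 0.06)   # ~ +3.8% perf/watt
# Worst case: no performance gain for +6% power.
worst = perf_per_watt_change(0.00, 0.06)  # ~ -5.7% perf/watt
print(f"{best:.1%}, {worst:.1%}")
```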

However, if we follow Intel’s previous cadence, this processor launch should have seen a substantial change in design. Normally Intel follows a new microarchitecture and socket with a process node update on the same socket with similar features but much improved efficiency. We didn’t get that. We got a refresh.

An Iteration When Intel Needs Evolution

When Intel announced that its latest set of high-end desktop processors was little more than a refresh, there was a subtle but mostly inaudible groan from the tech press at the event. We don’t particularly like higher-clocked refresh generations when it comes to graphics cards, so we certainly are not going to enjoy one with our processors. These new parts are yet another product line based on Intel’s 14nm Skylake family, and we’re left wondering where Intel’s innovation has gone.

These new parts involve using larger silicon across the board, which enables more cache and PCIe lanes at the low end, and the updates to the manufacturing process afford some extra frequency. The new parts use soldered thermal interface material, which is what Intel used to use, and what enthusiasts have been continually requesting. None of this is innovation on the scale that Intel’s customer base is used to.

It all boils down to ‘more of the same, but slightly better’.

While Intel is having another crack at Skylake, its competition is trying to innovate: not only is it trying new designs that may or may not work, it is already showcasing its next generation several months in advance, with both process node and microarchitectural changes. As much as Intel prides itself on its technological prowess, and it has done well this decade, there’s something stuck in the pipe. At a time when Intel needs evolution, it is stuck doing refresh iterations.

Does It Matter?

The latest line out of Intel is that demand for its latest generation enterprise processors is booming. It physically cannot make enough, and other product lines (publicly, the lower power ones) are having to suffer when Intel can use those wafers to sell higher margin parts. The situation is dire enough that Intel is reassigning fab space to create more 14nm products in the hope of matching demand should it continue. Intel has explicitly stated that while demand is high, it wants to focus on its high performance Xeon and Core product lines.

You can read our news item on Intel's investment announcement here.

While demand is high, the desire to innovate hits this odd miasma of ‘should we focus on squeezing every cent out of this high demand’ versus ‘should we prepare for tomorrow’. With all the demand on the enterprise side, the rapid update cycle the consumer side expects may fall by the wayside: while consumers who buy one chip want 10-15% performance gains every year, the enterprise customers who need chips in high volumes are just happy to be able to purchase them. There’s no need for Intel to dip its toes into a new process node or design that offers +15% performance but reduces yield by more than that, all while taking up fab space.

Intel Promised Me

In one meeting with Intel’s engineers a couple of years back, just after the launch of Skylake, I was told that two years of 10% IPC growth was not an issue. These individuals know the details of Intel’s new platform designs, and I have no reason to doubt them. Back then, it was clear that Intel had the next one or two generations of Core microarchitecture updates mapped out; however, the delays to 10nm seem to have put a pin in those +10% IPC designs. Combine Intel’s 10nm woes with the demand for 14nm Skylake-SP parts, and it makes for one confusing mess. Intel is making plenty of money, and it seems to have designs in its back pocket ready to go, but while it is making super high margins, I worry we won’t see them. All the while, Intel’s competitors are trying to do something new to break the incumbent’s hold on the market.

Back to the Core i9-9980XE

Discussions on Intel’s roadmap and demand aside, our analysis of the Core i9-9980XE shows that it provides a reasonable uplift over the Core i9-7980XE for around the same price, albeit for a few more watts in power. For users looking at peak Intel multi-core performance on a consumer platform, it’s a reasonable generation-on-generation uplift, and it makes sense for those on a longer update cycle.

A side note on gaming – for users looking to push those high-frame-rate monitors, the i9-9980XE gave a 2-4% uplift across our games at our 720p settings. Individual results varied from a 0-1% gain, such as in Ashes or Final Fantasy, up to a 5-7% gain in World of Tanks, Far Cry 5, and Strange Brigade. Beyond 1080p, we didn’t see much change.

When comparing against the AMD competition, it all depends on the workload. Intel has the better processor in most aspects of general workflow, such as lightly threaded workloads on the web, memory limited tests, compression, video transcoding, or AVX512 accelerated code, but AMD wins on dedicated processing, such as rendering with Blender, Corona, POV-Ray, and Cinema4D. Compiling is an interesting one, because for both Intel and AMD, the more mid-range core-count parts with higher turbo frequencies seem to do better.


145 Comments


  • Atari2600 - Tuesday, November 13, 2018

    I wouldn't call them very "professional" when they are sacrificing 50+% productivity for mobility.

    Anyone serious about work in a serious work environment* has a workstation/desktop and at least 2 UHD/4k monitors. Anything else is just kidding yourself thinking you are productive.
  • TEAMSWITCHER - Tuesday, November 13, 2018

    I never said that we didn't have external monitors, keyboards, and mice for desktop work. However, from 25 years of personal experience in this industry I can tell you emphatically... productivity isn't related to the number of pixels on your display.
  • HStewart - Tuesday, November 13, 2018

    Exactly - I work with a 15in IBM Thinkpad 530 whose screen is never used - but I have 2 24in 1080p monitors on my desk at home - if I need to work from my home office, I hook up another monitor - always with an external monitor.

    It is really not the number of pixels but the size of the workspace. I have a 4k Dell XPS 15 2in1 and I barely use the 4k on the laptop - I mostly use it hooked to an LG 38U88 Ultrawide. I have the option to go to 4k on the laptop screen but in reality - I don't need it.
  • Atari2600 - Tuesday, November 13, 2018

    I'd agree if you are talking about going from 15" 1080p laptop screen to 15" 4k laptop screen.

    But, if you don't see significant changes in going from a single laptop screen to a 40" 4k or even just dual SD monitors - any arrangement that lets you put up multiple information streams at once, whatever you are doing isn't very complicated.
  • twtech - Thursday, November 15, 2018

    Maybe not necessarily the number of pixels. I don't think you'd be a whole lot more productive with a 4k screen than a 2k screen. But screen area on the other hand does matter.

    From simple things like being able to have the .cpp, the .h, and some other relevant code file all open at the same time without needing to switch windows, to doing 3-way merges, even just being able to see the progress of your compile while you check your email. Why wouldn't you want to have more screen space?

    If you're going to sit at a desk anyway, and you're going to be paid pretty well - which most developers are - why sacrifice even 20, 10, even 5% productivity if you don't have to? And personally I think it's at the higher end of that scale - at least 20%. Every time I don't have to lose my train of thought because I'm fiddling with Visual Studio tabs - that matters.
  • Kilnk - Tuesday, November 13, 2018

    You're assuming that everyone who needs to use a computer for work needs power and dual monitors. That just isn't the case. The only person kidding themselves here is you.
  • PeachNCream - Tuesday, November 13, 2018

    Resolution and the presence or absence of a second screen are things that are not directly linked to increased productivity in all situations. There are a few workflows that might benefit, but a second screen or a specific resolution, 4k for instance versus 1080, doesn't automatically make a workplace "serious" or...well whatever the opposite of serious is in the context in which you're using it.
  • steven4570 - Tuesday, November 13, 2018

    "I wouldn't call them very "professional" when they are sacrificing 50+% productivity for mobility."

    This is, quite honestly, a very stupid statement without any real practical view of the real world.
  • Atari2600 - Wednesday, November 14, 2018

    Not really.

    The idiocy is thinking that working off a laptop screen is you being as productive as you can be.

    The threshold for seeing a tangible benefit from more visible workspace (when so restricted) is very low.

    I can accept if folks say they dock their laptops and work on large/multiple monitors - but absolutely do not accept the premise that working off the laptop screen should be considered effective working. If you believe otherwise, you've either never worked with multiple/large screens or simply aren't working fast enough or on something complicated enough to have a worthwhile opinion in the matter! [IMO it really is that stark and it boils my piss seeing folks grappling with 2x crap 20" screens in engineering workplaces and their managers squeezing to make them more productive and not seeing the problem right in front of them.]
  • jospoortvliet - Thursday, November 15, 2018

    Dude, it depends entirely on what you are doing. A writer (from books to marketing) needs nothing beyond an 11" screen... I'm in marketing at a startup and for half my tasks my laptop is fine, writing in particular. And yes, as soon as I need to work on a web page or graphics design, I need my two screens and 6 virtual desktops at home.

    I have my XPS 13 for travel and yes I take a productivity hit from the portability, but only when forced to use it for a week. Working from a cafe once or twice a week I simply plan tasks where a laptop screen isn't limiting and people who do such tasks all day (plenty) don't NEED a bigger screen at all.

    Hell, I know people who do 80% of their work on a freaking PHONE. Sales folks probably NEVER need anything beyond a 15" screen, and that only for 20% of their work...
