With someone in the press having broken their embargo earlier today, Intel is lifting the lid earlier than planned on their upcoming Cascade Lake-X family of processors for the high-end desktop (HEDT) market. Similar to the way Intel's Cascade Lake based Xeon Scalable processors are a further revision of their Skylake Xeons, offering clock speed increases and security fixes in hardware, the new HEDT processors will grant higher frequencies, more memory capacity, and better protection against side-channel attacks. The key numbers however are the big drop in Intel's pricing: Intel will be releasing its 18-core part, the Core i9-10980XE, for under $1000.

Intel Cascade Lake-X (frequencies in GHz)
AnandTech        Cores/Threads  Base  All-Core  TB2  TB3  TDP    Price (1ku)
Core i9-10980XE  18C / 36T      3.0   3.8       4.6  4.8  165 W  $979
Core i9-10940X   14C / 28T      3.3   4.1       4.6  4.8  165 W  $784
Core i9-10920X   12C / 24T      3.5   4.3       4.6  4.8  165 W  $689
Core i9-10900X   10C / 20T      3.7   4.3       4.5  4.7  165 W  $590
Skylake-X (previous generation)
Core i9-9980XE   18C / 36T      3.0   -         4.5  4.7  165 W  $1979
Core i9-9940X    14C / 28T      3.3   -         4.5  -    165 W  $1387
Core i9-9920X    12C / 24T      3.5   -         4.5  -    165 W  $1189
Core i9-9900X    10C / 20T      3.5   -         4.5  -    165 W  $989

TB2 = Turbo Boost 2.0 peak; TB3 = Turbo Boost Max 3.0 peak. Prices are per 1,000-unit tray.

This pricing is a significant shift in Intel's strategy, and a number of fingers will be pointed at AMD as having made this happen. Next month AMD is set to launch its 16-core Ryzen 9 3950X at $749, which will offer 16 PCIe 4.0 lanes for slots (+4 for M.2, +4 for chipset) and support for 128 GB of DRAM. So Intel needed something similarly speedy, but with more PCIe lanes and more memory support that they could offer for just a bit more, leading to the 10980XE for $979. Ultimately, the on-shelf price is often just slightly higher than tray price, so don't be surprised if retail prices land at around $1000. 

All the CPUs will support 256 GB of quad-channel memory (up from 128 GB), and have 48 PCIe 3.0 lanes (up from 44). Memory speed support is listed as DDR4-2933 for 1 DIMM per channel, and DDR4-2666 for 2 DIMMs per channel. All these CPUs have a TDP of 165 W, which Intel states will help the CPUs to turbo longer under Intel's recommended settings (as we know, consumer motherboard manufacturers like to ignore these anyway). All these CPUs are supported in X299 motherboards.
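For context, the peak theoretical bandwidth these memory configurations provide is straightforward to work out (a quick sketch, assuming standard 64-bit-wide DDR4 channels):

```python
# Peak theoretical DDR4 bandwidth: transfers per second x 8 bytes per
# 64-bit channel x number of channels. Sketch for the supported configs.
def peak_bw_gbs(mt_per_s, channels, bytes_per_transfer=8):
    return mt_per_s * 1e6 * bytes_per_transfer * channels / 1e9

print(f"DDR4-2933, quad channel (1 DPC): {peak_bw_gbs(2933, 4):.1f} GB/s")
print(f"DDR4-2666, quad channel (2 DPC): {peak_bw_gbs(2666, 4):.1f} GB/s")
```

That works out to roughly 93.9 GB/s at one DIMM per channel and 85.3 GB/s with all eight slots populated, before any real-world efficiency losses.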

There is no 16-core part in this stack, with Intel's official reasoning being that it assesses the market with each generation, and it doesn't believe there is a suitable price point for such a part when the 14-core and 18-core parts are priced so close together. Most people will point the finger and say that no 16-core Intel part means no direct comparison with the Ryzen 9 3950X, which is something to think about.

Another point to note is that Intel has stopped this stack at ten cores and gone no lower. This means there will be no crossover between Intel's consumer processor stack and the HEDT stack, with users needing to spend just a little bit more to step up from the Core i9-9900K/KF to the Core i9-10900X. It will be interesting to see where Intel's Core i9-9900KS fits in, although that part still only has dual-channel memory and 16 PCIe 3.0 lanes.

Intel lists Wi-Fi 6 and 2.5GbE support on these new processors - to clarify, Intel means external controllers here. For some odd reason, when Intel says 'support', it can mean either internal to the chipset or external via a controller; this is messaging I've railed against for a while, as it ends up being confusing for enthusiasts, especially on an enthusiast platform. It does mean, however, that we get official information about Intel's 2.5GbE controllers, which we've been waiting on for a couple of years. Intel stated that these controllers will be ready at a later date, with more information to follow. (The controllers are currently listed in Intel's ARK database, but as 1 GbE controllers for some reason.)

These CPUs will have the same security mitigations as the Cascade Lake Xeon processors, with updated hardware mitigations for a number of side channel attacks. We are waiting to hear from Intel if the firmware that supports these processors will also have additional fixes in for Zombieload by default.

One question about this launch surrounds Intel's 14nm capacity. Within the last week, there have been reports that, despite Intel's best efforts and promises to match demand, both Q3 and the upcoming Q4 are going to be busier than expected. We reached out to Intel last week for clarification, and the company said that the bulk of its capacity is focused on the high-end processors in the market: the Xeon Scalable, the Core i9, Core i7, and Core i5. It will be interesting to see if launching another family of products puts additional strain on Intel's capacity and demand.

With AMD's Zen 2-based Ryzen 3000 series having launched on 7nm earlier in the year, Threadripper 3 coming later this year, and Intel swinging another generation of 14++ into the high-end desktop market, Intel is in for some tough times. Don't get me wrong, this pricing update from Intel is a good thing for users, especially those looking at adding things like DL Boost to their workflow, but this market is suddenly turning very aggressive, and it will be interesting to see if Intel can be agile enough to keep pace.
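For those unfamiliar with DL Boost: its headline addition is the AVX-512 VNNI instruction vpdpbusd, which fuses an int8 multiply-accumulate that would otherwise take three instructions. As a rough sketch (an emulation of what one 32-bit lane computes, not Intel's hardware implementation), the per-lane semantics look like this:

```python
import numpy as np

# Sketch: what AVX-512 VNNI's vpdpbusd computes in each 32-bit lane.
# It multiplies 4 unsigned 8-bit values by 4 signed 8-bit values,
# sums the products, and adds the result to a 32-bit accumulator.
def vpdpbusd_lane(acc, a_u8, b_s8):
    a = np.asarray(a_u8, dtype=np.int32)   # 4 unsigned bytes, widened
    b = np.asarray(b_s8, dtype=np.int32)   # 4 signed bytes, widened
    return acc + int((a * b).sum())

# One lane of an int8 dot product: 1*10 + 2*(-10) + 3*10 + 4*(-10)
print(vpdpbusd_lane(0, [1, 2, 3, 4], [10, -10, 10, -10]))  # -20
```

A full 512-bit register holds 16 such lanes, which is where the quantized-inference throughput claims come from.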

Intel's Cascade Lake-X processors will be available in November. More details should be released nearer to launch.

161 Comments


  • SanX - Wednesday, October 02, 2019 - link

Can any of you guys show an advantage of AVX512 on linear algebra, the simplest and most used application? The only cited example showing a boost with AVX512 is Ian Cutress' own 3D particle movement test, showing a doubtful 300%. Our PIC code boost was just 10-20%. Take this simple code for a dense matrix solution of AX=B with Intel's MKL library, which now supports AVX512, and run it on any Intel and any AMD multicore:

    Program LinearAlgebra
    implicit none
    integer :: neq, nrhs=1, lda, ldb, info
    real*8, allocatable :: A(:,:), b(:)
    integer, allocatable :: piv(:)
    integer :: count_0, count_1, count_rate, count_max

    do neq = 1000, 20000, 1000
       lda = neq; ldb = neq
       allocate(A(neq,neq), b(neq), piv(neq))
       call random_number(A)
       call random_number(b)
       call system_clock(count_0, count_rate, count_max)
       call dgesv(neq, nrhs, A, lda, piv, b, ldb, info)  ! LAPACK dense solve
       call system_clock(count_1, count_rate, count_max)
       write (*, '(1x,A,i6,A,2x,F8.3,A)') 'nEqu = ', neq, ' ', &
          dble(count_1-count_0)/count_rate, ' s'
       deallocate(A, b, piv)
    end do
    end program
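[Ed: for readers without a Fortran toolchain, a rough NumPy equivalent of the benchmark above is sketched below. numpy.linalg.solve dispatches to LAPACK's *gesv routine through whichever BLAS backend is installed (MKL, OpenBLAS, etc.), so it exercises the same kernel; the problem sizes here are trimmed down for brevity.]

```python
import time
import numpy as np

# Rough NumPy equivalent of the Fortran benchmark above: time a dense
# solve of A x = b at growing sizes via LAPACK's gesv routine.
rng = np.random.default_rng(0)
for neq in range(1000, 4001, 1000):   # trimmed from 20000 for brevity
    A = rng.random((neq, neq))
    b = rng.random(neq)
    t0 = time.perf_counter()
    x = np.linalg.solve(A, b)         # LU-factor and solve, like dgesv
    t1 = time.perf_counter()
    print(f"nEqu = {neq:6d}   {t1 - t0:8.3f} s")
```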
  • SanX - Wednesday, October 02, 2019 - link

This test will also show whether dual, quad, six, etc. channel RAM is of any importance for the masses.
  • peevee - Thursday, October 03, 2019 - link

Damn, Fortran, I almost forgot you!
  • extide - Wednesday, October 02, 2019 - link

No, it's not the same as AMD's approach on the lower-end units that use the two AVX2 units. They gang together two 256-bit wide vector units to do the 512 bits at once, not half on one clock and the second half on the next. Zen 1 had only a single 128-bit wide unit and ran it twice to get 256-bit AVX2 work done.
  • eek2121 - Wednesday, October 02, 2019 - link

Far too many people overestimate how often AVX-512 is used. Outside of benchmarks and very specific use cases, the average user isn't going to use AVX-512 much, if at all. If AMD considered it to be an important factor, they would have implemented it. The fact that Zen 2 is able to beat Intel on an IPC basis in the majority of applications shows that they are correct in that assessment. Excluding overclocking, they are also pretty close clock-for-clock on the higher-end parts. I expect the 3950X is going to eat the Core i9-10980XE alive. I think Intel is aware of this as well.
  • nevcairiel - Thursday, October 03, 2019 - link

New instruction sets naturally suffer from a chicken-and-egg problem: if no CPU supports it, no one writes code for it, and if there is no code, other CPUs won't adopt it.

The same has been said of AVX, AVX2, and probably also some of the SSEs. Yet today AVX2 is actually quite important, as it already gets wide use in video decoding, e.g. on YouTube. I fully expect the same to happen with AVX512 eventually; it'll just take a generation or two of CPUs.

    I wouldn't say that AMD is "better" in their decision. Their full AVX/AVX2 support honestly came a bit too late, only appearing earlier this year.
  • evernessince - Sunday, October 06, 2019 - link

It makes sense that Intel does it selectively. Given that 256-bit AVX causes a huge spike in power draw and heat output, I can only imagine what AVX 512 does.
  • nevcairiel - Wednesday, October 02, 2019 - link

This was only true for, I believe, one CPU in the original SKL-X lineup; the higher-up models had dedicated AVX512 ports.

    The real problem with AVX512 is the downclock. AVX512 can still be incredibly strong, if the workload is a lot of pure math, and the AVX512 units can get busy for a prolonged time. But if you have light math mixed with other stuff, then the downclock can cost too much performance.

    We'll have to see if they managed to tune this behavior a bit in this generation.
  • abufrejoval - Wednesday, October 02, 2019 - link

Just to help me understand: you're saying that a single AVX512 process brings down the other 17 cores, not just the "control core" for that AVX512 workload?

That could be a bit of a problem with mixed loads on these high-turbo CPUs, probably much less so for the HPC workloads that instruction set was originally aimed at.

And potentially quite an acceptability issue for Ice Lake notebook chips, where an AVX512-optimized inference workload could cause a "freeze" on the interactive side.
  • dgingeri - Wednesday, October 02, 2019 - link

AVX takes a LOT of power, even running on just one core, and much of how Intel chips boost comes down to power budget. So, yes, a single core running AVX can cause the other cores to quit boosting and drop to the default clock, depending on the overall budget, and it will definitely cause them to stop boosting as high. Multiple cores running AVX workloads increase the likelihood of the other cores quitting their boost clocks as well.
