Conclusions

Alder Lake is set to come to market for both desktop and mobile, and we’re expecting the desktop hardware to start to appear by the end of the year – perhaps a little later for the rest of the family. All-in-all, we expect Intel is experiencing some serious squeaky bum time over how all the pieces will fit into place at that launch. The two main critical factors are operating system support and memory.

Because Alder Lake is Intel’s first full-stack attempt to commercialize a hybrid design, it has had to work closely with Microsoft to enable all the features needed to make managing a hybrid core design properly beneficial to users. Intel’s new Thread Director Technology couples an integrated microcontroller with a new API for Windows 11, such that the operating system’s scheduler can take hints about the workload on each core at a super fine granularity – every 30 microseconds or so. With information about what each thread is doing (from heavy AVX2 down to spin-lock idling), the OS can react when a new thread needs performance, and choose which threads to relegate to an E-core or to a hyperthread (which is classified as slower than an E-core).
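
To make the idea concrete, here is a minimal C sketch of that placement logic – with invented names and classes for illustration only, not Intel’s or Microsoft’s actual interface:

```c
/* Hypothetical sketch of hybrid-aware thread placement driven by
 * Thread Director-style hints. All names here are invented. */
#include <stdbool.h>
#include <stdio.h>

enum thread_class { CLASS_SPIN_WAIT, CLASS_SCALAR, CLASS_VECTOR_AVX2 };
enum core_slot    { P_CORE, E_CORE, P_CORE_SMT_SIBLING };

/* Placement rank, fastest first: free P-core, then E-core, then a
 * hyperthread sharing a busy P-core (classified slower than an E-core). */
static enum core_slot place_thread(enum thread_class cls,
                                   bool p_core_free, bool e_core_free)
{
    if (cls == CLASS_SPIN_WAIT)          /* spinning: don't waste a P-core */
        return e_core_free ? E_CORE : P_CORE_SMT_SIBLING;
    if (p_core_free)                     /* demanding work gets a P-core */
        return P_CORE;
    return e_core_free ? E_CORE : P_CORE_SMT_SIBLING;
}

int main(void)
{
    static const char *slot[] = { "P-core", "E-core", "P-core SMT sibling" };
    printf("AVX2 thread, P-cores busy -> %s\n",
           slot[place_thread(CLASS_VECTOR_AVX2, false, true)]);
    printf("spinning thread           -> %s\n",
           slot[place_thread(CLASS_SPIN_WAIT, true, true)]);
    return 0;
}
```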

When I first learned Alder Lake was going to be a hybrid design, I was perhaps one of the most skeptical about how it was going to work, especially given some of the limitations of Windows 10. Today, however, with the explanations I have from Intel, I’m more confident than not that they’ve done it right. Some off-the-record conversations I have had only bolster the idea that Microsoft has done everything Intel has asked, and that users will need Windows 11 to get the benefit. Windows 10 still gets some Hardware Guided Scheduling, but it’s akin to only knowing half the story. The only question is whether Windows 11 will be fully ready by the time Alder Lake comes to market.

For memory, Alder Lake as a core design supports both DDR4 and DDR5, however only one can be used in any given system. Motherboards will have to be designed for one or the other – Intel will say that by offering both, OEMs have the opportunity to use the right memory at the right time for the right cost, however a push to full DDR5 would simplify the platform a good deal more. We’re starting to see DDR5 come to the consumer market, but not in any volume that makes consumer sense – market research firms expect the market to be only 10% DDR5 by the end of 2022 – which means consumers might have to stay with DDR4 for a while, and vendors will have to choose whether to bundle DDR5 with their systems. Either way, there’s no easy answer to the question ‘what memory should I use with Alder Lake?’.

Through The Cores and The Atoms

From a design perspective, both the P-core and the E-core showcase substantial improvements over their previous generations.

The new Golden Cove core has an upgraded front-end decoder, which has been a sticking point in analysis of previous Cove and Lake cores. The exact details of how it operates are still being kept under wraps, but a 6-wide variable-length decoder is going to be an interesting talking point against the 8-wide fixed-length decoders elsewhere in the market, and the question of which is better. Golden Cove also brings very solid IPC gains – Intel says 19% – although the fact that there are some regressions is interesting. Intel compared Golden Cove to Cypress Cove, the backported desktop core, rather than to Willow Cove, the Tiger Lake core, which would have been the more apt comparison given that our testing shows Willow Cove slightly ahead. But still, around 19% is a good figure. Andrei highlights in his analysis that the move from a 10-wide to a 12-wide disaggregated execution back-end should account for a good part of that performance, and that most core designs that have gone down this route have ended up performing well.
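
To illustrate why decode width is such a talking point, here is a toy C sketch – hypothetical and heavily simplified – of the difference between the two approaches: with fixed-length instructions every decode lane knows its start offset up front, while with variable-length x86 each lane’s offset depends on the lengths of all prior instructions:

```c
/* Toy model of parallel decode. Real x86 length decoding (prefixes,
 * opcode bytes, ModRM, SIB...) is far more involved; this stub only
 * keeps the sketch runnable. */
#include <stdint.h>
#include <stdio.h>

static size_t insn_length(const uint8_t *bytes)
{
    return (size_t)(bytes[0] % 15) + 1;      /* pretend: 1..15 bytes */
}

/* Fixed-length ISA: lanes are independent, trivially parallel. */
static void decode_fixed(size_t lanes, size_t offsets[])
{
    for (size_t i = 0; i < lanes; i++)
        offsets[i] = i * 4;
}

/* Variable-length ISA: a serial chain - lane N needs lengths 0..N-1. */
static void decode_variable(const uint8_t *code, size_t lanes, size_t offsets[])
{
    size_t off = 0;
    for (size_t i = 0; i < lanes; i++) {
        offsets[i] = off;
        off += insn_length(code + off);
    }
}

int main(void)
{
    uint8_t code[64] = { 3, 7, 1, 12, 5 };   /* arbitrary stand-in bytes */
    size_t fixed[4], var[4];
    decode_fixed(4, fixed);
    decode_variable(code, 4, var);
    for (int i = 0; i < 4; i++)
        printf("lane %d: fixed @%zu, variable @%zu\n", i, fixed[i], var[i]);
    return 0;
}
```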

However, for Gracemont, Intel has taken that concept to the extreme. Having 17 execution ports allows Intel to clock-gate each port when it is not in use, and coupled with the narrower 5-wide allocation/dispatch and 8-wide retire, it means that without code specifically written to keep all 17 ports fed, a good number of them are likely to be gated off at any given time, saving power. The performance numbers Intel provided for Gracemont were somewhat insane, suggesting +8% performance over Skylake at peak power, along with 40% better single-threaded and 80% better multi-threaded performance-per-watt against Skylake. If Gracemont is truly a Skylake-beating architecture, then where has it been hiding all this time? I’m advocating for a 64-core HEDT chip tomorrow.
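
Taking those ratios at face value, the implied trade-offs are easy to work out. A quick back-of-the-envelope sketch in C – illustrative only, using the claimed perf-per-watt figures, not measured data:

```c
/* Back-of-the-envelope on Intel's Gracemont-vs-Skylake claims.
 * If perf-per-watt is 1.4x, equal power buys 1.4x the work and
 * equal work costs 1/1.4 of the power. Claimed figures, not data. */
#include <stdio.h>

int main(void)
{
    const double st_perf_per_watt = 1.40;  /* claimed ST ratio vs Skylake */
    const double mt_perf_per_watt = 1.80;  /* claimed MT ratio vs Skylake */

    printf("ST: %.2fx work at iso-power, or %.0f%% power at iso-work\n",
           st_perf_per_watt, 100.0 / st_perf_per_watt);
    printf("MT: %.2fx work at iso-power, or %.0f%% power at iso-work\n",
           mt_perf_per_watt, 100.0 / mt_perf_per_watt);
    return 0;
}
```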

One harsh criticism Intel is going to get is for dropping AVX-512 in this generation. For all the talk about ‘no transistor left behind’, Alder Lake drops it hard. That’s not to say the functionality won’t come back later, but if the rumors are to be believed and Zen 4 has some AVX-512 support, we might end up in a situation where the only current consumer hardware on the market supporting AVX-512 comes from AMD. That would be quite the turn-up. But AMD’s support is just a rumor, and really, if Intel wants to push AVX-512 again, it will have a Sisyphean task convincing everyone it’s what the industry needs.
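
For software, the practical upshot is that 512-bit code paths now have to be gated at runtime rather than assumed. A minimal dispatch sketch using GCC/Clang’s CPUID builtins – the branch bodies here are placeholders for real kernels:

```c
/* Runtime ISA dispatch sketch (GCC/Clang builtins): the same binary can
 * take a 512-bit path on chips that have it, and fall back cleanly on
 * chips that don't, such as Alder Lake. */
#include <stdio.h>

int main(void)
{
    __builtin_cpu_init();                        /* populate feature flags */
    if (__builtin_cpu_supports("avx512f"))
        puts("AVX-512F present: take the 512-bit path");
    else if (__builtin_cpu_supports("avx2"))
        puts("no AVX-512: fall back to a 256-bit AVX2 path");
    else
        puts("scalar fallback");
    return 0;
}
```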

Where We Go From Here

There are still some unanswered questions about the Alder Lake design, and things we will test when we get the hardware in hand. Intel has an event planned for the end of October called the Intel InnovatiON event (part of the ON series), which would be the right time to introduce Alder Lake as a product to the world. Exactly when it comes to retail is a different question, but as long as Intel executes on the technology this year, it should make for some interesting competition with the rest of the market.

Comments

  • zamroni - Friday, August 20, 2021 - link

    Those low-power cores on a desktop are a waste of transistors.
    That area would be better spent on more cache or more performance cores.
  • mode_13h - Friday, August 20, 2021 - link

    This is what I thought, until I realized that they have better perf/area than the big cores. Not to mention perf/W.

    So, in highly-threaded workloads, their 8+8 core configuration should out-perform 10 cores of Golden Cove. And, when thermally-limited, the little cores will also more than pull their weight.

    It's an interesting experiment they're trying. I'm interested in seeing how it plays out, in the real world.
  • nevcairiel - Friday, August 20, 2021 - link

    > Designed as its third generation of vector instructions (AVX is 128-bit, AVX2 is 256-bit, AVX512 is 512-bit)

    SSE is 128-bit. AVX is 256-bit FP, AVX2 is 256-bit INT.
    And MMX was 64-bit before that. So doesn't this make it the 4th generation, assuming you don't count all the SSE versions separately? (The big ones were SSE1 with 128-bit FP and SSE2 with 128-bit INT; SSE3/SSSE3/SSE4.1 were only minor extensions.)
  • mode_13h - Saturday, August 21, 2021 - link

    Yeah, I came to the same conclusion. It's the 4th major family of vector instructions. Or, another way to clearly demarcate it would be the 4th vector width.
  • abufrejoval - Friday, August 20, 2021 - link

    I wonder how many side channel attacks the power director will enable.

    Also wonder if the lack of details is due to Intel stepping awfully close to some of Apple's patents.

    The battles between the Big little and AVX-512 teams inside Intel must have been epic: I imagine frothing red faces all around...
  • mode_13h - Saturday, August 21, 2021 - link

    > The battles between the Big little and AVX-512 teams inside Intel must have been epic

    : )

    Although, the AVX-512 folks have some egg on their faces from a problematic implementation in Skylake-SP and its derivatives.
  • abufrejoval - Friday, August 20, 2021 - link

    Does Big-little make any sense on a "desktop"?

    And then: Are there actually still any desktops around?

    All around my corporate workplaces, notebooks have become the de-facto desktop for many depreciation cycles, mostly because personal offices got replaced by open space and home-office days became a regular thing far before the pandemic. Since then even 'workstations' just became bigger notebooks.

    Anywhere else I look it's becoming hard to detect desktops, even for big-screen & multi-monitor setups, it's mostly NUCs or in-screen devices these days.

    Those latter machines rarely seem to get turned off any more, and I guess many corporate laptops will remain 'turned on' (= stay in some sort of slumber) most of the time, too, so in that scenario Big-little overall power consumption might drop vs. Big-only, when both no longer sleep deeply.

    Supposedly that makes all these voice commands possible, but try as I might, I can see no IT admin turning that on in an office, nor would I want that in my living room.

    The only place I still see 'desktops' are really gamer machines and for those it's hard to see how those small cores might have any significant energy budget impact, even while they are used for ordinary 2D stuff.

    For micro-servers Big-little seems much more useful, but Intel typically has gone a long way to ensure that 'desktop' CPUs were not used for that.

    Intel's desire for market differentiation seems to be the major factor behind this and many other features since MMX, but given an equal-price choice, I cannot imagine preferring AVX-512 relegated to dark silicon and two P-core tiles traded for eight E-cores over a fully enabled ten P-core chip.

    And I'd believe that most 'desktop' users would prefer the same.
  • mode_13h - Saturday, August 21, 2021 - link

    > The only place I still see 'desktops' are really gamer machines

    We still use traditional desktops for software development and VMs for testing. Our software takes long enough to build and the test environment needs to boot a full image. So, a proper desktop isn't hard to justify.
  • abufrejoval - Saturday, August 21, 2021 - link

    Our developers are encouraged to use build servers and the automatic testing pipelines behind them. Those run on machines with hundreds of GB of RAM and dozens of CPU cores, where loads get distributed via the framework. The QA tests will use containers or VMs as required, which are built and torn down to match by the pipeline. With thousands of developers in the company, that tends to give both better performance to any developer and much better economy to the company, while (home-)offices stay cool and quiet. We still give them laptops with what used to be "desktop" specs (32GB RAM, i7 quads), because, well, they're cheap enough, and it allows them to play with VMs locally, even offline, should they want to e.g. for education/self-study.

    These days, when you're running a build farm on your "desktop", that may really be more of a workstation. It may be the "economy" model, which means from a price point it's what used to be a desktop, in my home-lab case a Ryzen 7 5800X 8-core with an RTX 2080ti and 128GB ECC RAM that runs whisper quiet even at full load. It would have been a 16-core 5950X today, but when I built it, those were impossible to get. It's still an easy upgrade and would get you 16 "P-cores" on the cheap. It's also pretty much a gamer's rig, which is why I also use it after hours.

    My other home-lab workstation is what used to be a "real workstation" some years ago, an 18-core Haswell E5-2696 v3, which has exactly the same performance as the Ryzen 7 5800X on threaded jobs and even uses the same 110 Watts of power, but at much lower clocks (2.7 vs. 4.4 GHz all-core). Also 128GB of ECC RAM and thankfully just as quiet. It's not so great at gaming, because it only clocks to 4 GHz for single/dual core loads with Haswell IPC, and I've yet to find a game that's capable of using 18 cores for profit to balance that out.

    Today you would use a Threadripper in that ballpark, with an easy 64 "P-Cores" and matching RAM, pretty much the same computing capacity as a mid-range server, but much quieter and typically tolerable in a desktop/office setup.

    If threaded software builds were all you do, you'd want to use 64 E-Cores on the "economy" variant and 256 E-Cores on the "premium", much like Ian hinted, because as long as you can fully load those 256 cores for your builds, they would be faster overall. But the chances for that happening are vastly bigger on a shared server than on a dedicated desktop, which is why we see all these ARM servers going for extra cores at the price of max single threaded performance.

    As a thought experiment, imagine a machine where tiles can be switched dynamically between being a single P-core or four E-cores. For embarrassingly parallel workloads, the E-cores would give you both better Watt economy (if you can maintain full load or your idle power consumption is perfect) and faster finish times. But as soon as your workload doesn't go beyond the number of P-cores you can configure, finishing times will be better on P-cores, while power efficiency very much gets lost in idle power demands.

    The only way to get that re-configurability is to use shared servers, cloud or DC, while a fixed allocation of P vs E cores on a desktop has a much harder time matching your workload.

    I can tell you that I much prefer working on the 5800X workstation these days, even if it's no faster for the builds, because it's twice as fast on all those scalar workloads. And no matter how much most stuff tries to go wide and thready, Amdahl's law still holds true, and that's where P-cores help.
  • mode_13h - Sunday, August 22, 2021 - link

    > Our developers are encouraged to use build servers

    We use VM servers, but they're old and the VMs are spec'd worse than desktops. So, there's no real incentive to use them for building. And if you're building on a desktop in your home, then testing on a server-based VM means copying the image over the VPN. So, almost nobody does that, either.

    VM servers are a nice idea, but companies often balk at the price tag. New desktops every 4-5 years is an easier pill to swallow, especially because upgrades are staggered.

    > I much prefer working on the 5800X workstation these days,
    > even if it's no faster for the builds, because it's twice as fast on all those scalar workloads.

    Exactly. Most incremental compilation involves relatively few files. I do plenty of other sequential tasks, as well.
