As part of the final segment of Intel’s "Intel Client Open House Keynote" at CES this afternoon, Intel EVP and GM of the Client Computing Group, Michelle Johnston Holthaus, also offered a brief update on Intel’s client chips in the works for the second half of the year. While no demos were run during the relatively short 45-minute keynote, Holthaus did reiterate that both Arrow Lake for desktops and Lunar Lake for mobile were making good progress and were expected to launch later this year.

But in lieu of black box demos, we got something more surprising: our first look at a finished Lunar Lake chip.

Holthaus pulled out a finished Lunar Lake chip, briefly holding it out for viewers to see – while keeping the press at a distance lest they get too close.

While details on Lunar Lake remain very slim – Intel still hasn’t even confirmed what process nodes it’s using – the company has continually been reiterating that they intend to get it out the door in 2024. And having silicon to show off (and shipping to partners, we’re told) is a very effective way to demonstrate Intel’s ongoing progress.

Of note, seeing the chip in person confirms something we’ve been all but expecting from Intel for a few years now: CPUs with on-package memory. The demo chip has two DRAM packages on one of the edges of the chip (presumably LPDDR5X), making this the first time that Intel has placed regular DRAM on a Core chip package. On-package memory is of particular interest to thin & light laptop vendors, as it allows for further space savings and cuts down on the number of critical traces that need to be routed from the CPU and along the motherboard. The technique is most notably (though far from exclusively) used with Apple’s M series of SoCs.
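
To put some rough numbers on what such a configuration could deliver, the back-of-the-envelope calculation below assumes a conventional 128-bit (dual-channel) LPDDR5X-8533 setup – an assumption on our part, as Intel hasn’t confirmed the bus width, speed grade, or capacity – alongside a hypothetical 256-bit arrangement:

```python
# Theoretical peak DRAM bandwidth: (bus width in bytes) x (transfer rate in MT/s).
# Both bus widths and the LPDDR5X-8533 speed grade here are assumptions, not
# confirmed Lunar Lake specifications.

def peak_bandwidth_gbps(bus_width_bits: int, transfer_rate_mtps: int) -> float:
    """Return theoretical peak memory bandwidth in GB/s."""
    return (bus_width_bits / 8) * transfer_rate_mtps / 1000

print(peak_bandwidth_gbps(128, 8533))  # ~136.5 GB/s on a 128-bit bus
print(peak_bandwidth_gbps(256, 8533))  # ~273.1 GB/s on a hypothetical 256-bit bus
```

Actual sustained throughput will, of course, land below these theoretical peaks.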

Beyond showing off the physical chip, Holthaus also briefly talked about expected performance and architecture. Lunar Lake is slated to offer “significant” IPC improvements for the CPU core. Meanwhile the GPU and NPU will each offer three times the AI performance. How Intel will be achieving this remains unclear, but at least on the GPU side there is obvious headroom, as Intel has yet to offer XMX matrix cores within an integrated GPU.

No doubt this is far from the last time we’ll hear about Lunar Lake ahead of its launch. But for now, it’s a brief look into the future while Intel continues to ramp production on Meteor Lake for the current generation of laptops and other mobile devices.

Comments

  • FWhitTrampoline - Monday, January 8, 2024

    I'm fine with on Module Memory as long as there's PCIe 5.0/CXL support as well for CXL based DRAM Modules! But the On Module DRAM should have at minimum a 128 Bit wide channel to each DRAM Die on the module so any iGPU is not starved for bandwidth. That, and the lowest capacity SKUs should start at a minimum capacity of 16GB of on Module DRAM.
  • mode_13h - Tuesday, January 9, 2024

    > fine with on Module Memory as long as there's PCIe 5.0/CXL support

    Not in 2024 or in (most) laptops. In future HX laptops, maybe there'll be a couple M.2 slots that support CXL protocol, so you can put memory expansion modules there.
  • Samus - Tuesday, January 9, 2024

    Combine on-package memory with external memory? I don't see it happening but that would be a neat hat trick.
  • 1_rick - Tuesday, January 9, 2024

    Why not? Even $10-20 microcontrollers can do it.
  • do_not_arrest - Tuesday, January 9, 2024

    By all means, don't miss the big picture. Each of these chips is targeted at a particular MARKET SEGMENT. Lunar Lake is clearly for very thin/light and low-power cases where expansion is not an important part of the picture. Think some kind of low-cost tablet that you essentially throw away after a few years. They have other "Lake" chips that are meant for other segments. These product announcements are more for system builders and OEMs.
  • Diogene7 - Thursday, January 18, 2024

    I wish for the same, but as of 2024, I am not sure; I think the CXL protocol has been designed primarily for use in servers, and as such, I doubt it has the power-saving specifications that would allow it to be usable in a laptop.

    It is really a pity, because CXL 2.0 and higher is much, much needed to open the door to the integration of innovative 3rd-party hardware like dedicated AI accelerators, or new emerging low-latency Non-Volatile Memory (especially MRAM)… that could bring fresh innovation…
  • meacupla - Tuesday, January 9, 2024

    If they are going to do this, then they should go all in and increase memory bandwidth.
    A 128-bit bus (2 channels) is already insufficient. I can understand being space- and signal-integrity-constrained with traditional designs, but this solves that.

    Yes please to 192-bit and 256-bit memory buses. Especially if they want to shove AI/TOPS as the next "big" thing.
  • mode_13h - Tuesday, January 9, 2024

    > Intel still hasn’t even confirmed what process nodes it’s using – the company has
    > continually been reiterating that they intend to get it out the door in 2024.

    I've seen conflicting reports, with some saying 18A and others saying 20A. If I had to bet, I'd go with 20A for something that's due to launch this year.

    > The demo chip has two DRAM packages on one of the edges of the chip
    > (presumably LPDDR5X), making this the first time that Intel has placed regular
    > DRAM on a Core chip package.

    Not true. There was this: https://wccftech.com/asus-intel-supernova-som-chip...

    > Lunar Lake is slated to offer “significant” IPC improvements for the CPU core.

    Lion Cove is said to be a new architecture.

    > Meanwhile the GPU and NPU will each offer three times the AI performance.
    > How Intel will be achieving this remains unclear,

    Well, it's rumored to use Battlemage architecture. I thought I saw a rumor that Lunar Lake would have a tGPU with the equivalent of up to 384 EUs (exactly 3x of Meteor Lake), but more recent leaks suggest a much smaller tGPU. Of course, there are probably multiple tGPUs, for addressing different markets and product tiers.

    I think the most exciting implication of these announcements is the possibility that they might be following Apple and widening the memory data path to 256 bits. They're going to need more bandwidth to feed a faster GPU & NPU. Also, energy efficiency tends to be better with a wider, slower interface.
  • 1_rick - Tuesday, January 9, 2024

    It doesn't look like those Asus laptops were ever released (at least, if I go out to Asus' site, there's no buy link for them, not even one that takes me to 3rd-party vendor websites like with actual, shipping products.)
  • JMaca - Wednesday, January 10, 2024

    TSMC N3 is more likely.

    20A is not library-complete. It can't be used for the SOC tile and might not be able to do the CPU tile either (since it includes the GPU). Arrow Lake's compute tile could be 20A.

    18A could work but it won't be ready for a 2024 launch.
