Performance Numbers: How To Interpret Them

On the previous page, we covered the three initial Lakefield designs. All three are very premium products: a super light and thin clamshell from Samsung, a foldable display from Lenovo, and dual 9-inch displays in the case of the Microsoft device. Typically we see these sorts of devices paired with best-in-class performance hardware, which can cost a lot depending on where it comes from. Add in the device material cost, and we can easily go north of $999 or $1,499, or even higher when paired with lots of storage or items like variable refresh rate displays. Make no mistake, Lakefield will end up in premium, high-cost products.

This means that there will be a certain expectation of performance. Users won’t be satisfied if they get an expensive product with mid-range performance – if they’ve paid top dollar, they want it to excel in all areas. Performance, battery life, and aesthetics all matter to the end user when we’re dealing with things like flexible displays or new and exciting form factors on top of everything else.

Now don’t get us wrong here, Lakefield certainly meets many of the criteria of a premium product. It was specifically designed to fit into a small footprint by using novel and complex technology. By using die-to-die bonding techniques and PoP memory, Intel has put 174 mm² of silicon into a 12 mm × 12 mm footprint at only 1 mm z-height. It leverages both Intel’s leading-edge 10+ manufacturing node and Intel’s high-efficiency 22FFL manufacturing node, with the layout and manufacturing optimized to ensure the most appropriate thermal characteristics for the design. There’s also the ultra-low idle power, supposedly measuring 2-3 mW, which has been an important characteristic of laptops that have been using smartphone processors. Offering substantial idle battery life is key to marketing this type of product.
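As a quick back-of-envelope check on those packaging numbers, here is a minimal sketch – only the two quoted figures (the 12 mm × 12 mm package and the 174 mm² of silicon) come from the article, the rest is ours – showing why that much silicon can only fit in that footprint because the dies are stacked.

```python
# Back-of-envelope check on Lakefield's packaging density.
# Only the two quoted figures (12 x 12 mm package, 174 mm^2 of silicon)
# come from the article; the naming and print-out are illustrative.
package_footprint_mm2 = 12 * 12   # 12 mm x 12 mm PoP package = 144 mm^2
total_silicon_mm2 = 174           # total silicon across the stacked dies

ratio = total_silicon_mm2 / package_footprint_mm2
print(f"Footprint: {package_footprint_mm2} mm^2, silicon: {total_silicon_mm2} mm^2")
print(f"Silicon-to-footprint ratio: {ratio:.2f}x")  # ~1.21x, only possible by stacking dies
```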

However, this page is about performance. Ultimately Lakefield can be compared to a number of products on the market. Numbers in brackets indicate big cores and small cores:

  • Intel 7 W Lakefield (1+4) vs Qualcomm Snapdragon 7c (0+8)
  • Intel 7 W Lakefield (1+4) vs Intel 6 W Goldmont+ Atom (0+4) N5030
  • Intel 7 W Lakefield (1+4) vs Intel 5 W Amber Lake-Y (2+0) m3-8100Y
  • Intel 7 W Lakefield (1+4) vs Intel 9 W Ice Lake-Y (2+0) 1005G1
Comparison Table for Lakefield

AnandTech   | Intel i7-L16G7         | Intel i3-1005G1        | Intel m3-8100Y        | Intel N5030           | Qualcomm SD 7c
SoC         | Lakefield              | Ice Lake-Y             | Amber Lake-Y          | Goldmont+             | Kryo
Core Config | 1+4                    | 2+0                    | 2+0                   | 0+4                   | 0+8
TDP         | 7 W                    | 9 W                    | 5 W                   | 6 W                   | ~7 W
CPU         | 1 x SNC + 4 x TNT      | 2 x SNC                | 2 x SKL               | 4 x GMN+              | 8 x Kryo
GPU         | Gen11, 64 EUs, 0.5 GHz | Gen11, 32 EUs, 0.9 GHz | Gen9, 24 EUs, 0.9 GHz | Gen9, 18 EUs, 750 MHz | Adreno 618
DRAM        | LPDDR4X-4267           | LPDDR4X-3733           | LPDDR3-1866           | LPDDR4-2400           | LPDDR4X-4267
Wi-Fi       | Wi-Fi 6*               | Wi-Fi 5*               | -                     | -                     | Wi-Fi 6
Modem       | -                      | -                      | -                     | -                     | Cat 15/13

One processor I left out here is the Qualcomm Snapdragon 8cx, which is a 4+4 configuration that Qualcomm has specifically built for these sorts of mobile devices. The 4+4 configuration, on paper, might seem unfair against the 1+4 of Lakefield, whereas the 0+8 configuration of the Snapdragon 7c is more in line with what we might expect. However, the Snapdragon 7c isn’t actually inside any retail devices right now, having only been on display at Qualcomm’s own event in December.

The thing is, the Snapdragon 7c is set to be in devices competing at the $500 level against entry-level Intel Celeron devices. The 8cx is the premium chip that ends up in the premium devices. This is where Intel will have difficulty.

On Intel’s own slides, the company performs two main comparisons.

  1. Benchmarks against Amber Lake-Y, the i7-8500Y in 5W mode
  2. Benchmarks where the i5-L16G7 runs in 1+4 and 0+4 modes

Benchmarks vs. Intel Amber Lake i7-8500Y

For the first point, Intel promotes the following against Amber Lake:

  • +12% single threaded performance, measured by SPEC2006 (Lakefield’s Sunny Cove at 3.0 GHz vs the i7-8500Y at 4.2 GHz)
  • +70% graphics performance, measured by 3DMark11, comparing the UHD 615 (24 EUs, Gen 9.5 at 1.05 GHz, 2x4 GB LPDDR3-1866) against Lakefield’s Gen11 graphics (64 EUs at 500 MHz, 2x4 GB LPDDR4X-4267)
  • +24% power efficiency, score per Watt on WebXPRT 3
  • +100% AI workloads on graphics, ResNet-50 at batch 128 on OpenVINO

For each of these workloads, there’s something very obvious to pick at.

The first one is SPEC2006, not SPEC2017, and it’s comparing an Amber Lake core to a Sunny Cove core, which as we discussed should have +18% IPC. The frequency difference (assuming both were allowed to turbo to maximum) is 40% in favor of Amber Lake; however, Lakefield has a 40% TDP advantage (7 W vs 5 W).
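To put rough numbers on that trade-off, here is the naive IPC × frequency arithmetic as a minimal sketch. It assumes both chips hold their peak turbo for the whole run, which a 5 W Amber Lake part is unlikely to do on a sustained SPEC2006 workload.

```python
# Naive single-thread estimate: performance ~ IPC x frequency.
# Assumes both parts sustain peak turbo, which is unlikely for the
# 5 W Amber Lake chip over a long SPEC2006 run.
ipc_gain_sunny_cove = 1.18   # Intel's quoted IPC uplift for Sunny Cove
freq_lakefield_ghz = 3.0     # Sunny Cove core peak turbo
freq_amber_lake_ghz = 4.2    # i7-8500Y peak turbo

relative_perf = ipc_gain_sunny_cove * (freq_lakefield_ghz / freq_amber_lake_ghz)
print(f"Lakefield vs Amber Lake at peak turbo: {relative_perf:.2f}x")  # ~0.84x

# On paper Lakefield comes out ~16% slower, so Intel's +12% claim only
# makes sense if the 5 W Amber Lake part throttles well below 4.2 GHz
# under sustained load -- which is exactly where the TDP difference matters.
```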

On the graphics performance, it’s a substantial mashup – Gen 9 vs Gen 11, 24 EUs vs 64 EUs, 1.05 GHz vs 500 MHz, LPDDR3-1866 vs LPDDR4X-4267. We know that Intel is going wide and slow with Lakefield, and given that Lakefield also has an additional 40% of TDP to feed the graphics and the CPU cores, I suspect that each chip was battling to find the right balance of power between the CPU and the GPU.
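As a rough sanity check on the ‘wide and slow’ point, the sketch below compares raw EU count × clock, deliberately ignoring the Gen 9.5 vs Gen 11 architectural differences and the large memory bandwidth gap.

```python
# Raw shader throughput proxy: EU count x clock speed.
# This ignores Gen9.5 vs Gen11 architectural changes and the
# LPDDR3-1866 vs LPDDR4X-4267 bandwidth difference.
lakefield_eus, lakefield_ghz = 64, 0.50    # Gen11, wide and slow
amber_lake_eus, amber_lake_ghz = 24, 1.05  # UHD 615 (Gen9.5)

ratio = (lakefield_eus * lakefield_ghz) / (amber_lake_eus * amber_lake_ghz)
print(f"Raw EU*GHz advantage for Lakefield: {ratio:.2f}x")  # ~1.27x

# Raw throughput alone accounts for roughly +27%; the rest of Intel's +70%
# 3DMark11 claim has to come from the newer architecture, the faster memory,
# and how each chip shares its TDP between CPU and GPU.
```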

On the AI workload, this benchmark has been hand-picked. Intel has run an offline ResNet-50 inference on the GPUs through OpenVINO at a batch size of 128. With the GPU being wide and slow, there is a question as to whether it would be competitive in batch-1 type scenarios. Again, there’s also a TDP difference here, as well as a memory difference, that helps explain the raw performance change.

Benchmarks of Lakefield in 1+4 Mode Against 0+4 Mode

For the second set of benchmarks, Intel promotes +33% higher web performance and 17% better power efficiency by adding a big core to a quartet of small cores – essentially comparing a full fat Lakefield against a quad-core Atom design.

What this means is that Lakefield, by and large, will perform the same as a quad-core Atom in almost all tasks, especially heavy tasks. Given that we haven’t had a new Atom platform since 2017, and it’s been even longer since we saw Atom notebooks in a big way, I can guarantee that a lot of users will look at Lakefield and compare it to big-core designs. Intel has also tripped over its own feet in not comparing the performance to any of Qualcomm’s designs. The cost would seem to put it squarely against the Snapdragon 8cx; however, the core layout suggests the 7c would be a fairer fight. Putting Intel’s AI test against Qualcomm’s hardware would also make for an interesting comparison.

There is another thing to note, which Intel glossed over, and which most people are going to be really concerned about.

What The Big Core Is Actually For

I’ve mentioned a few times in this piece that the big Sunny Cove core is there more for end-user latency-driven interactions, such as tapping on the screen or typing on the keyboard. When it comes to loading a web page, this blurs the line between responsiveness and workload, depending on the browser and how it manages its threads.

Now, if we take a traditional high-load single-threaded workload, such as, say, rendering – which core will it run on? A lot of Intel’s marketing materials, as well as the layout of the chip, might lead a reasonable end-user to expect that it would run on the high-performance single core. However, consider two things: firstly, rendering a frame is not a latency-driven interaction. Secondly, how many processes are running in the background? Both of these elements point to the operating system pushing the workload, despite being single threaded, onto the Tremont Atom cores.

At the time of writing, Notebookcheck is the only outlet to publish data from an early look at Samsung’s Galaxy Book S. If we take a single-threaded rendering workload, like Cinebench R15, then Lakefield scores 88 points, while the Amber Lake chip that Intel used in its slides scores 129 – a +46% advantage for the older Amber Lake system. What in the world is going on? It’s running on the Atom cores.

Our recommendation, for anyone wanting to test the performance of that single Sunny Cove core, is to apply an affinity mask to the software being used. If the software only knows that one core exists, then it can only run on that core. This is how we suspect Intel achieved the single-core performance gains in benchmarks like SPEC2006. However, Intel has more tools at its disposal – there’s a chance that the scheduler on these systems might ignore affinity masks in order to maintain a thermal balance in the design. We will have to wait until we get a sample in for ourselves.
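For anyone wanting to try this, below is a minimal Python sketch using psutil that pins a workload to one logical CPU and then reads back per-core utilization. Which index corresponds to the Sunny Cove core is an assumption on our part (we use 4 here), so verify the mapping on your own machine; and as noted above, the scheduler may still override the mask. For closed-source benchmarks, the same idea works via Task Manager’s ‘Set affinity’ option.

```python
# Minimal affinity-mask sketch (pip install psutil).
# ASSUMPTION: logical CPU 4 is the Sunny Cove core -- this mapping is not
# guaranteed, and the OS may still migrate or throttle the thread.
import time
import psutil

BIG_CORE = 4  # hypothetical index of the Sunny Cove core

def busy_work(seconds: float = 5.0) -> int:
    """A simple single-threaded integer workload to load one core."""
    end = time.time() + seconds
    x = 1
    while time.time() < end:
        x = (x * 1103515245 + 12345) & 0xFFFFFFFF
    return x

proc = psutil.Process()
psutil.cpu_percent(percpu=True)   # prime the per-core counters
proc.cpu_affinity([BIG_CORE])     # restrict this process to the chosen core
busy_work()
# Utilization since the priming call: ideally only BIG_CORE is near 100%.
print(psutil.cpu_percent(percpu=True))
```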

To a certain extent we see this in the Cinebench R15 multi-threaded test. With a five-core, five-thread processor, if you run a standard nT test, we would expect it to fill all the cores to give the best performance. In Notebookcheck’s article, we can see that the scheduler has evicted the workload from the big core. This is likely due to power/thermal hotspot reasons.

Source: Notebookcheck

In the task manager on the right, we can see the four Atom cores running at 100% during the multi-threaded test, while the large Sunny Cove core sits relatively idle. Note that the CPU is running at 1.9 GHz, and not the 2.8 GHz that Intel has promoted as the all-core turbo for this product.

But the bottom line is that in most cases, expect Lakefield to perform similarly to four Atom cores, just above Goldmont Plus, and not like any of the Skylake/Ice Lake Core products and their derivatives.

Comments

  • ichaya - Sunday, July 12, 2020 - link

    You've claimed ARM64 has a code density advantage without any evidence for a few posts now. Being byte-aligned has advantages too, which are clear in the paper with the real world program! You're welcome to provide more real world evidence!

    We're changing the goal posts now with new numbers, you can't estimate IPC based on one specific INTrate2006 test, and assume it's similar across other workloads as well. If we just stick to INTrate2006, IPC seems within 5% where Graviton 2 has twice the cache of AMD Epyc 7742.

    Comparing a top-line power number like you were doing is irrelevant when features like AVX can easily blow past any power envelope you might have, and one chip lacks the feature.
  • Wilco1 - Sunday, July 12, 2020 - link

    No, I am stating that AArch64 has better code density as a fact. Maybe 5 years ago you could argue about it as AArch64 was still relatively new, but today that's not even disputable. So check it out if you'd like to see it for yourself.

    I used the overall intrate result to get an accurate IPC comparison. If you do the math correctly you'll see that Graviton 2 has 12% higher IPC than EPYC 7742.

    At the end of the day what matters is performance, perf/W and cost. Whether you have AVX or not is not relevant in this comparison - EPYC 7742 uses the same amount of power whether it executes AVX code or not.
  • ichaya - Tuesday, July 14, 2020 - link

    This is not the first time I've seen someone look at single thread performance and disregard everything else. All Graviton 2 and A13 single thread gains can be attributed to large (100~200% more) shared L2/L3 caches, and when compared with x86, 5% or even 75% IPC gains turn out to be ~10% less real world performance or ~10% more with marginal power use difference on 7nm. AMD has everything from a 15W to 280W chip.

    For multi-threaded, the Graviton 2 looks better, but the 64 vcpu EPYC 2 c5a.16xlarge (144MB L2+L3) AWS instance costs the same as the 64 core Graviton 2 m6g.16xlarge (96MB L2+L3) instance and delivers equivalent performance on real world tasks while having 1/2 the real cores, 1/2 the system RAM and 50% more L2+L3.

    perf/W/$ is important, and since ARM has always been on the lower end of W and $, it can be hard to see past it. If you can compare cache sizes, power and real world performance, the only thing revolutionary is the fact that Amazon, Apple and the ARM ecosystem have come this far in a few years. The overall features (AVX2+SMT among others) and openness still leaves a lot to be desired.
  • Wilco1 - Wednesday, July 15, 2020 - link

    Single threaded performance is important in showing that x86 does no longer have the big advantage it once used to have. Overall throughput is well correlated with single thread performance, you can see that clearly in the results we discussed. Do you believe 64 Graviton 1 cores would do equally well against 7742 if they had the same huge caches?

    I haven't seen serious benchmarks on c5a, do you have a link? With 32 cores at 3.3GHz it should burn well over 200W, not an improvement...

    It's not that revolutionary if you followed the rapid increase of single thread performance over the last 5 years. Smartphones paid for the progress in microarchitecture and process technology that enabled competitive Arm servers (it helped AMD surpass Intel as well). I don't believe SMT or AVX are useful - 128 cores in Altra Max will beat 64 cores with SMT+AVX on performance and area at similar power.

    As for AVX, this article discusses how Intel's latest CPU disables AVX... Linus had some interesting comments recently about the fragmentation of the many AVX variants. Then there are all the unresolved clocking and power issues. It's a mess.
  • ichaya - Thursday, July 16, 2020 - link

    If there was a significant power difference between m6g.16xlarge and c5a.16xlarge, they would be priced differently. 128GB of RAM can't be more than ~15W.

    Single thread performance can help multi-thread performance up to a point, but SMT, non-boost clocks, and biasing towards TLP more than ILP (like an in-order GPU) can hurt single thread performance at the expense of more multi-threaded throughput.

    AVX-512 is a mess, but AVX2 is worth having in most contexts now. Maybe some AVX512 instructions worth having will make it into a AVX2.1 which can completely supersede AVX2. For the price of Lakefield, there are certainly more attractive options, though compatibility, packaging and performance can trump battery life.
  • Wilco1 - Thursday, July 16, 2020 - link

    Well there is a much better comparison, c6g.16xlarge has 128GB and is 12% cheaper than c5a.16xlarge. More than enough to pay for the electricity cost of the 280W TDP of c5a.

    Yes you can optimize for multithreaded throughput but SMT remains questionable, especially for large core counts. Why add SMT when you could just add some more cores?

    Indeed AVX512 is worse, and could be removed without anyone missing it. Lakefield battery life comparisons are in, the Atom curse has struck yet again...
  • ichaya - Thursday, July 16, 2020 - link

    12% is probably more the amount of subsidies these instances are getting. Amazon has a very very long history of putting any profit margins back into growth. Either that, or 128GB of RAM is 100W+!

    SMT is perhaps the lowest level at which TLP can be extracted, recent multi-core Atoms don't have it, but for server/workstation tasks like compilation, DB engine or even general multi-tasking, it's well worth it.
  • Wilco1 - Friday, July 17, 2020 - link

    Graviton 2 is less than a third of the silicon area of EPYC so cheaper to make. 128GB server DRAM costs over $1000, which is why the 256GB/512GB versions are more expensive. The power cost of extra DRAM is a tiny fraction of that.

    There are tasks where SMT helps, but equally there are also tasks where it is slower. So it looks great on marketing slides where you just show the best cases, but overall it is a small gain.
  • ichaya - Saturday, July 18, 2020 - link

    I wouldn't call a 64 vcpu (180W) system beating or equaling a 64 core (110W) system in web serving/DB and code compilation a small gain. The tasks where SMT hurts is basically single threaded JS, which is just such a shame. Shame! I don't think POWER, SPARC and others have been wrong in having added SMT years ago.

    For code compilation and DB the differences are 50%-100%+ making perf/W/$ very competitive.
    https://www.phoronix.com/scan.php?page=article&...

    This article also seems to mention SMT might make an appearance in the next Neoverse N* chips: https://www.nextplatform.com/2019/02/28/arm-sharpe...
  • Wilco1 - Sunday, July 19, 2020 - link

    The Phoronix link has various benchmarks that aren't even running identical code between different ISAs (eg. Linux kernel compile). So it's not anywhere near a fair CPU comparison like SPEC. And this: https://openbenchmarking.org/result/1907314-AS-RYZ... shows SMT gives almost no gain on multithreaded benchmarks once you stop cherry picking the good results and ignore the bad ones...

    Even if we just consider the benchmarks with the largest SMT speedup, Coremark and 7-zip have good SMT gains of 41% and 32%, but m6g *still* outperforms c5a by 5% and 24%.

    So the best SMT gain combined with a 32% frequency advantage and 4 times the L3 cache is still not enough to provide equal per-thread performance!
