Apple's Humongous CPU Microarchitecture

So how does Apple plan to compete with AMD and Intel in this market? Readers who have been following Apple’s silicon endeavors over the last few years will certainly not be surprised to see the performance that Apple proclaimed during the event.

The secret sauce lies in Apple’s in-house CPU microarchitecture. Apple’s long journey into custom CPU microarchitectures started off with the release of the Apple A6 back in 2012 in the iPhone 5. Even back then, with their first-generation “Swift” design, the company had already posted some impressive performance figures compared to the mobile competition.

The real shocker that made waves through the industry, however, was Apple’s subsequent release of the Cyclone CPU microarchitecture in 2013’s Apple A7 SoC and iPhone 5S. Apple’s early adoption of the 64-bit Armv8 ISA shocked everybody: not only was the company the first in the industry to implement the new instruction set architecture, it beat even Arm’s own CPU teams by more than a year, as the Cortex-A57 (Arm’s own 64-bit microarchitecture design) would not see the light of day until late 2014.

Apple famously called their “Cyclone” design a “desktop-class architecture”, which in hindsight should probably have been an obvious pointer to where the company was heading. Over subsequent generations, Apple has evolved its custom CPU microarchitecture at an astounding rate, posting massive performance gains with each generation, which we’ve covered extensively over the years:

AnandTech A-Series Coverage and Testing
Year   Apple A#   Review / Coverage
2012   A6         The iPhone 5 Review
2013   A7         The iPhone 5s Review
2014   A8         The iPhone 6 Review
2015   A9         The Apple iPhone 6s and iPhone 6s Plus Review
2016   A10        The iPhone 7 and iPhone 7 Plus Review
2017   A11        -
2018   A12        The iPhone XS & XS Max Review
2019   A13        The Apple iPhone 11, 11 Pro & 11 Pro Max Review
2020   A14        You're reading it

This year’s A14 chip includes the 8th generation of Apple’s 64-bit microarchitecture family, which started off with the A7 and the Cyclone design. Over the years, Apple’s design cadence seems to have settled into a major microarchitecture update every second generation starting with the A7, with the A9, A11, and A13 all showcasing major increases in design complexity and microarchitectural width and depth.

Apple’s CPUs still pretty much remain a black-box design, given that the company doesn’t disclose any details, and the only publicly available resources on the matter date back to LLVM patches from the A7 Cyclone era, which are no longer relevant to today’s designs. While we don’t have official means and information on how Apple’s CPUs work, that doesn’t mean we cannot figure out certain aspects of the design. Through our own in-house tests as well as third-party microbenchmarks (special credit is due to @Veedrac’s microarchitecturometer test suite), we can unveil some of the details of Apple’s designs. The following disclosures are estimates based on testing the behavior of the latest Apple A14 SoC inside the iPhone 12 Pro:

Apple's Firestorm CPU Core: Even Bigger & Wider

Apple’s latest-generation big core CPU design inside the A14 is codenamed “Firestorm”, following up last year’s “Lightning” microarchitecture inside the Apple A13. The new Firestorm core and its years-long pedigree of continued generational improvements lie at the heart of today’s discussion, and are the key to how Apple is making the large jump away from Intel x86 designs to its own in-house SoCs.

The above diagram is an estimated feature layout of Apple’s latest big core design – what’s represented here is my best-effort attempt at identifying the new design’s capabilities, but it is certainly not an exhaustive drill-down into everything Apple’s design has to offer – so naturally some inaccuracies might be present.

What really sets Apple’s Firestorm CPU core apart from other designs in the industry is the sheer width of the microarchitecture. Featuring an 8-wide decode block, Apple’s Firestorm is by far the widest commercialized design in the industry. IBM’s upcoming P10 core in the POWER10 is the only other announced design expected to come to market with such a wide decoder, following Samsung’s cancellation of its own M6 core, which had also been described as an 8-wide design.

Other contemporary designs, such as AMD’s Zen (1 through 3) and Intel’s microarchitectures, still only feature 4-wide decoders (Intel’s being a 1+3 arrangement). x86 seems limited from going wider at this point in time due to the ISA’s inherently variable instruction length, which makes designing decoders able to deal with this aspect of the architecture more difficult than for the Arm ISA’s fixed-length instructions. On the Arm side of things, Samsung’s designs had been 6-wide from the M3 onwards, whilst Arm’s own Cortex cores have been steadily going wider with each generation: they are 4-wide in currently available silicon, with an increase to a 5-wide design expected in the upcoming Cortex-X1.

An 8-wide microarchitecture actually isn’t new to the A14. Going back to the A13, it seems I had made a mistake in my original tests, deeming it a 7-wide machine. Re-testing it recently, I confirmed that it was in that generation that Apple upgraded from the 7-wide decode that had been present in the A11 and A12 to the current 8-wide design.

One aspect of recent Apple designs we were never really able to answer concretely is how deep their out-of-order execution capabilities go. An out-of-order window is the number of instructions a core can have “parked”, waiting to execute in, well, out-of-order sequence, whilst the core fetches and executes each instruction’s dependencies. The last official resource we had on the matter was a 192-entry figure for the ROB (re-order buffer) of the 2013 Cyclone design. Thanks again to Veedrac’s implementation of a test that appears to expose this part of the µarch, we can seemingly confirm that Firestorm’s ROB is around 630 instructions deep, an upgrade over last year’s A13 Lightning core, which measures in at 560 instructions. It’s not clear whether this is actually a traditional ROB as in other architectures, but the test at least exposes microarchitectural limitations tied to the ROB, and it reports correct figures on other designs in the industry.
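The principle behind such a window-probing test is worth a quick sketch. Two long-latency cache-miss loads are separated by a growing number of cheap filler instructions: as long as both loads fit in the out-of-order window their miss latencies overlap, but once the fillers occupy the whole window, the second miss can only issue after the first retires, and the time per iteration jumps sharply. A toy Python model of that behavior (the latency figures and the all-or-nothing overlap are simplifying assumptions for illustration, not measured Apple characteristics):

```python
def modeled_cycles(filler_count, window_size, miss_latency=300):
    """Toy model of one iteration: two cache-miss loads separated by
    `filler_count` single-cycle filler instructions. If both misses fit
    in the out-of-order window together, their latencies overlap;
    otherwise the second miss waits for the first to retire."""
    if filler_count + 2 <= window_size:
        return miss_latency + filler_count       # misses overlap
    return 2 * miss_latency + filler_count       # misses serialize

def estimate_window(max_probe, window_size):
    """Sweep the filler count and report where cycles-per-iteration
    jumps; that point reveals the window size."""
    for n in range(1, max_probe):
        if modeled_cycles(n, window_size) - modeled_cycles(n - 1, window_size) > 1:
            # Largest instruction count that still fit: (n-1) fillers + 2 loads.
            return n + 1
    return None

# With a hypothetical 630-entry window, the jump lands exactly at 630.
print(estimate_window(1000, 630))  # → 630
```

On real hardware the same jump is what the microbenchmark looks for, just with actual DRAM misses and NOP-like fillers instead of modeled cycle counts.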

A ~630-entry ROB is an immensely large out-of-order window for Apple’s new core, as it vastly outclasses any other design in the industry. Intel’s Sunny Cove and Willow Cove cores are the next-deepest OOO designs out there with a 352-entry ROB, while AMD’s newest Zen3 core makes do with 256 entries, and recent Arm designs such as the Cortex-X1 feature a 224-entry structure.

Exactly how and why Apple is able to achieve such a grossly disproportionate design compared to all other designers in the industry isn’t exactly clear, but it appears to be a key characteristic of Apple’s design philosophy and its method for achieving high ILP (instruction-level parallelism).

Many, Many Execution Units

Having high ILP also means that these instructions need to be executed in parallel by the machine, and here too Apple’s back-end execution engines feature extremely wide capabilities. On the integer side, whose in-flight instruction and renaming physical register file capacity we estimate at around 354 entries, we find at least 7 execution ports for actual arithmetic operations. These include 4 simple ALUs capable of ADD instructions, 2 complex units which also feature MUL (multiply) capabilities, and what appears to be a dedicated integer division unit. The core is able to handle 2 branches per cycle, which I believe is enabled by one or two additional dedicated branch forwarding ports, though I wasn’t able to 100% confirm the layout of the design here.
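Port counts like these are generally inferred from throughput plateaus: a long stream of independent instructions of a single type sustains an IPC equal to the number of ports able to execute it, capped by the front-end width. A toy model using this article’s Firestorm estimates (the port counts are our estimates, not Apple-disclosed figures):

```python
DECODE_WIDTH = 8  # front-end cap

# Estimated Firestorm integer port capabilities: the 4 simple ALUs and
# the 2 complex units can all execute ADD, the 2 complex units execute
# MUL, and a single dedicated unit handles integer division.
INT_PORTS = {"add": 6, "mul": 2, "div": 1}

def steady_state_ipc(op):
    """Sustained IPC for a stream of independent ops of a single type."""
    return min(DECODE_WIDTH, INT_PORTS[op])

print(steady_state_ipc("add"), steady_state_ipc("mul"))  # → 6 2
```

Measuring an IPC plateau of 6 on an ADD stream, for example, is what implies six ADD-capable ports in the first place.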

The Firestorm core doesn’t appear to have major changes on the integer side of the design; the only noteworthy change was an apparent slight increase (yes, increase) in the latency of the integer division unit.

On the floating-point and vector execution side of things, the new Firestorm cores are actually more impressive, as they see a 33% increase in capabilities, enabled by Apple’s addition of a fourth execution pipeline. The FP rename registers here seem to land at 384 entries, which is again comparatively massive. The four 128-bit NEON pipelines thus on paper match the current throughput capabilities of desktop cores from AMD and Intel, albeit with smaller vectors. Floating-point operation throughput here is 1:1 with the pipeline count, meaning Firestorm can do 4 FADDs and 4 FMULs per cycle, with 3- and 4-cycle latencies respectively. That’s quadruple the per-cycle throughput of Intel CPUs and previous AMD CPUs, and still double that of the recent Zen3 – albeit running at a lower frequency. This might be one reason why Apple does so well in browser benchmarks (JavaScript numbers are floating-point doubles).
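As a rough sanity check of what those pipeline counts mean for scalar code such as JavaScript, the arithmetic is simple: per-second scalar FP throughput is pipes × clock. Comparing a 4-pipe core at Firestorm’s ~3GHz against a hypothetical 2-pipe competitor at a much higher 4.9GHz (the competitor figures are illustrative, not a specific product):

```python
def scalar_fp_gops_per_sec(fp_pipes, ghz):
    """Billions of scalar FP ops/second, assuming every pipe retires
    one op per cycle with no other bottleneck."""
    return fp_pipes * ghz

firestorm = scalar_fp_gops_per_sec(4, 3.0)  # 4 pipes at ~3.0 GHz
two_pipe  = scalar_fp_gops_per_sec(2, 4.9)  # hypothetical 2-pipe core at 4.9 GHz
print(firestorm, two_pipe)  # → 12.0 9.8
```

Even with a nearly 40% clock deficit, the wider design comes out ahead on paper.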

Vector abilities of the 4 pipelines appear to be identical; the only instructions that see lower throughput are FP divisions, reciprocals, and square-root operations, which have a throughput of 1 on one of the four pipes.

On the load-store front, we’re seeing what appears to be four execution ports: one load/store unit, one dedicated store unit, and two dedicated load units. The core can do at most 3 loads per cycle and 2 stores per cycle, but a maximum of only 2 loads and 2 stores concurrently.

What’s interesting here is again the depth to which Apple can handle outstanding memory transactions. We’re measuring up to around 148-154 outstanding loads and around 106 outstanding stores, which should be the equivalent figures of the load queues and store queues of the memory subsystem. To no surprise, these are also again deeper than any other microarchitecture on the market. Interesting comparisons are AMD’s Zen3 at 44/64 loads & stores, and Intel’s Sunny Cove at 128/72. The Intel design here isn’t far off from Apple, and the throughput of these latest microarchitectures is relatively matched – it will be interesting to see where Apple goes once they deploy the design with non-mobile memory subsystems and DRAM.

One large improvement in the Firestorm cores this generation has been on the side of the TLBs. The L1 TLB has been doubled from 128 pages to 256 pages, and the L2 TLB goes up from 2048 pages to 3072 pages. On today’s iPhones this is an absolutely overkill change: with a 16KB page size, the L2 TLB now covers 48MB, which is well beyond the cache capacity of even the A14. With Apple moving the microarchitecture onto Mac systems, however, compatibility with 4KB pages, and making sure the design still offers enough performance with them, is likely a key reason why Apple chose to make such a large upgrade this generation.
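The coverage arithmetic behind that claim is easy to check (the entry counts are our measured estimates, not Apple-published numbers):

```python
PAGE_KIB = 16  # current iOS page size

def tlb_coverage_mib(entries, page_kib=PAGE_KIB):
    """Memory reachable without a page-table walk, in MiB."""
    return entries * page_kib // 1024

print(tlb_coverage_mib(2048))     # A13 L2 TLB → 32
print(tlb_coverage_mib(3072))     # A14 L2 TLB → 48
print(tlb_coverage_mib(3072, 4))  # same TLB with 4KB macOS-compatible pages → 12
```

With 4KB pages, the same 3072-entry structure covers only 12MB, which is why the enlargement makes sense for Mac-class workloads.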

On the cache hierarchy side of things, we’ve known for a long time that Apple’s designs are monstrous, and the A14 Firestorm cores continue this trend. Last year we had speculated that the A13 had a 128KB L1 instruction cache, similar to the 128KB L1 data cache which we can test for; however, following Darwin kernel source dumps, Apple has confirmed that it’s actually a massive 192KB instruction cache. That’s absolutely enormous – 3x larger than competing Arm designs and 6x larger than current x86 designs – which yet again might explain why Apple does extremely well in very high instruction-pressure workloads, such as the popular JavaScript benchmarks.

The huge caches also appear to be extremely fast – the L1D lands in at a 3-cycle load-use latency. We don’t know if this is clever load-load cascading such as described on Samsung’s cores, but in any case, it’s very impressive for such a large structure. AMD has a 32KB 4-cycle cache, whilst Intel’s latest Sunny Cove saw a regression to 5 cycles when it grew the size to 48KB. Food for thought on the advantages and disadvantages of slow versus fast frequency designs.

On the L2 side of things, Apple has been employing an 8MB structure that’s shared between their two big cores. This is an extremely unusual cache hierarchy and contrasts to everybody else’s use of an intermediary sized private L2 combined with a larger slower L3. Apple here disregards the norms, and chooses a large and fast L2. Oddly enough, this generation the A14 saw the L2 of the big cores make a regression in terms of access latency, going back from 14 cycles to 16 cycles, reverting the improvements that had been made with the A13. We don’t know for sure why this happened, I do see higher parallel access bandwidth into the cache for scalar workloads, however peak bandwidth still seems to be the same as the previous generation. Another point of hypothesis is that because Apple shares the L2 amongst cores, that this might be an indicator of changes for Apple Silicon SoCs with more than just two cores connected to a single cache, much like the A12X generation.

Apple has had employed a large LLC on their SoCs for many generations now. On the A14 this appears to be again a 16MB cache that is serving all the IP blocks on the SoC, most useful of course for the CPU and GPU. Comparatively speaking, this cache hierarchy isn’t nearly as fast as the actual CPU-cluster L3s of other designs out there, and in recent years we’ve seen more mobile SoC vendors employ such LLC in front of the memory controllers for the sake of power efficiency. What Apple would do in a larger laptop or desktop chip remains unclear, but I do think we’d see similar designs there.

We’ve covered more specific aspects of Apple’s designs, such as their MLP (memory level parallelism) capabilities, and the A14 doesn’t seem to change in that regard. One other change I’ve noted from the A13 is that the new design now also makes usage of Arm’s more relaxed memory model in that the design is able to optimise streaming stores into non-temporal stores automatically, mimicking the change that had been introduced in the Cortex-A76 and the Exynos-M4. x86 designs wouldn’t be able to achieve a similar optimization in theory – at least it would be very interesting to see if one attempted to do so.

Maximum Frequency vs Loaded Threads
Per-Core Maximum MHz
Apple A14 1 2 3 4 5 6
Performance 1 2998 2890 2890 2890 2890 2890
Performance 2   2890 2890 2890 2890 2890
Efficiency 1     1823 1823 1823 1823
Efficiency 2       1823 1823 1823
Efficiency 3         1823 1823
Efficiency 4           1823

Of course, the old argument about having a very wide architecture is that you cannot clock as high as something which is narrower. This is somewhat true; however, I wouldn’t come to any conclusion as to the capabilities of Apple’s design in a higher power device. On the A14 inside of the new iPhones the new Firestorm cores are able to reach 3GHz clock speeds, clocking down to 2.89GHz when there’s two cores active at any time.

We’ll be investigating power in more detail in just a bit, but I currently see Apple being limited by the thermal envelope of the actual phones rather than it being some intrinsic clock ceiling of the microarchitecture. The new Firestorm cores are clocking in now at roughly the same speed any other mobile CPU microarchitecture from Arm even though it’s a significantly wider design – so the argument about having to clock slower because of the more complex design also doesn’t seem to apply in this instance. It will be very interesting to see what Apple could do not only in a higher thermal envelope device such as a laptop, but also on a wall-powered device such as a Mac.

Replacing x86 - The Next Big Step Dominating Mobile Performance
Comments Locked

644 Comments

View All Comments

  • KarlKastor - Monday, November 16, 2020 - link

    @techconc
    Do you think a 4 Core Zen2 was different than a 8 Core Zen2. Yes it is much different. Zen 1 was even much inhomogenous with increasing core count.

    Apple can't just put 8 big cores in it and is finished. All cores with one unified L2 Cache? The core interconnect will be different for sure. The cache system too, I bet.

    M1 and A14 will be much similar, yes.
    But you can't extrapolate from a single thread benchmark to a multi thread practical case. It can work, but don't have to.
    The cache system, core interconnect, memeory subsystem, all is much more important with many cores working at the same time.
  • Kangal - Thursday, November 12, 2020 - link

    Hi Andrei,
    I'm very disappointed with this article. It is not very professional nor upto Anandtech standards. Whilst I don't doubt the Apple A14/A14X/M1 is a very capable chipset, we shouldn't take Apple's claims at face-value. I feel like you've just added more fuel to the fire, that which is hype.

    I've read the whole thing, and you've left me thinking like this ARM Chipset is supposedly similar to the 5W TDP we have on iPhones/iPads, and able to compete with 150W Desktop x86 chipsets. While that possible, it doesn't pass the sniff test. And even more convoluted, is that this chipset is supposed to extend the battery life notably (from 10hrs upto 17hrs or 20hrs) by x1.7-x2.0 factor, yet the difference in the TDP is far greater (from 5W compared to 28W) in x4.5-x6.0 difference. So this is losing efficiency somewhere, otherwise we should've seen battery life estimates like 45hrs to 60hrs. Both laptops have the same battery size.

    Apple has not earned the benefit of the doubt, instead they have a track-record of lying (or "exaggerating"). I think these performance claims, and estimates by you, really needed to be downplayed. And we should be comparing ACTUAL performance when that data is available. And by that I mean running it within proper thermal limits (ie 10-30min runtime), with more rounded benchmarking tools (CineBench r23 ?), to deduce the performance deficits and improvements we are likely to experience in real-world conditions (medium duration single-core, thermal throttling multi-thread, GPU gaming, and power drain differences). Then we can compare that to other chipsets like the 15W Macbook Air, the 28W MacBook Pro, and Hackintosh Desktops with Core i9-9900k or r9-5950x chipsets. And if the Apple M1 passes with flying colours, great, hype away! But if they fail abysmally, then condemn. Or if it is very mixed, then only give a lukewarm reception.

    So please follow up this article, with a more accurate and comprehensive study, and revert back to the professional standards that allow us readers to continue sharing your site with others. Thank you for reading my concerns.
  • Kangal - Thursday, November 12, 2020 - link

    I just want to add, that during the recent announcement by Nvidia, we were lead to believe that the RTX 3080 has a +100% performance uplift over the RTX 2080. Now that tests have been conducted by trustworthy, professional, independent reviewers. Well, it is actually more like +45% performance uplift. To get to the +70% -to- +90% performance uplift requires us to do some careful cherry-picking of data.

    My fear is that a similar case has happened with the Apple M1. With your help, they've made this look like it is as fast as an Intel Core i9-9900k. I suspect it will be much much much much slower, when looking at non-cherry picked data. And I suspect it will still be a modest improvement over the Intel 28W Laptop chipsets. But that is a far cry from the expectations that have been setup. Just like the case was with the RTX-3000 hype launch.
  • Spunjji - Thursday, November 12, 2020 - link

    @Kangal - Personally, I'm very disappointed in various commenters' tendency to blame the article authors for their own errors in reading the article.

    Firstly, it's basically impossible to read the whole thing and come away with the idea that M1 will have a 5W TDP. It has double the GPU and large-core CPU resources of A14 - what was measured here - so logically it should start at somewhere around 10W TDP and move up from there.

    To your battery life qualms - throw in some really simple estimates to account for other power draw in the system (storage, display, etc.) would get you to an understanding of why the battery life is "only" 1.7X to 2X their Intel models.

    As for Apple's estimates being "downplayed" - sure, only they provide *actual test data* in here that appears to validate their claims. I don't know why you think CineBench is more "rounded" than SPEC - the opposite is actually true; CineBench does lots of one thing that's easily parallelized, whereas SPEC tests a number of different features of a CPU based on a large range of workloads.

    In summary: your desire for this not to be as good as it *objectively* appears to be is what's informing your comment. The article was thoroughly professional. In case you're wondering, I generally despise Apple and their products - but I can see a well-designed CPU when the evidence is placed directly in front of me.
  • Kangal - Friday, November 13, 2020 - link

    @Spunjji

    First of all, you are objectively wrong. It is not debatable, it is a fact. That this article CAN (could, would, has) been read and understood in a manner different to yours. So you can't just use a blanket statement like "you're holding it wrong" or "it's the readers fault". When clearly there are things that can be done to mitigate the issue, and that was my qualm. This article glorifies Apple, when it should be cautioning consumers. I'm not opposed to glorifying things, credit where due.

    The fact is Andrei, who representing Anandtech, is assuming a lot of the data points. He's taking Apple's word at face value. Imagine the embarrassment if they take a stance such as this, only to be proven wrong a few weeks later. What should have been done, is that more effort and more emphasis should have been placed on comparisons to x86 systems. My point still stands, that there's a huge discrepancy between "User Interface fluidity", "Synthetic Benchmarks", "Real-world Applications", and "Legacy programs". And also there's the entire point of power-draw limitations, heat dissipation, and multi-threaded processing.

    Based on this article, people will see the ~6W* Apple A14 chipset is only 5%-to-10% slower than the ~230W (or 105W TDP) AMD r9-5950x that just released and topped all the charts. So if the Apple Silicon M1 is supposed to be orders of magnitude faster, (6W vs 12W or maybe even more), then you can make the logical conclusion that the Apple M1 is +80% -to- +290% faster when compared to the r9-5950x. That's insane. Yet it could be plausible. So the sensible thing to do is to be skeptical. As for CineBench, I think it is a more rounded test. I am not alone in this claim, many other users, reviewers, testers, and experts also vouch for it. Now, I'm not prepared to die on this hill, so I'll leave it at that.

    I realised the answer to the battery life question as I was typing it. And I do think a +50% to +100% increase is revolutionary (if tested/substantiated). However, the point was that Andrei was supposed to look into little details like that, and not leave readers thinking. I know that Apple would extend the TDP of the chip, that much is obvious to me even before reading anything, the issue is that this point itself was never actually addressed.

    Your summary is wrong. You assume that I have a desire, to see Apple's products to be lower than claimed. I do not. I am very unbiased, and want the data as clean as possible. Better competition breeds better progress. In fact, despite my reservations against the company, this very comment is being typed on an Early-2015 MacBook Pro Retina 13inch. The evidence that's placed in front of you isn't real, it is a guesstimate at best. There's many red-flags seeing their keynote and reading this article. Personally, I will have to wait for the devices to release, people to start reviewing them thoroughly, and I will have to think twice about digesting the Anandtech version when released. However, I'm not petty enough to boycott something because of subjective reasons, and will likely give Anandtech the benefit of the doubt. I hope I have satisfied some of your concerns.

    *based on a previous test by Anandtech.
  • Spunjji - Friday, November 13, 2020 - link

    @Kangal - The fact that a reader *can* get through the whole thing whilst imposing their own misguided interpretations on it doesn't mean it's the author's fault for them doing so. Writers can't spend their time reinventing the wheel for the benefit of people who didn't do basic background reading that the article itself links to and/or acknowledge the article's stated limitations.

    Your "holding it wrong" comparison is a funny one. You've been trying to chastise the article's author for not explicitly preventing people from wilfully misinterpreting the data therein, which imposes an absurd burden on the author. To refer back to the "holding it wrong" analogy, you've tried to chew on the phone and are now blaming the phone company for failing to tell people not to chew on it. It's not a defensible position.

    As it stands, he assumes nothing - nothing is taken at face value with regard to the conclusions drawn. He literally puts their claims to the test in the only manner currently available to him at this point in time. The only other option is for him to not do this at all, which would just leave you with Apple's claims and nothing else.

    As it is, the article indicates that the architecture inside the A14 chip is capable of single-core results comparable to AMD and Intel's best. It tells us nothing about how M1 will perform in its complete form in full applications compared with said chips, and the article acknowledges that. The sensible thing to do is /interpret the results according to their stated limitations/, not "be sceptical" in some generic and uncomprehending way.

    I think this best sums up the problem with your responses here: "The evidence that's placed in front of you isn't real, it is a guesstimate at best". Being an estimate doesn't make something not real. The data is real, the conclusions drawn from it are the estimates. Those are separate things. The fact that you're conflating them - even though the article is clear about its intent - indicates that the problem is with how you're thinking about and responding to the article, not the article itself. That's why I assumed you were working from a position of personal bias - regardless of that, you're definitely engaged in multiple layers of flawed reasoning.
  • Kangal - Friday, November 13, 2020 - link

    @Spunjji

    I agree, it is not the writers fault for having readers misinterpret some things. However, you continue fail to acknowledge that a writer actually has the means and opportunity to greatly limit such things. It is not about re-inventing the wheel, that's a fallacy. This is not about making misguided people change their minds, it is about allowing neutral readers be informed with either tangible facts, or putting disclaimers on claims or estimates. I even made things simple, said that Andrei simply needed to address that the figures are estimates so that the x86 comparisons aren't absurd.

    "You're holding it wrong" is an apt analogy. I'm not chewing on the phone, nor the company. I've already stated my reservations (they've lied before, and aren't afraid of exaggerating things). So you're misguided here, if you actually think I was even defending such a position. I actually think you need to increase your reading comprehension, something that you actually have grilled me on. Ironic.

    I have repeated myself several times, there are some key points that need to be addressed (eg/ legacy program performance, real-world applications, multi-threaded, synthetic benchmarks, and user experience). None of these have been addressed. You said the article acknowledges this, yet you haven't quoted anything. Besides, my point was this point needed to be stressed in the article multiple times, not just an off-hand remark (and even that wasn't made).

    Being an estimate doesn't make something not real. Well, sure it does. I can make estimates about a certain satellites trajectory, yet it could all be bogus. I'm not conflating the issue, you have. I've displayed how the information presented could be misinterpreted. This is not flawed reasoning, this is giving you an example of how loosely this article has been written. I never stated that I've misinterpreted it, because I'm a skeptical individual and prefer to dive deeper, read back my comments and you can see I've been consistent on this point. Most other readers can and would make that mistake. And you know what, a quick look on other sites and YouTube, well it shows that is exactly what has happened (there are people thinking the MBA is faster than almost all high-end desktops).

    I actually do believe that some meaningful insights can be gathered by guesstimates. Partial information can be powerful, but it is not the complete information. Hence, estimated need to be taken with a pinch of salt, sometimes a little, other times a lot. Any professional who's worth their salt (pun intended) will make these disclaimers. Even when writing Scientific Articles, we're taught to always put disclaimers when making interpretations. Seeing the quality of writing drop on Anandtech begs one to not defend them, but to pressure them instead to improve.
  • varase - Wednesday, November 11, 2020 - link

    Remember that M1 has a higher number of Firestorm cores which produce more heat - though not as much as x86 cores.

    There may be some throttling going on - especially on the fanless laptop (MacBook Air?).

    Jeez ... think of those compute numbers on a fanless design. Boggles the mind.

    Whenever you compare computers in the x86 world at any performance level at all, the discussion inevitably devolves into, "How good is the cooling?" Now imagine a competitor who can get by with passive heat pipe/case radiation cooling - and still sustain impressive compute numbers. Just the mechanical fan energy savings alone can go a good way to preserving battery life, not to mention a compute unit with such a lower TDP.
  • hecksagon - Tuesday, November 10, 2020 - link

    These benchmarks don't show time as an axis. Yes the A14 can compete with an i7 laptop in bursty workloads. Once the iPhone gets heat soaked performance starts to tank pretty quickly. This throttling isn't represented in the charts because these are short benchmarks and the performance isn't plotted over time.
  • Zerrohero - Wednesday, November 11, 2020 - link

    Do you think that the M1 performance with active cooling will “tank” like A14 performance does in an iPhone enclosure?

    Do you understand how ridiculous your point is?

Log in

Don't have an account? Sign up now