Conclusion

For anyone buying a new system today, the market is a little bleak. Anyone wanting a new GPU has to actively track stock levels, or drive to a local store when a delivery arrives. Casual buyers then either look to pre-built systems (which are also flying off the shelves), or simply hang on to what they have for another year.

But there is another way. I find that users fall into two camps.

The first camp takes the ‘upgrade everything at once’ approach. These users sell their old systems and buy almost everything anew. Depending on budget and savings, the result is probably a good-to-average system, and it means you get a solid run of what’s available at that time. It’s a multi-year upgrade cycle where you might get something good for that generation, and hopefully everything is balanced.

The other camp upgrades one piece at a time. This means that when it’s time to upgrade a storage drive, a memory kit, a GPU, or a CPU, you get the best you can afford at that moment. You might end up with an older CPU but a top-end GPU, good storage, and a good power supply, and then next time around it’s all about CPU and motherboard upgrades. This approach has the potential for more bottlenecks, but it means you often get the best of a generation, and each piece holds its resale value better.

In a time when we have limited GPUs available, I can very much see users going all out on the CPU/memory side of the equation, perhaps spending a bit extra on the CPU, while they wait for the graphics market to come back into play. After all, who really wants to pay $1300 for an RTX 3070 right now?

Performance and Analysis

The conclusions from our Core i7-11700K review are broadly applicable here. Intel’s Rocket Lake as a backported processor design has worked, but it has critical issues with efficiency and peak power draw. Compared to the previous generation, clock-for-clock performance gains are 16-22% in math workloads and 6-18% elsewhere; however, the loss of two cores really does restrict how much of a halo product it can be in light of what AMD is offering.

Rocket Lake makes good by offering PCIe 4.0, enabling new features like Gear ratios for the memory controller, and pushing for wider 2.5 gigabit Ethernet support; even so, it remains a tough sell. At the time we reviewed the Core i7-11700K, we didn’t know the pricing, and AMD’s stock levels were looking pretty bad, which would have made Intel the default choice. Since then, Intel's pricing has turned out reasonable for its performance relative to AMD (except for the Core i9), but AMD’s stock is now a lot more bountiful.

For anyone looking at the financials for Intel, the new processor is 25% bigger than before, but it is not being sold at as big a margin as you might expect. From discussions in the industry, it looks like retailers are getting roughly a 20%/80% split of Core i9 to Core i7 stock, indicating that Intel is going to be very focused on that Core i7 market around $400-$450. In that space, AMD and Intel both have well-performing products, however AMD holds a small overall lead and is much more efficient.

However, with the GPU market being so terrible, users could jump an extra $100 and get 50% more AMD cores. When AMD is in stock, Intel’s Rocket Lake is more about the platform than the processor. If I said that the Rocket Lake LGA1200 platform had no upgrade potential for users buying in today, an obvious response might be that neither does AM4, and you’d be correct. However, a user buying a Core i7-11700K on LGA1200 today, compared to a Ryzen 7 5800X customer on AM4, should note that the latter still has the opportunity to go to 16 cores if needed. Rocket Lake comes across with a lot of dead ends in that regard, especially as the next generation is meant to be on a new socket, and with supposedly new memory.

Rocket Lake: Failed Experiment, or Good Attempt?

For Intel, Rocket Lake is a dual-purpose design. On the one hand, it provides Intel with something to put into its desktop processor roadmap while the manufacturing side of the business is still getting sorted. On the other hand, it gives Intel a good marker in the sand for what it means to backport a processor.

Rocket Lake, in the context of backporting, has been a ‘good attempt’ – good enough to at least launch into the market. It does offer performance gains in several key areas, and it does bring AVX-512 to the consumer market, albeit at the expense of power. However, in a lot of the workloads people are running today, which aren’t AVX-512 enabled, there’s more performance to be had from older processors, or from the competition. Rocket Lake also gets you PCIe 4.0, but users might feel that is a small add-in when AMD offers PCIe 4.0, lower power, and better general performance for the same price.

Intel’s future is going to be full of processor cores built for multiple process nodes. What makes Rocket Lake different is that when the core was designed for 10nm, it was solely designed for 10nm, and no thought was ever given to a 14nm version. The results in this review show that this sort of after-the-fact backporting doesn’t really work, at least not at the level of die size, performance, and profit margin needed to move forward. It was a laudable experiment, but in the future, Intel will need to co-design with multiple process nodes in mind.

279 Comments


  • SystemsBuilder - Wednesday, March 31, 2021 - link

    and you are a Computer Science graduate? What Linus T. is saying is that AVX-512 is a power hog, and he is right about that. Linus T. is not saying that only "a couple dozen or so people" are able to program it. Power requirements and programming difficulty are 2 different things.
    On the second point, I 100% stand by that any decent Computer Science/Engineering graduate should be able to program AVX-512 effectively (overcoming difficulty, not power requirements).
    Also, I do program AVX-512 and I 100% stand by what I said. You just need to know what you are doing and vectorize your algorithms. If you use the good old sequential algorithms you will not achieve anything with AVX-512, but if you vectorize your classical algorithms you will achieve >100% benefits in many inner loops in so-called mainstream programming. AVX-512 can give you a 2x uplift if you know how to utilize both FMA units on ports 0+1 and 5, and it's not hard.
    Lastly, with decent negative AVX-512 offsets in BIOS, you can bring down the power utilization to OK levels AND still get 2x improvements in the inner loops (because of the vectorized algorithmic improvement).
  • Hifihedgehog - Wednesday, March 31, 2021 - link

    > and you are a Computer Science graduate?

    No, I am a Computer Engineering graduate. Sorry, but you are grasping at straws. Plus you are overcomplicating the obvious to try to be an Intel apologist. Just see this and this. Read it and weep. Intel flopped big time this release:

    https://i.imgur.com/HZVC03T.png

    https://i.imgflip.com/53vqce.jpg
  • SystemsBuilder - Wednesday, March 31, 2021 - link

    So fellow CS/CE grad. I'm not arguing that AVX-512 is a power hog (it is) or that the AVX-512 offsets slow down the rest of the CPU (they do). I am arguing that the premise that AVX-512 is supposed to be so incredibly hard that only a "couple dozen or so people" can do it is wrong today - Skylake-X with AVX-512 launched in 2017, for heaven's sake. Surely I can't be the only CS/CE guy who has figured it out by now. I mean really? When Ian wrote what Keller said (and keeps on writing it), that AVX-512 is sooo hard to do that only a few guys on the planet can do it well, my reaction was "let's see about that". I mean come on guys, really!
  • SystemsBuilder - Wednesday, March 31, 2021 - link

    More specifically, Linus is concerned that because you need to use negative offsets to keep the power utilization down when engaging AVX-512, it slows down everything else going on, i.e. the AVX-512 power requirements impact the overall CPU. The new core designs (already Cypress Cove maybe? but Sapphire Rapids definitely!) will allow AVX-512 workloads to run at one frequency (with lower negative offsets than, for instance, Skylake-X) and non-AVX-512 workloads at a different frequency on various cores, and keep within the power budget. This is ideal.
  • arashi - Wednesday, March 31, 2021 - link

    This belongs in r/ConfidentlyIncorrect and r/IAmVerySmart, anyone who thinks coding for AVX512 PROPERLY is doable by "any CS/CE major graduate worth their salt" would be laughed out of the industry.
  • Hifihedgehog - Wednesday, March 31, 2021 - link

    Exactly. The real reason for the nonsensical wall of text is SystemsBuilder is trying desperately to overexplain things to put lipstick on a pig. And he repeats himself too, like I am listening to an automated bot caught in a recursive loop, which is quite funny actually.
  • SystemsBuilder - Wednesday, March 31, 2021 - link

    So you are a CE major; have you actually tried to program in AVX-512? If not, try to do a matrix by matrix multiplication of 16x16 FP32 matrices for instance and come back. You'll notice an incredible performance increase. It's not lipstick on a pig, it actually is very powerful, especially when computing through large volumes of related data SIMD-style.
  • Meteor2 - Saturday, April 17, 2021 - link

    Disappointing response. You throw insults but not rebuttals.

    Me thinks SB has a point.
  • SystemsBuilder - Wednesday, March 31, 2021 - link

    really? and you are a CS graduate? have you tried?
  • MS - Tuesday, March 30, 2021 - link

    What the hell is that supposed to mean, that you can't get the frequency at 10 nm and therefore you have to stick with the 14 nm node? That's pure nonsense; AMD is at 7 nm and they are getting the target frequencies. Maybe stop spreading the Kool-Aid and call a spade a spade....
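
As a rough illustration of the kind of vectorized inner loop discussed in the thread above, here is a minimal AVX-512 dot-product sketch in C using compiler intrinsics. It assumes an AVX-512F capable CPU and a compiler exposing <immintrin.h>; the function name and the choice of two accumulators are illustrative only, one common way to keep two FMA pipes (the 'ports 0+1 and 5' point) busy, not anyone's actual code.

    #include <immintrin.h>
    #include <stddef.h>

    /* Dot product over n floats with AVX-512 FMAs.
       Two independent accumulators expose enough instruction-level
       parallelism to keep two FMA pipes busy on cores that have them. */
    float dot_avx512(const float *x, const float *y, size_t n)
    {
        __m512 acc0 = _mm512_setzero_ps();
        __m512 acc1 = _mm512_setzero_ps();
        size_t i = 0;
        for (; i + 32 <= n; i += 32) {            /* 32 floats per iteration */
            acc0 = _mm512_fmadd_ps(_mm512_loadu_ps(x + i),
                                   _mm512_loadu_ps(y + i), acc0);
            acc1 = _mm512_fmadd_ps(_mm512_loadu_ps(x + i + 16),
                                   _mm512_loadu_ps(y + i + 16), acc1);
        }
        float sum = _mm512_reduce_add_ps(_mm512_add_ps(acc0, acc1));
        for (; i < n; ++i)                        /* scalar tail */
            sum += x[i] * y[i];
        return sum;
    }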
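For the 16x16 FP32 matrix multiplication suggested a few comments up, a minimal sketch might look like the following (again AVX-512F intrinsics, hypothetical function name). Each 16-float row of B and C fits exactly in one 512-bit register, so the kernel reduces to a broadcast plus an FMA per element of A.

    #include <immintrin.h>

    /* C = A * B for 16x16 single-precision matrices.
       Row k of B and row i of C are each one __m512 (16 floats),
       so the inner loop is one broadcast and one FMA per A[i][k]. */
    void matmul16x16_avx512(const float A[16][16],
                            const float B[16][16],
                            float C[16][16])
    {
        for (int i = 0; i < 16; ++i) {
            __m512 acc = _mm512_setzero_ps();            /* row i of C */
            for (int k = 0; k < 16; ++k) {
                __m512 a = _mm512_set1_ps(A[i][k]);      /* broadcast A[i][k] */
                __m512 b = _mm512_loadu_ps(&B[k][0]);    /* row k of B */
                acc = _mm512_fmadd_ps(a, b, acc);        /* acc += A[i][k] * B[k][:] */
            }
            _mm512_storeu_ps(&C[i][0], acc);
        }
    }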
