Core: Load Me Up

When discussing the size of the reorder buffer, I mentioned that for ops that rely on the results of others, the order in which they are processed has to remain consistent: the load in the second op has to follow the store from the first in order for the calculation to be correct. This works for data that is read from and written to the same location in the same data stream; with other operations, however, the memory addresses for loads and stores are not known until they pass the address generation units (AGUs).

This makes reordering a problem at a high level. You ultimately do not want a memory location to be written to by two different operations at the same time, or the same memory address to be used by different ops while one of those ops is sitting in the reorder queue. When a load micro-op enters the buffer, the memory addresses of previous stores are not known until those stores pass the AGUs. Note that this applies to memory addresses in the caches as well as in main memory. However, if one can speed up loads and reduce load latency in the buffers, this typically has a positive impact in most software scenarios.
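To make the hazard concrete, here is a toy sketch of why a load that slips ahead of an older store to the same address reads stale data. The micro-op tuples and addresses are invented for illustration; this is nothing like the real hardware structures:

```python
def execute(ops, memory):
    """Execute ('store', addr, val) / ('load', addr) micro-ops in the
    order given, returning the value each load observed."""
    loaded = []
    for op in ops:
        if op[0] == "store":
            _, addr, val = op
            memory[addr] = val          # store writes the location
        else:
            _, addr = op
            loaded.append(memory.get(addr, 0))  # load reads whatever is there now
    return loaded

# The load depends on the store to the same address (0x40).
program = [("store", 0x40, 7), ("load", 0x40)]

in_order  = execute(program, {0x40: 1})        # correct order: load sees 7
reordered = execute(program[::-1], {0x40: 1})  # load hoisted above the store: sees stale 1

print(in_order, reordered)  # [7] [1]
```

The same hoist is harmless when the two ops touch different addresses, which is exactly why letting loads move early is tempting.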

With Core, Intel introduced 'New Memory Disambiguation'. For lack of a better analogy, this means the issue of loads preceding stores is given a 'do it and we'll clean up after' approach. Intel stated at the time that the risk of a load reading a value from an address that is still being written by an unfinished store is pretty small (1-2%), and the chance decreases with larger caches. So allowing loads to go ahead of stores gives a speedup, but there has to be a safety net for when it goes wrong. For this, a dynamic alias predictor tries to spot the issue in advance; when a conflict does occur, the load has to be repeated, with a penalty of about 20 cycles.

Unofficial AnandTech Diagram

The predictor gives permission for a load to move ahead of a store, and after execution the conflict logic scans the Memory reOrder Buffer (MOB) to detect an issue. If one is found, the load is reprocessed back up the chain. In the worst-case scenario this might reduce performance, but as Johan said back in 2006: 'realistically it is four steps forward and one step back, resulting in a net performance boost'.
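As a hedged sketch of that 'do it and clean up after' flow: the toy function below lets a load run ahead of older stores whose addresses are unknown, then checks a stand-in for the MOB once those addresses resolve. The ~20-cycle replay figure comes from the text above; every other structure and number is invented:

```python
REPLAY_PENALTY = 20  # approximate replay cost quoted in the article

def speculative_load(load_addr, resolved_store_addrs, cycles_per_op=1):
    """Execute a load speculatively, then scan the (toy) memory order
    buffer for an older store to the same address. Returns whether a
    conflict was found and the total cycles the load ended up costing."""
    cycles = cycles_per_op                       # the speculative load itself
    conflict = load_addr in resolved_store_addrs  # MOB scan after AGU resolution
    if conflict:
        cycles += REPLAY_PENALTY                  # flush and replay the load
    return conflict, cycles

print(speculative_load(0x100, {0x200, 0x300}))  # (False, 1)  - the common case
print(speculative_load(0x100, {0x100}))         # (True, 21)  - rare alias, replayed
```

In the common (98-99%) case the load finishes without waiting on older stores; only the rare alias pays the replay.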

Using this memory disambiguation technique, Intel reported a 40% performance boost purely from allowing loads to be more flexible in a number of synthetic tests (or 10-20% in real-world workloads), along with L1 and L2 performance boosts. It is worth noting that this feature affects INT workloads more often than FP workloads, purely on the basis that FP workloads tend to be more ordered by default. This is why AMD's K8 lost ground to Intel on INT workloads, despite having a lower latency memory system and more INT resources, but stayed on track with FP.
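A quick back-of-the-envelope calculation shows why a 1-2% conflict rate with a ~20-cycle replay still nets out positive. The 5-cycle saving per successfully hoisted load is an assumed figure purely for illustration, not an Intel number:

```python
def net_gain(saving_when_right, mispredict_rate, penalty=20):
    """Expected cycles gained per hoisted load: win most of the time,
    pay the replay penalty on the rare alias conflict."""
    return (1 - mispredict_rate) * saving_when_right - mispredict_rate * penalty

for rate in (0.01, 0.02):
    print(rate, net_gain(saving_when_right=5, mispredict_rate=rate))
# 0.01 -> 4.75 cycles gained per load on average
# 0.02 -> 4.50 cycles gained per load on average
```

Even at the pessimistic end of Intel's quoted conflict rate, the expected value stays firmly positive, which is Johan's 'four steps forward and one step back' in arithmetic form.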

Core: No Hyper-Threading, No Integrated Memory Controller

In 2016, HT and an integrated memory controller (IMC) are part of the fundamental x86 microarchitecture in the processors we can buy. It can seem crazy that one of the most fundamental upticks in x86 performance in the last decade lacked both of these features. At the time, Intel gave reasons for each.

Simultaneous multithreading (SMT), branded Hyper-Threading by Intel and amounting to two threads funneling data through a single core, requires large buffers to cope with the potential doubling of data in flight, and arguably halves the cache resources available to each thread, producing more cache pressure. However, Intel gave different reasons at the time: while SMT could give up to a 40% performance boost, Intel only saw it as a positive in server applications. Intel also said that SMT makes hotspots even hotter, meaning that consumer devices would become power hungry and hot without any reasonable performance improvement.

On the IMC, Intel stated at the time that it had two options: an IMC, or a larger L2 cache. Which would have been better is a matter for debate, but in the end Intel went with a 4 MB L2 cache. Such a cache uses less power than an IMC, and leaving the memory controller on the chipset allows for a wider range of supported memory types (in this case DDR2 for consumers and FB-DIMM for servers). However, an on-die IMC improves memory latency significantly, and Intel stated that techniques such as memory disambiguation and improved prefetch logic could soak up much of that disparity.
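The trade-off can be sketched with a rough average memory access time (AMAT) model. Every hit rate and latency below is invented for illustration (not measured Core or K8 data); the point is only that a bigger cache and a lower-latency memory path pull on the same average:

```python
def amat(l2_hit_rate, l2_latency, mem_latency):
    """Average memory access time in cycles for a simple two-level model:
    hits served by the L2, misses going out to main memory."""
    return l2_hit_rate * l2_latency + (1 - l2_hit_rate) * mem_latency

# Option A: larger L2 (higher hit rate) but slower chipset memory controller.
big_l2_chipset_mc = amat(l2_hit_rate=0.97, l2_latency=14, mem_latency=110)

# Option B: smaller L2 (lower hit rate) but faster on-die IMC.
small_l2_on_die_imc = amat(l2_hit_rate=0.95, l2_latency=14, mem_latency=70)

print(round(big_l2_chipset_mc, 2), round(small_l2_on_die_imc, 2))
```

With these made-up numbers the two options land within a fraction of a cycle of each other, which is why the choice was genuinely debatable; better disambiguation and prefetching effectively nudge the hit-rate side of the equation upward.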

As we now know, on-die IMCs are the big thing.

  • Hrel - Thursday, July 28, 2016 - link

    10 years to double single core performance, damn. Honestly thought Sandy Bridge was a bigger improvement than that. Only 4 times faster in multi-core too.

    Glad to see my 4570S is still basically top of the line. Kinda hard to believe my 3 year old computer is still bleeding edge, but I guess that's how little room for improvement there is now that Moore's law is done.

    Guess if Windows 11 brings back normal functionality to the OS and removes "apps" entirely I'll have to upgrade to a DX12 capable card. But I honestly don't think that's gonna happen.

    I really have no idea what I'm gonna do OS wise. Like, I'm sure my computers won't hold up forever. But Windows 10 is unusable and Linux doesn't have proper support still.

    Computer industry, once a bastion of capitalism and free markets, rife with options and competition, has now become truly monolithic. Guess I'm just lamenting the old days, but at the same time I am truly wondering how I'll handle my computing needs in 5 years. Windows 10 is totally unacceptable.
  • Michael Bay - Thursday, July 28, 2016 - link

    I like how desperate you anti-10 shills are getting.
    More!
  • Namisecond - Thursday, July 28, 2016 - link

    I do not think that word means what you think it means...
  • TormDK - Thursday, July 28, 2016 - link

    You are right - there is not going to be a Windows 11, and Microsoft is not moving away from "apps".

    So you seem stuck between a rock and a hard place if you don't want to go on Linux or a variant, and don't want to remain in the Microsoft ecosystem.
  • mkaibear - Thursday, July 28, 2016 - link

    >Windows 10 is unusable

    Now, just because you're not capable of using it doesn't mean everyone else is incapable. There are a variety of remedial computer courses available, why not have a word with your local college?
  • AnnonymousCoward - Thursday, July 28, 2016 - link

    4570S isn't basically top of the line. It and the i5 are 65W TDP. The latest 91W i7 is easily 33% faster. Just run the benchmark in CPU-Z to see how you compare.
  • BrokenCrayons - Thursday, July 28, 2016 - link

    Linux Mint has been my primary OS since early 2013. I've been tinkering with various distros starting with Slackware in the late 1990s as an alternative to Windows. I'm not entirely sure what you mean by "doesn't have proper support" but I don't encourage people to make a full conversion to leave Windows behind just because the current user interface isn't familiar.

    There's a lot more you have to figure out when you switch from Windows to Linux than you'd need to learn if going from say Windows 7 to Windows 10 and the transition isn't easy. My suggestion is to purchase a second hand business class laptop like a Dell Latitude or HP Probook being careful to avoid AMD GPUs in doing so and try out a few different mainstream distros. Don't invest a lot of money into it and be prepared to sift through forums seeking out answers to questions you might have about how to make your daily chores work under a very different OS.

    Even now, I still keep Windows around for certain games I'm fond of but don't want to muck around with in Wine to make work. Steam's Linux-friendly list has gotten a lot longer in the past couple of years thanks to Valve pushing Linux for the Steam Box, and I think by the time Windows 7 is no longer supported by Microsoft, I'll be perfectly happy leaving Windows completely behind.

    That said, 10 is a good OS at its core. The UI doesn't appeal to everyone and it most certainly is collecting and sending a lot of data about what you do back to Microsoft, but it does work well enough if your computing needs are in line with the average home user (web browsing, video streaming, gaming...those modest sorts of things). Linux can and does all those things, but differently using programs that are unfamiliar...oh and GIMP sucks compared to Photoshop. Just about every time I need to edit an image in Linux, I get this urge to succumb to the Get Windows 10 nagware and let Microsoft go full Big Brother on my computing....then I come to my senses.
  • Michael Bay - Thursday, July 28, 2016 - link

    GIMP is not the only, ahem, "windows ecosystem alternative" that is a total piece of crap on loonixes. Anything outside of the browser window sucks, which tends to happen when your code maintainers are all dotheads and/or 14 years old.
  • Arnulf - Thursday, July 28, 2016 - link

    I finally relegated my E6400-based system from its role as my primary computer and bought a new one (6700K, 950 Pro SSD, 32 GB RAM) a couple of weeks ago.

    While the new one is certainly faster at certain tasks the biggest advantage for me is significantly lower power consumption (30W idle, 90W under load versus 90W idle and 160-180W under load for the old one) and consequently less noise and less heat generation.

    Core2 has aged well for me, especially after I added a Samsung 830 to the system.
  • Demon-Xanth - Thursday, July 28, 2016 - link

    I still run an i5-750, NVMe is pretty much the only reason I want to upgrade at all.
