Core: Load Me Up

When discussing the size of the reorder buffer, I mentioned that for ops that depend on the results of others, the order in which they are processed has to remain consistent: the load for the second op has to follow the store from the first for the calculation to be correct. This is straightforward when data is read from and written to the same location in the same data stream; for other operations, however, the memory addresses of loads and stores are not known until they pass through the address generation units (AGUs).

This makes reordering a problem at a high level. You ultimately do not want the same memory location to be written by two different operations at the same time, or the same memory address to be read and written by different ops while one of those ops is sitting in the reorder buffer. When a load micro-op enters the buffer, the addresses of earlier stores are not yet known; they only become known once those stores pass through the AGUs. Note that this applies to memory addresses in the caches as well as in main memory. Nonetheless, if loads and load latency in the buffers can be sped up, this typically has a positive impact in most software scenarios.

With Core, Intel introduced a ‘New Memory Disambiguation’. For lack of a better analogy, this gives the issue of loads preceding stores a ‘do it and we’ll clean up after’ treatment. Intel stated at the time that the risk of a load reading a value from an address that is still being written by an unfinished store is small (1-2%), and that the chance decreases with larger caches. Allowing loads to go ahead of stores therefore provides a speedup, but there has to be a safety net for when it goes wrong. To that end, a dynamic alias predictor tries to spot the conflict in advance. If a mis-speculation happens anyway, the load has to be repeated, with a penalty of about 20 cycles.

Unofficial AnandTech Diagram

The predictor gives permission for a load to move ahead of a store, and after execution the conflict logic scans the Memory reOrder Buffer (MOB) to detect a violation. If one is found, the load is reprocessed back up the chain. In the worst case scenario this reduces performance, but as Johan said back in 2006: ‘realistically it is four steps forward and one step back, resulting in a net performance boost’.

Using this memory disambiguation technique, Intel reported a 40% performance boost in a number of synthetic workloads (10-20% in the real world) purely from allowing loads to be more flexible, along with L1 and L2 performance boosts. It is worth noting that this feature affects INT workloads more often than FP workloads, purely on the basis that FP workloads tend to be more ordered by default. This is why AMD’s K8 lost ground to Intel on INT workloads, despite having a lower latency memory system and more INT resources, but stayed on track with FP.

Core: No Hyper-Threading, No Integrated Memory Controller

In 2016, HT and an integrated memory controller (IMC) are now part of the fundamental x86 microarchitecture in the processors we can buy. It may seem crazy that one of the most fundamental upticks in x86 performance in the last decade lacked these two features. At the time, Intel gave reasons for both.

Hyper-Threading, Intel's implementation of simultaneous multithreading (the act of funneling two threads through a single core), requires large buffers to cope with the potential doubling of in-flight data, and arguably halves the resources each thread gets in the caches, producing more cache pressure. However, Intel gave different reasons at the time: while SMT could give a 40% performance boost, it was only seen as a positive by Intel in server applications. Intel also said that SMT makes hotspots even hotter, meaning that consumer devices would become power hungry and hot without any reasonable performance improvement.

On the IMC, Intel stated at the time that they had two options: an IMC, or a larger L2 cache. Which one would have been better is a matter for debate, but Intel in the end went with a 4 MB L2 cache. Such a cache uses less power than an IMC, and leaving the memory controller on the chipset allows for a wider range of supported memory types (in this case DDR2 for consumers and FB-DIMM for servers). However, having an IMC on die improves memory latency significantly, and Intel stated that techniques such as memory disambiguation and improved prefetch logic could soak up this disparity.

As we now know, on-die IMCs are the big thing.


  • e1jones - Wednesday, July 27, 2016 - link

    My E8400 is still my daily driver, 4x 2gb and an SSD swapped in later as the boot drive. Still runs great, except it tends to get bogged down by the TrustedInstaller and the Firefox memory leaks.
  • rarson - Friday, August 5, 2016 - link

    I've got an E8600 in an Abit IP35 Pro motherboard. I was having a hard time finding DDR2-1066 last I looked, so I settled for 800. With an SSD and 7870, it's surprising how well it still games. I don't think I'll upgrade the GPU again just due to the fact that I'm limited to PCI-e 2.
  • FourEyedGeek - Monday, August 8, 2016 - link

You could get a higher end GPU and still benefit from increased performance, then get a new CPU motherboard combo when you want to.
  • BrokenCrayons - Wednesday, July 27, 2016 - link

    I just upgraded out of a Q6600 and 4GB DDR2 about 2 months ago and I admit that I was still kicking around the idea of leaving it alone as I was pulling the motherboard out of the case. I replaced it with a cheap AMD 860k and 16GB DDR3 which really hasn't done a lot to improve the system's performance. In retrospect, I think I could realistically have squeezed another couple of years out of it, but the motherboard's NIC was iffy and I really wanted reliable ethernet.

    As for laptops, I've got a couple C2Ds kicking around that are perfectly adequate (T2310 & P8400) for daily use. I really can't see any point in replacing them just yet. Core was a good design through all its iterations.
  • Beany2013 - Wednesday, July 27, 2016 - link

    I like your style - rather than drop $100 on a midlevel intel NIC, you replace an entire platform.

    I strongly approve of these economics :-)
  • Michael Bay - Thursday, July 28, 2016 - link

    USB3 is kind of nice.
  • BrokenCrayons - Thursday, July 28, 2016 - link

    Well the NIC wasn't the only reason, but it was the last in a series of others that I was already coping with that tipped the scales. The upgrade was under $200 for the board, processor and memory so it really boiled down to one weekend dinner out to a mid-range restaurant. It was worth it for more reliable Steam streaming and fewer VNC disconnects as that wired ethernet port is the only means by which I regularly interact with my desktop since it has no monitor and is crammed into a corner in my utility room.
  • artk2219 - Friday, July 29, 2016 - link

Why didn't you go for an FX, if you don't mind me asking? You liked the FM2+ platform a bit better?
  • BrokenCrayons - Friday, July 29, 2016 - link

    Actually, I didn't give much of anything in the system a very close look before buying. I admittedly did about twenty minutes of research to make sure the 860k and the bottom feeder motherboard I'd picked would play nicely together before making a purchase. So the CPU & motherboard pair were the result of laziness and apathy rather than a preference for FM2+.
  • artk2219 - Monday, August 1, 2016 - link

    Ah ok gotcha, I just wanted to share that if you had a microcenter near you they sell FX 8320E's bundled with motherboards for 125 to 170 depending on which board you want to use. That can be quite the steal and a great base for a new cheap system once you bump the clocks on the 8320E.