Core: Load Me Up

When discussing the size of the reorder buffer, I mentioned that for ops that rely on the data of others, the order in which they are processed has to remain consistent – the load for the second op has to follow the store from the first in order for the calculation to be correct. This works for data that is read from and written to the same location in the same data stream; with other operations, however, the memory addresses for loads and stores are not known until they pass the address generation units (AGUs).
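To make the aliasing problem concrete, here is a minimal C++ sketch (the function and variable names are purely illustrative, not from any Intel documentation): whether the load can safely be hoisted above the store depends entirely on whether the two pointers end up referring to the same address, and the hardware only learns that once the AGUs have produced both addresses.

```cpp
#include <cstdio>

// If 'dst' and 'src' alias, the load of *src must wait for the store to *dst;
// if they do not alias, the load could safely execute ahead of the store.
// The core cannot tell which case applies until both addresses are generated.
int store_then_load(int* dst, int* src, int value) {
    *dst = value;      // store
    return *src + 1;   // load that may or may not depend on the store above
}

int main() {
    int a = 5, b = 3;
    std::printf("%d\n", store_then_load(&a, &a, 10)); // aliasing case: prints 11
    std::printf("%d\n", store_then_load(&a, &b, 10)); // non-aliasing case: prints 4
    return 0;
}
```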

This makes reordering a problem at a high level. You ultimately do not want the same memory location to be written by two different operations at the same time, or the same memory address to be used by different ops while one of those ops is sitting in the reorder queue. When a load micro-op enters the buffer, the memory addresses of previous stores are not known until they have passed the AGUs. Note that this applies to memory addresses in the caches as well as in main memory. However, if one can speed up loads and load latency in the buffers, this typically has a positive impact in most software scenarios.

With Core, Intel introduced ‘New Memory Disambiguation’. For lack of a better analogy, this means that the issue of loads preceding stores is given a ‘do it and we’ll clean up after’ approach. Intel stated at the time that the risk of a load reading a value from an address that is still being written by an unfinished store is small (1-2%), and the chance decreases with larger caches. So allowing loads to go ahead of stores provides a speedup, but there has to be a safety net for when it goes wrong. A dynamic alias predictor is used to decide when this reordering is likely to be safe; if a conflict does slip through, the load has to be repeated, with a penalty of about 20 cycles.
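A rough back-of-the-envelope model shows why such a small conflict rate makes the gamble worthwhile. In the sketch below, the ~20-cycle replay penalty and the 1-2% conflict rates come from the figures above; the cycles saved by a successful speculation are an assumed number purely for illustration.

```cpp
#include <cstdio>
#include <initializer_list>

int main() {
    const double replay_penalty = 20.0;  // cycles lost when the speculation was wrong
    const double cycles_saved   = 5.0;   // assumed cycles gained when the speculation was safe

    // Expected gain per speculated load = P(safe) * saved - P(conflict) * penalty
    for (double p_conflict : {0.01, 0.02, 0.10}) {
        double expected_gain = (1.0 - p_conflict) * cycles_saved
                             - p_conflict * replay_penalty;
        std::printf("conflict rate %4.0f%% -> expected gain %+5.2f cycles per load\n",
                    p_conflict * 100.0, expected_gain);
    }
    return 0;
}
```

At 1-2% conflict rates the expected gain stays comfortably positive; only when conflicts become far more common does the 20-cycle replay start to dominate.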

Unofficial AnandTech Diagram

The predictor gives permission for a load to move ahead of a store, and after execution the conflict logic scans the Memory reOrder Buffer (MOB) to detect an issue. If a conflict is found, the load is reprocessed back up the chain. In the worst-case scenario this might reduce performance, but as Johan said back in 2006: ‘realistically it is four steps forward and one step back, resulting in a net performance boost’.
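The conflict check itself can be thought of as an address-overlap scan over the buffered accesses, something like the minimal sketch below (an illustration of the idea rather than Intel's actual MOB logic): once an older store's address resolves, it is compared against any younger load that was allowed to run ahead, and an overlap flags that load for replay.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// One entry per buffered memory access whose address has been resolved.
struct MemAccess {
    uint64_t addr;
    uint32_t size;
};

// Returns true if a speculatively completed load overlaps any older,
// later-resolving store, meaning it consumed stale data and must replay.
bool load_needs_replay(const MemAccess& load,
                       const std::vector<MemAccess>& older_stores) {
    for (const MemAccess& st : older_stores) {
        bool overlap = load.addr < st.addr + st.size &&
                       st.addr   < load.addr + load.size;
        if (overlap) {
            return true;  // conflict: the load ran ahead of a store it depended on
        }
    }
    return false;
}

int main() {
    std::vector<MemAccess> older_stores = {{0x1000, 4}, {0x2000, 8}};
    MemAccess conflicting = {0x2004, 4};  // overlaps the second store -> replay
    MemAccess safe        = {0x3000, 4};  // no overlap -> speculation paid off
    std::printf("conflicting: %d, safe: %d\n",
                load_needs_replay(conflicting, older_stores),
                load_needs_replay(safe, older_stores));
    return 0;
}
```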

Using this memory disambiguation technique, Intel reported a 40% performance boost in a number of synthetic tests purely from allowing loads to be more flexible (or 10-20% in real-world workloads), along with L1 and L2 performance boosts. It is worth noting that this feature affects INT workloads more often than FP workloads, purely on the basis that FP workloads tend to be more ordered by default. This is why AMD’s K8 lost ground to Intel on INT workloads, despite having a lower-latency memory system and more INT resources, but stayed on track with FP.

Core: No Hyper-Threading, No Integrated Memory Controller

In 2016, HT and an integrated memory controller (IMC) are part of the fundamental x86 microarchitecture in the processors we can buy. It may seem crazy to think that one of the most fundamental upticks in x86 performance in the last decade lacked these two features. At the time, Intel gave reasons for both.

Simultaneous multithreading in the form of Hyper-Threading, the act of having two threads funnel data through a single core, requires large buffers to cope with the potential doubling of data and arguably halves resources in the caches, producing more cache pressure. However, Intel gave different reasons at the time – while SMT gave a 40% performance boost, it was only seen as a positive by Intel in server applications. Intel also said that SMT makes hotspots even hotter, meaning that consumer devices would become power-hungry and hot without any reasonable performance improvement.

On the IMC, Intel stated at the time that they had two options: an IMC, or a larger L2 cache. Which one would be better is a matter for debate, but Intel in the end went with a 4 MB L2 cache. Such a cache uses less power than an IMC, and leaving the memory controller on the chipset allows support for a wider range of memory types (in this case DDR2 for consumers and FB-DIMM for servers). However, having an IMC on die improves memory latency significantly, and Intel stated that techniques such as memory disambiguation and improved prefetch logic could soak up this disparity.
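One way to frame the trade-off is a rough average-memory-access-time comparison. Every latency and hit rate in the sketch below is an assumed, illustrative number rather than a figure from Intel: a bigger L2 attacks the problem by missing less often, while an on-die IMC attacks it by making each miss cheaper.

```cpp
#include <cstdio>

// Average memory access time: the L2 access cost plus the
// miss-rate-weighted cost of going all the way out to DRAM.
double amat(double l2_hit_rate, double l2_latency, double dram_latency) {
    return l2_latency + (1.0 - l2_hit_rate) * dram_latency;
}

int main() {
    // Option A: large L2 (higher hit rate), memory controller on the chipset (higher DRAM latency).
    std::printf("large L2, chipset MC: %.1f cycles\n", amat(0.97, 14.0, 240.0));
    // Option B: smaller L2 (lower hit rate), on-die IMC (lower DRAM latency).
    std::printf("small L2, on-die IMC: %.1f cycles\n", amat(0.94, 14.0, 170.0));
    return 0;
}
```

With assumed numbers in that ballpark the two options land in the same neighbourhood, which is essentially the argument Intel was making: better prefetching and memory disambiguation could cover much of the latency gap left by keeping the memory controller on the chipset.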

As we now know, on-die IMCs are the big thing.

158 Comments

  • pixelstuff - Wednesday, July 27, 2016 - link

    I think Core2 essentially accelerated the market saturation we are seeing, causing the PC market to decline a bit. My Core2 E8400 still runs Windows 10 relatively fine, although I have built two more systems since because I like being near the cutting edge. However, I know quite a few people still using Core2 CPUs for their basic computing needs.

    There just haven't been any new apps that the entire world needs which are more resource-intensive than a word processor or web browser. So the PC replacement market has stagnated a bit.
  • stardude82 - Wednesday, July 27, 2016 - link

    Most Core processors are faster than the ho-hum Cherry Trail offerings you find in low-end PCs. So buying a new cute shiny black little box to replace your beige big box doesn't guarantee much.
  • boeush - Wednesday, July 27, 2016 - link

    It reads a little weird/myopic that only certain technologies are being considered while forecasting all the way out to 2030. For instance, lots of NAND/DRAM discussion but no mention of upcoming or already early-adoption tech like 3D XPoint or memristors, etc. No mention of optoelectronics (like photonic signalling on- and off-chip), no mention of III-V and other 'exotic' materials for chip manufacturing and improved frequency/power scaling (with focus instead devoted to feature sizes/stacking/platter size/defects.) And so on.

    I mean, if you're forecasting 5 years ahead, I'd understand. But talking about 15 years into the future but only extrapolating from what's on the market right now -- as opposed to what's in the labs and on drawing boards -- seems to be a little too pessimistic and/or myopic.
  • Ian Cutress - Wednesday, July 27, 2016 - link

    The full report mentions III-V and SiGe in the remit of future technologies. Anton and I are starting to discuss what parts we can pull out for individual news stories, so stay tuned.
  • Sam Snead - Wednesday, July 27, 2016 - link

    Heck, I still have my NexGen P110 CPU computer set up and run it once in a while. From 1996. Remember the VESA local bus video card? NexGen was later bought by AMD.
  • stardude82 - Wednesday, July 27, 2016 - link

    Ah, I remember Socket 7...
  • CoreLogicCom - Wednesday, July 27, 2016 - link

    I've still got a Dell E1705 laptop that I bought in 2006, which came with a Core Duo that I upgraded to a Core 2 Duo about 4 years in, and maxed the RAM to 4GB (from the 2GB it came with). It was decent, but really came alive when I put an SSD into it. I still use this laptop for basic stuff, and even some gaming (WoW and SWTOR) with the GeForce Go GPU. It's definitely long in the tooth, now running Windows 7 (it came with WinXP, but 10 is unsupported on the GPU even though there's a workaround). I'm thinking mobile Kaby Lake and mobile Pascal will be the next laptop I keep for another 10 years.
  • Nacho - Wednesday, July 27, 2016 - link

    Can you beat me?
    Last month I finally upgraded my primary rig from a C2D E4300 @ 2.7GHz! Memory started failing last year and I couldn't find cheap DDR2, so I was down to 2GB.
    Went for an i5-6500 and 16GB of DDR4. The difference is incredible!
  • Filiprino - Wednesday, July 27, 2016 - link

    So much time since reading Anand's article on Conroe.
  • 3ogdy - Wednesday, July 27, 2016 - link

    Great article, Ian! I've found it a very good read and it's always nice to take a look back and analyze what we've been through so far.
    I also wanna point out just a few mini-errors I've found in the article:
    The Core 2 processors all came from a 143mm2 die, compared TO the 162mm2 of Pentium D. /
    by comparison to the large die sizes we see IN 2016 for things like the P100 /
    whereas the popular Core 2 Duo E6400 at $224 WAS at the same price as the Core i5-6600.
    As we NOW know, on-die IMCs are the big thing.
    Geometrical Scaling when this could NO longer operate
    By 2020-25 device features will be REDUCED (?)
    On the later -> LATTER?

    Keep up the amazing work!
