Core: It’s all in the Prefetch

In a simple CPU design, instructions are decoded in the core and data is fetched from the caches. In a perfect world, such as the Mill architecture, the data and instructions are ready to go in the lowest level of cache at all times, which allows for the lowest latency and removes a potential bottleneck. Real life is not that rosy: it all comes down to whether the core can predict what data it will need, and whether it has enough time to pull that data down to the lowest level of cache it can before it is needed. Ideally it predicts the correct data, and does so without interfering with memory-sensitive programs. This is Prefetch.
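
To make the idea concrete, here is a minimal sketch of the same principle expressed in software, using the GCC/Clang __builtin_prefetch hint. Core's hardware prefetchers do this automatically, with no hints from the programmer; the loop below is only an illustration, and the prefetch distance of 16 elements is an arbitrary assumption rather than a tuned value.

    #include <stddef.h>

    /* Illustrative only: request data a fixed distance ahead of where the
     * loop is currently reading, so that by the time a value is needed it
     * is already sitting in the L1 data cache. */
    #define PREFETCH_DISTANCE 16   /* elements ahead; an arbitrary choice */

    double sum_with_prefetch(const double *data, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++) {
            if (i + PREFETCH_DISTANCE < n)
                /* args: address, 0 = read access, 1 = low temporal locality */
                __builtin_prefetch(&data[i + PREFETCH_DISTANCE], 0, 1);
            sum += data[i];
        }
        return sum;
    }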

The Core microarchitecture added multiple prefetchers to the design, as well as improved prefetch algorithms, on a scale not seen before in a consumer core. Each core has two data prefetchers and one instruction prefetcher, plus another two serve the shared L2 cache. That makes a total of eight for a dual-core CPU, and they are instructed not to interfere with ‘on-demand’ bandwidth from running software.
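
Intel has not published the exact algorithms behind these prefetchers, but a toy model of a stride-based data prefetcher gives a sense of what they do: watch the addresses of demand accesses, and once the same stride repeats, speculatively request the next address. Everything in the sketch below, including the confidence threshold of two, is an assumption for illustration rather than a description of Core's actual hardware.

    #include <stdint.h>

    /* Toy stride predictor: tracks the last address and stride seen, and
     * only predicts once the same stride has repeated, so that random
     * access patterns do not trigger useless prefetches. */
    typedef struct {
        uint64_t last_addr;    /* last demand access address          */
        int64_t  last_stride;  /* difference between the last two     */
        int      confidence;   /* how many times the stride repeated  */
    } stride_predictor;

    /* Called on each demand access; returns an address worth prefetching,
     * or 0 if there is no confident prediction yet. */
    static uint64_t on_demand_access(stride_predictor *p, uint64_t addr)
    {
        int64_t stride = (int64_t)(addr - p->last_addr);

        if (stride != 0 && stride == p->last_stride)
            p->confidence++;
        else
            p->confidence = 0;

        p->last_stride = stride;
        p->last_addr   = addr;

        return (p->confidence >= 2) ? addr + (uint64_t)stride : 0;
    }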

One other element of the prefetch is tag lookup for cache indexing. The data prefetchers need to do this, as does running software, so to avoid adding latency to the running program, the data prefetchers use the store port for their tag lookups. As a general rule (at least at the time), loads happen about twice as often as stores, meaning the store port is generally more ‘free’ to be used for tag lookups by the prefetchers. Stores are also less critical for most performance metrics: unless the system cannot process stores quickly enough and they back up the pipeline, the rest of the core will be doing useful work regardless. The cache/memory sub-system is in control of committing the store through the caches, so as long as this happens eventually the process works out.
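
The two-to-one rule of thumb is easy to see in ordinary code. The hypothetical loop below performs two loads per element while its running total stays in a register, so it issues no stores at all until the final result is written back; loops like this are why the load path is usually the busier one.

    #include <stddef.h>

    /* Each iteration: 2 loads (a[i] and b[i]), 0 stores. The accumulator
     * stays in a register, leaving the store port largely idle and free
     * for the prefetchers' tag lookups. */
    double dot(const double *a, const double *b, size_t n)
    {
        double acc = 0.0;
        for (size_t i = 0; i < n; i++)
            acc += a[i] * b[i];
        return acc;
    }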

Core: More Cache Please

Without access to a low-latency store of data and instructions, even a fast core is almost worthless. The most expensive SRAMs sit closest to the execution ports, but they are also the smallest due to physical design limitations. The result is a nested cache system where the data you need should be in the lowest level possible, with accesses to higher levels of cache sitting progressively further away. Any time spent waiting for data to complete a CPU instruction is time lost unless the core has some other way of hiding it, so large, fast caches are ideal. The Core design, compared against both the previous Netburst family and AMD’s K8 ‘Hammer’ microarchitecture, tried to swat a fly with a Buick.
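
One common way to see this hierarchy from software is a pointer chase: every load depends on the previous one, so the average time per iteration approximates the latency of whichever cache level the working set fits in. The sketch below is a rough, untuned version of that idea; the buffer sizes, iteration count, and use of clock() are illustrative assumptions, and a real measurement would need warm-up runs, core pinning, and a higher-resolution timer.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define ITERS (1u << 26)

    static volatile size_t g_sink;   /* keeps the chase from being optimized away */

    /* Average nanoseconds per dependent load for a working set of 'bytes'. */
    static double chase_ns(size_t bytes)
    {
        size_t n = bytes / sizeof(size_t);
        size_t *ring = malloc(n * sizeof(size_t));
        if (!ring) return -1.0;

        /* Link all elements into a single random cycle (Sattolo's algorithm)
         * so that hardware prefetchers cannot guess the next address. */
        for (size_t i = 0; i < n; i++)
            ring[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t tmp = ring[i]; ring[i] = ring[j]; ring[j] = tmp;
        }

        size_t idx = 0;
        clock_t start = clock();
        for (size_t i = 0; i < ITERS; i++)
            idx = ring[idx];
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

        g_sink = idx;
        free(ring);
        return secs * 1e9 / ITERS;
    }

    int main(void)
    {
        const size_t kb[] = { 16, 32, 256, 2048, 8192, 65536 };
        for (size_t i = 0; i < sizeof(kb) / sizeof(kb[0]); i++)
            printf("%6zu KB : %.2f ns per load\n", kb[i], chase_ns(kb[i] * 1024));
        return 0;
    }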

Core gave each pair of cores a shared 4 MB Level 2 cache, with a 12-14 cycle access time. This allows each core to use more than 2 MB of L2 if needed, something Presler did not allow. Each core also gets a 3-cycle, 32KB instruction + 32KB data L1 cache, compared to the much smaller L1 arrangement in Netburst, and supports 256 entries in the L1 data TLB, compared to 8. Both the L1 and L2 are accessed over a 256-bit interface, giving good bandwidth to the core.
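
To put those cycle counts in perspective, they can be converted to wall-clock time for a given core clock. The snippet below assumes a 2.66 GHz Conroe-class clock purely as an example; the exact nanosecond figures depend on the SKU.

    #include <stdio.h>

    int main(void)
    {
        const double ghz = 2.66;                /* assumed core clock (GHz) */
        const int    cycles[] = { 3, 12, 14 };  /* L1, L2 best, L2 worst    */
        const char  *label[]  = { "L1 hit  (3 cycles)",
                                  "L2 hit (12 cycles)",
                                  "L2 hit (14 cycles)" };

        /* latency in ns = cycles / clock frequency in GHz */
        for (int i = 0; i < 3; i++)
            printf("%s : %.2f ns\n", label[i], cycles[i] / ghz);
        return 0;
    }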

Note that AMD’s K8 still held a few advantages over Core. The 2-way 64KB L1 caches on AMD’s K8 have a slightly better hit rate than the 8-way 32KB L1 caches on Core, with similar latency. AMD’s K8 also used an on-die memory controller, lowering memory latency significantly, even though the faster FSB of Intel’s Core (relative to Netburst) had narrowed the gap. As stated in our microarchitecture overview at the time, the Athlon 64 X2’s memory advantage had gotten smaller, but a key element of the story is that these advantages were negated by other memory sub-system improvements, such as prefetching. Measured with ScienceMark, the Core microarchitecture’s L1 cache delivers twice the bandwidth of the Athlon’s, and its L2 cache is about 2.5 times faster.
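
For reference, cache bandwidth figures like these typically come from a streaming microbenchmark: sweep a buffer that fits inside the cache level of interest many times and divide the bytes read by the elapsed time. The sketch below shows the general shape of such a test; it is not ScienceMark, the buffer sizes and repeat counts are assumptions, and a serious benchmark would use wide vector loads and more careful timing.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static volatile double g_bw_sink;   /* defeats dead-code elimination */

    /* Approximate read bandwidth in GB/s for a buffer of 'bytes' bytes,
     * swept 'reps' times. Keep 'bytes' under 32 KB to stay in L1, or use
     * a few hundred KB to exercise the L2. */
    static double read_bw_gbs(size_t bytes, int reps)
    {
        size_t n = bytes / sizeof(double);
        double *buf = calloc(n, sizeof(double));
        if (!buf) return -1.0;

        double sum = 0.0;
        clock_t start = clock();
        for (int r = 0; r < reps; r++)
            for (size_t i = 0; i < n; i++)
                sum += buf[i];
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

        g_bw_sink = sum;
        free(buf);
        return ((double)bytes * reps) / secs / 1e9;
    }

    int main(void)
    {
        printf("~L1 : %.1f GB/s\n", read_bw_gbs(16 * 1024, 200000));
        printf("~L2 : %.1f GB/s\n", read_bw_gbs(512 * 1024, 10000));
        return 0;
    }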

158 Comments

  • pixelstuff - Wednesday, July 27, 2016 - link

    I think Core2 essentially accelerated the market saturation we are seeing, which is causing the PC market to decline a bit. My Core2 E8400 still runs Windows 10 relatively fine, although I have built two more since because I like being near the cutting edge. However, I know quite a few people still using Core2 CPUs for their basic computing needs.

    There just haven't been any new apps that are more resource intensive than a word processor or web browser which the entire world needs. So the PC replacement market has stagnated a bit.
  • stardude82 - Wednesday, July 27, 2016 - link

    Most Core processors are faster than the ho-hum Cherry Trail offerings you find in low-end PCs. So buying a new cute shiny black little box to replace your beige big box doesn't guarantee much.
  • boeush - Wednesday, July 27, 2016 - link

    It reads a little weird/myopic that only certain technologies are being considered while forecasting all the way out to 2030. For instance, lots of NAND/DRAM discussion but no mention of upcoming or already early-adoption tech like 3D XPoint or memristors, etc. No mention of optoelectronics (like photonic signalling on- and off-chip), no mention of III-V and other 'exotic' materials for chip manufacturing and improved frequency/power scaling (with focus instead devoted to feature sizes/stacking/platter size/defects.) And so on.

    I mean, if you're forecasting 5 years ahead, I'd understand. But talking about 15 years into the future but only extrapolating from what's on the market right now -- as opposed to what's in the labs and on drawing boards -- seems to be a little too pessimistic and/or myopic.
  • Ian Cutress - Wednesday, July 27, 2016 - link

    The full report mentions III-V and SiGe in the remit of future technologies. Anton and I are starting to discuss what parts we can pull out for individual news stories, so stay tuned.
  • Sam Snead - Wednesday, July 27, 2016 - link

    Heck I still have my Nexgen P110 cpu computer set up and run it once in awhile. From 1996. Remember the VESA local bus video card? Nexgen was later bought by AMD.
  • stardude82 - Wednesday, July 27, 2016 - link

    Ah, I remember Socket 7...
  • CoreLogicCom - Wednesday, July 27, 2016 - link

    I've still got a Dell E1705 laptop that I bought in 2006 which came with a Core Duo, which I upgraded to a Core 2 Duo about 4 years in, and maxed the RAM to 4GB (from the 2GB max it came with). It was decent, but really came alive when I put an SSD into it. I still use this laptop for basic stuff, and even some gaming (WoW and SWToR) with the GeForce Go GPU. It's definitely long in the tooth, now running Windows 7 (it came with WinXP, but 10 is unsupported on the GPU even though there's a workaround). I'm thinking mobile Kaby Lake and mobile Pascal will be the next laptop I keep for another 10 years.
  • Nacho - Wednesday, July 27, 2016 - link

    Can you beat me?
    Last month I finally upgraded my primary rig from a C2D E4300 @2.7Ghz! Memory started failing last year & I couldn't find cheap DDR2, so I was down to 2GB.
    Went for a i5 6500 and 16GB DDR4. The difference is incredible!
  • Filiprino - Wednesday, July 27, 2016 - link

    So much time since reading Anand's article on Conroe.
  • 3ogdy - Wednesday, July 27, 2016 - link

    Great article, Ian! I've found it a very good read and it's always nice to take a look back and analyze what we've been through so far.
    I also wanna point out just a few mini-errors I've found in the article:
    The Core 2 processors all came from a 143mm2 die, compared TO the 162mm2 of Pentium D. /
    by comparison to the large die sizes we see IN 2016 for things like the P100 /
    whereas the popular Core 2 Duo E6400 at $224 WAS at the same price as the Core i5-6600.
    As we NOW know, on-die IMCs are the big thing.
    Geometrical Scaling when this could NO longer operate
    By 2020-25 device features will be REDUCED (?)
    On the later -> LATTER?

    Keep up the amazing work!
