Core: It’s all in the Prefetch

In a simple CPU design, instructions are decoded in the core and data is fetched from the caches. In a perfect world, such as the Mill architecture, the data and instructions are ready to go in the lowest level cache at all times. This allows for the lowest latency and removes a potential bottleneck. Real life is not that rosy: it comes down to whether the core can predict what data it will need far enough in advance to pull it down to the lowest level of cache before it is actually used. Ideally it predicts the correct data without interfering with memory-sensitive programs. This is prefetch.
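
To make the idea concrete, here is a minimal sketch of a stride prefetcher in C: it watches the stream of load addresses and, once the same stride repeats, guesses the next address to fetch. The single-entry predictor, trigger condition, and 64-byte line size are illustrative assumptions rather than Intel's actual hardware.

```c
/* Illustrative single-entry stride prefetcher: NOT Intel's actual design.
 * It watches demand-load addresses; once the same non-zero stride repeats,
 * it predicts the next address and would issue a prefetch for it. */
#include <stdint.h>
#include <stdio.h>

#define LINE 64  /* assumed cache-line size in bytes */

typedef struct {
    uint64_t last_addr;   /* last load address seen              */
    int64_t  last_stride; /* distance between the last two loads */
} stride_predictor;

/* Feed one demand-load address; return the address to prefetch, or 0. */
static uint64_t observe_load(stride_predictor *p, uint64_t addr)
{
    int64_t stride = (int64_t)(addr - p->last_addr);
    uint64_t prefetch = 0;

    if (p->last_addr != 0 && stride != 0 && stride == p->last_stride)
        prefetch = addr + (uint64_t)stride;   /* predicted next access */

    p->last_stride = stride;
    p->last_addr   = addr;
    return prefetch;
}

int main(void)
{
    stride_predictor p = {0, 0};
    /* A program walking an array one cache line at a time. */
    for (uint64_t a = 0x1000; a < 0x1000 + 8 * LINE; a += LINE) {
        uint64_t pf = observe_load(&p, a);
        if (pf)
            printf("load %#llx -> prefetch %#llx\n",
                   (unsigned long long)a, (unsigned long long)pf);
    }
    return 0;
}
```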

The Core microarchitecture added multiple prefetchers to the design, along with improved prefetch algorithms, to a degree not seen before on a consumer core. Each core has two data prefetchers and one instruction prefetcher, and another two serve the shared L2 cache: a total of eight for a dual-core CPU. The prefetchers are also instructed not to interfere with the 'on-demand' bandwidth requested by running software.

One other element to prefetch is the tag lookup used for cache indexing. Data prefetchers need to perform tag lookups just as demand accesses from running software do, so to avoid adding latency to the running program, the data prefetchers perform their lookups over the store port. As a general rule (at least at the time), loads happen about twice as often as stores, meaning the store port is generally more 'free' for the prefetchers to use for tag lookups. Stores are rarely critical to performance unless they back up so far that they stall the pipeline; in most cases the rest of the core carries on regardless, and the cache/memory sub-system is responsible for committing the store through the caches, so as long as that happens eventually the process works out.
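
As a rough picture of what a tag lookup involves, the sketch below splits an address into tag, set index, and line offset using the 32 KB, 8-way geometry of Core's L1 data cache mentioned later in this piece; the 64-byte line size is an assumption, and the model is deliberately simplified compared to the real indexing and translation path.

```c
/* Splitting an address into tag / set / offset for a cache lookup.
 * Geometry matches the 32 KB, 8-way L1 data cache described in this
 * article; the 64-byte line size is an assumption. Simplified model,
 * not the real indexing/translation path. */
#include <stdint.h>
#include <stdio.h>

#define LINE_BYTES  64                                  /* assumed line size */
#define CACHE_BYTES (32 * 1024)                         /* 32 KB L1D         */
#define WAYS        8                                   /* 8-way associative */
#define SETS        (CACHE_BYTES / (LINE_BYTES * WAYS)) /* = 64 sets         */

static void split_address(uint64_t addr)
{
    uint64_t offset = addr % LINE_BYTES;              /* byte within line   */
    uint64_t set    = (addr / LINE_BYTES) % SETS;     /* which set to probe */
    uint64_t tag    = addr / (LINE_BYTES * SETS);     /* compared vs 8 ways */

    printf("addr %#010llx -> tag %#llx, set %2llu, offset %2llu\n",
           (unsigned long long)addr, (unsigned long long)tag,
           (unsigned long long)set, (unsigned long long)offset);
}

int main(void)
{
    split_address(0xBEEF);
    split_address(0xBEEF + LINE_BYTES);  /* next line lands in the next set */
    return 0;
}
```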

Core: More Cache Please

Without access to a low-latency store of data and instructions, a fast core is almost worthless. The fastest, most expensive SRAMs sit closest to the execution ports, but they are also the smallest due to physical design limitations. As a result we get a nested cache hierarchy, where the data you need should sit in the lowest level possible and each higher level of cache is progressively further away. Any cycle spent waiting for data before an instruction can complete is a cycle lost unless the core has some other way of hiding it, so large, fast caches are ideal. The Core design, compared to both the previous NetBurst family and AMD's K8 'Hammer' microarchitecture, tried to swat a fly with a Buick.
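
The classic way to see how far away each level really is would be a pointer-chasing microbenchmark, where every load depends on the one before it. A minimal sketch, with illustrative sizes and a simple strided chain (a real test would randomize the chain to defeat the prefetchers):

```c
/* Pointer-chasing sketch: each load depends on the previous one, so the
 * time per step approaches the latency of whichever cache level the
 * working set fits in. Sizes and timing are illustrative only; a real
 * benchmark would randomize the chain to defeat the prefetchers. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ELEMS (1u << 20)   /* working set; shrink to fit in L1 or L2 */
#define STEPS (1 << 24)    /* dependent loads to time                */

int main(void)
{
    size_t *chain = malloc(ELEMS * sizeof *chain);
    if (!chain) return 1;

    /* Build a strided cycle through the buffer, one cache line per hop. */
    for (size_t i = 0; i < ELEMS; i++)
        chain[i] = (i + 64 / sizeof *chain) % ELEMS;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    size_t idx = 0;
    for (long i = 0; i < STEPS; i++)
        idx = chain[idx];              /* serialized, latency-bound loads */

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);

    printf("%.2f ns per dependent load (idx=%zu)\n", ns / STEPS, idx);
    free(chain);
    return 0;
}
```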

Core provided a 4 MB Level 2 cache shared between two cores, with a 12-14 cycle access time. The shared arrangement allows either core to use more than 2 MB of L2 if needed, something Presler's split caches did not allow. Each core also gets a 3-cycle 32 KB instruction cache and 32 KB data cache, far larger than NetBurst's L1 arrangement, and the L1 data TLB grows to 256 entries, compared to 8 before. Both the L1 and L2 are accessed over a 256-bit interface, giving good bandwidth into the core.
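
Some back-of-the-envelope arithmetic on those figures, assuming 4 KB pages and 64-byte cache lines (both assumptions, as the article does not state them):

```c
/* Back-of-the-envelope numbers for the figures quoted above, assuming
 * 4 KB pages and 64-byte cache lines (page and line sizes are assumptions). */
#include <stdio.h>

int main(void)
{
    const int l1_bytes = 32 * 1024;        /* 32 KB L1 data cache */
    const int l2_bytes = 4 * 1024 * 1024;  /* 4 MB shared L2      */
    const int dtlb     = 256;              /* L1 data TLB entries */
    const int page     = 4 * 1024;         /* assumed page size   */

    printf("L1D lines      : %d\n", l1_bytes / 64);
    printf("L2 lines       : %d\n", l2_bytes / 64);
    printf("DTLB reach     : %d KB\n", dtlb * page / 1024);  /* 1 MB  */
    printf("8-entry reach  : %d KB\n", 8 * page / 1024);     /* 32 KB */
    printf("L1/L2 interface: %d bytes per cycle\n", 256 / 8);
    return 0;
}
```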

Note that AMD's K8 still held a few advantages over Core. The 2-way 64 KB L1 caches on K8 have a slightly better hit rate than the 8-way 32 KB L1 caches on Core, with similar latency. K8 also used an on-die memory controller, lowering memory latency significantly, even though Core's faster FSB (relative to NetBurst) narrowed the gap. As stated in our microarchitecture overview at the time, the Athlon 64 X2's memory advantage had gotten smaller, but a key part of the story is that these advantages were negated by other memory sub-system metrics, such as prefetching. Measured by ScienceMark, the Core microarchitecture's L1 cache delivered about twice the bandwidth of the Athlon's, and its L2 cache was about 2.5 times faster.
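
A bandwidth test of the kind ScienceMark reports can be approximated by streaming repeatedly over a buffer small enough to stay resident in the cache level of interest; a minimal sketch with illustrative sizes:

```c
/* Rough sketch of a cache-bandwidth test of the kind ScienceMark reports:
 * stream repeatedly over a buffer sized to stay resident in the cache
 * level of interest and divide bytes read by elapsed time. Sizes are
 * illustrative, and a dependent-sum loop understates peak bandwidth. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define BUF_BYTES (24 * 1024)   /* < 32 KB so the buffer stays in L1 */
#define REPEATS   100000

int main(void)
{
    size_t n = BUF_BYTES / sizeof(double);
    double *buf = malloc(n * sizeof *buf);
    if (!buf) return 1;
    for (size_t i = 0; i < n; i++) buf[i] = (double)i;

    volatile double sink = 0.0;  /* keeps the reads from being optimized out */
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < REPEATS; r++) {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++)
            sum += buf[i];       /* sequential, read-bandwidth-bound */
        sink += sum;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    double gbs = (double)BUF_BYTES * REPEATS / sec / 1e9;
    printf("~%.1f GB/s read bandwidth (checksum %.0f)\n", gbs, sink);
    free(buf);
    return 0;
}
```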

Comments

  • perone - Friday, July 29, 2016 - link

    My E6300 is still running fine in a PC I have donated to a friend.
    It was set to 3GHz within a few days from purchase and never moved from that speed.
    Once or twice I changed the CPU fan as it was getting noisy.

    Great CPU and great motherboard the Asus P5B
  • chrizx74 - Saturday, July 30, 2016 - link

    These PCs are still perfectly fine if you install an SSD. I did it recently on an Acer Aspire T671 desktop. After modding the BIOS to enable AHCI I put in an 850 Evo (runs at SATA 2 speed) and a pretty basic Nvidia GFX card. The system turned super fast and runs Windows 10 perfectly fine. You don't need faster processors; all you need is to get rid of the HDDs.
  • Anato - Saturday, July 30, 2016 - link

    I'm still running an AMD Athlon X2 4850 2.5 GHz as a file server + MythTV box. It supports ECC, is stable and has enough grunt to do its job, so why replace it? Yes, I could get a bit better energy efficiency, but in my climate heating is needed >50% of the time, and new hardware has its risks of compatibility issues etc.

    +10 for anandtech again, article was great as always!
  • serendip - Sunday, July 31, 2016 - link

    I'm posting this on a Macbook with an E6600 2.4 GHz part. It's still rockin' after six years of constantly being tossed into a backpack. The comparisons between C2D and the latest i5 CPUs don't show how good these old CPUs really are - they're slow for hard number crunching and video encoding but they're plenty fast for typical workday tasks like Web browsing and even running server VMs. With a fast SSD and lots of RAM, processor performance ends up being less important.

    That's too bad for Intel and computer manufacturers because people see no need to upgrade. A 50% performance boost may look like a lot on synthetic benchmarks but it's meaningless in the real world.
  • artifex - Monday, August 1, 2016 - link

    "With a fast SSD and lots of RAM, processor performance ends up being less important."

    I remember back when I could take on Icecrown raids in WoW with my T7200-based Macbook.
    And I actually just stopped using my T7500-based Macbook a few months ago. For a couple of years I thought about seeing if an SSD would perk it back up, but decided the memory bandwidth and size limitations, and the graphics, were just not worth the effort. Funny that you're not impressed by i5s; I use a laptop with an i5-6200U now. (Some good deals with those right now, especially if you can put up with the integrated graphics instead of a discrete GPU.) But then, my Macbooks were about 3 years older than yours :)
  • abufrejoval - Sunday, July 31, 2016 - link

    Replaced three Q6600 on P45 systems with socket converted Xeon X5492 at $60 off eBay each. Got 3.4GHz Quads now never using more than 60 Watts under Prime95 (150 Watts "official" TDP), with 7870/7950 Radeon or GTX 780 running all modern games at 1080p at high or ultra. Doom with Vulkan is quite fun at Ultra. Got my kids happy and bought myself a 980 ti off the savings. If you can live with 8GB (DDR2) or 16GB (DDR3), it's really hard to justify an upgrade from this 10 year old stuff.

    Mobile is a different story, of course.
  • seerak - Monday, August 1, 2016 - link

    My old Q6600 is still working with a friend.

    The laugher is that he (used to) work for Intel, and 6 months after I gave it to him in lieu of some owed cash, he bought a 4790K through the employee program (which isn't nearly as good as you'd think) and built a new system with it.

    The Q6600 works so well he's never gotten around to migrating to the new box - so the 4790k is still sitting unused! I'm thinking of buying it off him. I do 3D rendering and can use the extra render node.
  • jeffry - Monday, August 1, 2016 - link

    That's a good point. Like, answering the question "are you willing to pay $800 for a new CPU to double the computer's speed?" Most consumers say no. It all comes down to the mass market price.
  • wumpus - Thursday, August 4, 2016 - link

    Look up what Amazon (and anybody else buying a server) pays for the rest of the computer and tell me they won't pay $800 (per core) to double the computer's speed. It isn't a question of cost, Intel just can't do it (and nobody else can make a computer as fast as Intel, although IBM seems to be getting close, and AMD might get back in the "almost as good for cheap" game).
  • nhjay - Monday, August 1, 2016 - link

    The Core 2 architecture has served me well. Just last year I replaced my server at home which was based on a Core 2 Duo E6600 on a 965 chipset based motherboard. The only reason for the upgrade is that the CPU was having a difficult time handling transcoding jobs to several Plex clients at once.

    The desktop PC my kids use is Core 2 based, though slightly newer. It's a Core 2 Quad Q9400-based machine. It is the family "gaming" PC, if you dare call it that. With a GT 730 in it, it runs the older games my kids play very well, and Windows 10 hums along just fine.
