Smarter Prefetching and Caching

Making sure that the right instructions and data are ready for use in the caches in the "3 GHz and beyond" era is one of the most important tasks of the architectural engineer. This helps to ensure that performance increases as clockspeeds get pushed higher; otherwise, higher CPU clockspeeds will simply result in the processor spending more time waiting for data. This technique of priming the caches is known as prefetching; however, the current hardware prefetching algorithms don't always lead to success. There are quite a few cases where they can actually lower performance, especially in bandwidth-sensitive applications.
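
Hardware prefetchers do their work transparently, but the basic idea is easy to illustrate in software. The sketch below uses GCC's __builtin_prefetch hint to request data a fixed number of iterations ahead of where it is actually consumed; the function name and the prefetch distance of 16 elements are arbitrary choices for illustration, not anything taken from the Core design.

    #include <stddef.h>

    /* Illustrative sketch: request data[i + 16] while data[i] is being
     * summed, so the cache line is (hopefully) resident by the time the
     * loop reaches it.  Hardware prefetchers do essentially the same
     * thing without any hints from the programmer. */
    double sum_with_prefetch(const double *data, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++) {
            if (i + 16 < n)
                __builtin_prefetch(&data[i + 16], 0, 1);  /* read, low temporal locality */
            sum += data[i];
        }
        return sum;
    }

Pick the distance badly and the prefetches evict useful data or compete with demand accesses for bandwidth - the same failure mode that makes overly aggressive hardware prefetching counterproductive.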

The Core architecture's prefetching is without a doubt superior to what can be found in the Athlon 64 and Pentium 4. There are no fewer than three prefetchers - two data, one instruction - in each core, plus two prefetchers for the L2 cache. With eight prefetchers active in one dual-core Core CPU, all those prefetchers could easily get in the way of the "demand" bandwidth - the bandwidth needed by the load operations of the running program. To avoid this bottleneck, the prefetch monitor of the Core CPUs always gives priority to the demand bandwidth; the prefetchers will never steal too much bandwidth away from the running program.

There is more. The data prefetcher needs to perform frequent tag lookups in the caches (tags are the address bits stored with each cache line to identify what it holds). To avoid these extra lookups adding latency to the "normal" tag lookups caused by the running program, the data prefetcher uses the store port for its tag lookups. If you remember, loads happen about twice as often as stores - in a statement like a[i] = b[i] + c[i], two values are loaded for every one stored - so the store port is used only about half as much as the load port, and it makes sense to let the prefetchers use that port for tag lookups. Note also that stores are not critical for performance in most cases: once the data is "written", the processor can go on about its business. The cache/memory subsystem is in charge of propagating the data down to main memory, and as long as this happens eventually, everything works fine.

The cache system of the Core CPU is also very impressive. A massive 4 MB L2 cache is shared between the two cores and is accessed in only 12 to 14 cycles. Each core also has a 3-cycle, 32 KB instruction cache and a 32 KB data cache at its disposal. Note that the "Trace Cache" of NetBurst has been left behind with the return to shorter pipelines; the NetBurst Trace Cache basically functions as an instruction cache for pre-decoded instructions, and while this was helpful for the long pipeline of NetBurst, Intel has apparently determined that a traditional L1 caching scheme makes more sense for Core.
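
Latency figures like these are typically obtained with a pointer-chasing microbenchmark: a chain of dependent loads forces each access to wait for the previous one, so the average time per iteration is the load-to-use latency of whichever level of the hierarchy the working set fits in. Below is a minimal sketch of the idea; the buffer size, iteration count, and use of clock() are arbitrary choices for illustration, and a serious tool would use cycle counters and a randomized chain so the prefetchers described above cannot hide the true latency.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define ELEMS 4096                      /* 4096 pointers = 32 KB on a 64-bit machine (L1-sized) */
    #define ITERS (100 * 1000 * 1000L)

    int main(void)
    {
        void **buf = malloc(ELEMS * sizeof(void *));
        for (long i = 0; i < ELEMS; i++)    /* build a simple circular chain of pointers */
            buf[i] = &buf[(i + 1) % ELEMS];

        void **p = (void **)buf[0];
        clock_t start = clock();
        for (long i = 0; i < ITERS; i++)    /* every load depends on the previous one */
            p = (void **)*p;
        double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;

        printf("%p: %.2f ns per dependent load\n", (void *)p, seconds / ITERS * 1e9);
        free(buf);
        return 0;
    }

Growing the working set so that it overflows the 32 KB L1 and later the 4 MB L2 is what exposes the jump from a 3-cycle hit, to a 12-14 cycle L2 access, to a full trip to main memory.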

Cache Architecture Overview


Just a quick look at the numbers in the table above makes it clear that the memory subsystem of the Core architecture is impressive. It has twice as much L2 cache as current dual-core CPUs (the same amount as Presler), but the cache is still accessible with low latency. The shared L2 cache also allows one core to use more than 2 MB of cache if necessary. Both the L1 and L2 caches are accessed via a 256-bit wide bus, allowing the caches to deliver massive bandwidth to the cores.

Core versus Hammer: Memory subsystem

Core's most important competitor, the Hammer ("K8") architecture, has two small but noteworthy advantages. The first is its bigger 2 x 64 KB L1 cache. This is only a small advantage, as an 8-way associative 32 KB cache will have a hit rate close to that of a 2-way associative 64 KB cache: the higher associativity largely compensates for the smaller size.

The second and more important advantage is the on-die memory controller, which lowers the latency to memory considerably. However, the lower clockspeeds of the Core CPUs (relative to NetBurst) and the faster FSB also lower latency significantly. With the numbers available to us now, we have reason to believe that the Athlon 64 X2's latency advantage will shrink to only 15 to 20% - which would put Core somewhere around 52 to 60 ns if the Athlon 64 stays at 45 to 50 ns. For comparison, the memory subsystem of the Pentium 4 was almost twice as slow as that of the Athlon 64 (80-90 ns versus 45-50 ns).

However, those two small advantages are likely negated by all the other memory subsystem metrics. The Core CPUs have much bigger caches and much smarter prefetching than the competition. The Core architecture's L1 cache delivers about twice as much bandwidth (as measured by ScienceMark), while its L2 cache is about 2.5 times faster than that of the Athlon 64/Opteron.
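
For reference, cache bandwidth numbers of this kind come from streaming loops that read a buffer sized to fit in the level of interest over and over. The sketch below shows the general shape of such a test; the 16 KB working set, repeat count, and use of clock() are arbitrary choices for illustration rather than how ScienceMark itself is implemented.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define WORDS (16 * 1024 / sizeof(double))   /* 16 KB working set: fits comfortably in a 32 KB L1 */
    #define REPEAT 200000

    int main(void)
    {
        double *buf = calloc(WORDS, sizeof(double));
        volatile double sink = 0.0;              /* keeps the compiler from removing the loop */

        clock_t start = clock();
        for (int r = 0; r < REPEAT; r++) {
            double sum = 0.0;
            for (size_t i = 0; i < WORDS; i++)   /* sequential reads from the same hot buffer */
                sum += buf[i];
            sink += sum;
        }
        double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;

        double bytes = (double)WORDS * sizeof(double) * REPEAT;
        printf("%.2f GB/s (sink=%f)\n", bytes / seconds / 1e9, sink);
        free(buf);
        return 0;
    }

Sizing the buffer so it spills out of the L1 into the L2, or out of the L2 into main memory, is what exposes the per-level bandwidth differences quoted above.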

Comments

  • BigT383 - Monday, May 1, 2006 - link

    I loved this article. It's due to articles like these that I've been reading Anandtech since before the days of the K6-2.
  • PandaBear - Monday, May 1, 2006 - link

    Of course Core should be better than K8, it better be.

    The only thing I am concerned about with the Core architecture is that with all this additional stuff, it will probably cost a lot to make - not just the CPU, but the MB and chipset will also be expensive with the additional high speed circuitry. That means it will probably cost more.

    K8 is 5 years old and it is not doing badly against the latest and greatest. If AMD has something in the pipeline that will be the next monster CPU, it will be great. What I am concerned about with AMD is whether they can keep their yields up and have enough $ left to design K9 and beyond. They can't just sit there and lose the momentum they've gained.
  • saratoga - Monday, May 1, 2006 - link

    Core is a pretty conservative design with a pretty small die for a new core. It should be very economical to produce - probably more so than the chips it's replacing.
  • IntelUser2000 - Monday, May 1, 2006 - link

    quote:

    Of course Core should be better than K8, it better be.

    The only thing I am concerned about with the Core architecture is that with all this additional stuff, it will probably cost a lot to make - not just the CPU, but the MB and chipset will also be expensive with the additional high speed circuitry. That means it will probably cost more.


    Not really. Not many expected that Intel would do more than increase clock speeds and cache sizes, since that's what they have been doing since the Pentium II.

    http://www.reghardware.co.uk/2006/04/05/intel_conr...

    The ASP went down. $530 for the fastest mainstream Conroe is rather good.
  • zsdersw - Monday, May 1, 2006 - link

    The pricing put out by Intel suggests that Core will be priced very aggressively. I can't see the 975 chipset costing significantly more than it does now when Core is released.

    The fact that Core is going to be built on Intel's 65nm process means that the "additional stuff" you refer to will cost less than it would if built on the 90nm process. And the die size probably grew a little, but not enough to offset the cost gains from the 65nm process.
  • xtremejack - Monday, May 1, 2006 - link

    K8 is only 3 years old. Didn't AMD celebrate the 3rd anniversary of Opteron a few days ago?
  • Griswold - Thursday, May 4, 2006 - link

    It's been sold for 3 years, but clearly the design is "a few days" older than that.
  • evident - Monday, May 1, 2006 - link

    As a junior computer engineer at Villanova University, I found this article to be really informative and an awesome read. It's really cool to see the differences between these CPU architectures, and it shows that they are actually teaching me something useful!
  • PeteRoy - Monday, May 1, 2006 - link

    How can you say NetBurst wasn't a huge success?

    I think NetBurst was a success when it was launched and it should have died sooner, but it was good for its time and now it will be replaced.
  • JarredWalton - Monday, May 1, 2006 - link

    NetBurst started at 1.5 GHz basically and topped out at 3.8 GHz. Compared to previous architectures, that's pretty tame. P6 went from 150 MHz to 1.26 GHz (and beyond if you want to count P-M). Success monetarily vs. success as an overall design are two different things, and clearly NetBurst ran into trouble. Where are the 5 GHz+ Tejas chips? Waiting somewhere beyond the thermal event horizon.... :)
