L2 Cache: 4MB or 2MB?

Most of Intel's Core 2 Duo line features a single shared 4MB L2 cache, but the E6300 and E6400 are equipped with only a 2MB cache. Because the Core 2 Duo doesn't have an on-die memory controller, it is more dependent on large caches than AMD's Athlon 64 X2, which raises the question: how big a difference is there between the 2MB and 4MB L2 parts?

Our Core 2 Extreme X6800 (2.93GHz), like all X6800s, is unlocked, so we could use it to simulate any other Core 2 Duo clock speed. Once we got our hands on a Core 2 Duo E6300 (1.86GHz/2MB), we were in business.

The graph below covers every benchmark we ran on the processors in this review, showing the percent performance improvement in going from a Core 2 Duo E6300 (1.86GHz/2MB) to a hypothetical 1.86GHz Core 2 Duo with a 4MB L2 cache. The bar labeled "Average" is simply the average of all of the individual improvements graphed on the chart.
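As a quick illustration of how the chart's numbers are derived, here is a minimal sketch of the per-benchmark percent improvement and the "Average" bar. The scores in it are made-up placeholders rather than results from this review, and for lower-is-better results (such as encode times) the ratio would be inverted.

```c
#include <stdio.h>

/* Hypothetical example scores (higher is better), NOT the actual results
 * from this review: 2MB = E6300, 4MB = simulated 1.86GHz/4MB part. */
static const double score_2mb[] = { 100.0, 57.3, 210.0 };
static const double score_4mb[] = { 106.0, 60.1, 229.0 };

int main(void)
{
    const int n = (int)(sizeof(score_2mb) / sizeof(score_2mb[0]));
    double sum = 0.0;

    for (int i = 0; i < n; i++) {
        /* Percent improvement of the 4MB part over the 2MB part */
        double pct = (score_4mb[i] / score_2mb[i] - 1.0) * 100.0;
        printf("benchmark %d: +%.1f%%\n", i + 1, pct);
        sum += pct;
    }

    /* The "Average" bar is simply the mean of the individual improvements */
    printf("Average: +%.1f%%\n", sum / n);
    return 0;
}
```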

The 4MB L2 cache can increase performance by as much as 10% in some situations. That sort of improvement is definitely tangible, and as applications' working data sets grow, the advantage of the larger cache will only become more visible. Unfortunately, you do pay a premium for the added performance and future-proofing: the cheapest 4MB L2 part is the E6600, priced at $316.

If you're the type to upgrade often, the extra cache isn't worth it; the present-day performance increase isn't enough to justify the added cost. However, if this processor will be the basis of your system for the next several years, we'd strongly recommend picking a 4MB flavor of Core 2.

Comments

  • coldpower27 - Friday, July 14, 2006 - link

    Are those supposed to be there? They aren't functioning in Firefox 1.5.0.4.
  • coldpower27 - Friday, July 14, 2006 - link

    You guys fixed it, awesome.
  • Orbs - Friday, July 14, 2006 - link

    On "The Test" page (I think page 2), you write:

    please look back at the following articles:

    But then there are no links to the articles.

    Anyway, Anand, great report! Very detailed with tons of benchmarks using a very interesting gaming configuration, and this review was the second one I read (so it was up pretty quickly). Thanks for not sacrificing quality just to get it online first, and again, great article.

    Makes me want a Conroe!
  • Calin - Friday, July 14, 2006 - link

    Great article, and thanks for a well done job. Conroe is everything Intel's marketing machine showed it to be.
  • stepz - Friday, July 14, 2006 - link

    The Core 2 doesn't have lower memory latency than the K8. You're seeing the new advanced prefetcher in action. But don't just believe me, check with the SM2.0 author.
  • Anand Lal Shimpi - Friday, July 14, 2006 - link

    That's what Intel's explanation implied as well: when they're working well, the prefetchers remove the need for an on-die memory controller so long as the FSB isn't saturated. Inevitably there will be cases where AMD is still faster (from a pure latency perspective), but it's tough to say how frequently that will happen.

    Take care,
    Anand
  • stepz - Friday, July 14, 2006 - link

    Excuse me. You state "Intel's Core 2 processors now offer even quicker memory access than AMD's Athlon 64 X2, without resorting to an on-die memory controller." That is COMPLETELY wrong and misleading. (see: http://www.aceshardware.com/forums/read_post.jsp?i... )

    It would be really nice, from a journalistic integrity point of view and all that, if you posted a correction or at least silently changed the article so it isn't spreading incorrect information.


    Oh... and you really should have smelt something fishy when a memory controller suddenly halves its latency by changing the requestor.
  • stepz - Friday, July 14, 2006 - link

    To clarify: yes, the prefetching and especially the speculative memory op reordering do wonders for real-world performance. But then let the real-world performance results speak for themselves, and please don't use broken synthetic tests. The advancements help to hide latency from applications that do real work; they don't reduce the actual latency of the memory ops that the test was supposed to measure. Given that the prefetcher figures out the access pattern of the latency test, the test is utterly meaningless in any context. The test doesn't do anything close to real-world work, so if its main purpose is broken, it is utterly useless.
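stepz doesn't name a specific tool, but the standard way to build a latency test that a hardware prefetcher cannot predict is to chase pointers through a randomly permuted buffer, so each load address depends on the result of the previous load. The sketch below is a rough, hypothetical illustration of that idea, not the benchmark used in this review; the buffer size, iteration count, and timing harness are simplified assumptions.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N     (16 * 1024 * 1024)   /* 16M entries (128MB): far larger than any L2 cache */
#define STEPS (4 * 1024 * 1024)    /* number of dependent loads to time */

int main(void)
{
    size_t *next = malloc(N * sizeof(size_t));
    if (!next) return 1;

    /* Sattolo's algorithm: build a single random cycle over all N slots, so
     * each load's address depends on the previous load's result and a stride
     * prefetcher can't guess where the chain goes next.
     * (Assumes RAND_MAX >= N, which holds on glibc.) */
    for (size_t i = 0; i < N; i++) next[i] = i;
    srand(1);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (size_t s = 0; s < STEPS; s++)
        p = next[p];                     /* serialized, dependent load chain */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("~%.1f ns per dependent load (final index %zu)\n", ns / STEPS, p);

    free(next);
    return 0;
}
```

Because every load in the chain depends on the one before it, neither out-of-order execution nor a stride prefetcher can start the next access early, so the measured time per load approaches the true memory latency rather than a prefetch-hidden figure.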
  • JarredWalton - Friday, July 14, 2006 - link

    Modified comments from a similar thread further down:

    "Given that the prefetcher figures out the access pattern of the latency test, the test is utterly meaningless in any context."

    That's only true if the prefetcher can't figure out access patterns for all other applications as well, and from the results I'm pretty sure it can. You have to remember, even with a memory latency of approximately 35 ns, that delay means the CPU now has about 100 cycles to go and find other stuff to do. At an instruction fetch rate of 4 instructions per cycle, that's a lot of untapped power. So, while it waits on the first main memory access, it can be scanning the next accesses that are likely to take place and start queuing them up and priming the RAM. The net result is that you may never actually be able to measure latency higher than 35-40 ns or whatever. [A quick sketch of that math follows this comment.]

    The way I think of it is this: pipeline issues aside, a large portion of what allowed Athlon 64 to outperform NetBurst was reduced memory latency. Remember, Pentium 4 was easily able to outperform Athlon XP in the majority of benchmarks -- it just did so at higher clock speeds. (Don't *even* try to tell me that the Athlon XP 3200+ was as fast as a Pentium 4 3.2 GHz! LOL. The Athlon 64 3200+ on the other hand....) AMD boosted performance by about 25% by adding an integrated memory controller. Now Intel is faster at similar clock speeds, and although the 4-wide architectural design helps, not to mention 4MB shared L2, they almost certainly wouldn't be able to improve performance without improving memory latency -- not just in theory, but in actual practice. Looking at the benchmarks, I have to think that our memory latency scores are generally representative of what applications see.

    If you have to engineer a synthetic application specifically to fool the advanced prefetcher and op reordering, what's the point? To demonstrate a "worst case" scenario that doesn't actually occur in practical use? In the end, memory latency is only one part of CPU/platform design. The Athlon FX-62 is 61.6% faster than the Pentium XE 965 in terms of latency, but that doesn't translate into a real world performance difference of anywhere near 60%. The X6800 is 19.3% faster in memory latency tests, and it comes out 10-35% faster in real world benchmarks, so again there's not an exact correlation. Latency is important to look at, but so is memory bandwidth and the rest of the architecture.

    The proof is in the pudding, and right now the Core 2 pudding tastes very good. Nice design, Intel.
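As a back-of-the-envelope check on the "about 100 cycles" figure in the comment above, here is a minimal sketch of that arithmetic; the 2.93GHz clock is the X6800's, and the ~35ns latency and 4-instructions-per-cycle fetch rate are the values Jarred cites.

```c
#include <stdio.h>

int main(void)
{
    /* Assumed figures from the discussion above: ~35 ns observed latency,
     * a 2.93 GHz core clock, and a 4-wide instruction fetch rate. */
    const double latency_ns  = 35.0;
    const double clock_ghz   = 2.93;
    const double fetch_width = 4.0;

    double cycles_hidden = latency_ns * clock_ghz;   /* ns * (cycles per ns) */
    double instr_slots   = cycles_hidden * fetch_width;

    printf("Cycles spent waiting per memory access: %.0f\n", cycles_hidden);
    printf("Instruction slots available in that time: %.0f\n", instr_slots);
    return 0;
}
```

At the E6300's 1.86GHz clock, the same 35 ns works out to roughly 65 cycles, so the absolute numbers scale with clock speed even though the idea is the same.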
  • coldpower27 - Friday, July 14, 2006 - link

    But why are you posting the Manchester core's die size?

    What about the Socket AM2 Windsor 2x512KB model, which has a die size of 183mm²?
