Introduction to the Processor Charts

Before we get to the actual charts, I want to take a minute to make clear how the charts are organized. Due to the number of features involved with modern processors, it can become difficult to determine which CPU is actually faster when comparing different models. For example, do you go with the 2250 MHz Athlon XP using the Thoroughbred core, which has a 2800+ model number, or should you go with the 2000 MHz Athlon XP that uses the Barton core, which also has a 2800+ model number? With Intel, it can be even more difficult: you have different cache sizes, bus speeds, and even architectures.

Since I figure a lot of people may actually find some sort of relative sorting useful, I have attempted to do this. How you wish to rate the various factors is of course a topic that could be debated ad nauseam. What I am presenting is by no means a definitive answer on which model is faster, but it should give a rough estimate. Below are the various families of processors and the weighting values that I used. I then took the weight factor and multiplied it by the actual clock speed to come up with a final performance ranking.

Since this is simply a rough estimate on my part, I am not including these ranking values in the actual charts, but they are how I sorted the data. Really, the reason for their existence was to get a sorting function that more or less agreed with my own personal opinion, so if I happen to have missed a processor, or if a new processor is released, I can simply add the processor(s) to the chart and re-sort it. I'm open to suggestions on how these ratings might be improved, but please realize that there will never be a definitive formula, as relative performance depends on the specific code you are running.

If you don't like math or don't really care to know precisely how the charts are sorted, feel free to just skip to the next page. This is only for people who really want to know the details. Also, the weighting factors only apply within each family - they have no correlation with other processor families. (So don't get upset that the Dothan has a 1.6 weighting while the Athlon FX only has 1.15!) With that said, here are the weighting factors that I used; a quick sketch of how the final ranking is computed follows the tables.

Duron, Athlon, Athlon XP and Sempron

 64K L2 + 100 MHz bus = 0.7
 64K L2 + 133 MHz bus = 0.75
256K L2 + 100 MHz bus = 0.8
256K L2 + 133 MHz bus = 0.85
256K L2 + 166 MHz bus = 0.9
512K L2 + 133 MHz bus = 0.95
512K L2 + 166 MHz bus = 1.0
512K L2 + 200 MHz bus = 1.05

Athlon 64

 256K L2 + single-channel (Socket 754) = 0.9
 512K L2 + single-channel (Socket 754) = 0.95
1024K L2 + single-channel (Socket 754) = 1.0
 512K L2 + dual-channel   (Socket 939) = 1.04
1024K L2 + dual-channel   (Socket 940) = 1.11
1024K L2 + dual-channel   (Socket 939) = 1.15

Celeron 2 and Pentium 4

 128K L2 +  400 FSB =            0.6
 256K L2 +  400 FSB =            0.75
 256K L2 +  533 FSB =            0.80
 512K L2 +  400 FSB =            0.84
 512K L2 +  533 FSB =            0.91
1024K L2 +  533 FSB =            0.93
1024K L2 +  800 FSB =            0.98
 512K L2 +  800 FSB =            1.0
 512K L2 +  800 FSB + 2048K L3 = 1.15
2048K L2 + 1066 FSB =            1.2

Mobile Celeron, Mobile P4, Celeron M and Pentium M

 128K L2 + 400 FSB =             0.6
 256K L2 + 400 FSB =             0.75
 256K L2 + 533 FSB =             0.80
 512K L2 + 533 FSB + Northwood = 0.91
1024K L2 + 533 FSB + Prescott =  0.93
 512K L2 + 400 FSB + Dothan =    1.25
 512K L2 + 400 FSB + Banias =    1.3
1024K L2 + 400 FSB + Dothan =    1.35
1024K L2 + 400 FSB + Banias =    1.4
2048K L2 + 400 FSB =             1.5
2048K L2 + 533 FSB =             1.6
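
To make the sorting concrete, here is a minimal sketch of the ranking calculation, written in C purely for illustration: look up the weighting factor for a processor's configuration, multiply it by the clock speed, and sort on the result. The two entries are just the Athlon XP 2800+ examples from the introduction (using the clock speeds quoted there), and the struct layout is my own shorthand, not anything from the real spreadsheet; the actual charts obviously contain far more processors.

    #include <stdio.h>
    #include <stdlib.h>

    /* Purely illustrative: the two Athlon XP 2800+ examples from the introduction,
       with weighting factors taken from the Athlon XP table above. */
    struct cpu {
        const char *name;
        double      mhz;     /* actual clock speed */
        double      weight;  /* family-specific weighting factor */
    };

    /* final performance ranking = weighting factor * actual clock speed */
    static double ranking(const struct cpu *c) { return c->weight * c->mhz; }

    static int by_ranking_desc(const void *a, const void *b) {
        double ra = ranking(a), rb = ranking(b);
        return (ra < rb) - (ra > rb);   /* sort fastest first */
    }

    int main(void) {
        struct cpu cpus[] = {
            { "Athlon XP 2800+ (Thoroughbred, 256K L2, 166 MHz bus)", 2250.0, 0.90 },
            { "Athlon XP 2800+ (Barton, 512K L2, 166 MHz bus)",       2000.0, 1.00 },
        };
        size_t n = sizeof(cpus) / sizeof(cpus[0]);

        qsort(cpus, n, sizeof(cpus[0]), by_ranking_desc);

        for (size_t i = 0; i < n; i++)
            printf("%-55s %.0f\n", cpus[i].name, ranking(&cpus[i]));
        return 0;
    }

Nothing more sophisticated than that is going on; the weights simply let one clock speed be compared against another within the same family.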

Comments

  • JarredWalton - Wednesday, September 1, 2004 - link

    Jenand - thanks for the information. There are certainly some errors in the Itanium charts, but very few people seem to know much about the architecture, so I haven't gotten any corrections. Most of the future IA64 chips are highly speculative in terms of features.

    Incidentally, it looks like Tukwila (and Dimona) will be 4 core designs, with motherboards supporting 4 CPUs, thus "16C" - or something like that. As for Fanwood, I really don't know much about it other than the name and some speculation that it *might* be the same as Madison9M. Or it might be a Dual Processor version of Madison, which is multi-processor.

    http://endian.net/details.asp?ItemNo=3835
    http://www.xbitlabs.com/news/cpu/display/200311101...

    At the very least, Fanwood will have more than just a 9 MB cache configuration, it's probably safe to say.
  • JarredWalton - Wednesday, September 1, 2004 - link

    If Prescott and Pentium M both use the exact same branch predictor, then yes, the Prescott would be more accurate than Banias. However, with the doubling of the cache size on Dothan, I can't imagine Intel would leave it with inferior branch prediction. So perhaps it goes something like this in terms of branch prediction accuracy:

    P6 cores
    Willamette/Northwood
    Banias
    Prescott
    Dothan

    Possibly with the last two on the same level.

    I'm still waiting to see if we can get pipeline stage information from Intel, but I have encountered several other sources online that refer to the Willamette/Northwood as having a 28 stage pipeline. Guess there's no use in beating a dead horse, though - either Intel will pass along the information and we can have a definitive answer, or it will remain an unknown. Don't hold your breath on Intel, though. :)
  • IntelUser2000 - Wednesday, September 1, 2004 - link

    "Intel claims that the combination of the loop detector and indirect branch predictor gives Centrino a 20% increase in overall branch prediction accuracy, resulting in a 7% real performance increase."

    Sure, but Prescott also has Pentium M's branch predictor enhancements in addition to the enhancements made to Willamette, while Pentium M didn't get Willamette's enhancements, just the indirect branch predictor.

    Yes it says 20% increase, but from what? PIII, P4? Prescott?
  • jenand - Tuesday, August 31, 2004 - link

    There are a few errors and some missing information on the IPF sheet:
    1) Fanwood will get 4M(?) L3 or so, not 9M. You probably mixed it up with its bigger brother Madison9M, both to be released soon.

    2) Foxton and Pelleston are code names for technologies used in Montecito, not CPU code names.

    3) Dimona and Tukwila are "pairs" (just like Madison/Deerfield, Madison9M/Fanwood and Montecito/Millington) both will be made on 45nm nodes and are scheduled for 2007. Montvale is probably a shrink of Montecito or Millington to the 65nm node and will probably be launched in 2006.

    4) Montecito and Millington will be made on 90nm and use the PAC-611 socket. The FSB of Montecito will be 100 MHz for compatibility reasons, but it will also be introduced at a higher FSB (166 MHz?) late in 2005.

    5) Fanwood will probably get 100 MHz and 133 MHz FSB, not 166 MHz. Same goes for Millington.

    I hope it was helpful. Please note that I don't have any internal information; I only read the rumors.
  • JarredWalton - Tuesday, August 31, 2004 - link

    Heh... one last link. Hannibal discusses why the PM is able to have better branch prediction with a smaller BTB in his article about the PM. At the bottom of the following page is where he specifically discusses the improvements to the P4:

    http://castor.arstechnica.com/cpu/004/pentium-m/pe...

    And his summary: "Intel claims that the combination of the loop detector and indirect branch predictor gives Centrino a 20% increase in overall branch prediction accuracy, resulting in a 7% real performance increase. Of course, the usual caveats apply to these statistics, i.e. the increase in branch prediction accuracy and that increase's effect on real-world performance depends heavily on the type of code being run. Improved branch prediction gives the PM a leg up not only in terms of performance but in terms of power efficiency as well. Because of its improved branch prediction capabilities, the PM wastes less energy speculatively executing code that it will then have to throw away once it learns that it mispredicted a branch."

    He could be wrong, of course, but personally I trust his research on CPUs more than a lot of other sites - after all, he does *all* architectures, not just x86. Hopefully, Intel will provide me (Kristopher) with some direct answers. :)
  • JarredWalton - Tuesday, August 31, 2004 - link

    In case that last wasn't clear, I'm not saying the CPU detection is really that blatant, but if the CPU detection is required for accuracy, it *could* be that bad. Rumor, by the way, puts the Banias core at 14 or 15 stages, and the Dothan *might* add one more stage.
  • JarredWalton - Tuesday, August 31, 2004 - link

    Regarding Pentium M, I believe the difference in branch prediction isn't merely a matter of size. It has a new indirect branch predictor, as well as some other features. Basically, the P-M is designed for power usage first, so Intel made more elegant design decisions in places, whereas Northwood and Prescott take more of a brute force approach.

    As for the differences between various AT articles, it's probably worth pointing out that this is the first article I've ever written for Anandtech, so don't be too surprised that it has some differences of opinion. Who's right? It's difficult to say.

    As for the program mentioned in that thread, I downloaded it and ran it on my Athlon 64. You know what the result was? 13.75 to 13.97 cycles. Since a branch miss doesn't actually necessitate a flush of the entire pipeline, that would mean it's estimating the length of the A64 pipeline as probably 15 or 16 stages - off by roughly 33%. If it were off by that same amount on Prescott, that would put Prescott at [drumroll...] 23 stages.
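
    A back-of-the-envelope version of that math, as a sketch only: it assumes the program overshoots Prescott by the same ~33% it shows on the Athlon 64, and takes the commonly quoted 12 stage figure for the A64 pipeline.

    #include <stdio.h>

    int main(void) {
        /* Numbers from the discussion above; a sanity check only, not a measurement. */
        double a64_implied = 16.0;  /* stages the ~14 cycle penalty seems to imply */
        double a64_actual  = 12.0;  /* commonly quoted Athlon 64 pipeline length   */
        double overshoot   = a64_implied / a64_actual;      /* roughly 1.33        */

        /* Apply the same correction to the 31 stage figure quoted for Prescott.   */
        printf("Corrected Prescott estimate: ~%.0f stages\n", 31.0 / overshoot);
        return 0;
    }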

    I've passed on some questions for Intel to Kristopher Kubicki, so maybe we can get the real poop. Until then, it's still a case of "nobody knows for sure". Estimating pipeline lengths based on a program that reports accurate results on P4 and Northwood cores is at best a guess, I would say.

    Incidentally, I looked at the source code, and while I haven't really studied it extensively, there is CPU detection, so the mispredict penalty is calculated differently on P4, P6, and *other* architectures. Maybe it's okay, maybe it's not, but if accurate results depend on CPU detection, that sort of calls the whole thing into question.

    if (cpu == P6) printf("12 stages.\n");
    else if (cpu == P4) printf("10 stages.\n");
    else if ...

    Hopefully, it *is* relatively accurate, but as I said, ~14 cycles mispredict penalty on an Athlon 64 is either incorrect, or AMD actually created a 15 stage pipeline and didn't tell anyone. :)
  • IntelUser2000 - Monday, August 30, 2004 - link

    Okay, I don't know any further than that. But one question: since the old P4 article from Anandtech states a 10 stage pipeline for the P6 core, and Prescott is claimed to have 31 stages while you claim otherwise, that tells me there are individual errors on the SAME site. So whether Hannibal's site can be trusted is doubtful because of that fact too, no? Also, take a look at this link: http://www.realworldtech.com/forums/index.cfm?acti...

    I asked a guy in the forums about it and that link is about the responses to it.

    One example where Hannibal's site may be wrong is this: http://arstechnica.com/cpu/004/prescott-future/pre...

    At the end of that link it says: "There's actually another reason why the Pentium M won't benefit as much from hyperthreading. The Pentium M's branch predictor is superior to Prescott's, so the Pentium M is less likely to suffer from instruction-related pipeline stalls than the Prescott. This improved branch prediction, in combination with its shorter pipeline, means improved execution efficiency and less of a need for something like hyperthreading."

    Now, we know the Pentium M has a shorter pipeline than Prescott, but better branch prediction? I really think that's wrong, since one of the major branch prediction improvements in BOTH Prescott and Pentium M is improved indirect branch prediction. PLUS, Prescott and Northwood, I believe, have a bigger BTB, somewhere on the order of 8x, because the Pentium M used the indirect branch prediction improvements to save die size, and adding a bigger buffer definitely doesn't coincide with that.
  • Fishie - Monday, August 30, 2004 - link

    This is a great summary of the processor cores. I would like to see the same thing done with video cards.
  • JarredWalton - Monday, August 30, 2004 - link

    #49 - Did you even read the links in post #44? Did you read post #44? Let's make it clear: the Willamette and Northwood cores had 20 stage pipelines coupled to an 8 stage prefetch/decode unit (which feeds into the trace cache). This much, we know for sure. The Prescott core appears to be 23 stages with (essentially) the same 8 stage prefetch/decode unit. So, you can call early P4 cores 20 stages, in which case Prescott is 23 stages, or you can call Prescott 31 stages, in which case early P4 cores were 28 stages.

    If you look at the chart in the link to Anandtech, notice how the P4 pipeline is lacking in fetch and decode stages? Anyway, there's nothing that says the AT chart you linked from Aug 2000 is the DEFINITIVE chart. People do make errors, and Intel hasn't been super forthcoming about their pipelines. I'll give you a direct link to where Hannibal talks about the P6 and P4 pipelines - take it up with him if you must:

    http://arstechnica.com/cpu/004/pentium-1/pentium-1...

    Synopsis: In the AT picture, the P6 pipeline has 2 fetch and 2 decode stages, while Hannibal describes it as 3.5 BTB/Fetch stages and 2.5 Decode stages.

    http://arstechnica.com/cpu/01q2/p4andg4e/p4andg4e-...

    Here, the P4 and G4e architectures are compared, but if you read this page, it explains the trace cache and how it affects things. Specifically: "Only when there's an L1 cache miss does that top part of the front end kick in in order to fetch and decode instructions from the L2 cache. The decoding and translating steps that are necessitated by a trace cache miss add another eight pipeline stages onto the beginning of the P4's pipeline, so you can see that the trace cache saves quite a few cycles over the course of a program's execution."
    -----------------------
    Further reading:

    http://episteme.arstechnica.com/eve/ubb.x?a=tpc&am...

    The comments in the "Discuss" section of the article contain further elaboration by Hannibal on the Prescott: "The 31 stages came from the fact that if you include the trace cache in the pipeline (which Intel normally doesn't and I didn't here) then the P4's pipeline isn't 20 stages but 28 (at least I think that's the number). So if you add three extra stages to 28 you get 31 total stages."

    The problem is, Intel simply isn't coming out and directly stating what the facts are. It *could* be that Prescott is really 31 stages (as Intel has said) plus another 8 to 10 stages of fetch/decode logic, putting the "total" length at 39 to 41 stages. However, given the clockspeed scaling - rather, the lack thereof - it would not be surprising to have it "only" be 23 stages plus 8 fetch/decode stages. After all, the die shrink to 90 nm should have been able to push the Northwood core to at least 4 GHz, which seems to be what the Prescott is hitting as well.

    Unless you actually work for Intel and can provide a definitive answer? I, personally, would love some charts from Intel documenting all of the stages of both the initial NetBurst pipeline as well as the Prescott pipeline. (Maybe I should mention this to Anand...?)
