Prescott's New Crystal Ball: Branch Predictor Improvements

We’ve said it before: before you can build a longer pipeline or add more execution units, you need a powerful branch predictor. The branch predictor (more specifically, its accuracy) determines how many operations can work their way through the CPU before you hit a stall. Intel extended the basic integer pipeline by 11 stages, so they needed a corresponding increase in the accuracy of Prescott’s branch predictor; otherwise, performance would inevitably tank.

Intel admits that the majority of the branch predictor unit remains unchanged in Prescott, but there have been some key modifications to help compensate for the longer pipeline.

For those of you who aren’t familiar with the term, the role of a branch predictor in a processor is to predict the path code will take. If you’ve ever written code before, this boils down to predicting which part of a conditional statement (if-then, loops, etc.) will be taken. Present-day branch predictors work on a simple principle: if a branch was taken in the past, it is likely to be taken in the future. The branch predictor therefore keeps track of the code being executed on the CPU, incrementing and decrementing counters that record how often branches at particular addresses were taken. Once enough data has accumulated in these counters, the branch predictor can predict branches as taken or not taken with relatively high accuracy, assuming it has enough room to store all of this data.
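As a rough illustration of the principle, here is a minimal sketch of a table of 2-bit saturating counters indexed by branch address. The table size, indexing scheme, and C rendering are all assumptions made for illustration; Prescott’s actual predictor is far more sophisticated.

#include <stdbool.h>
#include <stdint.h>

#define TABLE_SIZE 4096   /* hypothetical: counters indexed by low bits of the branch address */

static uint8_t counters[TABLE_SIZE];   /* 0-1 predict not-taken, 2-3 predict taken */

/* Predict taken when the counter sits in the upper half of its range. */
static bool predict(uint32_t branch_addr)
{
    return counters[branch_addr % TABLE_SIZE] >= 2;
}

/* Once the branch resolves, nudge the counter toward the actual outcome,
   saturating at 0 and 3 so a single surprise doesn't erase the history. */
static void update(uint32_t branch_addr, bool taken)
{
    uint8_t *c = &counters[branch_addr % TABLE_SIZE];
    if (taken && *c < 3)
        (*c)++;
    else if (!taken && *c > 0)
        (*c)--;
}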

One way of improving the accuracy of a branch predictor, as you may guess, is to give the unit more space to keep track of previously taken (or not taken) branches. AMD improved the accuracy of the Opteron’s branch predictor by increasing the amount of space available to store branch data; Intel has not done the same with Prescott. Prescott’s Branch Target Buffer remains unchanged at 4K entries, and it doesn’t look like Intel has increased the size of the Global History Counter either. Instead, Intel focused on tuning the efficiency of its branch predictor using methods that consume less die space.
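To make the structure concrete, here is a hedged sketch of what a lookup in a 4K-entry branch target buffer might look like. The direct-mapped organization, full-address tags, and field layout are assumptions for illustration, not a description of Intel’s actual design.

#include <stdbool.h>
#include <stdint.h>

#define BTB_ENTRIES 4096   /* matches Prescott's reported entry count; organization is assumed */

struct btb_entry {
    uint32_t tag;      /* which branch instruction this entry belongs to */
    uint32_t target;   /* predicted destination of the branch */
    bool     valid;
};

static struct btb_entry btb[BTB_ENTRIES];

/* Direct-mapped lookup: low address bits select the entry, and the stored
   tag must match the branch's address for a hit. */
static bool btb_lookup(uint32_t pc, uint32_t *target)
{
    struct btb_entry *e = &btb[(pc >> 2) % BTB_ENTRIES];
    if (e->valid && e->tag == pc) {
        *target = e->target;
        return true;
    }
    return false;   /* miss: fall back to static prediction */
}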

Loops are very common in code; they are useful for zeroing data structures or printing characters, or they are simply part of a larger algorithm. Although you may not think of them as branches, loops are inherently filled with branches: before you start a loop, and on every iteration of the loop, you must find out whether you should continue executing it. Luckily, these types of branches are relatively easy to predict: you can generally assume that if the outcome of a branch takes you to an earlier point in the code (called a backwards branch), you are dealing with a loop, and the branch predictor should predict taken.

As you would expect, not all backwards branches should be taken, since not all of them sit at the end of a loop. Backwards branches that aren’t loop-ending branches are sometimes the result of error handling in code: if an error is generated, the code backs up and starts over again. But if the application generates no error, then the prediction should be not-taken. How do you make that distinction while keeping the hardware simple?

Code Fragment A

Line 10: while (i < 10) do
Line 11: A;
Line 12: B;
Line 13: increment i;
Line 14: if i is still < 10, then go back to Line 11

Code Fragment B

Line 10: A;
Line 11: B;
Line 12: C;
...
Line 80: if (error) then go back to Line 11

Line 14 is a backwards branch at the end of a loop - it should be predicted taken!
Line 80 is a backwards branch not at the end of a loop - it should be predicted not-taken!
Example of the two types of backwards branching

It turns out that loop-ending branches and these error branches, both backwards branches, can be distinguished by the amount of code separating the branch from its target. Loops are generally small, so only a handful of instructions separate the branch from its target; error-handling branches generally send the CPU back many more lines of code. The fragments above illustrate this distinction.

Prescott includes a new algorithm that looks at how far the branch target is from the branch instruction itself, and uses that distance to make a better call on whether or not to take the branch. These enhancements apply to static branch prediction, which looks at certain scenarios and always makes the same prediction when those scenarios occur. Prescott also includes improvements to its dynamic branch prediction.
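In code, such a distance-based static rule might look something like the sketch below. The threshold value and the choice to measure displacement in bytes are purely hypothetical; Intel has not published the actual parameters.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical cutoff: backwards branches that jump farther than this are
   assumed to be error-handling branches rather than loop-ending branches. */
#define LOOP_DISTANCE_THRESHOLD 64   /* bytes; the real value is not public */

/* Static prediction for a branch with a signed displacement to its target.
   A negative displacement means a backwards branch. */
static bool static_predict_taken(int32_t displacement)
{
    if (displacement >= 0)
        return false;   /* forward branch: predict not-taken */
    return -displacement <= LOOP_DISTANCE_THRESHOLD;   /* short backwards jump: likely a loop */
}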

Comments

  • Jeff7181 - Sunday, February 1, 2004 - link

    I'm going to go out on a limb here and say 2004 is the year of the Athlon-64 and Intel will take a back seat this year unless their new socket will help increase clock speeds. When AMD makes the transition to 90nm I think you'll see a jump in clock speed from them too... and I'm willing to bet their current 130nm processors will scale to 2.6 or 2.8 GHz if they want to put the effort into it before switching to 90nm.

    Intel better hope people adopt SSE3 in favor of AMD-64 otherwise they're going to lose the majority of the benchmark tests.

    On second thought... the real question is how high will Prescott scale... will we really see 4.0 GHz by the end of the year? Will performance scale as well as it does with the Athlon-64?

    Right now, looking at the Prescott, the best I can say for it is "huh, 31 stages in the pipeline and they didn't lose too much performance, neat."
  • Barkuti - Sunday, February 1, 2004 - link

    Check out the article at xbitlabs:

    http://www.xbitlabs.com/articles/cpu/display/presc...

    Less technical but with a wider set of tests.
  • Stlr22 - Sunday, February 1, 2004 - link

    ;-)
  • Stlr22 - Sunday, February 1, 2004 - link

    ((((((((((((((CRAMITPAL))))))))))))))))

    Listen, I just want you to know that everything will be alright. Really, life isn't all that bad, buddy. It's not good to keep so much hate inside. It's very unhealthy. We are all family here at the Anandtech forums and we care about you. If you ever need to sit down and talk, I'm all ears, pal. So that your brother doesn't feel left out, here's a hug for him as well.......


    (((((((((((((AMDjihad)))))))))))))
  • KF - Sunday, February 1, 2004 - link

    Yeah, the Inquirer was right about 30 stages. Maybe I should start reading it! However I did read the one where the news linked to an article purporting that an Inquirer reporter had bumped into a person who had overheard an Intel executive say Prescott was 64 bit. Maybe Derek and Anand didn't have the space to squeeze that tiny detail into the review.

    I saw a paper on the Intel site a while ago, seemingly intended for some professional journal, the premise of which was that it is ALWAYS preferable to make the pipeline longer, no matter how long, while using techniques to reduce the penalties. Like, 100 stages would be a good thing. Right then I knew what one team at Intel was up to. The fact that they didn't explain any new penalty reduction techniques only made it all the more sure what Intel had in the works (otherwise why write the paper?), and that they had the techniques worked out, but still under wraps.
  • ianwhthse - Sunday, February 1, 2004 - link

    Err.. *Cramitpal

    Sorry about that. My mind is wandering.
  • ianwhthse - Sunday, February 1, 2004 - link

    Did we actually just get 26 good posts in before crumpet showed up?
  • FiberOptik - Sunday, February 1, 2004 - link

    I like the part about the new shift/rotate unit on the CPU. Does this mean that prescott will be noticeably faster for the RC5 project? Athlon's usually mop the floor with whatever the Northwood can pump out.
  • eBauer - Sunday, February 1, 2004 - link

    "Botmatch has bots (AI) playing, shooting, running, etc. (deathmatch) while Flyby does not. The number that you should be most interested in is the Botmatch scores."

    No, I am talking about the botmatch scores from previous articles. Well aware of the difference between flyby and botmatch. http://www.anandtech.com/cpu/showdoc.html?i=1946&a... In that article, all CPUs had about 10 more fps than the CPUs in the Prescott article.
  • AnonymouseUser - Sunday, February 1, 2004 - link

    "I am curious as to why the UT2k3 botmatch scores dropped on all CPU's... Different map?"

    Botmatch has bots (AI) playing, shooting, running, etc. (deathmatch) while Flyby does not. The number that you should be most interested in is the Botmatch scores.
