Typical branches have one of two outcomes: either the branch is not taken, or the CPU goes to the target instruction and begins executing code there:

A Typical Branch

...
Line 24: if (a == b)
Line 25: execute this code;
Line 26: otherwise
Line 27: go to line 406;
...

Most branches have two options: don't take the branch, or go to the target and start executing there.
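
For the curious, the same idea in real code looks something like the minimal C sketch below (the function and values are ours, purely illustrative, not from any benchmark). The compiler turns the if into a compare plus a conditional jump whose target is fixed inside the instruction, so the predictor only has to guess taken or not-taken.

#include <stdio.h>

/* A direct conditional branch: the jump target is encoded in the instruction
 * itself, so the predictor only has to guess taken vs. not-taken. */
int same(int a, int b)
{
    if (a == b) {
        return 1;   /* one fixed path: "execute this code" */
    }
    return 0;       /* the other fixed path: "go to line 406" */
}

int main(void)
{
    printf("%d %d\n", same(3, 3), same(3, 4));   /* prints: 1 0 */
    return 0;
}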

There is a third type of branch – called an indirect branch – that complicates prediction a bit more. Instead of telling the CPU where to go if the branch is taken, an indirect branch tells the CPU to look at an address in a register or in main memory that contains the location of the instruction the CPU should branch to. An indirect branch predictor, originally introduced in the Pentium M (Banias), has been included in Prescott to predict these types of branches.

An Indirect Branch

...
Line 113: if (z < 2)
Line 114: execute this code;
Line 115: otherwise
Line 116: go to memory location F and retrieve the address of where to start executing

...
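
To make the difference concrete, here is a minimal C sketch of the kind of code that produces an indirect branch (the handler names and the table are purely illustrative). The call goes through a function pointer loaded from memory, so the predictor has to guess an entire target address rather than a simple taken/not-taken. Switch statements compiled to jump tables and C++ virtual calls generate the same kind of branch, which is one reason interpreters like Perl are full of them.

#include <stdio.h>

/* An indirect branch: the target is read from memory at run time (here, a
 * function pointer in a table), so the CPU cannot see it in the instruction. */
typedef void (*handler_t)(void);

static void handle_text(void)  { puts("text");  }
static void handle_image(void) { puts("image"); }

int main(void)
{
    handler_t table[2] = { handle_text, handle_image };   /* "memory location F" */
    for (int i = 0; i < 6; i++) {
        table[i % 2]();   /* indirect call: which handler runs depends on data */
    }
    return 0;
}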

Conventionally, an indirect branch is predicted somewhat haphazardly, by sending the CPU to wherever that branch usually ends up going. It’s sort of like needing to ask your boss what he wants you to do, but instead of asking, just walking into the computer lab because that’s where most of your work ends up being done anyway. This method of indirect branch prediction works well for a lot of cases, but not all. Prescott’s indirect branch predictor features algorithms to handle these cases, although the exact details of the algorithms are not publicly available. The fact that the Prescott team borrowed this idea from the Pentium M team is a further testament to the impressive amount of work that went into the Pentium M, and to what continues to make it one of Intel’s best-designed chips of all time.
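
Since Intel hasn’t published the details of Prescott’s algorithm, the sketch below is only a toy model of the conventional "go where you usually go" approach described above: a single BTB-style entry that simply remembers the last target it saw (all names and addresses are made up). It does fine when a branch keeps hitting one dominant target, and falls apart when the targets alternate, which is exactly the kind of case a smarter indirect predictor has to handle.

#include <stdio.h>
#include <stdint.h>

/* Toy last-target predictor: one entry that remembers where this branch
 * went last time and predicts the same target next time. This is NOT
 * Prescott's algorithm, just an illustration of the conventional approach. */
typedef struct {
    uint32_t predicted_target;
} btb_entry;

static int predict_and_update(btb_entry *e, uint32_t actual_target)
{
    int correct = (e->predicted_target == actual_target);
    e->predicted_target = actual_target;   /* learn the most recent target */
    return correct;
}

int main(void)
{
    uint32_t dominant[]    = { 0x400, 0x400, 0x400, 0x400, 0x400, 0x500 };
    uint32_t alternating[] = { 0x400, 0x500, 0x400, 0x500, 0x400, 0x500 };
    btb_entry e = { 0 };
    int hits = 0;

    for (int i = 0; i < 6; i++) hits += predict_and_update(&e, dominant[i]);
    printf("one dominant target: %d/6 predicted correctly\n", hits);   /* 4/6 */

    e.predicted_target = 0;
    hits = 0;
    for (int i = 0; i < 6; i++) hits += predict_and_update(&e, alternating[i]);
    printf("alternating targets: %d/6 predicted correctly\n", hits);   /* 0/6 */
    return 0;
}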

Prescott’s indirect branch predictor is almost directly responsible for the 55% decrease in mispredicted branches in the 253.perlbmk SPEC CPU2000 test. Here’s what the test does:

253.perlbmk is a cut-down version of Perl v5.005_03, the popular scripting language. SPEC's version of Perl has had most of its OS-specific features removed. In addition to the core Perl interpreter, several third-party modules are used: MD5 v1.7, MHonArc v2.3.3, IO-stringy v1.205, MailTools v1.11 and TimeDate v1.08.

The reference workload for 253.perlbmk consists of four scripts:

The primary component of the workload is the freeware email-to-HTML converter MHonArc. Email messages are generated from a set of random components and converted to HTML. In addition to MHonArc, which was lightly patched to avoid file I/O, this component also uses several standard modules from the CPAN (Comprehensive Perl Archive Network).

Another script (which also uses the mail generator for convenience) exercises a slightly-modified version of the 'specdiff' script, which is a part of the CPU2000 tool suite.

The third script finds perfect numbers using the standard iterative algorithm. Both native integers and the Math::BigInt module are used.
Finally, the fourth script tests only that the pseudo-random numbers are coming out in the expected order, and does not really contribute very much to the overall runtime.

The training workload is similar, but not identical, to the reference workload. The test workload consists of the non-system-specific parts of the actual Perl 5.005_03 test harness.

In the case of the mail-based benchmarks, a line with salient characteristics (number of header lines, number of body lines, etc.) is output for each message generated.

During processing, MD5 hashes of the contents of output "files" (in memory) are computed and output.

For the perfect number finder, the operating mode (BigInt or native) is output, along with intermediate progress and, of course, the perfect numbers.
Output for the random number check is simply every 1000th random number generated.

As you can see, the performance improvement comes in a real-world algorithm. As is common practice for microprocessor designers, Intel measured the effectiveness of Prescott’s branch prediction enhancements in SPEC and came up with an overall reduction in mispredicted branches of about 13%:

Percentage Reduction in Mispredicted Branches for Prescott over Northwood (higher is better)

164.gzip        1.94%
175.vpr         8.33%
176.gcc        17.65%
181.mcf         9.63%
186.crafty      4.17%
197.parser     17.92%
252.eon        11.36%
253.perlbmk    54.84%
254.gap        27.27%
255.vortex    -12.50%
256.bzip2       5.88%
300.twolf       6.82%
Overall        12.78%

The improvements seen above aren’t bad at all; however, remember that this sort of reduction is necessary just to make up for the fact that we’re now dealing with a 55% longer pipeline in Prescott.
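
To see why, here is some back-of-the-envelope math rather than anything from Intel. Assuming the misprediction penalty grows roughly in line with pipeline depth (about 20 stages on Northwood versus about 31 on Prescott, which is where the 55% figure comes from), and using an arbitrary placeholder misprediction rate, a 12.78% reduction in mispredictions recovers only part of the added cost; breaking even would take roughly a 35% reduction.

#include <stdio.h>

/* Back-of-the-envelope only: assumes the misprediction penalty scales with
 * pipeline depth (~20 stages Northwood, ~31 stages Prescott) and uses an
 * arbitrary baseline misprediction rate. None of this is Intel's data. */
int main(void)
{
    double mispredicts_per_1k = 10.0;   /* placeholder: mispredictions per 1000 instructions */
    double northwood_penalty  = 20.0;   /* cycles, ~ pipeline depth */
    double prescott_penalty   = 31.0;   /* cycles, ~55% longer pipeline */
    double reduction          = 0.1278; /* the overall figure from the table above */

    double northwood_cost = mispredicts_per_1k * northwood_penalty;
    double prescott_cost  = mispredicts_per_1k * (1.0 - reduction) * prescott_penalty;

    printf("branch-penalty cycles per 1000 instructions:\n");
    printf("  Northwood: %.0f\n", northwood_cost);    /* 200 */
    printf("  Prescott : %.0f\n", prescott_cost);     /* ~270 */
    printf("break-even would need a %.0f%% reduction in mispredictions\n",
           (1.0 - northwood_penalty / prescott_penalty) * 100.0);   /* ~35% */
    return 0;
}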

The areas that received the largest improvements (>10% fewer mispredicted branches) were 176.gcc, 197.parser, 252.eon, 253.perlbmk and 254.gap. The 176.gcc test is a compiler test, an area where the Pentium 4 has clearly lagged behind the Athlon 64. 197.parser is a word-processing test, also an area where the Pentium 4 has done poorly in the past thanks to branch-happy integer code. 252.eon is a ray tracer, and we already know about 253.perlbmk; improvements in 254.gap could have positive ramifications for Prescott’s performance in HPC applications, as it models math-intensive distributed data computation.

The benefit of under-the-hood improvements like the branch prediction algorithms we’ve discussed here is that present-day software takes advantage of them with no recompiling and no patches. Keep this in mind when we investigate performance later on.

We’ll close this section off with another interesting fact: although Prescott features a lot of brand-new improvements, it also inherits enhancements that were only introduced in later revisions of the Northwood core. Not all Northwood cores are created equal, but all of the enhancements present in the first Hyper-Threading-enabled Northwoods are also featured in Prescott.

Comments

  • INTC - Monday, February 2, 2004 - link

    Ummmm yea, kinda reminds me of cooking an egg on an Athlon XP http://www.biggaybear.co.uk/Menu/Aegg/Aeggs.html
  • cliffa3 - Monday, February 2, 2004 - link

    something good to include on the mb compatibility article would be what boards would house the 2.8/533...i'm wondering myself if the E7205 chipset would...i have a p4g8x, and it would be a welcome upgrade with HT and all the other goodies if it oc's well.
  • Stlr22 - Monday, February 2, 2004 - link

    They didn't burn down, but the procs were running hot. Not to mention, these are the FIRST releases in the Prescott line. What's it gonna be like later on?....

    Just think, a P4 based computer that turns your living room into your very own Sauna!!....WHOOO-HOOO!!.....now that's what I call a bargain!


  • INTC - Monday, February 2, 2004 - link

    The message is clear: Anandtech and all of the other review sites didn't burn down so I guess it's not a flame thrower.

    Prescott is not as fast as I had hoped but is definitely not the step backwards as some were rumoring it to be. I think a Prescott 2.8 @ 250 MHz FSB will be really nice to play with until I see what Intel announces at IDF in a few weeks.
  • Icewind - Monday, February 2, 2004 - link

    The message is clear: Im buying an Athlon 64.
  • Vanners - Sunday, February 1, 2004 - link

    Did anyone catch the error in Pipelining: 101?

    If you halve the time for a stage in the pipeline and double the number of stages, yes, this means you can run at 2GHz instead of 1GHz, but the reality is you're still taking 5ns to complete the pipe.

    Look at it like a motorbike: You drop down a gear and rev harder; you make more noise but you are still doing the same speed.
    The only reasons to drop down a gear are to brake through your gears (i.e. slow down) or to rev significantly higher than the change in gear ratio in order to move faster (with more torque).

    The trouble Intel has is that they drop down a gear then rev 6 months to a year later.
  • kamper - Sunday, February 1, 2004 - link

    Just curious, Anand or Derek: what board did you use to get the 3.72 GHz oc? Obviously it wasn't the intel board used in the benches. I guess we'll hear all about this in the compatibility review though :)

    keep up the good work, that last point about smaller margins at higher clockspeeds (vs. Northwood) was cool. Let's just hope the pattern continues.
  • Stlr22 - Sunday, February 1, 2004 - link

    Seems to me like people either got caught up in some of the hype and expected too much, or expected too little and assumed that history would repeat itself (Willamette vs. Palomino).

    The fact that the Prescott fared much better at its launch compared to the Willamette might be a hint not to underestimate it. Prescott isn't really looking bad now, and I think it will hit its stride faster than the Willamette core did.

    The next couple of years are gonna be really interesting.

    Damn, ya just gotta love it!
  • ntrights - Sunday, February 1, 2004 - link

    Great review!
  • KF - Sunday, February 1, 2004 - link

    I've grown to appreciate CRAMITPAL. If you read around the opinionated diatribes, he has some good stuff that people avoid saying for fear of retaliation. I suppose if I were in love with Intel, he would tick me off.

    But, it does look like Intel has created a CPU that should ramp up to speeds high enough to beat the A64 in 32bit mode, and that is all they needed to do.

    Regardless of how much heat that is going to take, Intel must have some way in the works to handle it.

    Looks like they might not charge an arm and leg for it, which is the biggest shock.
