CHAPTER 4: The Pentium 4 crash landing

The Prescott failure

The Pentium 4 "Prescott" is, despite its innovative architecture, a failure. Intel expected to scale this Pentium 4 architecture to 5 GHz, and derivatives of this architecture were supposed to come close to 10 GHz. Instead, the Prescott was only able to reach 3.8 GHz after numerous revisions. And even then, the 3.8 GHz is losing up to 115 Watt, and about 35-50% (depending on the source) is lost to leakage power.

The Prescott project failed, but that does not mean the architecture itself was bad. In fact, the philosophy behind the enhanced Netburst architecture is very innovative, even brilliant. To understand why we say this, let us quickly refresh your memory on the software side of things.

IPC unfriendly software

First, consider that average code does not allow the CPU to process many instructions in parallel. To give you an idea: we found that video encoding achieves only about 0.6-0.8 instructions per clock cycle (IPC) on modern CPUs. Secondly, note that almost 20% of all instructions are branches, and about 50% are memory operations. In the case of video encoding, you may have less than 10% branches, and about 60% memory operations. Most of the instructions that are neither branches nor memory operations are additions ("ADD"s). Some of the memory operations also need to use the same units that perform the ADD instructions.
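To make that instruction mix tangible, here is a small, purely hypothetical C loop (our own illustration, not real encoder code). Nearly everything it compiles to is a branch, a load, or an ADD, which is exactly the profile described above:

    /* Hypothetical inner loop illustrating the typical instruction mix. */
    int sum_below_limit(const int *data, int n, int limit)
    {
        int sum = 0;
        for (int i = 0; i < n; i++) {  /* branch: loop condition        */
            int v = data[i];           /* memory operation: load        */
            if (v < limit)             /* branch: data-dependent        */
                sum += v;              /* ADD                           */
        }                              /* i++ is another ADD each pass  */
        return sum;
    }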

You should also know that many algorithms contain calculations that need the result of a previous one: a dependency. In that case, you cannot issue the second calculation until the first is done.
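A minimal C sketch of our own (not taken from any real code base) makes the difference concrete. In the first loop, every iteration waits for the previous one; in the second, the two accumulators are independent, so a superscalar CPU can work on both at once:

    /* Dependent chain: iteration i+1 needs the result of iteration i,
       so the operations cannot overlap, however many units the CPU has. */
    double chained(double x, int n)
    {
        double r = x;
        for (int i = 0; i < n; i++)
            r = r * x + 1.0;   /* must wait for the previous r */
        return r;
    }

    /* Independent work: s0 and s1 form two separate dependency chains,
       so their ADDs can execute in parallel. */
    double split_sum(const double *a, int n)
    {
        double s0 = 0.0, s1 = 0.0;
        for (int i = 0; i + 1 < n; i += 2) {
            s0 += a[i];        /* chain 0 */
            s1 += a[i + 1];    /* chain 1 */
        }
        return s0 + s1;
    }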

Most studies show that, realistically, even a very sophisticated CPU would only reach an IPC of a little more than 2, about twice as much as today's CPUs achieve.

Up close and personal

Now, take a look at the diagram of the Prescott architecture below. Let us see how Prescott addresses the problems mentioned above.


Fig 7. Prescott's architecture.


First of all, you want to make sure that memory operations happen quickly. Therefore, Prescott doubles both the L1 (data only) and the L2 cache. It also has two dedicated Address Generation Units (AGUs), one for stores and one for loads.

Built for 4 GHz and more, Prescott finds accesses to main RAM very costly in terms of clock cycles (latency), considering that DDR-II 533 runs on a 266 MHz clock: at 3.8 GHz, the core ticks roughly 14 times for every tick of that memory bus, so a trip to main memory quickly costs hundreds of CPU cycles. Prescott therefore tries to minimize the damage of waiting for cache misses by increasing the number of store buffers from Northwood's 24 to 32, and by doubling the number of load request buffers. As a result, Prescott can have many cache misses outstanding simultaneously. An intelligent hardware prefetcher is another way to avoid slowdowns due to high memory latency.
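The idea behind prefetching is easy to show in code. The sketch below uses __builtin_prefetch, a GCC/Clang extension, with an arbitrary prefetch distance of 16 elements (both choices are ours, purely for illustration); Prescott's hardware prefetcher does essentially the same thing automatically for regular access patterns:

    #include <stddef.h>

    /* Request data a few iterations ahead, so the long DRAM latency
       overlaps with useful work instead of stalling the pipeline. */
    long sum_with_prefetch(const long *a, size_t n)
    {
        long sum = 0;
        for (size_t i = 0; i < n; i++) {
            if (i + 16 < n)
                __builtin_prefetch(&a[i + 16]); /* hint: needed soon */
            sum += a[i];
        }
        return sum;
    }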

To battle branch misprediction, Prescott's branch predictor has been tuned: it correctly predicts about 10% of the branches that Northwood mispredicted. That results in up to 20% better performance! And of course, the trace cache makes sure that a mispredicted branch does not need to restart at the decoding stages. As a result, the misprediction penalty is not 39 stages, but 31 stages: the 8 stages of decoding do not need to happen again because, in most cases, the trace cache already holds the decoded instructions.
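To see why a penalty of 31 stages matters, consider a hedged example of our own: an unpredictable branch in a hot loop, and the classic arithmetic rewrite that lets the compiler emit a conditional move instead of a jump.

    /* Branchy version: if a and b compare unpredictably, the CPU
       mispredicts often and pays the full pipeline penalty each time. */
    int max_branchy(int a, int b)
    {
        if (a > b)
            return a;
        return b;
    }

    /* Branchless version: (a > b) is 0 or 1, so the mask is all zeros
       or all ones, and the result is selected without any jump. */
    int max_branchless(int a, int b)
    {
        int mask = -(a > b);
        return (a & mask) | (b & ~mask);
    }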


Comments

  • Cybercat - Tuesday, February 8, 2005 - link

    It's sad that software isn't moving in the direction of AMD's architectural emphasis, and instead heading toward a more media-oriented design. As said above, AMD is better at keeping in mind the future of their processors, by keeping up with low-leakage technologies (E0 stepping being a good example).

    I do think though that the whole dual-core thing is a gimmick. I certainly won't be buying into it any time soon.
  • fitten - Tuesday, February 8, 2005 - link

    Good read! I'm looking forward to the next installment.
  • reactor - Tuesday, February 8, 2005 - link

Half of it went over my head, but it was nonetheless very interesting. The Prescott chapter was very informative.

    Well Done.
  • Rand - Tuesday, February 8, 2005 - link

I'm still getting accustomed to seeing your byline on articles published on AnandTech, rather than AcesHardware :)

    As always, it's an excellent and fascinating read.

  • Regs - Tuesday, February 8, 2005 - link

    Pentium-M can't*
  • Regs - Tuesday, February 8, 2005 - link

    Thanks to this article I now know why the PM can reach faster clock cycles, and why AMD is still behind in multimedia tasks like video encoding.

Awesome article! I see someone has been lurking the forums.
  • FinalFantasy - Tuesday, February 8, 2005 - link

    Nice article!
  • bersl2 - Tuesday, February 8, 2005 - link

    Yay! I get to use some of the stuff from my CS2110 class!
  • Gnoad - Tuesday, February 8, 2005 - link

    This is one hell of an in depth article! Great job!
  • WooDaddy - Tuesday, February 8, 2005 - link

I have to say this is the most technical article from Anandtech I have read. Good thing I'm a hardware engineer... I think it could be a difficult read for someone with even an average understanding of microprocessor development.

    Good though.
