"What you have seen is a public demonstration of 4 GHz silicon straight off our manufacturing line. We have positive indications to be able to take Netburst to the 10 GHz space."

"While architectural enhancements are important, Intel intends to continue its lead in raw speed. Otellini demonstrated a new high-frequency mark for processors, running a Pentium 4 processor at 4.7 GHz."

The first assertion was made at IDF Spring 2002, and the second comes from a press release issued after Fall IDF 2002. Fast forward to the beginning of 2004, and we read in the Prescott presentation: "2005-2010: the era of thread level parallelism and multi-core CPU technology." What happened to "the 10 GHz space"?

Fig 1. "2005-2010: the era of thread level parallelism and multi-core CPU technology".

The presentation of the new 6xx Prescott even states that Intel is now committed to "Adding value beyond GHz". This sounds like Intel is not interested in clock speeds anymore, let alone 10 GHz CPUs.

Already, the hype is spreading: dual-core CPUs offer a much smoother computing experience; processing power will increase quickly from about 5 gigaflops to 50 gigaflops, and so on. It is almost as if higher clock speeds and extracting more ILP (Instruction Level Parallelism), which has been researched for decades now, are not important anymore.

At the same time, we are hearing that "Netburst is dead, Tejas is cancelled and AMD's next-generation K9 project is pushed back." Designs built for high clock speeds and high IPC (Instructions Per Clock) are no longer regarded as heroes, but as black sheep. They are held responsible for all the sins of the CPU world: exploding power dissipation, diminishing performance increases and exorbitant investments in state-of-the-art fabs to produce these high clock speed chips. A Prescott or Athlon 64 CPU in your system is out of fashion. If you want to be trendy, get a quad-core Pentium M, also known as Whitefield [2], made in India.

To the point

I am exaggerating, of course. A good friend of mine, Chris Rijk, said: "PR departments have no 'middle gears': they either hype something to the extreme, or not at all." Trying to understand what is really going on is the purpose of this article. We are going to take a critical look at what the future CPU architectures have to offer. Is the traditional approach of increasing IPC and clock speed to get better performance doomed? Does multi-core technology overcome the hurdles that were too high for the single-core CPUs? Are multi-core CPUs the best solution for all markets? Will multi-core CPUs make a difference in the desktop and workstation market?

In this first instalment, we explore the problems that the current CPU architectures face. The intention is to evaluate whether the solution proposed by Intel and other manufacturers is a long-term one, one that really solves those problems. We will also investigate one CPU in particular, the Intel Prescott. So, basically, there are four chapters in this article that will discuss:

  • The problems that CPU architects face today: Wire Delay, Power and the Memory wall.
    Chapter 1 - The brakes on CPU power
  • The reason why Intel and others propose dual core as a solution to these problems.
    Chapter 2 - Why single core CPUs are no longer "cool"
  • Whether or not these problems can be solved without dual core.
    Chapter 3 - Containing the epidemic problems
  • A case study of the Intel Prescott.
    Chapter 4 - The Pentium 4 crash landing

Although Intel is undeniably the industry leader in the CPU market, this doesn't always mean that the solutions proposed are the right ones. For example, remember MMX, a technology that was supposed to turn the (x86-based) PC into a multimedia monster. In hindsight, the critics were right: MMX was little more than a marketing stunt to make people upgrade.

The first implementation of hyperthreading on Intel's Foster Xeon (Willamette Xeon) was turned off by default by all OEMs. And hyperpipelined CPUs with 30+ stages turned out to be an impressive, but pretty bad idea.

In other words, not every hyped technology has turned out to be beneficial for the customer. Millions of customers are still waiting for the rich Internet content that is enabled by and runs so much faster on the Netburst architecture...

CHAPTER 1: The brakes on CPU power


Comments

  • stephenbrooks - Wednesday, February 9, 2005 - link

    #28 - that's interesting. I was thinking myself just a few days ago "I wonder if those wires go the long way on a rectangular grid or do they go diagonally?" Looks like there's still room for improvement.
  • Chuckles - Wednesday, February 9, 2005 - link

    The word comes from Latin. "mono" meaning one, "lithic" meaning stone. So monolithic refers to the fact that it is a single cohesive unit.
    The reason you associate "lithic" with old is only due to the fact that anthropologists use Paleolithic and Neolithic to describe time periods in human history in the Stone Age. The words translate as "old stone" and "new stone" respectively.
    I have seen plenty of monolithic benches around here. Heck, a slab granite countertop qualifies as a monolith.
  • theOracle - Wednesday, February 9, 2005 - link

    Very good article - looks like a university paper with all the references etc! Looking forward to part two.

    Re "monolithic", granted the word doesn't mean old but anything '-lithic' instantly makes me think ancient (think neolithic etc). -lithic means a period in stone use by humans, and a monolith is a (usually ancient) stone monument; I think its fair to say Intel were trying to make the audience think 'old technology'.
  • DavidMcCraw - Wednesday, February 9, 2005 - link

    Great article, but this isn't accurate:

    "Note the word "monolithic", a word with a rather pejorative meaning, which insinuates that the current single core CPUs are based on old technology."

    Neither the dictionary nor technical meanings of monolithic imply 'old technology'. Rather, it simply refers to the fact that the single-core CPU being referred to is as large as the two smaller chips, but is in one part.

    In the context of OS kernel architectures, the Linux kernel is a good example of monolithic technology... but I doubt many people consider it old tech!
  • IceWindius - Wednesday, February 9, 2005 - link

    Even this article makes my head hurt; so much about CPUs is hard to understand and grasp. I wish I knew how those CPU engineers do this for a living.

    I wish someone like Ars Technica would make something really built from the ground up, like "CPUs for Morons", so I could start understanding this stuff better.
  • JohanAnandtech - Wednesday, February 9, 2005 - link

    Jason and Anand have promised me (building some pressure ;-) a threaded comment system so I can answer more personally. Until then:

    1. Thanks for all the encouraging comments. It really gives a warm feeling to read them, and it is basically the most important motivation for writing more.

    2. Slashbin (27): Typo, just typed during a small period of insanity. Voltage, of course; fixed.

    3. CSMR: Intel's SPEC numbers are artificially high, as they have been spending more and more time on aggressive compiler optimisations. All other benchmarks clearly show the slowdown.
  • CSMR - Tuesday, February 8, 2005 - link

    Excellent article. A couple of odd things you might want to amend in chapter one: "CPUs run 40 to 60% faster each year" contradicts the previous discussion about slowed CPU speed increases. Also, the power formula explanation on the same page doesn't really make sense, as pointed out by #27.
  • Doormat - Tuesday, February 8, 2005 - link

    Good article. The only real thing I wanted to bring up was something called the "X Consortium". I wrote a paper on it in my solid-state circuit design class a few years ago. Basically, instead of having all the interconnects within a chip laid out in a grid-like fashion, it allows them to run diagonally, shortening a wire to as little as 1/sqrt(2) of its rectilinear length (a savings of at most about 29%, for the math impaired). Perhaps the tools aren't there, or it's too patent-encumbered. If interconnects are really an issue, then they should move to this diagonal interconnect technology. I actually don't think they are a very pressing need right now; leakage current is the most pressing issue. The move to copper interconnects a while ago helped (increased conductivity over aluminum; smaller die sizes mean shorter distances to traverse, typically).

    It will be very interesting to see what IBM does with their Cell chips and SOI (and what clock speed AMD releases their next A64/Opteron chips at, since they've teamed with IBM). If these Cell chips indeed run at 4 GHz and don't have leakage current issues, then there is a good chance that issue is mostly remedied (for now, at least).
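The "at most 29%" figure in the comment above follows from simple geometry: a diagonal route between two points is at most 1/sqrt(2) of the rectilinear (grid) route's length, with the worst case on a 45-degree diagonal. A quick sketch to check the arithmetic (illustrative only, not a routing tool):

```python
import math

def manhattan_length(dx, dy):
    # Rectilinear (grid) routing: horizontal plus vertical distance
    return dx + dy

def diagonal_length(dx, dy):
    # Idealized straight-line route between the same two endpoints
    return math.hypot(dx, dy)

# Worst case for the grid: endpoints offset equally in x and y
dx = dy = 1.0
saving = 1 - diagonal_length(dx, dy) / manhattan_length(dx, dy)
print(f"max wire-length saving: {saving:.1%}")  # about 29.3%
```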
  • slashbinslashbash - Tuesday, February 8, 2005 - link

    " In other words, dissipated power is linear with the e ffective capacitance, activity and frequency. Power increases quadratically with frequency or clock speed." (Page 2)

    Typo there? Frequency can't be both linear and quadratic... From the equation itself, it looks like voltage is quadratic (assuming the V is voltage).
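The commenter's point can be checked directly against the standard dynamic-power formula, P = alpha * C * V^2 * f: power scales linearly with frequency and quadratically with voltage. A minimal sketch (the alpha, C, V and f values below are illustrative, not Prescott's actual parameters):

```python
def dynamic_power(alpha, C, V, f):
    # Dynamic (switching) power: P = alpha * C * V^2 * f
    # alpha: activity factor, C: effective capacitance (F),
    # V: supply voltage (V), f: clock frequency (Hz)
    return alpha * C * V**2 * f

base = dynamic_power(0.5, 1e-9, 1.2, 3e9)
print(dynamic_power(0.5, 1e-9, 1.2, 6e9) / base)  # doubling f -> 2x power
print(dynamic_power(0.5, 1e-9, 2.4, 3e9) / base)  # doubling V -> 4x power
```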
  • AnnoyedGrunt - Tuesday, February 8, 2005 - link

    And of course I meant to refer to post 23 above.
