Conclusion

Prescott was built to tackle the typical problems that make it hard to run x86 programs quickly: branches, dependencies, and lots of memory and ADD operations. However, doing so required complex logic, which quickly increased leakage power. The wire delay and dependency problems were only solved by sacrificing a great deal of energy. The combination of LVS double-pumped ALUs, a mass of new features and 64-bit support created an avalanche of leaking logic. The result is an innovative architecture crushed against a thermal wall.

But Prescott's failure, the exploding leakage power and wire delay do not automatically mean that single-core CPUs have no future. Power leakage can be contained by introducing high-K materials and SOI. Wire delay has been tamed by using repeaters - at the cost of some extra power - and Cu interconnects. Dual core is not a magical solution that is going to solve all the problems that Prescott and other modern CPUs face.
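
To put the repeater remark in first-order terms (a back-of-the-envelope sketch, not a figure from this article): the delay of an unrepeated wire grows with the square of its length, because both its resistance and its capacitance grow with length. Inserting repeaters to split the wire into N buffered segments brings the delay back to roughly linear in length, but every repeater adds its own switching and leakage power:

\[
t_{\text{wire}} \propto r\, c\, L^{2},
\qquad
t_{\text{repeated}} \approx N\, t_{\text{buf}} + k\, r\, c\, \frac{L^{2}}{N}
\]

Here r and c are the resistance and capacitance per unit length, L is the wire length, t_buf is the delay of one repeater, and k is a constant of order one.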

Prescott's failure only tells us that, right now, the ultra-deep-pipelined CPU is not the best solution. Intel went too deep, too quickly, and although many ingenious tricks were implemented to make Prescott a real powerhouse, all those tricks together backfired in the form of high leakage and dynamic power losses.
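
As a rough reminder of why "too deep, too quickly" is so costly (standard first-order approximations, not Prescott-specific numbers): dynamic power grows linearly with clock frequency and quadratically with supply voltage, while subthreshold leakage grows exponentially as the threshold voltage is lowered to keep the faster transistors switching.

\[
P_{\text{dynamic}} \approx \alpha\, C_{\text{eff}}\, V_{dd}^{2}\, f,
\qquad
P_{\text{leakage}} \approx V_{dd}\, I_{0}\, e^{-V_{th}/(n\, V_{T})}
\]

Here \alpha is the activity factor, C_eff the switched capacitance, V_T the thermal voltage (about 26 mV at room temperature), and n the subthreshold slope factor. Chasing a higher f pushes V_dd up and V_th down at the same time, so both terms rise together.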

In the next article, we will investigate what dual-core technology can really bring us, beyond the hype, the ubiquitous "paradigm shift" slogans and the "much smoother system" claims.


References

[1] An In-Depth Look at Computer Performance Growth
Chalmers University of Technology, Department of Computer Engineering, Göteborg, 2004
http://www.ce.chalmers.se/~warg/papers/performancegrowth_tr-2004-9.pdf

[2] Intel Whitefield uncovered, The Register
http://www.theregister.co.uk/2004/05/01/intel_whitefield_uncovered/

[3] Implementing Power Management IP for Dynamic and Static Power Reduction in Configurable Microprocessors using the Galaxy Design Platform at 130nm
Dan Hillman, Virtual Silicon
John Wei, Tensilica
http://www.tensilica.com/hillman_slides.pdf

[4] Leakage Power Modeling and Minimization
Massoud Pedram
University of Southern California, Dept. of EE-Systems
http://atrak.usc.edu/~massoud/Papers/pedram-tutorial-iccad04.pdf

[5] Gigascale Integration-Challenges and Opportunities
By Shekhar Borkar
Intel Fellow, Director, Circuit Research
http://www.intel.com/research/mrl/research/circuit.htm
http://www.intel.com/cd/ids/developer/asmo-na/eng/strategy/182440.htm?page=1

[6] Sun Niagara Demo
http://www.sun.com/aboutsun/media/presskits/networkcomputing05q1/

[7] LVS Technology for the Intel® Pentium® 4 Processor on 90nm Technology
http://www.intel.com/technology/itj/2004/volume08issue01/art04_lvs_technology/p01_abstract.htm


Other Sources:

  1. Intel Silicon Innovation To Shape Direction Of The Digital World
    Multi-Core Processors, FALL IDF 2004
    http://www.intel.com/pressroom/archive/releases/20040907corp.htm
  2. Pentium 4 processor at 4.7 GHz, FALL IDF 2002
    http://www.intel.com/pressroom/archive/releases/20020909corp.htm
  3. Intel Developer Forum, Spring 2002
    Louis Burns Keynote, Netburst architecture scales up to 10 GHz.
    http://www.intel.com/pressroom/archive/speeches/burns20020227.htm
  4. The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software
    By Herb Sutter
    http://www.gotw.ca/publications/concurrency-ddj.htm
  5. Illinois researchers create world's fastest transistor ... again
    http://www.news.uiuc.edu/scitips/03/1106feng.html

Comments

  • Momental - Wednesday, February 9, 2005 - link

    #41, I understood what he meant when he stated that AMD could only be so lucky to have something which was a technological failure, i.e. Prescott, sell as well as it has. Even the article clearly summarizes that Prescott in and of itself isn't a piece of junk per se, only that it doesn't have the room for evolution that Intel had originally hoped for.

    #36 wasn't saying that it was a flop sales-wise, quite the contrary. The thing has sold like hotcakes!

    I, like many others here, literally got dizzy as I struggled to keep up with all of the technical terminology and mathematical formulas. My brain is, as of this moment, threatening to strike if I don't get it a better health and retirement plan along with a shorter work week. ;)
  • Ivo - Wednesday, February 9, 2005 - link

    1. About the multiprocessing: Of course, there are many (important!) applications which are more than satisfied with the existing mono-CPU performance. Some others will benefit from dual CPUs. Matrix 2CPU+2GPU combinations could be essential, e.g. for stereo-visualization. Probably, desktop machines with enhanced voice/image analytical capabilities could require even more sophisticated CPU matrices. I suppose the mono- and multi-CPU solutions will coexist in the near future.

    2. About the leakage problem: New materials like SOI are part of the solution. Another part is new techniques. Let us take a lesson from nature: our blood-transportation system consists of tiny capillaries and much thicker arteries. Maybe it could make sense to combine 65 nm transistors, e.g. in the cache memory, with 90 nm transistors in the ALU?
  • Noli - Wednesday, February 9, 2005 - link

    "Netburst architecture is very innovative and even genial"

    genius-like?
    If by genial you mean 'having a pleasant or friendly disposition', it sounds weird. It can mean 'conducive to growth' in this context but that's not so intuitive because a) it wasn't and b) at best it was only theoretically genial.

    Presumably it's not genial as in 'of or relating to the chin' :)

    Agree monolithic was confusing but it was the intel dude who said it - I thought it meant 'large single unit' rather than 'old (as in technology)' as in: increasing processing power by increasing the size and complexity of a single core is now not as efficient as strapping two cores together - a duallithic unit :)

    Sorry to be a pedantic twat.
  • Xentropy - Wednesday, February 9, 2005 - link

    Some of the verbiage in that final chapter makes me wonder how much better Prescott might have done if Intel had just left out everything 64-bit and developed an entirely different processor for 64-bit. Especially since we won't have a mainstream OS that'll even utilize those instructions for another few months, and it's already been about a year since release, they could have easily gotten away with putting 64-bit off until the next project. It's pretty obvious by now that even the 32-bit Prescotts have those 64-bit transistors sitting around. Even if not active, they aren't exactly contributing to the power efficiency of the processor.

    I think one big reason Intel thinks dual core will be the savior of even the Prescott line is that supposedly dual cores running at 3GHz only require power draw equivalent to a single core at 3.6GHz, and should be just as fast in some situations (multitasking, at least). Dual core at 85% clockspeed will be slower for gaming, though, so dual core Prescott still won't close the gap with AMD for gaming enthusiasts (98% of this site's readership), and may even represent a further drop in performance per watt. Here's hoping for Pentium-M on the desktop. :>
  • piroroadkill - Wednesday, February 9, 2005 - link

    #36 -- You really didn't read the article and get the point of it. It wasn't a failure from a sales point of view, and this article was not written from a sales point of view, but a technical point of view, and how the Prescott helped in furthering CPU technology.

    Thus, a failure.
  • ViRGE - Wednesday, February 9, 2005 - link

    Although I think I sank more than I swam, that was a very good and informative article Johan. I just have one request for a future article, since I'm guessing the next one is on multi-core tech: will someone at AT run the full AT benchmark suite against an SMP Xeon machine so that we can get a good idea ahead of time what dual-core performance will be like against single core? My understanding is that the Smithfields aren't going to be doing much else new besides putting 2 cores on one die (i.e. no cache sharing or other new tech), so SMP benchmarks should be fairly close to dual-core benchmarks.
  • Griswold - Wednesday, February 9, 2005 - link

    Case in point as to why the marketing department is the most important (and most powerful) part of any highly successful company. It's not the R&D labs who tell you what works and what comes next, it's the PR team.
  • quidpro - Wednesday, February 9, 2005 - link

    Someone needs to make a new Tron movie so I can understand this better.
  • tore - Wednesday, February 9, 2005 - link

    Great article. On page 3 you talk about a BJT transistor with a base, collector and emitter; since all modern CPUs use MOSFETs, shouldn't you talk about a MOSFET with a gate, source and drain?
  • Questar - Wednesday, February 9, 2005 - link

    "The Pentium 4 "Prescott" is, despite its innovative architecture, a failure."


    AMD wishes they had a "failure" that sold like Prescott.

