Final Words

The best way to evaluate the impact of dual core CPUs on the desktop is to look at the effect of moving to a multiprocessor setup today. The vast majority of desktop applications are still single threaded, and thus garner no real performance benefit from moving to dual core. The areas where we saw improvements thanks to Hyper Threading will see further gains from dual core on both AMD and Intel platforms, but in most cases, buying a single processor running at a higher clock speed will end up yielding higher overall performance.

For the most part, the dual core releases of 2005 seem intended to establish a foundation for future dual core CPUs that will provide functionality such as power and thermal balancing across multiple cores. Next year Intel will be releasing a number of new processors, including the new 2MB L2 Prescott parts as well as the dual core x-series, but despite all of the new product launches, clock speeds will only increase by 200MHz in the next 14 months. If anything, the release of larger cache and dual core desktop processors is a way to continue to promote "newer, faster, better" upgrades without necessarily improving performance all that much.

Today the slowest Prescott based Pentium 4s run at 2.8GHz and 3.0GHz - and a full year from now, the slowest Prescott based Pentium 4s will still run at 3GHz. This is the first time in recent history that the projected CPU roadmap has remained essentially flat. It will take continued maturity in 90nm manufacturing, a smooth transition to 65nm, and improvements in multi core designs to truly make the migration worthwhile.

The future of dual core doesn't lie in taking two identical cores and throwing them on the same die. The true potential lies in using multiple cores with different abilities to improve performance while keeping power consumption and thermal density at a minimum. The idea of putting two cores, one fast and one slow, in a single CPU has already been proposed numerous times as a method of keeping power consumption low while continuing to improve performance.

Right now dual core is more of a manufacturing hurdle than anything else. Putting that many logic transistors on a single die without reducing yield is a tough goal. Intel will have a slightly harder time with the migration to dual core since their chips simply put out more heat, but in theory, Intel has the superior manufacturing (although it's been very difficult to compare success at 90nm between AMD and Intel thanks to all of the variables that Prescott introduced). Needless to say, we'd be very surprised if both companies met the current ship dates for dual core desktop chips, simply based on how things have progressed in the past.

That being said, despite the end of 2005 being the time for dual core, the desktop world will be largely unchanged by its introduction. It will take application support more than anything to truly bring about performance improvements, but with an aggressive CPU ramp, developers may be more inclined to invest in making their applications multithreaded as more users have dual core systems. The more we look at roadmaps, the more it seems that while 2005 will be the year of anticipation for dual core, 2006 may be when dual core actually gets interesting. Until then, we view dual core on the desktop as a nice way of drawing attention away from the fact that clock speeds aren't rising. It's a necessary move in order to gain more traction and support for multithreaded desktop applications, but its immediate benefit to the end user will be limited. But then again, so has every other major architectural shift.

59 Comments

  • GhandiInstinct - Friday, October 22, 2004 - link

So why not test this technology and leave it in the labs instead of wasting consumers' time? Obviously it's a waste of money if we don't have any software utilizing it. So, oh wow! KUDOS to the company to release it first, but remember the Prescott? First 90nm.... crossing the finish line first doesn't mean you earn that place.
  • dak - Friday, October 22, 2004 - link

    Good points :) Glad we don't use single cpu's at work lol
  • Brian23 - Friday, October 22, 2004 - link

    #25 There are several reasons why games aren't written multithreaded:

    1. multithreaded apps have more overhead so they run slower on single CPU systems.
    2. most gaming systems are single CPU.
    3. the threads need to communicate with each other to get the frames drawn. Since the threads have critical sections, running them on a single CPU will make the critical sections queue up, causing major lag and a drop in framerate.

    Once multi CPU systems are the norm, I'm sure there will be games released for multi CPU systems.
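    Point 3 can be sketched with a minimal example (Python's threading module used purely for illustration; the worker and the lock-guarded counter are hypothetical stand-ins for game threads sharing frame state). The lock lets only one thread into the critical section at a time, so on a single CPU the threads end up taking turns rather than running in parallel:

    ```python
    import threading

    frame_lock = threading.Lock()  # guards the shared frame state (hypothetical)
    frames_drawn = 0

    def worker(iterations):
        """Simulates a game thread that must enter a critical section each frame."""
        global frames_drawn
        for _ in range(iterations):
            with frame_lock:       # only one thread at a time past this point
                frames_drawn += 1  # shared-state update: the critical section

    threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(frames_drawn)  # 2000: all work completed, but serialized at the lock
    ```

    On a single CPU, the time spent inside the lock is pure serialization; on a dual core system, only the critical section itself serializes while the rest of each thread's work can overlap.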
  • dak - Friday, October 22, 2004 - link

    Hmm, I never really saw the big deal about thread creation. Who really cares if it takes a freaking tenth of a second to spawn a thread, if you're only doing 20 or so threads at the startup of a game? I can't think of the last time I used a temporary thread. I usually spawn 'em at startup for a pre-defined role. Can't be the overhead of thread creation, they could split one off for texture loading in the background, and obviously the network clients. Personally I think it would be harder to NOT thread games, but I guess I'm too used to threading...
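    dak's point is easy to sanity-check: a rough sketch (Python's threading module, `noop` is a hypothetical placeholder for a pre-defined worker role, and timings are illustrative only) that spawns the ~20 startup threads he describes and measures the total cost:

    ```python
    import threading
    import time

    def noop():
        pass  # stand-in for a pre-defined role (texture loader, network client, ...)

    start = time.perf_counter()
    threads = [threading.Thread(target=noop) for _ in range(20)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start

    print(f"spawned and joined 20 threads in {elapsed * 1000:.2f} ms")
    ```

    Even with the relatively heavy thread creation of the era, 20 one-time spawns at startup are a rounding error next to loading assets, which is dak's point: creation cost is not why games stayed single threaded.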
  • stephenbrooks - Friday, October 22, 2004 - link

    Windows thread creation has a bigger overhead than Linux threading, but you can still shuffle them about quite a bit and still get benefits. I'd imagine if they could keep it at one fork per tick or frame, it'd be pretty good.

    No reason I can think of why video games aren't being designed for multi-processors. Apart from the fact someone should take their shiny FX-55s away and give them quad-2.0GHz things to work on instead - _then_ they'd take advantage of it.
  • dak - Friday, October 22, 2004 - link

    Strange, I'm kinda surprised that video games are single threaded. We write flight sims at work (*nix only), and we thread/fork all over the place. Flight sims are really just really big video games :)
    I would think with AI, physics engines, network clients for multiplayer, and oh yeah, that rendering loop thingy, that they'd be all over threading. I don't know about winders programming really, is the scheduler too borked for that? I can't imagine it would be, and I'm not one to give anything to microsoft....
  • stephenbrooks - Friday, October 22, 2004 - link

    #19, I assume you mean XP Home. I'm running XP Pro on dual hyperthreaded Xeons and get 4 showing up in task manager.
  • stephenbrooks - Friday, October 22, 2004 - link

    I was looking around some presentations on Intel's site - it seems that we're in a dead zone before some fundamental changes are made to their transistors in the 2007-08 time frame (metal gate, tri-gate and high-K something-or-other), which might give real performance and clock speed improvements again (mention is made of reducing leakage 100x, for example). All the weird stuff happens in the 45nm and 32nm processes, with the 65nm one being another "boring" one like 90nm, hence the focus on dual-core for the next few years, I guess.
  • HardwareD00d - Friday, October 22, 2004 - link

    Overclocking a dual core would be a waste because until software developers start to write games in a way that uses multiple cores, you're just going to have one OC'd core sitting there looking dumb (and probably putting out a shedload of heat).
  • HardwareD00d - Friday, October 22, 2004 - link

    er I mean #15 sorry