Die shrinks are big deals in the PC industry; transitioning to smaller manufacturing processes means faster switching times and greater transistor density, usually resulting in cooler, faster and more feature-filled CPUs and GPUs.

Intel just recently began its transition from 65nm to 45nm transistors with the release of its Penryn-based Core 2 CPUs. The benefits of a smaller manufacturing process are clearly visible in Penryn's case: despite having 50% more cache than its predecessor and new features (e.g. SSE4), each Penryn die measures 107 mm^2, compared to 143 mm^2 for the 65nm Conroe. Transistor density also went up tremendously, as Penryn crams 410 million transistors into less space than Conroe's 291 million.

We just saw an even more dramatic showcase of the improvements smaller transistors can bring to GPUs with AMD's new Radeon HD 3800 graphics cards. The RV670 GPU is built on TSMC's 55nm process and is architecturally very similar to the 80nm R600 used in the Radeon HD 2900 XT. Die size, transistor density and power consumption have all improved tremendously thanks to the new process. The table below should give you some hard numbers to look at:

| Microprocessor | Manufacturing Process | Die Size | Transistor Count | Transistor Density |
|---|---|---|---|---|
| Intel Core 2 Duo (Conroe) | 65nm | 143 mm^2 | 291M | ~2.03M per mm^2 |
| Intel Core 2 Duo (Penryn) | 45nm | 107 mm^2 | 410M | ~3.83M per mm^2 |
| AMD Radeon HD 2900 XT (R600) | 80nm | 408 mm^2 | 700M | ~1.71M per mm^2 |
| AMD Radeon HD 3870 (RV670) | 55nm | 192 mm^2 | 666M | ~3.46M per mm^2 |
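The density column is just transistor count divided by die area; a quick back-of-the-envelope sketch using the figures from the table (small rounding differences aside):

```python
# Transistor density = transistor count / die area,
# using the die sizes and transistor counts from the table above.
chips = {
    "Conroe (65nm)": (291e6, 143.0),  # (transistors, die area in mm^2)
    "Penryn (45nm)": (410e6, 107.0),
    "R600 (80nm)":   (700e6, 408.0),
    "RV670 (55nm)":  (666e6, 192.0),
}

for name, (transistors, area_mm2) in chips.items():
    # Express density in millions of transistors per mm^2
    density = transistors / area_mm2 / 1e6
    print(f"{name}: ~{density:.2f}M per mm^2")
```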


In both examples, the move to a smaller transistor feature size results in a tremendous increase in transistor density, on the order of 90 - 100%. On the PC side, these increases are nothing new; Moore's Law has been hard at work for decades now, and we keep reaping the benefits in the form of better, faster, cheaper products. With game consoles, however, the story is a little different.

Game console hardware must remain largely unchanged throughout the life cycle of the system, which these days is somewhere in the 4 - 5 year range. The whole point of a closed game console is that developers have a single hardware spec to target; introducing faster CPUs and GPUs in the middle of the life cycle just wouldn't fly. Since adding features and performance isn't an option, the only real benefits of process shrinks for game console chips are cost, heat and noise reduction, all of which are still important.

Microsoft just recently dropped the price of its Xbox 360, and around the same time rumors cropped up about a quiet introduction of 65nm CPUs into the bill of materials. The original Xbox 360s manufactured from 2005 up until August of this year all used 90nm chips; the CPU, GPU and eDRAM were all fabbed on a 90nm process, which was state of the art at the time. However, as you've undoubtedly noticed with Intel's recent move to 45nm, 90nm is quite dated now.

A move to 65nm would undoubtedly reduce power consumption, potentially make the console quieter and obviously make it cheaper to produce. With the Xbox 360 there's also another side effect that many surmised would result from a move to 65nm: increased reliability.
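The potential die-size savings from such a shrink can be sketched with simple geometry: feature size shrinks in two dimensions, so ideal die area falls with the square of the feature-size ratio (real chips never scale perfectly, so treat these as best-case figures):

```python
def ideal_area_ratio(old_nm: float, new_nm: float) -> float:
    """Ideal die area after a process shrink, as a fraction of the old area.

    Feature size shrinks linearly in both dimensions, so area scales
    with the square of the ratio.
    """
    return (new_nm / old_nm) ** 2

# Full-node shrink rumored for the Xbox 360's CPU
print(f"90nm -> 65nm: {ideal_area_ratio(90, 65):.0%} of original die area")
# Half-node shrink, a plausible option for the GPU
print(f"90nm -> 80nm: {ideal_area_ratio(90, 80):.0%} of original die area")
```

In the ideal case a 90nm-to-65nm shrink yields a die roughly half the size, while the 80nm half-node yields about four fifths; actual shrinks fall short of these numbers because not everything on the die scales equally well.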

The Red Ring of Death

  • ChristopherO - Sunday, November 18, 2007 - link

    It looked like you were familiar, but I partly mentioned that because other people reading might not have been. You never know when you can win over a convert.

    That's funny you have a Seasonic... We're probably running the same PSU, I have the S12 600, which I got for a steal when Silicon Acoustics went out of business. 2.5" drives are a great way to go, but I'm not willing to give up the performance of a Raptor for one. My Raptor 150 in suspension is near silent. I'm on Vista 64, with 4GB memory, so that helps the seek situation... Vista caches everything it can at boot time, so once you're through the initial power-on phase, everything is pretty sedate.

    Sure you can cap the DVD reads, but I use a dense foam padding in lieu of the Sonata rails in my chassis and the net effect is that the drive is vastly quieter. It is the noisiest component of the system, but that doesn't bug me because hardly anything uses discs these days. Typically you install, and then you're done. Or you insert a game and run the "piracy check" and then the drive spins down. Inserting a movie, etc, doesn't spin the drive up so that's not even audible.
    Reply
  • AssBall - Saturday, November 17, 2007 - link

    I took my DVD ROM out of my case and, quite frankly, it is significantly louder in my hand than it was when being muffled by the case (no rubber washers). Reply
  • saiku - Friday, November 16, 2007 - link

    is there something that would wrap around the hd-dvd drive? would still have to leave the vent holes open but perhaps some material out there that damps noise? Reply
  • ChristopherO - Friday, November 16, 2007 - link

    I haven't seen the inside of the HD DVD add-on. Generally speaking you'd want to remove the drive and dampen that (perhaps building a custom chassis). Otherwise the HD DVD add-on will have metal to metal contact and thus be generating noise that you can't isolate. Wrapping the whole thing in foam isn't very practical. Reply
  • swaaye - Friday, November 16, 2007 - link

    It's the damn DVDROM. They have that thing cranking at full RPMs almost all of the time. Hell, I've heard the disk come out of its grip once and spin out inside. It's ridiculous, IMO. Reply
  • provoko - Friday, November 16, 2007 - link

    Thanks for opening up a 360 for us and testing it. =) I enjoyed the wattage charts, the same ones you use for CPUs. Reply
  • semo - Friday, November 16, 2007 - link

    what is a half-node?

    and
    quote:

    If we assume that the Xbox 360's GPU is at least as powerful as the PS3's, the cooling requirements should be somewhere similar; given that the PS3 basically had a GeForce 7800 GTX under its hood
    it still has, hasn't it? when will the PS3 get a GPU shrink, btw?
    Reply
  • ChristopherO - Friday, November 16, 2007 - link

    A half-node is basically a die shrink that doesn't require reworking the component.

    For instance, you can't take a 90nm chip and convert it to 65nm without redesigning it. Sure, the end product would be substantially the same, but you will need to rework a considerable portion to fit the new size. 65nm to 45nm is the same: you need to redesign your chip.

    The 80nm shrink lets you use the same design as 90nm, but smaller.

    More than likely the GPU is an 80nm design.

    For example, shrinking from 90 to 80 gives you 88.8% size (which is close to Anand's measured 85%). Shrinking from 90 to 65 is 72.2% the size, however this will be less exact than the half-node since the chip itself is going to be somewhat different (75% is a good enough estimate). Both these sizes will result in a decent heat and power savings.

    No one can say if this will kill off the RROD, but it stands a pretty good chance.

    I'm willing to guess that the failure rates will drop to something normal (a single-digit percentage). No guarantee on that, but they trimmed off 70 watts of power usage, that's a pretty remarkable savings which will go a long way towards improving the overall heat situation. Not to mention that their GPU heat sinks are substantially beefier, so you're dissipating less heat over a larger area. Even if the GPU was the same, it will have less contention with the CPU for air cooling.

    Too bad they didn't mount the DVD ROM on rubber grommets. It isn't bad when properly isolated, but the metal case is acting like an amplifier.
    Reply
  • psychobriggsy - Sunday, November 18, 2007 - link

    Um, 90nm to 80nm results in a theoretical die area of 79% of the 90nm one - remember there are two dimensions to the shrink.

    90nm to 65nm results in a die area of 52% of the 90nm one, ideally.

    Scaling never achieves that due to factors such as the spacing between transistors not scaling as well, or the shrink not scaling well in one axis compared to the other, or a million other reasons. Hence I believe that the GPU is 80nm, and the CPU is 65nm.

    The shrink indeed should reduce or halt the RROD situation if it was caused by excessive heat leading to warping that broke contacts.
    Reply
  • ChristopherO - Monday, November 19, 2007 - link

    Whoops, you were right. I typed fast and forgot to multiply the reduction by 2.

    In theory the 80nm shrink should scale more linearly since it is fundamentally similar to the 90nm part. The 65nm part has the potential to be significantly different. I guess the numbers bear that out, the GPU is much closer to predicted than the CPU.

    It makes me wonder what else they might have changed. In theory they could have made alterations to the chip that increased media performance, etc, without impacting general computations and expected behavior within a game.

    It's a weird quandary since they have a baseline level of performance which they aren't trying to diverge from.
    Reply
