This year's E3 has been, overall, a pretty big letdown. The show itself hasn't been very interesting, simply because it's mostly current-gen titles and hardware. For an E3 held just before the launch of Microsoft's Xbox 360, we were a bit disappointed not to see any working hardware at the show outside of the ATI booth.

With a relatively light schedule thanks to the small size of the show, we were able to spend quite a bit of time digging deeper on the two highlights of this year's E3 - ATI's Xbox 360 GPU, and NVIDIA's RSX, the GPU powering the PlayStation 3.

Given that both of the aforementioned GPU designs are very closely tied to their console manufacturers, information flow was dictated by the console makers, not the GPU makers. And unfortunately, neither Microsoft nor Sony was interested in giving away more information than what appeared in their ridiculously light press releases.

Never being satisfied with the norm, we've done some digging and this article is what we've managed to put together. Before we get started, we should mention a few things:

1) Despite our best efforts, information will still be light because of the strict NDAs imposed by Microsoft and Sony on the GPU makers.

2) Information on NVIDIA's RSX will be even lighter because it is the more PC-like of the two solutions and as such, a lot of its technology overlaps with the upcoming G70 GPU, an item we currently can't talk about in great detail.

With those items out of the way, let's get started, first with what has already been announced.

The Xbox 360 GPU, manufactured by ATI, is the least PC-like of the two GPUs for a number of reasons, the most obvious being its 10MB of embedded DRAM. Microsoft announced that the 10MB of embedded DRAM has 256GB/s of bandwidth available to it; keep this figure in mind, as its meaning isn't as clear-cut as it may sound.

The GPU operates at 500MHz and has a 256-bit memory interface to 512MB of 700MHz GDDR3 system memory (which is also shared with the CPU).
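To put those memory figures in perspective, peak theoretical bandwidth follows directly from bus width and clock speed. The sketch below is a back-of-the-envelope estimate only; the double-data-rate multiplier for GDDR3 is our assumption, and actual sustained bandwidth will be lower.

```python
# Rough peak-bandwidth estimate from bus width and memory clock.
# Assumes double-data-rate signaling (2 transfers per clock), which is
# how GDDR3 operates; 1 GB/s here means 10**9 bytes per second.

def bandwidth_gb_s(bus_width_bits, clock_mhz, transfers_per_clock=2):
    """Peak theoretical memory bandwidth in GB/s."""
    bits_per_second = bus_width_bits * clock_mhz * 1e6 * transfers_per_clock
    return bits_per_second / 8 / 1e9

# The quoted Xbox 360 figures: 256-bit interface, 700MHz GDDR3
print(bandwidth_gb_s(256, 700))  # 44.8
```

Note how far even this peak figure sits below the 256GB/s quoted for the embedded DRAM, which is why that number deserves closer scrutiny.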

Another very prominent feature of the GPU is that it implements ATI's first Unified Shader Architecture, meaning that there are no longer any discrete pixel and vertex shader units; instead, they are combined into a set of universal execution units that can operate on either pixel shader or vertex shader instructions. ATI is characterizing the width of the Xbox 360 GPU as 48 shader pipelines; we should caution you that these 48 pipelines aren't directly comparable to current 16-pipeline GPUs, but rest assured that the 360 GPU should be able to shade and texture more pixels per clock than ATI's fastest present-day GPU.
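The appeal of a unified architecture is load balancing: a vertex-heavy scene no longer leaves pixel hardware idle, and vice versa. The toy model below illustrates the idea only; the unit count, queue, and scheduling are invented for illustration and say nothing about how ATI's actual hardware schedules work.

```python
# Toy illustration of unified shading: one pool of generic execution
# units drains a mixed queue of vertex and pixel work, rather than
# dedicating fixed hardware to each type. Purely conceptual; not a
# model of ATI's real scheduler.
from collections import deque

class UnifiedShaderPool:
    def __init__(self, num_units=48):
        self.num_units = num_units
        self.queue = deque()

    def submit(self, kind, count):
        """Queue up `count` work items of type 'vertex' or 'pixel'."""
        self.queue.extend([kind] * count)

    def run_cycle(self):
        """Each cycle, every unit processes one item of whatever work is queued."""
        done = {"vertex": 0, "pixel": 0}
        for _ in range(min(self.num_units, len(self.queue))):
            done[self.queue.popleft()] += 1
        return done

pool = UnifiedShaderPool()
pool.submit("vertex", 10)   # a vertex-light moment...
pool.submit("pixel", 100)   # ...with lots of pixel work pending
print(pool.run_cycle())     # all 48 units stay busy: {'vertex': 10, 'pixel': 38}
```

With discrete units, the same frame would leave most of a fixed vertex array idle while the pixel side became the bottleneck; here every unit does useful work each cycle.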

Now let's move on to NVIDIA's RSX; the RSX is very similar to a PC GPU in that it features a 256-bit connection to 256MB of local GDDR3 memory (operating at 700MHz). Much like NVIDIA's TurboCache products, the RSX can also render to any location in system memory, giving it access to the full 256MB of system memory on the PS3 as well.

The RSX is connected to the PlayStation 3's Cell CPU by a 35GB/s FlexIO interface and it also supports FP32 throughout the pipeline.
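FP32 throughout the pipeline means intermediate values keep roughly seven decimal digits of precision, where earlier pipelines sometimes fell back to FP16 and its much coarser resolution. A quick round-trip through Python's `struct` module shows the difference; this is our own illustration of the precision gap, not anything from NVIDIA's materials.

```python
# Compare single-precision (FP32) and half-precision (FP16) round-trips.
# struct format 'f' packs a 32-bit float, 'e' packs a 16-bit half.
import struct

def roundtrip(value, fmt):
    """Pack a value into the given float format and unpack it again."""
    return struct.unpack(fmt, struct.pack(fmt, value))[0]

x = 1.000001
print(roundtrip(x, "f"))  # FP32 preserves the small increment
print(roundtrip(x, "e"))  # FP16 rounds it away to 1.0
```

Small errors like this compound over many shader operations, which is why full FP32 precision through the pipeline matters for image quality.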

The RSX will be built on a 90nm process and features over 300 million transistors running at 550MHz.

Beyond that, there's barely any information on either GPU in Microsoft's and Sony's press releases, so let's see if we can fill in some of the blanks.

Comments

  • Shinei - Thursday, May 19, 2005 - link

    Jarred, I thought ATI made the XBox 2 GPU specifically for the console, and wasn't incorporating any of its features into the R520? I'm not sure I agree that spending most of your R&D on a "dead-end" GPU is the best tactic; nVidia's approach of optimizing an existing desktop GPU architecture seems to be the more efficient way to spend R&D capital.
    It also allows nVidia to take any lessons learned from the PS3 GPU and add/modify them when they finally release the G70 (hopefully with fully functional PureVideo, not just "sort of functional" PureVideo--I'm paying for the transistor real estate in price and heat, I better be able to use it this time!)...
  • JarredWalton - Thursday, May 19, 2005 - link

    Low Roller - I wouldn't put too much stock in that figure for the X360 GPU. The way the chip is designed (split in two pieces), I wouldn't be surprised to find that one piece is 150 million and the other is maybe 150 to 200 million.

    My DRAM knowledge is a bit fuzzy, and the "Embedded DRAM" is something we don't have specifics on, but 10MB of RAM represents 83,886,080 bits, and best case scenario you're using 1 transistor per bit. SRAM uses 6, and perhaps DRAM is 2? 2 transistors per bit would already put just the embedded RAM at 167,772,160 transistors. Heh. 150 million is WAY too small, no matter what IGN says.

    As a separate thought, I wouldn't be surprised to see the Xbox 360 GPU end up the more powerful of the two graphics chips. The reason is based on inference: R4xx is very similar to R3xx, meaning ATI didn't spend as much resources creating R4xx as NVIDIA spent on NV4x. If their R&D teams are about equal in size, where did ATI's extra efforts end up being spent? That's right: the Xbox 360 GPU. This is simply a deductive guess, and it could be wrong, but it's something to consider. NVIDIA spent a lot of effort recovering from the NV3x (FX) fiasco.

    What makes this all really entertaining to me is the following:

    If anything, it seems like the PS3 GPU is more of a PC design with less "future technology". In other words, everything said by MS and Sony is complete hype and should be taken with a massive helping of salt. :)
  • Iftekharalam - Thursday, May 19, 2005 - link

    Which graphics processor will be more powerful?
    The XBOX 360 or the PS3? Nintendo's future gaming console also uses ATI's GPU, codenamed "Hollywood".
  • Low Roller - Thursday, May 19, 2005 - link

    AnandTech's article says they were not able to get a transistor count out of ATI for the Xbox 360.

    According to IGN, the Xbox 360's GPU only has 150 million transistors, compared to the G70's 300 million.
  • araczynski - Thursday, May 19, 2005 - link

    nice info.

    too bad i could care less which gpu is used in which console, i'm more interested in which console will have some original quality games...
  • R3MF - Thursday, May 19, 2005 - link

    sounds good, shame about the non unified shader model tho.

    maybe nvidia are right, but i like advanced tech. :p
  • Shinei - Thursday, May 19, 2005 - link

    nVidia's apparently pulling out the 16" battle cannons with RSX/G70--136 shader ops per clock is damn impressive, regardless of whether the GPU is console or desktop...
    And if nVidia says the desktop G70's going to be even more powerful than RSX, I'm willing to bet that there will be at least 10 shader units pumping to 24+ pipes in full FP32 quality. Nice. :)
  • AnandThenMan - Thursday, May 19, 2005 - link

    Very very interesting article. ATi and NVIDIA seem to have diverging paths. All things considered, this tells me that ATi has the more advanced GPU. Does that mean a faster GPU though.
  • EODetroit - Thursday, May 19, 2005 - link

    How about anything about the new physics processor?
  • Garyclaus16 - Thursday, May 19, 2005 - link

    ...sooo much for my bfg6800U...seems like I'm already way behind again.
