Inside the Xenos GPU

As previously mentioned, the 48 shaders will be able to run either vertex or pixel shader programs in any given clock cycle. To clarify, each block of 16 shader units is able to run one shader program thread at a time. These shader units offer functionality slightly beyond DX9.0c, but in order to take advantage of that extra capability, ATI and Microsoft will have to customize the API.
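As a rough illustration of that arrangement, the sketch below (conceptual only; the load-balancing rule is our own invention, not ATI's scheduler) shows three 16-wide arrays each being handed whichever type of thread, vertex or pixel, has the most work waiting:

    # Minimal sketch (not ATI's scheduler): three arrays of 16 ALUs, each picking
    # whichever work -- vertex or pixel -- is deeper in the queue, cycle by cycle.
    from collections import deque

    ARRAYS = 3  # Xenos groups its 48 ALUs into three blocks of 16
    vertex_work = deque(f"v{i}" for i in range(4))
    pixel_work = deque(f"p{i}" for i in range(8))

    cycle = 0
    while vertex_work or pixel_work:
        for a in range(ARRAYS):
            # A unified array runs a vertex thread or a pixel thread; it is not
            # tied to one type the way discrete vertex/pixel pipes are.
            if pixel_work and (not vertex_work or len(pixel_work) > len(vertex_work)):
                thread = pixel_work.popleft()
            elif vertex_work:
                thread = vertex_work.popleft()
            else:
                break
            print(f"cycle {cycle}: array {a} runs {thread}")
        cycle += 1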

In order to get data into the shader units, textures are read from main memory; the system's eDRAM is unable to assist with texturing. There are 16 bilinear filtered texture samplers, able to read up to 16 textures per clock cycle. The scheduler will need to take great care to organize threads so that optimal use of the texture units is made. Another consideration is anisotropic filtering. In order to filter beyond the bilinear level, a texture will need to be run through the texture unit more than once (until the filtering is finished). If no filtering is required (i.e. if a shader program is simply reading stored data), the vertex fetch units can be used instead (by either a vertex or a pixel shader program).
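To put rough numbers on the cost of better filtering, the sketch below uses textbook minimum tap counts per filter mode (it says nothing about how Xenos actually sequences the passes) to show how effective texture throughput falls off:

    # Rough sketch: bilinear taps needed per filtered texel, using textbook
    # minimums rather than anything Xenos-specific.
    def bilinear_passes(filter_mode: str, aniso_ratio: int = 1) -> int:
        if filter_mode == "bilinear":
            return 1
        if filter_mode == "trilinear":  # blends two mip levels
            return 2
        if filter_mode == "aniso":      # roughly one trilinear probe per ratio step
            return 2 * aniso_ratio
        raise ValueError(filter_mode)

    # 16 samplers return 16 bilinear results per clock, so effective
    # textures-per-clock drops as filtering quality rises.
    for mode, ratio in [("bilinear", 1), ("trilinear", 1), ("aniso", 4), ("aniso", 8)]:
        passes = bilinear_passes(mode, ratio)
        print(f"{mode:9s} {ratio}:1 -> {passes:2d} passes, ~{16 / passes:.1f} textures/clock")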

In the PC space, we are seeing shifts to more and more complex pixel shaders. Larger and larger textures are being used in order to supply data, and some predict that texture bandwidth will eclipse color and z bandwidth in the not-so-distant future. We will have to see if the console and desktop spaces continue to diverge in this area.

One of the key aspects of performance for the Xbox 360 will be how well ATI manages threads on their GPU. With the shift to a unified shader architecture, it is even more imperative to make sure that everything is running at maximum efficiency. We don't have many details on the hardware's ability to context switch between vertex and pixel shader programs, but suffice it to say that ATI cannot afford any difficulties in managing threads at any level. As making good use of current pixel shader technology already requires swapping out threads on shaders, we expect ATI to do fairly well in this department. Thread management is likely one of the most difficult things ATI had to work out to make this hardware feasible.
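The payoff of good thread management is latency hiding. The toy simulation below (hypothetical timings and thread mix; nothing here reflects Xenos internals) shows a shader array staying busy by parking threads that are waiting on texture fetches and running whatever else is ready:

    # Toy latency-hiding sketch with made-up numbers: a thread that issues a
    # texture fetch is parked, and another ready thread runs in its place.
    from collections import deque

    TEX_LATENCY = 4  # hypothetical fetch latency, in cycles

    # Each thread is a queue of ops: "alu" takes one cycle, "tex" parks the thread.
    threads = {
        "pixel-A": deque(["alu", "tex", "alu"]),
        "pixel-B": deque(["alu", "tex", "alu"]),
        "vertex-C": deque(["alu", "alu", "alu"]),
    }
    ready = deque(threads)  # runnable threads
    parked = {}             # thread name -> cycle at which its fetch completes

    cycle = 0
    while ready or parked:
        for name in [n for n, t in parked.items() if t <= cycle]:
            del parked[name]
            ready.append(name)  # fetch returned: runnable again
        if ready:
            name = ready.popleft()
            op = threads[name].popleft()
            print(f"cycle {cycle}: {name} runs {op}")
            if op == "tex" and threads[name]:
                parked[name] = cycle + TEX_LATENCY  # stall: swap in another thread
            elif threads[name]:
                ready.append(name)
        else:
            print(f"cycle {cycle}: array idle (everything waiting on texture data)")
        cycle += 1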

Those who paid close attention to the amount of eDRAM (10MB) will note that this is not enough memory to store the entire framebuffer with 4xAA enabled for displays larger than standard television. Apparently, ATI will store the front buffer in the UMA area, while the back buffer resides in the eDRAM. In order to manage large displays, the hardware will need to render the back buffer in parts. This indicates that ATI has implemented some sort of very coarse-grained tiling system (with 2 to 4 tiles). Tile-based renderers usually have many more tiles than this, but this is a special case.
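The arithmetic behind the 10MB figure is worth spelling out. Assuming 32-bit color plus 32-bit Z/stencil per sample (the usual case), a 4xAA back buffer breaks down as follows, which lines up with the two-to-four-tile estimate for HD output:

    # Back-of-the-envelope framebuffer sizes, assuming 4 bytes of color and
    # 4 bytes of Z/stencil per AA sample.
    EDRAM_MB = 10

    def backbuffer_mb(width, height, aa_samples):
        bytes_per_sample = 4 + 4  # color + depth/stencil
        return width * height * aa_samples * bytes_per_sample / (1024 ** 2)

    for name, w, h in [("480p (SDTV)", 640, 480), ("720p", 1280, 720)]:
        size = backbuffer_mb(w, h, aa_samples=4)
        tiles = int(-(-size // EDRAM_MB))  # ceiling division
        print(f"{name:12s} 4xAA: {size:5.1f} MB -> {tiles} eDRAM tile(s)")
    # 480p fits in a single 10MB tile; 720p needs roughly 28MB, or three tiles.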

Performance of this hardware is a very difficult aspect to assess without testing the system. The potential is there for some nice gains over the current high end desktop part, but it is very difficult to know how effectively software engineers will be able to use the hardware until they fully understand it and have programmed for it for a while. Certainly, the learning curve won't be as steep as it was for something like the PlayStation 2 (DirectX is still the API), but knowing what works and what doesn't will take some time.

ATI's Modeling Engine

ATI is also touting the adaptability of their hardware. The Modeling Engine is really a name for a usage model built on their unified shaders. As each shader unit is more general purpose than current vertex and pixel shaders, ATI has built the hardware to allow general floating point math to be executed easily.

ATI's Modeling Engine concept is made practical through their vertex cache implementation. Data for general purpose floating point computations moves into the vertex cache in high volumes for processing. The implication here is that the vertex cache has enough storage space and bandwidth to keep all 48 shader units fed, without starvation, over extended periods of use. If the vertex cache were used solely for vertex data, it could be much less forgiving and still offer the same performance (considering common vertex processing loads in current and near term games). As we stated previously, pixel processing (for now) is going to be more resource intensive than vertex processing. The ability to fill the shader units with arbitrary data from the vertex cache (as opposed to the output of vertex shaders), combined with the hardware's ability to dump shader output directly to main memory, is what makes ATI's Modeling Engine possible.
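Conceptually, the usage model is stream processing. The sketch below is our own illustration, not ATI's API (the 48-element batch size simply mirrors the ALU count): arrays of floats are streamed in through the vertex cache path, a kernel runs per element across the shader array, and the results are written back to main memory instead of going to the rasterizer.

    # Conceptual sketch of the Modeling Engine usage model; not ATI's API.
    ALUS = 48

    def modeling_engine_pass(data, kernel):
        """Apply a floating point kernel across a data set, batch by batch."""
        out = []
        for i in range(0, len(data), ALUS):
            batch = data[i:i + ALUS]              # filled from the vertex cache
            out.extend(kernel(x) for x in batch)  # one element per shader ALU
        return out                                # dumped to main memory

    # Example: an attenuation-style calculation a lighting or physics pass
    # might run per element.
    samples = [0.1 * i for i in range(200)]
    result = modeling_engine_pass(samples, lambda x: 1.0 / (1.0 + x * x))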

But just pasting a name on general purpose floating point math execution doesn't make it useful. Programmers will have to take advantage of it, and ATI has offered a few ideas on different applications for which the Modeling Engine is suited. Global illumination is an intriguing suggestion, as is tone mapping. ATI also indicates that higher order surfaces could be operated on before tessellation, giving programmers the ability to more fluidly manipulate complex objects. It has even been suggested that physics processing could be done on this part. Of course, we can expect that Xbox 360 programmers will not implement physics engines on the Modeling Engine, but it could be interesting in future parts from ATI.
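Tone mapping is a good illustration of the fit: it is pure per-element floating point math with no rasterization involved. The snippet below uses the basic Reinhard operator purely as an example of the kind of kernel involved; it is not Xbox 360 code. On the Modeling Engine, this is the sort of function that would be handed to a pass like the one sketched above.

    # Basic Reinhard tone mapping: compresses HDR luminance into [0, 1).
    def reinhard(luminance: float) -> float:
        return luminance / (1.0 + luminance)

    hdr_pixels = [0.05, 0.8, 2.5, 12.0]
    ldr_pixels = [reinhard(l) for l in hdr_pixels]
    print(ldr_pixels)  # every value now fits in the displayable range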

Comments

  • BenSkywalker - Sunday, June 26, 2005 - link

    ""One thing is for sure, support for two 1080p outputs in spanning mode (3840 x 1080) on the PS3 is highly unrealistic. At that resolution, the RSX would be required to render over 4 megapixels per frame, without a seriously computation bound game it’s just not going to happen at 60 fps." -- Quote from page 10"

    First off, 1080p doesn't support 60FPS as of this moment anyway, and there are an awful lot of games on consoles that aren't remotely close to being GPU bound anyway. Remember that the XBox has titles now that are pushing out 1080i, and the RSX is easily far more than four times the speed of the GPU in the XBox.
  • tipoo - Wednesday, August 6, 2014 - link

    "RSX is easily far more then four times the speed of the GPU in the XBox."

    It's funny reading these comments years later, and seeing how crazy the PS3 hype machine was. I assume this insane comment referred to the 1 teraflop RSX thing, which was a massive joke. RSX was worse than Xenon not only in raw gflops (180 vs over 200 I think), but since it didn't have unified shaders it could be bottlenecked by a scene having too many vertex or pixel effects and leaving shaders underused.
  • calimero - Sunday, June 26, 2005 - link

    Here is one tip about Cell:
    to play MP3 files (stereo) on a PC you need a 100MHz 486 CPU. An Atari Falcon030 with an MC68030 (16MHz) and a DSP (32MHz) can do the same thing!
    Everyone who knows how to program will find Cell outstanding and thrilling; everyone else who pretends to be a programmer, please continue to waste CPU cycles with your shitty code!
  • coolme - Sunday, June 26, 2005 - link

    "Supporting 1080p x2 may seem like overkill,"

    It's not gonna support 1080p x2

    "One thing is for sure, support for two 1080p outputs in spanning mode (3840 x 1080) on the PS3 is highly unrealistic. At that resolution, the RSX would be required to render over 4 megapixels per frame, without a seriously computation bound game it’s just not going to happen at 60 fps." -- Quote from page 10
  • nevermind4711 - Sunday, June 26, 2005 - link

    People have different ways of expressing the frequency of DDR RAM. The 7800GTX's memory is 256MB of 256-bit GDDR3 at 600MHz, but as it is double data rate some people say 1200 MHz.

    In the same way you can say the RSX memory is operating at 1400 MHz. How else could 128 bit result in a memory bandwidth of 22 GB/s for the RSX?

    #64 knitecrow, who is your source that the RSX does not contain e-dram, or is it just speculation?

    Besides, your conclusion from extrapolating the transistor count may be correct, but assuming the transistor count is proportional to the number of pixel pipelines is a rather big simplification, there is quite a lot of other stuff inside a GPU as well, stuff that does not scale proportionally to the pixel pipelines.
  • Furen - Sunday, June 26, 2005 - link

    The RSX is supposed to be clocked higher but will only have a 700MHz, 128bit memory bus (as opposed to the 1200MHz, 256bit memory bus on the 7800gtx).
  • knitecrow - Saturday, June 25, 2005 - link

    #61
    too bad you don't speak marketing.
    When they say near.. it means very close. Could be slightly under or over. If it was something like 320M... they will be hyp3ing 320M.


    #62 too bad you are wrong

    with 300M transistors, the RSX is a native 24 pixel pipeline card

    You can extrapolate the number by looking at:
    6800ultra - 16 - 222M
    6600GT - 8 - 144M

    it has no eDRAM.

    The features remain to be seen, but it's going to be a G70 derivative -- just like the XGPU for the xbox was a geforce3 derivative.

    There is absolutely no evidence to suggest that the RSX is going to be more powerful than 7800GTX.

    Just because a product comes out later doesn't make it better

    Exhibit A:
    Radeon 9700pro vs. 5800ultra

  • Darkon - Saturday, June 25, 2005 - link

    http://www.psinext.com/index.php?categoryid=3&...
  • Dukemaster - Saturday, June 25, 2005 - link

    I think it is very clear why the RSX gpu has the same number of transistors but still is more powerful than the 7800GTX: the 7800GTX is a chip with 32 pipelines with 8 of them turned off.
  • nevermind4711 - Saturday, June 25, 2005 - link

    Interesting article. However, I find it strange that Anand and Derek do not comment on the difference in floating point capacity between the combatants. 1 TFlops for X360 vs. 2 TFlops for PS3. For X360 we know that the majority of flops come from the GPU, where probably the big part consists of massively parallel compare ops and such coming from the AA- and filtering circuitry integrated with the e-DRAM.
    It would be very interesting to know how the RSX provides 1.8 TFlops. I do not think the G70 has a capacity anything near that. Could it be possible that Sony will bring some e-DRAM to the party together with AA and filtering circuitry similar to X360. After all Sony has quite some experience of e-DRAM from PS2 and PSP.
    Anand and Derek wrote "Both the G70 and the RSX share the same estimated transistor count, of approximately 300.4 million transistors." Where does this information come from? Sony only said in its presentation the RSX will have 300+ mil t:s. G70 we now know contains 302 mil t:s.
    #48: Sony may very well have replaced some video en/de-coding circuitry of the G70 with some e-dram circuitry.
