ATI FireGL X3-256 Technology

The FireGL X3-256 is based on ATI's R420 architecture. While this isn't a surprise, it is interesting that the highest end AGP offering ATI has on the table is based on the X800 Pro. On the PCI Express side, ATI offers a higher performance part, but for now, the FireGL line on AGP is a little more limited than on PCI Express. When we tackle the PCI Express workstation market, we'll bring out a clearer picture of how ATI's highest end workstation component stacks up against the rest of the competition. As the ATI part isn't positioned as an ultra high end workstation solution, we'll be focusing more on price/performance. Unfortunately for ATI, the street price of the 3Dlabs Wildcat Realizm 200 comes in at just about the same as that of the FireGL X3-256, and it is targeted at a higher performance point. But we'll have to see how that pans out once we've taken a look at the numbers. For now, let's pop open the hood on the ATI FireGL X3-256.

We will start out with the vertex pipeline as we did with the NVIDIA part. The overall flow of data is very similar to the Quadro, except, of course, that the ATI part runs with 12 pixel pipelines rather than 16. The internals are the differentiating factor.



We can see that the ATI vertex engine supports the parallel operation of a 4-component 32-bit vector unit and a 32-bit scalar unit. This allows the same type of operation that the NVIDIA GPU supports, but the FireGL lacks VS 3.0 capabilities and support for vertex textures. Interestingly, in the documents that list the features of the FireGL X3, we see that "Full DX9 vertex shader support with 4 vertex units" is mentioned in addition to its "6 geometry engines". This indicates that 2 of the geometry engines don't handle full DX9 functionality. This isn't of as much importance in a workstation part, as the fixed function path will be stressed more often, but it's worth noting that this core is based on the desktop part and we didn't pick up this information from any of our desktop briefings or data sheets.
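The parallel vector + scalar issue described above can be modeled in a few lines of Python. This is a conceptual sketch only; the specific operations chosen (a componentwise multiply and a reciprocal) are illustrative stand-ins, not a claim about ATI's actual instruction pairing:

```python
def vertex_unit_clock(vec_a, vec_b, scalar_x):
    """Model one clock of the vertex engine: a 4-component vector ALU and a
    separate scalar ALU retire their results in parallel."""
    # 4-wide vector unit: one componentwise operation across x, y, z, w
    vec_result = [a * b for a, b in zip(vec_a, vec_b)]
    # Independent scalar unit: e.g. a reciprocal computed alongside the vector op
    scalar_result = 1.0 / scalar_x
    return vec_result, scalar_result
```

The point of the split is that common vertex work (a 4-wide transform plus a scalar special function) can retire in the same clock rather than serializing.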

The FireGL X3-256 employs the HyperZ HD engine that the Radeon uses, which combines early/hierarchical z hardware with a z/stencil cache and z compression. The hierarchical z engine looks at tiles of pixels (in the case of the FireGL 16x16 blocks), and if the entire block is occluded, up to 256 pixels can be eliminated in one clock. These pixels never need to touch the fragment/pixel processing hardware and save a lot of processing power. When we look at the pixel engine, we can see that ATI divides their pixels into "quad" pipes as well, but an NVIDIA and ATI quad is defined slightly differently. On ATI hardware, data out of setup is tiled into those 16x16 blocks for the hierarchical z pass. It's these blocks on which each quad pipe shares its efforts.
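The tile-based rejection described above can be sketched in a few lines of Python. This is a conceptual model only: the 16x16 tile size and the conservative whole-tile test follow the description in the text, while the depth convention and function shape are invented for illustration:

```python
def hierarchical_z_cull(tile_max_depth, incoming_min_depth):
    """Conservative per-tile occlusion test over a 16x16 block.

    tile_max_depth:     farthest depth already stored anywhere in the tile
    incoming_min_depth: nearest depth of the incoming geometry over that tile
    Returns True when every pixel in the tile can be rejected at once.
    """
    # Smaller depth = closer to the viewer (a common convention).
    # If the nearest point of the incoming geometry is still behind the
    # farthest depth already in the tile, all 16x16 = 256 pixels are occluded.
    return incoming_min_depth >= tile_max_depth
```

A single comparison can thus discard up to 256 pixels before they ever reach the pixel pipelines; only tiles that fail this test proceed to per-pixel depth testing.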



Inside each of the pixel pipes, we have something that also looks similar to the NVIDIA architecture. Each pipe can complete two 3-component vector operations and two scalar operations, alongside a texture operation, every clock cycle. This is what the hardware ends up looking like:



Since the texture unit does not share hardware with either of the shader math units, ATI can theoretically handle more math per clock cycle in its pixel shaders than NVIDIA. On the other hand, the 3 + 1 co-issue arrangement is not as flexible as NVIDIA's, as NVIDIA is also capable of handling 2 vector + 2 vector operations.
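The per-clock numbers above can be turned into a back-of-the-envelope throughput model. The pipe count and issue widths come from the text; everything else here is simple arithmetic, and texture operations are deliberately left out of the component count:

```python
# Theoretical pixel shader math throughput, per clock, from the quoted figures.
PIXEL_PIPES = 12         # FireGL X3-256 pixel pipelines
VEC3_OPS_PER_PIPE = 2    # two 3-component vector ops per clock
SCALAR_OPS_PER_PIPE = 2  # two scalar ops co-issued alongside them

components_per_pipe = VEC3_OPS_PER_PIPE * 3 + SCALAR_OPS_PER_PIPE * 1
components_per_clock = PIXEL_PIPES * components_per_pipe
print(components_per_clock)  # 96 shader components per clock, textures not counted
```

Because the texture unit is independent, those 96 components per clock don't have to be sacrificed when a texture fetch is in flight, which is the crux of the comparison with NVIDIA's shared-hardware design.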

With only PS 2.0 support, ATI's shader engine is not as robust as either NVIDIA's or 3Dlabs'. The FireGL can only support between 512 and 1536 shader instructions depending on the conditions, and uses fp24 for processing. The Radeon architecture has traditionally favored DirectX over OpenGL, so we will be very interested to see where these predominantly OpenGL benchmarks end up.
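The practical cost of fp24 is a shorter mantissa (roughly 16 explicit bits versus fp32's 23), which means larger rounding error accumulating through long shaders. A small Python sketch of just the mantissa rounding (the exponent range and sign handling of a real fp24 format are glossed over; this only illustrates the precision difference):

```python
import math

def quantize_mantissa(x, mantissa_bits):
    """Round x to a reduced-precision mantissa, ignoring exponent-range limits."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)  # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** (mantissa_bits + 1)
    return math.ldexp(round(m * scale) / scale, e)

# The shorter fp24-style mantissa shows up as a larger rounding error:
x = 1.0 / 3.0
err24 = abs(x - quantize_mantissa(x, 16))  # fp24-like precision
err32 = abs(x - quantize_mantissa(x, 23))  # fp32-like precision
```

For a single value the error is tiny either way; it matters when many dependent operations compound it, which is why longer shaders are where fp24 versus fp32 becomes visible.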

As far as rasterization is concerned, ATI does not support any floating point framebuffer display types. The highest accuracy framebuffer that the FireGL X3-256 supports is a 10-bit integer format, which is good enough for many applications today. As with both 3Dlabs' and NVIDIA's parts, the FireGL X3-256 includes dual 10-bit RAMDACs and 2 dual-link DVI-I connections allowing support of up to 9MP displays. Unlike the Wildcat Realizm and Quadro FX lines, there is no way to get any sort of genlock, framelock, or SDI output support for the FireGL line. This puts ATI behind when it comes to video editing, video walls, multi-system displays, and broadcast solutions.

The added features that ATI's FireGL X3-256 supports beyond the Radeon include:
  • Anti-aliased points and lines - Lines and points are smoothed as they're drawn in wireframe mode. This is much higher quality and faster than FSAA when used for wireframe graphics, and is of the utmost importance to designers who use workstations for wireframe manipulation (the majority of the 3D workstation market).
  • Two-sided lighting - In the fixed function pipeline, enabling two-sided lighting allows hardware lights to illuminate both sides of an object. This is useful for viewing cut-away objects. SM 3.0 supports two-sided lighting registers for programmable shaders, but these don't apply to the fixed function light sources.
  • OpenGL overlay planes - Overlays are useful for adding to a 3D accelerated viewport without making the buffer dirty. This can significantly speed up things like displaying pop-up windows or selection highlights in 3D applications.
  • 6 user defined clip planes - User defined clip planes allow the cutting away of surfaces in order to look inside objects in applications that support their creation.
  • Quad-buffered stereo 3D support - This enables smooth real-time stereoscopic image output by supporting a front-left, back-left, front-right, and back-right buffer for display.
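Of the features above, user defined clip planes are the easiest to show concretely: a clip plane is just a plane equation evaluated per vertex, and a point is kept when ax + by + cz + d >= 0, which mirrors how OpenGL's glClipPlane coefficients are interpreted. A minimal sketch (the plane coefficients below are arbitrary illustration values):

```python
def inside_clip_plane(point, plane):
    """plane = (a, b, c, d); keep the point when a*x + b*y + c*z + d >= 0."""
    x, y, z = point
    a, b, c, d = plane
    return a * x + b * y + c * z + d >= 0.0

# Example: cut away everything with z below 1 to peer inside a model.
plane = (0.0, 0.0, 1.0, -1.0)
```

The hardware evaluates up to six such planes per vertex at no application cost, which is what makes interactive cut-away views cheap on a workstation card.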
Undoubtedly, the FireGL line also features a different memory management setup, and its driver development focuses more heavily on OpenGL and stability. This is quite a different market than the consumer side, but ATI has quite a solid offering in the FireGL X3-256. Of course, we would rather see a 16-pipeline part, but we'll have to wait until we evaluate PCI Express graphics workstations for that.

Comments

  • Jeanlou - Thursday, December 01, 2005 - link

    Hello,
I just bumped into AnandTech Video Card Tests, and I'm really impressed!

As a Belgian Vision Systems Integration Consultant (since 1979), I'm very interested in the ability to compare these 3 cards (Realizm 200 vs FireGL X3 256 vs NVIDIA Quadro FX 4000).

    I just had a bad experience with the Realizm 200 (!)

On an ASUS NCCH-DL motherboard, Dual Xeon 2.8GHz, 2GB DDR 400, Seagate SCSI Ultra 320 HDD, 2 EIZO monitors (Monitor N°1 = L985EX at 1600x1200 px, Monitor N°2 = L565 at 1280x1024 px), Windows XP Pro SP2 x32bit partition C:\ 16GB, Windows XP Pro x64bit edition partition D:\ 16GB, plus extended partitions (2 logical, E:\ and F:\). All NTFS.

Using the main monitor for image analysis (quality control) and the slave monitor for tools, I was unable to get a stable image at 1600 by 1200 pixels. Meanwhile, the Wildcat4 7110, or even the VP990 Pro, have a very stable screen at maximum resolution. But the 7110 and the VP990 Pro don't have drivers for Windows XP x64.

Tried everything, latest BIOS, latest chipset drivers...
Even 3Dlabs was unable to give me the necessary support and doesn't answer anymore!

As soon as I reduced the resolution of the main monitor to 1280 by 1024, everything was stable, but that's not what I want; I need the maximum resolution on the main monitor.

The resolution table in 3Dlabs' specs gives 3840 by 2400 pixels maximum!

I sent it back, and I'm looking for another card.

I wonder if the FireGL X3 256 will do the job?
We also use another monitor from EIZO (S2410W) with 1920 by 1200 pixels!
What exactly are the resolutions possible with the FireGL X3 256 using 2 monitors? I cannot find it in the specs.

    Any comment will be appreciated,

    Best regards,
    Jean
  • kaissa - Sunday, February 20, 2005 - link

Excellent article. I hope that you make workstation graphics card comparisons a regular article. How about an article on workstation notebooks? Thanks a lot.
  • laverdir - Thursday, December 30, 2004 - link

    dear derek wilson,

could you tell us how big the performance
difference between numa and uma is in general
in these tests..

and it would be great if you could post maya
related results for the quadro 4k with numa enabled..


    seasonal greetings
  • RedNight - Tuesday, December 28, 2004 - link

This is the best workstation graphics card review I have read in ages. Not only does it present the positives and negatives of each of the principal cards in question, it presents them in relation to high-end mainstream cards and thereby helps many, including myself, understand the real differences in performance. Also, by innovatively including AutoCAD and gaming tests, one gets a clear indication of when the workstation cards are necessary and when they would be a waste of money. Thanks
  • DerekWilson - Monday, December 27, 2004 - link

    Dubb,

    Thanks for letting us know about that one :-) We'll have to have a nice long talk with NV's workstation team about what exactly is going on there. They very strongly gave us the idea that the featureset wasn't present on geforce cards.

#19, NUMA was disabled because most people running a workstation with 4 or fewer GB of RAM on a 32-bit machine will not be running with the PAE kernel installed. We wanted to test with a setup most people would be running under the circumstances. We will test NUMA capabilities in the future.

    #20,

    When we test workstation CPU performance or system performance, POVRay will be a possible inclusion. Thanks for the suggestion.

    Derek Wilson
  • mbhame - Sunday, December 26, 2004 - link

Please include POVRay benchies in workstation tests.
  • Myrandex - Saturday, December 25, 2004 - link

I wonder why NUMA was fully supported but yet disabled. Maybe instabilities or something.
  • Dubb - Friday, December 24, 2004 - link

    http://newbietech.net/eng/qtoq/index.php

    http://forums.guru3d.com/showthread.php?s=2347485b...
  • Dubb - Friday, December 24, 2004 - link

    uhhh.. my softquadro'd 5900 ultra begs to differ. as would all the 6800 > qfx4000 mods being done by people on guru3d's rivatuner forum.

I thought you guys knew that just because nvidia says something doesn't mean it's true?

they must consider "physically different silicon" to be "we moved a resistor or two"...
  • DerekWilson - Friday, December 24, 2004 - link

By high-end features, I wasn't talking about texturing or programmatic vertex or fragment shading (which is high-end in the consumer space).

    I was rather talking about hardware support for: AA lines and points, overlay plane support, two-sided lighting (fixed function path), logic operations, fast pixel read-back speeds, and dual 10-bit 400MHz RAMDACs and 2 dual-link DVI-I connectors supporting 3840x2400 on a single display (the IBM T221 comes to mind).

There are other features, but these are key. In products like Maya and 3D Studio, not having overlay plane support creates an absolutely noticeable performance hit. It really does depend on how you push the cards. We do prefer the in-application benchmarks to SPECviewperf. Even the SPECapc tests can give a better feel for where things will fall -- because the entire system is a factor rather than just the gfx card and CPU.

#14, Dubb -- I hate to be the one to tell you this -- GeForce and Quadro are physically different silicon now (NV40 and NV40GL). AFAIK, ever since GF4/Quadro4, it has been impossible to softquadro an nvidia card. The Quadro team uses the GeForce as its base core, but then adds on workstation-class features.
