Final Words

Looking back over the SPECviewperf benchmarks that we've shown today, the 3Dlabs Wildcat Realizm 800 trails the FireGL X3-256 under EnSight and the Quadro FX 4000 under UGS.

Improvement over the Realizm 200 is generally between 15 and 30 percent. This suggests to us that 3Dlabs lowered the clock speed of the GPUs a bit in order to compensate for the added heat. There is quite a bit of highly clocked silicon under the hood of the Wildcat Realizm 800, and it would definitely make sense to drop the clocks a little to keep thermals in check.

Of course, we would never expect a full 2x performance gain from this doubling of processing power, given the added overhead of breaking up the scene. Aside from clock speed, there could also be driver or hardware issues in the way the scene is divided between the two GPUs, which would add to the bottleneck.
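The intuition above can be put in rough numerical terms with an Amdahl-style estimate. Note that the parallel fraction and overhead figures below are purely illustrative assumptions on our part, not measured or published numbers for the Realizm 800:

```python
# Rough dual-GPU scaling sketch (illustrative assumptions, not 3Dlabs data):
# only part of each frame's work splits cleanly across the two GPUs, and
# dividing/recombining the scene costs extra time, so the effective
# speedup lands well short of the ideal 2x.

def effective_speedup(parallel_fraction, overhead):
    """Estimate speedup of two GPUs over one.

    parallel_fraction: share of single-GPU frame time that splits
                       evenly across both GPUs (0.0 to 1.0)
    overhead:          extra frame time (as a fraction of the original)
                       spent breaking up and recombining the scene
    """
    serial = 1.0 - parallel_fraction          # work one GPU must still do alone
    split = parallel_fraction / 2.0           # parallel work, halved across GPUs
    return 1.0 / (serial + split + overhead)

# With 60% of the work parallelizable and 10% overhead, the gain is
# about 25 percent -- right in the range we measured.
print(round(effective_speedup(0.6, 0.10), 2))  # 1.25
```

With these (assumed) inputs, doubling the silicon buys only a 25 percent gain, which illustrates why 15 to 30 percent improvements are plausible even before any clock-speed reduction is factored in.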

Even though we aren't seeing the 50 to 90 percent improvements that we would have expected, the Wildcat Realizm 800 is a high-performance part across the board. At a street price of about $2000 USD, this part could pose some serious competition to the Quadro FX 4400 (~$2300 USD). Of course, we'll have to wait until we can get our hands on the top-of-the-line NVIDIA and ATI cards before we can really delve into that issue.

We are very interested in comparing this solution to NVIDIA's SLI option. If NVIDIA can adapt its SLI profiles to workstation applications effectively, we could see some very impressive numbers. NVIDIA isn't positioning performance as the main selling point of Quadro SLI, though; it prefers to highlight the solution's added features, chief among them its massive display capabilities. We will be sure to pay attention to these key features as soon as we get our hands on a driver to test the system.

Until we are able to test the top-of-the-line NVIDIA and ATI graphics cards, the Wildcat Realizm 800 sits at the top of the heap. But keep in mind that game developers will still prefer an NVIDIA- or ATI-based solution for its superior DirectX support. Hopefully, once we get the rest of the cards together, our full review will be as interesting as this first look. Stay tuned!

27 Comments

  • Athlex - Tuesday, April 05, 2005 - link

    I know this is a preview, but this article seemed a bit thin- no pictures of the actual hardware, no screenshots of driver config screens, would that break an NDA or something? Also, are "default professional settings" the same across brands? Seems like that might skew results if the driver defaults to different values between ProE/Solidworks/Maya, etc. Maybe a subjective appraisal of display quality could be part of this? Do these DVI ports also do analog output or is that unavailable with a dual-link DVI port?
    Might also be fun to see an OpenGL game benchmark on the pro cards to contrast the game cards running OpenGL apps...

    Can't wait to see the roundup!
  • BikeDude - Thursday, March 31, 2005 - link

    I too would like to see some game tests. I want a dual-link capable PCIe card and this effectively rules out all of the consumer cards! Before I fork out a lot more money for a "professional" card, I sure as hell would like to know that I would be able to play a mean game of Doom3 on the darn thing... (and using Photoshop is a priority as well)

    --
    Rune
  • Zebo - Wednesday, March 30, 2005 - link

    Why are there no game tests??? Lifes not all about work ya know.. stop and smell roses..specially if you have an office door like me.
  • Calin - Tuesday, March 29, 2005 - link

    The "Professional 3D" cards have OpenGL performance several times greater than what you can obtain from consumer cards (by consumer I mean pro gaming). Even the ATI cards for professional 3D (FireGL series) are several times faster in OpenGL than their gaming counterparts.
    Also, "professional 3D" cards really need high resolution/high refresh rate outputs, and multiple outputs.
  • Draven31 - Tuesday, March 29, 2005 - link

    The difference is in the types of instructions used most often and the precision of many operations.

    And, more often than not in 3D, the number of polygons involved.
  • JustAnAverageGuy - Saturday, March 26, 2005 - link

    Tom's had a good comparison.

    It's a little over twice the size of a dollar bill. :)

    http://www.tomshardware.com/business/20040813/sigg...
  • cryptonomicon - Saturday, March 26, 2005 - link

    holy crap, that board is huge
  • Jkames - Saturday, March 26, 2005 - link

    arg! I meant to say "Are the differences internal instructions?"
  • Jkames - Saturday, March 26, 2005 - link

    I mean that are the differences internal instructions.
  • Jkames - Saturday, March 26, 2005 - link

    What is the difference between workstation hardware and desktop hardware? I understand that workstation is more expensive and used for professional applications but would anyone be able to elaborate on the uses of workstation hardware? Is it the internal instructions such as MMX or junk like that?

    Any info would be appreciated thanx.
