The Test

The test system is an IWill DK8N board powered by a 520W OCZ PowerStream PSU. The dual Opteron 250 system had 2GB of RAM (1GB per processor), and although the board supports NUMA, the feature was not enabled for this test. The IWill motherboard is simply an amazing workstation platform: it can handle up to 16GB of RAM, is loaded with PCI-X slots, and is jam-packed with features. Since the DK8N is a hybrid AMD chipset and nForce 3 motherboard, IWill is able to bring workstation users the best of the DP world and the desktop world in one package.

The dual configuration helps to keep the majority of the load on the graphics card in our testing. It may be interesting to experiment with single, dual and quad processor workstation scaling in the future. For now, this box will work beautifully for our tests.

The drivers that we chose for our workstation graphics cards were all beta or pre-release drivers, which each vendor assures us have passed internal QA as far as image quality is concerned. NVIDIA sees the largest performance improvement in moving from its 6x.xx series driver to the 70.41 series. In fact, when SPECviewperf 8 was launched in September, 3Dlabs' Wildcat Realizm 200 led performance in 7 of the 8 tests. The performance trends are quite different in today's lineup, as NVIDIA's driver team has done quite well to extract performance from professional-level applications on the 6 Series architecture with the 7x.xx series driver. Of course, this makes us very interested in revisiting this test with a GeForce card once a 70 series ForceWare driver is available.
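For readers who want to script this kind of driver-to-driver comparison themselves, a batch runner along the following lines can run the eight SPECviewperf 8 viewsets and file the output away under a per-driver label. This is only a sketch: the executable path and invocation are assumptions rather than SPECviewperf's documented interface, so adjust them to match your installation.

    # Minimal sketch of a SPECviewperf 8 batch runner, one log per viewset.
    # ASSUMPTIONS: the executable path and invocation below are placeholders,
    # not SPECviewperf's documented interface -- adjust to your install.
    import subprocess
    from pathlib import Path

    # The eight SPECviewperf 8 viewsets referenced in the article.
    VIEWSETS = ["3dsmax-03", "catia-01", "ensight-01", "light-07",
                "maya-01", "proe-03", "sw-01", "ugs-04"]

    VIEWPERF = Path(r"C:\SPEC\viewperf\viewperf.exe")  # assumed install path
    RESULTS_DIR = Path(r"C:\SPEC\results")

    def run_viewset(viewset: str, label: str) -> None:
        """Run one viewset and save its console output under a driver label."""
        result = subprocess.run([str(VIEWPERF), viewset],  # assumed invocation
                                capture_output=True, text=True, check=True)
        (RESULTS_DIR / f"{label}-{viewset}.log").write_text(result.stdout)

    if __name__ == "__main__":
        RESULTS_DIR.mkdir(parents=True, exist_ok=True)
        driver_label = "quadro-70.41-beta"  # change per driver under test
        for vs in VIEWSETS:
            run_viewset(vs, driver_label)

Rerunning the same loop after each driver swap leaves a set of per-driver logs that can be diffed or charted later.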

Performance Test Configuration
Processor(s): 2 x AMD Opteron 250
RAM: 4 x 512MB OCZ PC3200 EL ECC Registered (2 per CPU)
Hard Drive(s): Seagate 120GB 7200RPM IDE (8MB buffer)
Motherboard & IDE Bus Master Drivers: AMD 8131 APIC Driver; NVIDIA nForce 5.10
Video Card(s): 3Dlabs Wildcat Realizm 200
               ATI FireGL X3-256
               NVIDIA Quadro FX 4000
               HIS Radeon X800 XT Platinum Edition IceQ II
               Prolink GeForce 6800 Ultra Golden Limited
Video Drivers: 3Dlabs 4.04.0608
               ATI FireGL 8.08-041111a-019501E Performance Driver
               NVIDIA Quadro 70.41 (Beta)
               NVIDIA ForceWare 67.03 (Beta)
               ATI Catalyst 4.12
Operating System(s): Windows XP Professional SP2 (without PAE kernel)
Motherboard: IWill DK8N v1.0 (AMD-81xx + NVIDIA nForce 3)
Power Supply: OCZ PowerStream 520W

And to power our monster of a system, we needed a PSU that could deliver the juice. Once again, we turned to our OCZ PowerStream PSU. Even with two Opteron 250s, a GeForce 6800 Ultra, 2GB of RAM, and a couple of drives attached, the OCZ power supply had no problem keeping our machine fed. More importantly, the modular connectors let us hook the PSU up to a standard 20-pin ATX board, the 24-pin ATX12V connector that 915/925/nForce4 boards use, or the 24-pin EPS12V connector that most workstation boards require.

We chose to run with a desktop resolution of 1280x1024x32 @ 85Hz. All the Windows XP eye candy was turned off, and the OS was tuned for performance. Our virtual memory pagefile was set to 4092MB minimum and maximum, and System Restore was turned off. After all applications were installed and all benchmarks had been run once, the system was defragmented.
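Pinning the pagefile at equal minimum and maximum sizes keeps Windows from resizing it mid-benchmark. We made these changes through the System Properties GUI, but for anyone who wants a scriptable, repeatable setup, a minimal sketch follows: on Windows XP the setting lives in the PagingFiles registry value, and changing it requires administrator rights and a reboot.

    # Sketch: pin the Windows pagefile to fixed min/max sizes via the registry.
    # PagingFiles is a REG_MULTI_SZ of "path min_MB max_MB" entries; changing
    # it needs admin rights and a reboot. We used the System Properties GUI in
    # the article, so this is an equivalent illustration, not our procedure.
    import winreg

    KEY_PATH = (r"SYSTEM\CurrentControlSet\Control"
                r"\Session Manager\Memory Management")

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "PagingFiles", 0, winreg.REG_MULTI_SZ,
                          [r"C:\pagefile.sys 4092 4092"])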

Comments

  • Sword - Friday, December 24, 2004 - link

    Hi again,

    I want to add to my first post that there were 2 parts and a complex assembly (>110 very complex parts without simplified rep).

    The amount of data to process was pretty high (XP shows >400 MB, and it can go up to 600 MB).

    About the specific features, I believe that most CAD users do not use them. People like me, mechanical engineers and other engineers, use software like Pro/E, UGS, SolidWorks, Inventor, and Catia for solid modeling without any textures or special effects.

    My comment was really to point out that the high-end features seem useless in real-world engineering applications.

    I still believe that for 3D multimedia content there is a place for high-end workstations, and the SPECviewperf benchmark is a good tool for that.
  • Dubb - Friday, December 24, 2004 - link

    how about throwing in soft-Quadro'd cards? when people realize that with a little effort they can take a $350 6800 GT to near-Quadro FX 4000 performance, that changes the pricing issue a bit.
  • Slaimus - Friday, December 24, 2004 - link

    If the Realizm 200 performs this well, it will be scary to see the 800 in action.
  • DerekWilson - Friday, December 24, 2004 - link

    dvinnen, workstation cards are higher margin -- selling consumer parts may be higher volume, but the competition is harder as well. Creative would have to really change their business model if they wanted to sell consumer parts.

    Sword, like we mentioned, the size of the data set tested has a large impact on performance in our tests. Also, Draven31 is correct -- a lot depends on the specific features that you end up using during your normal work day.

    Draven31, 3dlabs drivers have improved greatly with the Realizm from what we've seen in the past. In fact, the Realizm does a much better job of video overlay playback as well.

    Since one feature of the Quadro and Realizm cards is their ability to run genlock/framelock video walls, perhaps a video playback/editing test would make a good addition to our benchmark suite.
  • Draven31 - Friday, December 24, 2004 - link

    Coming up with the difference between the SPECviewperf tests and real-world 3D work means finding out which 'high-end card' features the test is using and then turning them off in the tests. With NVIDIA cards, this usually starts with antialiased lines. It also depends on whether the application you are running even uses these features... in LightWave 3D, the 'pro' cards and the consumer cards are very comparable performance-wise because it doesn't use these so-called 'high-end' features very extensively.

    And while they may be faster in some Viewperf tests, 3Dlabs drivers generally suck. Having owned and/or used several, I can tell you any app that uses DirectX overlays as part of its display routines is going to either be slow or not work at all. For actual application use, 3Dlabs cards are useless. I've seen 3Dlabs cards choke on DirectX apps, and that includes both games and applications that do windowed video playback on the desktop (for instance, video editing and compositing apps).
  • Sword - Thursday, December 23, 2004 - link

    Hi everyone,

    I am a mechanical engineer in Canada and I am a fan of anandtech.

    Last year, I made a very big comparison of mainstream vs. workstation video cards for our internal use (at the company I work for).

    The goal was to compare the different systems (and mainly the video cards) to see if, in Pro/ENGINEER and the kind of work we do, we could take real advantage of a high-end workstation video card.

    My conclusion is very clear: in SPECviewperf, there is a huge difference between mainstream and workstation video cards. BUT, in day-to-day work, there is no real difference in our results.

    To summarize, I made a benchmark in Pro/E using trail files with 3 of our most complex parts. I made comparisons in shading, wireframe, and hidden line, and I also verified the regeneration time for each part. The benchmark was almost 1 hour long. I compared 3Dlabs, ATI professional, NVIDIA professional, and NVIDIA mainstream products.

    My point is: do not believe SPECviewperf! Make your own comparison with your actual day-to-day work to see if you really have to spend $1000 per video card. Also, take the time to choose the right components so that you minimize calculation time. [A sketch of one way to script such a comparison appears after the comments.]

    If anyone at Anandtech is willing to take a look at my study, I am willing to share the results.

    Thank you
  • dvinnen - Thursday, December 23, 2004 - link

    I always wondered why Creative (they own 3dLabs) never made a consumer edition of the Wildcat. Seems like a smallish market when it wouldn't be all that hard to expand into consumer cards.
  • Cygni - Thursday, December 23, 2004 - link

    I'm surprised by the power of the Wildcat, really... great for the dollar.
  • DerekWilson - Thursday, December 23, 2004 - link

    mattsaccount,

    glad we could help out with that :-)

    there have been some reports of people getting consumer-level drivers to install on workstation-class parts, which should give better performance numbers for the ATI and NVIDIA parts under games where possible. But keep in mind that the trend in workstation parts is to clock them at lower speeds than the current highest-end consumer-level products for heat and stability reasons.

    if you're a gamer who's insane about performance, you'd be much better off paying $800 on eBay for the ultra-rare uberclocked parts from ATI and NVIDIA than going out and getting a workstation-class card.

    Now, if you're a programmer, having access to the workstation level features is fun and interesting. But probably not worth the money in most cases.

    Only people who want workstation class features should buy workstation class cards.

    Derek Wilson
  • mattsaccount - Thursday, December 23, 2004 - link

    Yes, very interesting. This gives me and lots of others something to point to when someone asks why they shouldn't get the multi-thousand dollar video card if they want top gaming performance :)
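Sword's approach above, replaying a recorded session and timing it on each card, generalizes to any CAD package that can play back a script or trail file. The sketch below shows the shape of such a harness; the replay command is a hypothetical placeholder, so substitute whatever invocation your application actually uses for trail-file playback.

    # Sketch of a day-to-day CAD benchmark: replay a recorded session and time
    # it. The replay command is a PLACEHOLDER -- substitute whatever invocation
    # your CAD package uses for script/trail-file playback.
    import subprocess
    import time
    from statistics import mean

    REPLAY_CMD = ["proe", "trail_complex_part.txt"]  # hypothetical invocation
    RUNS = 3  # repeat to smooth out caching and background noise

    def timed_replay() -> float:
        """Run one replay to completion and return wall-clock seconds."""
        start = time.perf_counter()
        subprocess.run(REPLAY_CMD, check=True)
        return time.perf_counter() - start

    if __name__ == "__main__":
        times = [timed_replay() for _ in range(RUNS)]
        print(f"runs: {[f'{t:.1f}s' for t in times]}, mean: {mean(times):.1f}s")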
