Workstation Applications

SPECviewperf 8
SPECviewperf is a collection of application traces taken from some of the most popular professional applications and compiled into a single benchmark suite used to estimate performance in the applications it models. With version 8, SPEC has significantly improved the quality of the benchmark, making it an even better real-world indicator of performance.

We have included SPEC's official description of each of the tests in the suite.

3dsmax Viewset (3dsmax-03)
"The 3dsmax-03 viewset was created from traces of the graphics workload generated by 3ds max 3.1. To insure a common comparison point, the OpenGL plug-in driver from Discreet was used during tracing.

The models for this viewset came from the SPECapc 3ds max 3.1 benchmark. Each model was measured with two different lighting models to reflect a range of potential 3ds max users. The high-complexity model uses five to seven positional lights as defined by the SPECapc benchmark and reflects how a high-end user would work with 3ds max. The medium-complexity lighting model uses two positional lights, a more common lighting environment.

The viewset is based on a trace of the running application and includes all the state changes found during normal 3ds max operation. Immediate-mode OpenGL calls are used to transfer data to the graphics subsystem."
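
For readers unfamiliar with the terms above, the short C sketch below shows roughly what a positional OpenGL light and immediate-mode vertex submission look like. This is our own illustrative code, not anything taken from SPECviewperf or 3ds max; the function names and values are hypothetical, and a current OpenGL 1.x context is assumed.

/* Illustrative sketch only -- not benchmark code. Assumes a current
 * OpenGL 1.x context; names and values are hypothetical. */
#include <GL/gl.h>

void setup_positional_light(GLenum light, float x, float y, float z)
{
    /* w = 1.0 makes the light positional (a point in the scene);
     * w = 0.0 would make it directional instead. */
    const GLfloat pos[4] = { x, y, z, 1.0f };
    glEnable(GL_LIGHTING);
    glEnable(light);
    glLightfv(light, GL_POSITION, pos);
}

void draw_immediate(const GLfloat *verts, const GLfloat *normals, int tri_count)
{
    /* Immediate mode: every vertex is pushed through the API each frame,
     * which stresses the driver and CPU as well as the GPU. */
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < tri_count * 3; ++i) {
        glNormal3fv(&normals[i * 3]);
        glVertex3fv(&verts[i * 3]);
    }
    glEnd();
}
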
SPECviewperf 8 - 3dsmax 3.1 Performance


CATIA Viewset (catia-01)
"The catia-01 viewset was created from traces of the graphics workload generated by the CATIATM V5R12 application from Dassault Systems.

Three models are measured using various modes in CATIA. Phil Harris of LionHeart Solutions, developer of CATBench2003, supplied SPEC/GPC with the models used to measure the CATIA application. The models are courtesy of CATBench2003 and CATIA Community.

The car model contains more than two million points. SPECviewperf replicates the geometry represented by the smaller engine block and submarine models to increase complexity and decrease frame rates. After replication, these models contain 1.2 million vertices (engine block) and 1.8 million vertices (submarine).

State changes as made by the application are included throughout the rendering of the model, including matrix, material, light and line-stipple changes. All state changes are derived from a trace of the running application. The state changes put considerably more stress on graphics subsystems than the simple geometry dumps found in older SPECviewperf viewsets.

Mirroring the application, draw arrays are used for some tests and immediate mode used for others."
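
To give an idea of what "state changes" and "draw arrays" mean in practice, the sketch below strings together the kinds of calls the description lists -- a matrix load, a material change, a line-stipple change -- and then hands a batch of geometry to the driver with a single glDrawArrays call. It is our own illustration under assumed parameters, not code from CATIA or the viewset.

/* Illustrative sketch only -- not CATIA or SPECviewperf code.
 * Assumes a current OpenGL 1.1+ context. */
#include <GL/gl.h>

void draw_batch(const GLfloat modelview[16], const GLfloat diffuse[4],
                const GLfloat *verts, GLsizei vert_count)
{
    /* Matrix change */
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(modelview);

    /* Material change */
    glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, diffuse);

    /* Line-stipple change (used by dashed edge display) */
    glEnable(GL_LINE_STIPPLE);
    glLineStipple(1, 0x0F0F);

    /* Draw-arrays path: the whole batch goes to the driver in one call,
     * instead of one API call per vertex as in immediate mode. */
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glDrawArrays(GL_TRIANGLES, 0, vert_count);
    glDisableClientState(GL_VERTEX_ARRAY);
}
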
SPECviewperf 8 - CATIA V5R12 Performance


Lightscape Viewset (light-07)
"The light-07 viewset was created from traces of the graphics workload generated by the Lightscape Visualization System from Discreet Logic. Lightscape combines proprietary radiosity algorithms with a physically based lighting interface.

The most significant feature of Lightscape is its ability to accurately simulate global illumination effects by precalculating the diffuse energy distribution in an environment and storing the lighting distribution as part of the 3D model. The resulting lighting "mesh" can then be rapidly displayed."
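
Because the radiosity solution is baked into the model ahead of time, a viewer like this does not need OpenGL's runtime lighting at all; a common way to display such a "lighting mesh" is to disable lighting and feed the precalculated results as per-vertex colors. The sketch below is our own rough illustration of that idea, not Lightscape code, and assumes a current OpenGL 1.1+ context.

/* Illustrative sketch only -- not Lightscape code. */
#include <GL/gl.h>

void draw_lighting_mesh(const GLfloat *verts, const GLfloat *baked_colors,
                        GLsizei vert_count)
{
    glDisable(GL_LIGHTING);               /* lighting is already baked in */

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glColorPointer(3, GL_FLOAT, 0, baked_colors);

    glDrawArrays(GL_TRIANGLES, 0, vert_count);

    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
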
SPECviewperf 8 - Lightscape Visualization System Performance


Maya Viewset (maya-01)
"The maya-01 viewset was created from traces of the graphics workload generated by the Maya V5 application from Alias.

The models used in the tests were contributed by artists at NVIDIA. Various modes in the Maya application are measured.

State changes as made by the application are included throughout the rendering of the model, including matrix, material, light and line-stipple changes. All state changes are derived from a trace of the running application. The state changes put considerably more stress on graphics subsystems than the simple geometry dumps found in older viewsets.

As in the Maya V5 application, array element is used to transfer data through the OpenGL API."
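
"Array element" refers to OpenGL's glArrayElement call: vertex data sits in client-side arrays, and each vertex is pulled by index from inside a glBegin/glEnd pair. The sketch below is our own hypothetical illustration of that path, not Maya or viewset code, and assumes a current OpenGL 1.1+ context.

/* Illustrative sketch only -- not Maya or SPECviewperf code. */
#include <GL/gl.h>

void draw_with_array_element(const GLfloat *verts, const GLfloat *normals,
                             const GLint *indices, GLsizei index_count)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glNormalPointer(GL_FLOAT, 0, normals);

    glBegin(GL_TRIANGLES);
    for (GLsizei i = 0; i < index_count; ++i)
        glArrayElement(indices[i]);   /* fetches position + normal for that index */
    glEnd();

    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}
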
SPECviewperf 8 - Maya V5 Performance


Pro/ENGINEER (proe-03)
"The proe-03 viewset was created from traces of the graphics workload generated by the Pro/ENGINEER 2001TM application from PTC.

Two models and three rendering modes are measured during the test. PTC contributed the models to SPEC for use in measurement of the Pro/ENGINEER application. The first of the models, the PTC World Car, represents a large-model workload composed of 3.9 to 5.9 million vertices. This model is measured in shaded, hidden-line removal, and wireframe modes. The wireframe workloads are measured both in normal and antialiased mode. The second model is a copier. It is a medium-sized model made up of 485,000 to 1.6 million vertices. Shaded and hidden-line-removal modes were measured for this model.

This viewset includes state changes as made by the application throughout the rendering of the model, including matrix, material, light and line-stipple changes. The PTC World Car shaded frames include more than 100MB of state and vertex information per frame. All state changes are derived from a trace of the running application. The state changes put considerably more stress on graphics subsystems than the simple geometry dumps found in older viewsets.

Mirroring the application, draw arrays are used for the shaded tests and immediate mode is used for the wireframe. The gradient background used by the Pro/E application is also included to better model the application workload."
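
Two of the details called out above -- the antialiased wireframe mode and the gradient background -- map onto a handful of well-known OpenGL calls. The sketch below is our own hypothetical illustration of how an application might implement them, not PTC's code; the colors and state are assumptions, and a current OpenGL 1.x context is required.

/* Illustrative sketch only -- not Pro/ENGINEER or SPECviewperf code. */
#include <GL/gl.h>

void draw_gradient_background(void)
{
    /* Full-screen quad with different colors at top and bottom. */
    glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_LIGHTING);

    glBegin(GL_QUADS);
    glColor3f(0.2f, 0.3f, 0.5f); glVertex2f(-1.0f, -1.0f); glVertex2f(1.0f, -1.0f);
    glColor3f(0.8f, 0.9f, 1.0f); glVertex2f( 1.0f,  1.0f); glVertex2f(-1.0f, 1.0f);
    glEnd();

    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_MODELVIEW);  glPopMatrix();
    glMatrixMode(GL_PROJECTION); glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
}

void enable_antialiased_wireframe(void)
{
    /* Classic line antialiasing: smooth, blended lines over the scene. */
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    glEnable(GL_LINE_SMOOTH);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}
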
SPECviewperf 8 - Pro/ENGINEER Performance


SolidWorks Viewset (sw-01)
"The sw-01 viewset was created from traces of the graphics workload generated by the Solidworks 2004 application from Dassault Systemes.

The model and workloads used were contributed by Solidworks as part of the SPECapc for SolidWorks 2004 benchmark.

State changes as made by the application are included throughout the rendering of the model, including matrix, material, light and line-stipple changes. All state changes are derived from a trace of the running application. The state changes put considerably more stress on graphics subsystems than the simple geometry dumps found in older viewsets.

Mirroring the application, draw arrays are used for some tests and immediate mode used for others."
SPECviewperf 8 - Solidworks 2004 Performance


Unigraphics (ugs-04)
"The ugs-04 viewset was created from traces of the graphics workload generated by Unigraphics V17.

The engine model used was taken from the SPECapc for Unigraphics V17 application benchmark. Three rendering modes are measured -- shaded, shaded with transparency, and wireframe. The wireframe workloads are measured both in normal and anti-aliased mode. All tests are repeated twice, rotating once in the center of the screen and then moving about the frame to measure clipping performance.

The viewset is based on a trace of the running application and includes all the state changes found during normal Unigraphics operation. As with the application, OpenGL display lists are used to transfer data to the graphics subsystem. Thousands of display lists of varying sizes go into generating each frame of the model.

To increase model size and complexity, SPECviewperf 8.0 replicates the model two times more than the previous ugs-03 test."
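
Display lists are the third data-transfer path seen in this suite (alongside immediate mode and vertex arrays): geometry is recorded once into a list object and then replayed each frame with glCallList. The sketch below is our own illustration of that mechanism, not Unigraphics code; the names are hypothetical and a current OpenGL context is assumed.

/* Illustrative sketch only -- not Unigraphics or SPECviewperf code. */
#include <GL/gl.h>

GLuint build_part_list(const GLfloat *verts, int tri_count)
{
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);          /* record the commands, don't draw yet */
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < tri_count * 3; ++i)
        glVertex3fv(&verts[i * 3]);
    glEnd();
    glEndList();
    return list;
}

void draw_frame(const GLuint *lists, int list_count)
{
    /* Per the description above, thousands of lists of varying size
     * are replayed to build each frame. */
    for (int i = 0; i < list_count; ++i)
        glCallList(lists[i]);
}
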
SPECviewperf 8 - Unigraphics V17


Comments

  • fishbits - Monday, June 27, 2005 - link

    "Why are you still doing Mozilla 1.4 testing?"

    Because this is a CPU review, and they already have a slew of other CPUs tested with Mozilla 1.4? Did you think they retested it on every chip each time they got a new chip in? This keeps it apples-to-apples so we can see the relative performance of the new contender against ones who've already been benchmarked.

    Or was there a specific score you needed to see for a specific version of Firefox to make or break your personal decision on whether to buy the FX-57 or not?

    AnandTech: Please continue to isolate the performance of the item being reviewed as much as practical. Last thing we need is extra hardware and software variables thrown in, until you're ready to move the whole gaggle over to a new set of yardsticks.
  • ravedave - Monday, June 27, 2005 - link

    Good, straightforward article. Not much text though; expect your avg page view times to be tiny.

    What happened to all the suggestions people gave when Anand asked for tests for this article in his blog? C'mon, it's a speedbump, do something interesting. At least provide some slow old processor in the rankings so we can all laugh and point.

  • Backslider - Monday, June 27, 2005 - link

    With Half-Life 2 being the most CPU-dependent game, I was surprised to see it missing from the benches.
  • acejj26 - Monday, June 27, 2005 - link

    Let me be the first to offer my services as an editor for articles here. I hate seeing articles here marred by poor grammar, spelling errors, and errors in the graphs.
  • Kocur - Monday, June 27, 2005 - link

    blckgrffn,

    Yes, you are right that DDR400 low latency will be better than DDR500 with very relaxed timings. However, you can now buy DDR memory with really nice timings at PC4000 speeds. See the memory tests at Anandtech and how much the FX53 gains from faster memory speeds at reasonable timings :). The same would hold for the FX57 to an even greater extent.

    Moreover, I think that the FX57 should have been tested on a really good, mature platform, for example the DFI LP. You cannot use some crappy reference mobo until Socket 939 dies.

    Kocur.
  • Tallon - Monday, June 27, 2005 - link

    My god, my eyes are fucking bleeding.

    Losing: http://dictionary.reference.com/search?q=losing
    Loosing: http://dictionary.reference.com/search?q=loosing

    Please learn the difference.
  • blckgrffn - Monday, June 27, 2005 - link

    Dear lord! Do you know nothing of A64s and latency? DDR400 LL will stomp DDR500 @ relaxed timings, no problem! Furthermore, the baseline needs to stay the same, so they can't switch mobo and RAM for every review.

    Next, Mozilla 1.4 ~ Firefox. It is MOZILLA FIREFOX. Let's use our brains for that one; in that paragraph you said didn't make any sense, he laid it out for you.

    I do have one gripe - how can everything be slower at UT2k4 than Doom3? Tell me that was 1600*1200 w/aa&af! Otherwise, that benchmark should have much higher scores, imho....

    Other than that and that weird fluke where the 57 lost to the 55 and 4000+, thanks for the great article, Derek :)
  • Kocur - Monday, June 27, 2005 - link

    johnsonx,

    Well, it actually might be correct. Remember that for this test they are using the old reference motherboard they got with the FX55 last year. Thus, the results for the other AMD processors come from October of last year (well, at least for the FX55), and at that time they might have used a different hard disk.

    Look also at the weird results of the P4 670 in some of the office tests relative to the other Intel processors. This is clearly a disk/controller issue.

    Kocur.
  • DrMrLordX - Monday, June 27, 2005 - link

    Agreed #16, it's odd that the FX-55 and 4000+ win the Communication Sysmark 2004 bench. The FX-57 should have taken it easily.
  • johnsonx - Monday, June 27, 2005 - link

    It seems a bit odd that the FX-57 loses to the FX-55 and 4000+ in the third benchmark.... perhaps the labels are mixed up?
