Workstation Applications

Visual Studio 6

Carried over from our previous CPU reviews, we continue to use Visual Studio 6 for a quick compile test. Our test remains a compile of the Quake 3 source code, with compile time measured in seconds.
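A compile benchmark like this is just a wall-clock measurement wrapped around the build command. The sketch below shows the idea in Python; the no-op command at the end is a placeholder, not the actual Quake 3 build line:

```python
import subprocess
import sys
import time

def time_command(cmd):
    """Run a command to completion and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# Placeholder workload: a no-op Python invocation stands in for the compiler run.
elapsed = time_command([sys.executable, "-c", "pass"])
print(f"{elapsed:.2f} seconds")
```

In practice you would run the measurement several times and report the best or median result, since disk caching makes the first compile slower than subsequent ones.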

Visual Studio 6 Compiler Performance

SPECviewperf 8

For our next set of professional application benchmarks we turn to SPECviewperf 8. SPECviewperf is a collection of application traces taken from some of the most popular professional applications, compiled into a single suite used to estimate performance in the applications it models. With version 8, SPEC has significantly improved the quality of the benchmark, making it an even better real-world indicator of performance.

We have included SPEC's official description of each of the tests in the suite.
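Each viewset's reported score is a weighted geometric mean of the frame rates of its individual tests. A minimal sketch of that composite, using illustrative frame rates and equal weights rather than SPEC's official weightings:

```python
import math

def weighted_geometric_mean(frame_rates, weights):
    """Composite score: product of rate**weight terms, with weights summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return math.exp(sum(w * math.log(r) for r, w in zip(frame_rates, weights)))

# Illustrative sub-test frame rates for one hypothetical viewset.
rates = [20.0, 35.0, 15.0, 50.0]
weights = [0.25, 0.25, 0.25, 0.25]
score = weighted_geometric_mean(rates, weights)
print(round(score, 2))
```

The geometric mean keeps one unusually fast sub-test from dominating the composite, which is why SPEC prefers it over a simple average.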

3dsmax Viewset (3dsmax-03)

"The 3dsmax-03 viewset was created from traces of the graphics workload generated by 3ds max 3.1. To insure a common comparison point, the OpenGL plug-in driver from Discreet was used during tracing.

The models for this viewset came from the SPECapc 3ds max 3.1 benchmark. Each model was measured with two different lighting models to reflect a range of potential 3ds max users. The high-complexity model uses five to seven positional lights as defined by the SPECapc benchmark and reflects how a high-end user would work with 3ds max. The medium-complexity lighting model uses two positional lights, a more common lighting environment.

The viewset is based on a trace of the running application and includes all the state changes found during normal 3ds max operation. Immediate-mode OpenGL calls are used to transfer data to the graphics subsystem."

SPECviewperf 8 - 3dsmax 3.1 Performance

CATIA Viewset (catia-01)

"The catia-01 viewset was created from traces of the graphics workload generated by the CATIATM V5R12 application from Dassault Systems.
Three models are measured using various modes in CATIA. Phil Harris of LionHeart Solutions, developer of CATBench2003, supplied SPEC/GPC with the models used to measure the CATIA application. The models are courtesy of CATBench2003 and CATIA Community.

The car model contains more than two million points. SPECviewperf replicates the geometry represented by the smaller engine block and submarine models to increase complexity and decrease frame rates. After replication, these models contain 1.2 million vertices (engine block) and 1.8 million vertices (submarine).

State changes as made by the application are included throughout the rendering of the model, including matrix, material, light and line-stipple changes. All state changes are derived from a trace of the running application. The state changes put considerably more stress on graphics subsystems than the simple geometry dumps found in older SPECviewperf viewsets.

Mirroring the application, draw arrays are used for some tests and immediate mode used for others."
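The distinction between immediate mode and draw arrays matters because immediate mode crosses the API boundary once per vertex attribute, while a vertex array submits an entire batch in a single call. The toy Python model below counts the calls per frame for each path; it is not actual OpenGL code, and the two-attributes-per-vertex figure is an assumption:

```python
def immediate_mode_calls(num_vertices, attribs_per_vertex=2):
    """Immediate mode: one call per attribute per vertex (e.g. a normal call
    followed by a vertex call), plus the begin/end pair."""
    return num_vertices * attribs_per_vertex + 2

def draw_array_calls(num_batches=1):
    """Vertex arrays: one pointer-setup call, then one draw call per batch."""
    return num_batches + 1

# The 1.2-million-vertex engine block model from the text:
print(immediate_mode_calls(1_200_000))  # → 2400002 calls per frame
print(draw_array_calls())               # → 2 calls per frame
```

Per-call overhead in the driver is why a trace that mirrors the application's real submission path stresses the graphics subsystem so differently from a raw geometry dump.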

SPECviewperf 8 - CATIA V5R12 Performance

Lightscape Viewset (light-07)

"The light-07 viewset was created from traces of the graphics workload generated by the Lightscape Visualization System from Discreet Logic. Lightscape combines proprietary radiosity algorithms with a physically based lighting interface.

The most significant feature of Lightscape is its ability to accurately simulate global illumination effects by precalculating the diffuse energy distribution in an environment and storing the lighting distribution as part of the 3D model. The resulting lighting "mesh" can then be rapidly displayed."

SPECviewperf 8 - Lightscape Visualization System Performance

Maya Viewset (maya-01)

"The maya-01 viewset was created from traces of the graphics workload generated by the Maya V5 application from Alias.

The models used in the tests were contributed by artists at NVIDIA. Various modes in the Maya application are measured.

State changes as made by the application are included throughout the rendering of the model, including matrix, material, light and line-stipple changes. All state changes are derived from a trace of the running application. The state changes put considerably more stress on graphics subsystems than the simple geometry dumps found in older viewsets.

As in the Maya V5 application, array element is used to transfer data through the OpenGL API."

SPECviewperf 8 - Maya V5 Performance

Pro/ENGINEER (proe-03)

"The proe-03 viewset was created from traces of the graphics workload generated by the Pro/ENGINEER 2001TM application from PTC.

Two models and three rendering modes are measured during the test. PTC contributed the models to SPEC for use in measurement of the Pro/ENGINEER application. The first of the models, the PTC World Car, represents a large-model workload composed of 3.9 to 5.9 million vertices. This model is measured in shaded, hidden-line removal, and wireframe modes. The wireframe workloads are measured both in normal and antialiased mode. The second model is a copier. It is a medium-sized model made up of 485,000 to 1.6 million vertices. Shaded and hidden-line-removal modes were measured for this model.

This viewset includes state changes as made by the application throughout the rendering of the model, including matrix, material, light and line-stipple changes. The PTC World Car shaded frames include more than 100MB of state and vertex information per frame. All state changes are derived from a trace of the running application. The state changes put considerably more stress on graphics subsystems than the simple geometry dumps found in older viewsets.

Mirroring the application, draw arrays are used for the shaded tests and immediate mode is used for the wireframe. The gradient background used by the Pro/E application is also included to better model the application workload."
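With more than 100MB of state and vertex data per frame, data transfer alone bounds the achievable frame rate. A quick sketch of the bus bandwidth implied by a given frame rate, using illustrative numbers rather than measured figures:

```python
def required_bandwidth_gbs(mb_per_frame, fps):
    """GB/s needed just to move the per-frame data at a target frame rate."""
    return mb_per_frame * fps / 1000.0

# 100 MB/frame at 10 fps already demands on the order of 1 GB/s of transfer,
# before any rendering work is counted.
print(required_bandwidth_gbs(100, 10))  # → 1.0
```

This is why the World Car shaded test tends to separate cards and platforms by interface bandwidth as much as by raw fill rate.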

SPECviewperf 8 - Pro/ENGINEER Performance

SolidWorks Viewset (sw-01)

"The sw-01 viewset was created from traces of the graphics workload generated by the Solidworks 2004 application from Dassault Systemes.

The model and workloads used were contributed by Solidworks as part of the SPECapc for SolidWorks 2004 benchmark.

State changes as made by the application are included throughout the rendering of the model, including matrix, material, light and line-stipple changes. All state changes are derived from a trace of the running application. The state changes put considerably more stress on graphics subsystems than the simple geometry dumps found in older viewsets.

Mirroring the application, draw arrays are used for some tests and immediate mode used for others."

SPECviewperf 8 - Solidworks 2004 Performance

Unigraphics (ugs-04)

"The ugs-04 viewset was created from traces of the graphics workload generated by Unigraphics V17.

The engine model used was taken from the SPECapc for Unigraphics V17 application benchmark. Three rendering modes are measured -- shaded, shaded with transparency, and wireframe. The wireframe workloads are measured both in normal and anti-aliased mode. All tests are repeated twice, rotating once in the center of the screen and then moving about the frame to measure clipping performance.

The viewset is based on a trace of the running application and includes all the state changes found during normal Unigraphics operation. As with the application, OpenGL display lists are used to transfer data to the graphics subsystem. Thousands of display lists of varying sizes go into generating each frame of the model.

To increase model size and complexity, SPECviewperf 8.0 replicates the model two times more than the previous ugs-03 test."
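Reading "replicates the model two times more" as two extra copies of the base geometry, the total vertex count scales as below; the base vertex count here is illustrative, not the real engine model's:

```python
def replicated_vertices(base_vertices, extra_copies):
    """Total vertex count after adding extra replicated copies of the model."""
    return base_vertices * (1 + extra_copies)

# Two extra copies triple the geometry submitted per frame (assumed reading).
print(replicated_vertices(500_000, 2))  # → 1500000
```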

SPECviewperf 8 - Unigraphics V17 Performance

3D Rendering Power Consumption Comparison
Comments

  • mrdudesir - Monday, November 15, 2004 - link

    Great idea including the Benchmark summary tables at the beginning of the article. I for one don't like having to always comb through the benchmark tables and pick out each specific test when it's just a new processor being introduced. Keep up the great work guys.
  • thebluesgnr - Monday, November 15, 2004 - link

    To include IE render times you have to keep in mind that it's also very dependent on the chipset. If you really wanted to compare the two processors ideally you would use two motherboards with the same southbridge (SiS, VIA and now ATI).
  • jimmy43 - Monday, November 15, 2004 - link

    A lot more often than I make spreadsheets in Excel.
  • KristopherKubicki - Monday, November 15, 2004 - link

    jimmy43: Although IE render time is a good test, Windows startup times seem kind of pointless. How often are you restarting your PC?

    Furthermore, virus scans are almost entirely bottlenecked on the HD.

    Hope that helps,

    Kristopher
  • jimmy43 - Sunday, November 14, 2004 - link

    Personally, I would love to see some actual real world benchmarks such as these:

    -Windows XP startup times.
    -Internet Explorer startup/render time.
    -Virus scan times
    -THOROUGH multitasking tests.

    I really don't understand why these are not included. Most users will spend 90% of their time doing such tasks (except gaming, where AMD is the obvious leader), and as such, these benchmarks are CRUCIAL. Obviously, one can extrapolate results for these from synthetic benchmarks, but I personally would much rather see real world benchmarks. Thank you!
  • skunkbuster - Sunday, November 14, 2004 - link

    I personally never put too much stock in synthetic benchmarks

    but that's just me

  • Xspringe2 - Sunday, November 14, 2004 - link

    Woops sorry wrong comment section :)
  • Xspringe2 - Sunday, November 14, 2004 - link

    Do you guys plan on testing any dual opteron nforce4 motherboards?
  • stephenbrooks - Sunday, November 14, 2004 - link

    Well saying their recommendation is split doesn't mean to say it's split _equally_. ;)
  • KeithDust2000 - Sunday, November 14, 2004 - link

    Anand, you say "Had AMD released a 2.6GHz Athlon 64 4000+ Intel would have had a more difficult time with the 570J, but given that things are the way they are our CPU recommendation is split between the two."

    I don't think it's a good idea to recommend the 3.8GHz P4 at this point. While A64 still has the advantage of Cool'n'Quiet (while Intel has rather the opposite), apparently Intel thinks 64-bit support (and CnQ) is important enough to introduce for desktops next quarter. As you know, 64-bit can e.g. speed up applications like DivX encoding by 15-25%, others even more, and will give a performance advantage of roughly 1 speed grade or more rather soon. Not taking that into account, and recommending the rather future-unproof 3.8GHz P4 doesn't seem wise at all. You've seen in the Linux tests as well what AMD64 is capable of. Buying a 32-bit CPU for more than $600 now just looks like a dumb idea at this point.
