Workstation Applications

Visual Studio 6

Carried over from our previous CPU reviews, our quick compile test continues to use Visual Studio 6 to build the Quake 3 source code, with compile time measured in seconds.
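
For those curious how such a test can be scripted, below is a minimal sketch of a compile-timing harness in C. The msdev command line, the quake3.dsw workspace name, and the build configuration are placeholders for illustration, not the exact invocation used on our testbed.

/* Minimal compile-timing sketch: wrap the Visual Studio 6 command-line
   build (msdev) with a coarse wall-clock timer and report whole seconds.
   The workspace name and build target below are hypothetical placeholders. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    time_t start = time(NULL);

    /* /MAKE selects the project and configuration, /REBUILD forces a full build */
    int rc = system("msdev quake3.dsw /MAKE \"quake3 - Win32 Release\" /REBUILD");

    time_t elapsed = time(NULL) - start;
    printf("Build returned %d after %ld seconds\n", rc, (long)elapsed);
    return 0;
}

A simple batch file and a stopwatch accomplish the same thing; the point is just to capture wall-clock build time in whole seconds, which is how the results below are reported.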

Visual Studio 6 Compiler Performance

The Pentium M is very competitive with the Pentium 4 in our compiler test, but it is still not fast enough to compete with the Athlon 64.

SPECviewperf 8

For our next set of professional application benchmarks, we turn to SPECviewperf 8. SPECviewperf is a collection of traces taken from some of the most popular professional applications and compiled into a single suite of benchmarks, used to estimate performance in the applications that each viewset models. With version 8, SPEC has significantly improved the quality of the benchmark, making it an even better real-world indicator of performance.

We have included SPEC's official description of each of the tests in the suite.

As you might guess, with performance in 3D workstation applications determined largely by floating-point throughput and memory bandwidth, the Pentium M doesn't fare too well:

3dsmax Viewset (3dsmax-03)

"The 3dsmax-03 viewset was created from traces of the graphics workload generated by 3ds max 3.1. To ensure a common comparison point, the OpenGL plug-in driver from Discreet was used during tracing.

The models for this viewset came from the SPECapc 3ds max 3.1 benchmark. Each model was measured with two different lighting models to reflect a range of potential 3ds max users. The high-complexity model uses five to seven positional lights as defined by the SPECapc benchmark and reflects how a high-end user would work with 3ds max. The medium-complexity lighting models use two positional lights, a more common lighting environment.

The viewset is based on a trace of the running application and includes all the state changes found during normal 3ds max operation. Immediate-mode OpenGL calls are used to transfer data to the graphics subsystem."
SPECviewperf 8 - 3dsmax 3.1 Performance
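
To put SPEC's mention of "immediate-mode OpenGL calls" in concrete terms, here is a minimal, hypothetical fragment (dummy geometry, not taken from the viewset): every vertex travels through its own API call, so CPU call overhead and floating-point throughput weigh heavily on the score.

/* Immediate mode: the application pushes one vertex per call.
   The triangle here is purely illustrative. */
#include <GL/gl.h>

static void draw_triangle_immediate(void)
{
    glBegin(GL_TRIANGLES);
    glNormal3f(0.0f, 0.0f, 1.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
}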

CATIA Viewset (catia-01)

"The catia-01 viewset was created from traces of the graphics workload generated by the CATIATM V5R12 application from Dassault Systems.

Three models are measured using various modes in CATIA. Phil Harris of LionHeart Solutions, developer of CATBench2003, supplied SPEC/GPC with the models used to measure the CATIA application. The models are courtesy of CATBench2003 and CATIA Community.

The car model contains more than two million points. SPECviewperf replicates the geometry represented by the smaller engine block and submarine models to increase complexity and decrease frame rates. After replication, these models contain 1.2 million vertices (engine block) and 1.8 million vertices (submarine).

State changes as made by the application are included throughout the rendering of the model, including matrix, material, light and line-stipple changes. All state changes are derived from a trace of the running application. The state changes put considerably more stress on graphics subsystems than the simple geometry dumps found in older SPECviewperf viewsets.

Mirroring the application, draw arrays are used for some tests and immediate mode used for others."
SPECviewperf 8 - CATIA V5R12 Performance

Lightscape Viewset (light-07)

"The light-07 viewset was created from traces of the graphics workload generated by the Lightscape Visualization System from Discreet Logic. Lightscape combines proprietary radiosity algorithms with a physically based lighting interface.

The most significant feature of Lightscape is its ability to simulate global illumination effects accurately by precalculating the diffuse energy distribution in an environment and storing the lighting distribution as part of the 3D model. The resulting lighting "mesh" can then be rapidly displayed."
SPECviewperf 8 - Lightscape Visualization System Performance
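
For readers unfamiliar with radiosity, the "precalculated diffuse energy distribution" SPEC refers to is, in essence, a solution of the classic radiosity equation, shown here only as background rather than as Lightscape's exact formulation:

B_i = E_i + \rho_i \sum_{j} F_{ij} B_j

Here B_i is the radiosity of surface patch i, E_i its emitted energy, \rho_i its diffuse reflectance, and F_{ij} the form factor between patches i and j. Solving this system ahead of time is what allows the lit "mesh" to be redisplayed so quickly afterward.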

Maya Viewset (maya-01)

"The maya-01 viewset was created from traces of the graphics workload generated by the Maya V5 application from Alias.

The models used in the tests were contributed by artists at NVIDIA. Various modes in the Maya application are measured.

State changes as made by the application are included throughout the rendering of the model, including matrix, material, light and line-stipple changes. All state changes are derived from a trace of the running application. The state changes put considerably more stress on graphics subsystems than the simple geometry dumps found in older viewsets.

As in the Maya V5 application, array element is used to transfer data through the OpenGL API."
SPECviewperf 8 - Maya V5 Performance
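
The "array element" transfer path SPEC mentions corresponds to OpenGL's glArrayElement call: vertex data sits in client-side arrays and is referenced by index, one element per call. A minimal, hypothetical fragment with dummy geometry:

/* Array-element transfer: vertices live in a client-side array and are
   referenced by index. Geometry here is illustrative only. */
#include <GL/gl.h>

static const float vertices[][3] = {
    { -1.0f, -1.0f, 0.0f },
    {  1.0f, -1.0f, 0.0f },
    {  0.0f,  1.0f, 0.0f },
};

static void draw_triangle_array_element(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);

    glBegin(GL_TRIANGLES);
    glArrayElement(0);
    glArrayElement(1);
    glArrayElement(2);
    glEnd();

    glDisableClientState(GL_VERTEX_ARRAY);
}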

Pro/ENGINEER (proe-03)

"The proe-03 viewset was created from traces of the graphics workload generated by the Pro/ENGINEER 2001TM application from PTC.

Two models and three rendering modes are measured during the test. PTC contributed the models to SPEC for use in measurement of the Pro/ENGINEER application. The first of the models, the PTC World Car, represents a large-model workload composed of 3.9 to 5.9 million vertices. This model is measured in shaded, hidden-line removal, and wireframe modes. The wireframe workloads are measured both in normal and antialiased mode. The second model is a copier. It is a medium-sized model made up of 485,000 to 1.6 million vertices. Shaded and hidden-line-removal modes were measured for this model.

This viewset includes state changes as made by the application throughout the rendering of the model, including matrix, material, light and line-stipple changes. The PTC World Car shaded frames include more than 100MB of state and vertex information per frame. All state changes are derived from a trace of the running application. The state changes put considerably more stress on graphics subsystems than the simple geometry dumps found in older viewsets.

Mirroring the application, draw arrays are used for the shaded tests and immediate mode is used for the wireframe. The gradient background used by the Pro/E application is also included to better model the application workload."
SPECviewperf 8 - Pro/ENGINEER Performance
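
The "draw arrays" path used for the shaded Pro/E tests, by contrast, hands an entire client-side array to the driver in one call, which costs far fewer CPU cycles per vertex than immediate mode. Again a minimal, hypothetical fragment with dummy geometry:

/* Draw-arrays transfer: one call submits the whole vertex array. */
#include <GL/gl.h>

static const float shaded_vertices[][3] = {
    { -1.0f, -1.0f, 0.0f },
    {  1.0f, -1.0f, 0.0f },
    {  0.0f,  1.0f, 0.0f },
};

static void draw_shaded_pass(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, shaded_vertices);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}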

SolidWorks Viewset (sw-01)

"The sw-01 viewset was created from traces of the graphics workload generated by the Solidworks 2004 application from Dassault Systemes.

The model and workloads used were contributed by Solidworks as part of the SPECapc for SolidWorks 2004 benchmark.

State changes as made by the application are included throughout the rendering of the model, including matrix, material, light and line-stipple changes. All state changes are derived from a trace of the running application. The state changes put considerably more stress on graphics subsystems than the simple geometry dumps found in older viewsets.

Mirroring the application, draw arrays are used for some tests and immediate mode used for others."
SPECviewperf 8 - Solidworks 2004 Performance

Unigraphics (ugs-04)

"The ugs-04 viewset was created from traces of the graphics workload generated by Unigraphics V17.

The engine model used was taken from the SPECapc for Unigraphics V17 application benchmark. Three rendering modes are measured -- shaded, shaded with transparency, and wireframe. The wireframe workloads are measured both in normal and anti-aliased mode. All tests are repeated twice, rotating once in the center of the screen and then moving about the frame to measure clipping performance.

The viewset is based on a trace of the running application and includes all the state changes found during normal Unigraphics operation. As with the application, OpenGL display lists are used to transfer data to the graphics subsystem. Thousands of display lists of varying sizes go into generating each frame of the model.

To increase model size and complexity, SPECviewperf 8.0 replicates the model two times more than the previous ugs-03 test."
SPECviewperf 8 - Unigraphics V17 Performance
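
Display lists, the transfer mechanism SPEC describes for Unigraphics, let the application record geometry once and replay it each frame with a single call. A minimal, hypothetical sketch with dummy geometry (the real viewset issues thousands of such lists per frame):

/* Display lists: record geometry once, replay it with glCallList. */
#include <GL/gl.h>

static GLuint engine_list;

static void build_display_list(void)
{
    engine_list = glGenLists(1);
    glNewList(engine_list, GL_COMPILE);
    glBegin(GL_TRIANGLES);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
    glEndList();
}

static void draw_frame(void)
{
    glCallList(engine_list);
}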

Comments

  • fitten - Tuesday, February 8, 2005 - link

    Also, it's interesting that there are many benchmarks chosen which are known to stress the weaknesses of the Pentium-M... not that it isn't interesting information. For example, there seem to be a whole lot of FPU-intensive benchmarks (around 15 or so, all of which the Pentium-M should lose handily - known before they were even run), so it's kind of just hammering the point home, I guess.

    Anyway, the Dothans held up pretty well from what I can see... Most of the time (except for the notable FPU-intensive and memory-bandwidth-intensive benchmarks), the Dothan compares quite well with Athlon64s of the same clock speed that have the advantage of dual channel memory.
  • fitten - Tuesday, February 8, 2005 - link

    The other interesting thing about the Athlon64 vs. Dothan comparison is that even with dual channel memory bandwidth on the Athlon64's side, the single channel memory bandwidth of the Dothan still keeps it very close in many of the benchmarks and can even beat the dual channel Athlon64s at 400MHz higher clock in some.

    Anyway, the Pentium-M family is a good start. Some tweaking here and there (an improved FPU, maybe another FP execution unit, and a memory subsystem that makes good use of dual channel) and it will be at least as good as the Athlon64s across the board.

    I own three Athlon64 desktops, two AthlonXP desktops, and two Pentium-M laptops and the laptops are by no means "slow" at doing work.
  • KristopherKubicki - Tuesday, February 8, 2005 - link

    teutonicknight: We purposely don't change our test platform too often. Even though we are using a slightly older version of Premiere, it is the same version we have used in our other processor analyses.

    Hope that helps,

    Kristopher
  • kmmatney - Tuesday, February 8, 2005 - link

    There's also a Celeron version that would have been interesting to review. The small L2 cache should hurt its performance, though. I think the Celeron version uses something like 7 Watts. It would make no sense to put a Celeron M in such an expensive motherboard, though.
  • Slaimus - Tuesday, February 8, 2005 - link

    I think this indirectly shows how AMD needs to update its caching architecture on the K8. They basically carried over the K7 caches, which are just too slow when paired with its memory controller. Instead of being as large as possible (as evidenced by the exclusive caches) at the expense of latency, the K8 needs faster caches. The memory bandwidth of the L2 versus system memory is only about 2 to 1 on the K8, which is to say the L2 cache isn't helping much over system memory.
  • sandorski - Monday, February 7, 2005 - link

    I think the Pentium M mythos can now be laid to rest.
  • mjz5 - Monday, February 7, 2005 - link

    To #29:

    Your 2800+ is the Socket 754 part.

    The 3000+ reviewed here is the Socket 939 part, which runs at 1.8 GHz. The 3000+ for Socket 754 is 2.0 GHz.
  • kristof007 - Monday, February 7, 2005 - link

    I don't know if anyone else noticed, but the charts are a bit off. My A64 2800+ is running at a stock 1.8 GHz... while in the review the A64 3000+ is running at 1.8... weird!
  • knitecrow - Monday, February 7, 2005 - link

    #25

    1) Intel and AMD measure TDP differently... and TDP is not the same as actual power dissipation. The actual dissipation of 90nm A64 is pretty darn good.

    2) A microprocessor is not made of Lego... you can't rearrange/tweak parts to make it faster. It takes a lot of time, energy and talent to make changes -- even then it may not work for the best. Prescott anyone?


    Frankly I’ve been waiting for a good review of P-M's actual performance. I really don't trust those "other" sites.