The Test

Testing the Sun Fire V40z is not something that we can easily reference, since the server configurations in our review portfolio are generally Windows based or dual-processor machines. Our quad-processor database analysis from early last year goes into specific detail about database performance, and Jason's Opteron 252 article from a week ago adds more depth to that data. Johan wrote a very thorough article detailing some of the differences between various database benchmarks, and we will be using some of his analysis procedure from the Sun Fire V20z benchmarks as well.

To give our benchmarks a performance baseline, we took some data from our Sun W2100z analysis.


Test Configurations

                      Sun W2100z                             Sun Fire V40z
Processor(s):         (2) AMD Opteron 250                    (4) AMD Opteron 850
RAM:                  4 x 1024MB PC-3200                     8 x 1024MB PC-2700
Hard Drives:          SCSI U320 Seagate Cheetah 10,000RPM    SCSI U320 Seagate Cheetah 10,000RPM
Memory Timings:       Default                                Default
Operating System(s):  SuSE 9.1 Professional, RedHat 9,       RedHat 9
                      JDS 2.0
Kernel:               Linux 2.6.8, Linux 2.4 (JDS 2.0)       Linux 2.4.21
Compiler:             gcc 3.4.2 (see below)                  gcc 3.4.2 (see below)

linux:~ # gcc -v
Reading specs from /usr/local/lib/gcc/i686-pc-linux-gnu/3.4.2/specs
Configured with: ./configure
Thread model: posix
gcc version 3.4.2
Other than the two additional processors, the differences between these two machines are very small. However, take notice that the V40z is running slower memory than our workstation baseline. As stated in the introduction of this analysis, the "E4" stepping of the Opteron 8xx lineup allows PC-3200 in four- and eight-way configurations.
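
For readers who want to verify what their own box is running, the rated speed of the installed DIMMs can usually be read straight from the SMBIOS tables. A minimal sketch using the stock dmidecode utility (requires root; the grep filter is only there to trim the output for readability):

    # List the "Memory Device" entries from the DMI tables and keep only the
    # fields that identify each DIMM's size, type, and rated speed.
    dmidecode --type memory | grep -E "Size|Type:|Speed"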

We will put the majority of our emphasis on database benchmarks for this analysis because a quad Opteron server with 8GB of memory is typically an ideal platform for a database. Our rendering benchmarks are also important, but our compilation benchmarks represent the best real-world analysis in our testing.

The core of our benchmarks today runs on Red Hat 9 (kernel 2.4.21), which is NUMA aware. Anand wrote a short introduction to NUMA almost two years ago during the Opteron launch, which should illustrate the importance of NUMA in our particular configuration. Some of our benchmarks need no more than a few hundred megabytes of data, so it becomes much more efficient to copy all of that data into the local memory of each processor bank. All of our tests run on x86_64 kernels and environments. With the exception of Mental Ray and Shake, all binaries are 64-bit as well.
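
To make the NUMA point a little more concrete, the sketch below shows how a NUMA-aware Linux install can place a run explicitly. numactl is the standard tool for this kind of placement; the benchmark binary name here is just a placeholder:

    # Show the node layout: one node per Opteron, each with its own bank of local memory.
    numactl --hardware

    # Pin the run to node 0's CPUs and allocate only from node 0's local memory,
    # so the working set never has to cross a HyperTransport hop.
    numactl --cpunodebind=0 --membind=0 ./db_benchmark

    # For working sets larger than one bank, interleaving allocations across all
    # nodes evens out the memory traffic instead.
    numactl --interleave=all ./db_benchmark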

Comments

  • tironside - Thursday, February 24, 2005 - link

    I agree with dwnwrd. The LOM part of it is not great for remote console, etc.; the LOM that the HP stuff has is pretty slick, with a Java/web interface. The other main problem I have with this is that it offers only RAID 1 unless you buy a rather expensive add-on card to do RAID 5, kind of a teaser to put in 6 drive bays and only let you do RAID 1... It's a good start, but Sun needs to make some changes before it can go mission critical (RAID and LOM enhancements, IMHO). While I like CLI stuff, trying to get junior people to do complicated CLI stuff is dangerous...

  • dwnwrd - Thursday, February 24, 2005 - link

    I have some V20s and a V40. The service processor is pretty great, except that if you direct the Linux serial console to it and then connect to the "serial over LAN", you'll get a flood of "serial8250: too much work for irq4" messages and a sleepy system.

    http://supportforum.sun.com/hardware/index.php?t=m...
  • Pontius - Thursday, February 24, 2005 - link

    I am curious what they are using when they benchmark the Linux kernel compile times. They use the time command, which spits out three times - real, user & sys. Are they using the sum of all of these? If not, something is wrong, because I did the same test on the same 2.6.4 kernel using -j2 on a dual 2.8GHz Nocona system and got a "real" time of 147s. That doesn't seem right, because the Opterons are way faster at compilation. On the other hand, if I take the sum of the 3 times, I get 420s. Any thoughts?

  • jlee123 - Wednesday, February 23, 2005 - link

    RedHat 9, are you joking!!?? This has got to be a mistake; I can't understand how Sun could be shipping a 64-bit server with a 32-bit OS that's reached End Of Life. It's the equivalent of buying a workstation with Windows ME on it. Also, there was never an official port of RH9 to x86-64; the first x86-64 RedHat was RHEL3, and the Fedora team later released FC1 x86-64. If Sun doesn't wish to pay licensing, they'd be better off shipping with FC2, FC3, or CentOS, a free rebuild of RHEL. This hardware isn't even going to begin to be utilized till it's running something more modern like RHEL4 x86-64.
  • JustAnAverageGuy - Wednesday, February 23, 2005 - link

    I could think of one use for these. :)

    http://forums.anandtech.com/categories.aspx?catid=...
  • lauwersw - Wednesday, February 23, 2005 - link

    The standard rule for parallel make is to use 2x the number of processors available. This gives the best results for hiding disk latencies and seems to be correct in most cases I've seen.
  • phaxmohdem - Wednesday, February 23, 2005 - link

    Call me what you will, but I would like to see some quad/dual Xeon scores to compare to as well (along with price tags for comparison :) )

    And yes, If I were a rich man who knew what to do with that much computing power, I would have a dozen of these babies in my basement! Who needs women anymore once you have 48 Opteron x50 or x52 cpus humming at your disposal. And drool core? Ahhhhhhhhhhh.
  • JustAnAverageGuy - Tuesday, February 22, 2005 - link

    That thing is a BEAST.

    I have no idea what I'd do with a computer like that.
  • MrEMan - Tuesday, February 22, 2005 - link

    Kristopher,

    Thanks for the clarification about the reduced media tag.

    E
  • KristopherKubicki - Tuesday, February 22, 2005 - link

    RyanVM: The system used 850s.

    Kristopher
