Thermals, Noise

Sonic

Having spent a considerable amount of time working on high-end workstations in the past, we know that the nagging thermal and acoustic "gotchas" are not issues to overlook. Many workstations perform outstandingly compared to an equivalent desktop setup, but they sound like a helicopter taking off. As we mentioned earlier in the analysis, we were impressed by the simplicity and elegance of the w2100z's internals; the true test of their worth comes when we power up the system and actually start using it. When we first set up our w2100z and the machine POSTed for the first time, we were greeted by the 120mm fan running at full speed, well above 50dBA! Fortunately, the full-speed setting seemed to be only part of the initial self-diagnostic test, and we never heard the fan peak over 30dBA for the rest of the analysis.




[Image: the w2100z's rear 120mm exhaust and EMI shield]


Above, you can see the sole source of the w2100z's exhaust. If you look closely at the EMI shield, you can see where the PS/2 ports are still covered up.

By contrast, our whitebox server setup almost always runs at or above 50dBA. The form factor is considerably different, and the machine is more at home in a rack than on our desk. The comparison is apples to oranges, but our w2100z never came close to the noise of our whitebox configuration.
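To put those two numbers in perspective, the decibel scale is logarithmic: a 20dBA gap works out to roughly one hundred times the sound power and, by the usual rule of thumb, about four times the perceived loudness. A quick back-of-the-envelope sketch using the 50dBA and 30dBA figures above:

```python
# Rough comparison of two sound pressure levels in dBA.
# 50 dBA (whitebox) vs. ~30 dBA (w2100z) -- figures from the measurements above.

def power_ratio(db_a: float, db_b: float) -> float:
    """Ratio of sound power between two SPL readings (dB is a log10 scale)."""
    return 10 ** ((db_a - db_b) / 10)

def perceived_loudness_ratio(db_a: float, db_b: float) -> float:
    """Common rule of thumb: +10 dB is heard as roughly twice as loud."""
    return 2 ** ((db_a - db_b) / 10)

whitebox, w2100z = 50.0, 30.0
print(f"Sound power ratio:        {power_ratio(whitebox, w2100z):.0f}x")            # ~100x
print(f"Perceived loudness ratio: {perceived_loudness_ratio(whitebox, w2100z):.0f}x")  # ~4x
```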

Thermal

Equally important to any acoustic testing is a thermal analysis. We approach thermal testing here in the same manner as any other enclosure review. In our previous enclosure analyses, we have generally preferred bigger, slower-moving fans over smaller, faster ones. Psychoacoustic studies show that when two sounds of equal dBA are compared side by side, humans tend to perceive the higher-pitched one as "louder". A larger fan can move the same volume of air at a lower rotational speed, keeping its noise lower in pitch, so Sun's layout for the exhaust makes a lot of sense. During our review, we did not see the fan peak over 2500 RPM, and it generally averaged around 1100 RPM.
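A simple way to see why the big, slow fan ends up lower-pitched is blade-pass frequency, the dominant tone a fan produces: blade count times RPM, divided by 60. The sketch below is purely illustrative; the blade count and the 80mm comparison point are our assumptions, not specifications from Sun:

```python
# Blade-pass frequency: the dominant tonal component of fan noise.
# bpf (Hz) = blade_count * rpm / 60
# Blade counts and the 80mm figure below are illustrative assumptions.

def blade_pass_hz(blade_count: int, rpm: float) -> float:
    return blade_count * rpm / 60.0

# The w2100z's 120mm exhaust averaged ~1100 RPM in our testing; a smaller 80mm
# fan would typically need to spin much faster to move a comparable volume of air.
fan_120mm = blade_pass_hz(blade_count=7, rpm=1100)   # ~128 Hz
fan_80mm  = blade_pass_hz(blade_count=7, rpm=3000)   # ~350 Hz (hypothetical)

print(f"120mm @ 1100 RPM: {fan_120mm:.0f} Hz")
print(f" 80mm @ 3000 RPM: {fan_80mm:.0f} Hz  (higher pitch -> perceived as louder at equal dBA)")
```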

     
[Image: the w2100z's heatsink/fan assemblies]

Sun's heatsink/fan (HSF) combos are made almost entirely of aluminum, but they are unusual in employing a side-mounted fan and a system of heat pipes. Lately, every third-party manufacturer seems to put heat pipes in their CPU coolers, so we weren't particularly surprised to see them adopted in the w2100z as well. The heatsinks play an integral role in Sun's carefully designed thermal footprint: both are placed in line with the 120mm exhaust, and the second heatsink has fins that direct hot air into the path of the 120mm fan. These small, professional touches reinforce our impression of Sun as a premier workstation designer.

We added some extra test points for this review to provide a thermal cross section of the system. The first graph shows the ambient air temperature taken from a sensor attached just off the inner wall of the side panel. You may download the CSV worksheet with each individual location tested here (under load). All positions on the graph below were measured using a digital thermometer with remote sensors at a constant room temperature of 22.3 degrees Celsius. We took our measurements after 30 minutes of looping SPECviewperf 8.
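For readers who grab the worksheet, here is a minimal sketch of how such a file could be summarized, assuming columns named "location" and "temp_c" (those names are ours, not part of the actual file):

```python
# Minimal sketch for summarizing a thermal measurement worksheet.
# The column names ("location", "temp_c") and the filename are assumptions,
# not the actual layout of the downloadable CSV.
import csv

AMBIENT_C = 22.3  # constant room temperature used for all measurements

def summarize(path: str) -> None:
    with open(path, newline="") as fh:
        rows = list(csv.DictReader(fh))
    for row in sorted(rows, key=lambda r: float(r["temp_c"]), reverse=True):
        temp = float(row["temp_c"])
        print(f"{row['location']:<24} {temp:5.1f} C  (+{temp - AMBIENT_C:.1f} over ambient)")

# summarize("w2100z_thermals_load.csv")  # hypothetical filename
```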




[Graph: ambient air temperatures inside the w2100z under load]


You can see that the warmest ambient air resides near the bottom of the case. As we expected, pockets of warm air gather around the video card and hard drive, where no active cooling is moving air around. Most importantly, the AGP, PCI-X and SCSI tunnels operate at significantly elevated temperatures and contribute to this warm air pocket. The graph below denotes the surface temperatures of several components in the system; hovering over the image reveals the temperatures of the CPU mezzanine as well.



[Graph: component surface temperatures, with CPU mezzanine overlay]


The CPUs themselves both measured 52 degrees Celsius under load, certainly not poor results. There is a clear correlation between the surface temperatures of some of our components and the ambient air graph above. Temperatures on the SCSI controller and PCI tunnels are a little high, and all of these components are passively cooled. Understandably, the components under the CPU mezzanine register as very warm, since they do not get the benefit of the exhaust fan.

While testing the thermal behavior of the system, every time we removed the side panel, the rear exhaust fan shot up several hundred RPM. Without the directed, negative-pressure airflow across the inside of the case, the system detects increased heat almost immediately and spins the fan up to compensate.
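For anyone who wants to watch this behavior on their own hardware, fan tachometers on a Linux installation are typically exposed through the kernel's hwmon sysfs interface. The sketch below is generic, not w2100z-specific, and assumes the appropriate sensor driver is loaded:

```python
# Generic sketch: poll fan tachometer readings through Linux's hwmon sysfs
# interface (fan*_input files report RPM). Not w2100z-specific; available
# sensors and paths depend on the chipset driver.
import glob
import time

def read_fan_rpms() -> dict:
    rpms = {}
    for path in glob.glob("/sys/class/hwmon/hwmon*/fan*_input"):
        try:
            with open(path) as fh:
                rpms[path] = int(fh.read().strip())
        except (OSError, ValueError):
            continue  # sensor momentarily unreadable; skip it
    return rpms

if __name__ == "__main__":
    for _ in range(10):  # sample for ~20 seconds
        for path, rpm in read_fan_rpms().items():
            print(f"{path}: {rpm} RPM")
        time.sleep(2)
```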


47 Comments


  • t - Thursday, October 28, 2004 - link

    oh... i better clarify before i get labelled as a 'zealot' or a 'mac hater' or a 'pc lover'

    by 'cache starved' i mean that the power4 is _very_ dependent upon its cache architecture, take some of that away and u of course impact performance... a power4 and a G5 at the same clockspeed, the power4 wins. The G5 is still an impressive chip.

    t.
  • t - Thursday, October 28, 2004 - link

    heh...this thread is hilarious...can u ppl like talk past each other some more?

    please :)

    G5 = cut down, somewhat cache starved power4
    Blue Gene/L = power4+

    they are a fair bit different: l3, l2, altivec, dual v. single core...just for a couple.
  • gromm - Wednesday, October 27, 2004 - link

    "and I'm sure that there's nothing special about the way the G5's were linked... "

    Actually, they have a communication network based on InfiniBand, which isn't something that you'd buy for home (especially considering how much it costs). The cards themselves are $200+ each (in quantities of 10,000 for the only price I could find for their HCA cards) and I can't even find how much the switches cost (I'm sure several thousand dollars each).
  • Reflex - Wednesday, October 27, 2004 - link

    Thanks for the correction, it's been a while since I read up on that stuff. However, you still illustrated my point that this is pretty much an irrelevant benchmark for general-purpose computing. People do not simply use their PCs for floating point performance...
  • slashbinslashbash - Wednesday, October 27, 2004 - link

    Reflex: "First off, once again, you are misunderstanding what you are looking at. Total number of CPU's is only part of the equation. There are *many* factors that go into the 'most powerful supercomputer' equation. How much memory and what type/speed? How are they linking the individual nodes? What kind of software optimizations have been done, and what software is being used to benchmark it?"

    Wrong. The top supercomputers are rated solely by FLOPS (Floating Point Operations Per Second, as I'm sure you know) as measured by the Linpack benchmark. See www.top500.org. I've never heard of memory having an impact on FLOPS; I guess it *could* if you absolutely starved the CPUs of work, but presumably all of these computers are balanced enough that the RAM can keep up with the CPUs. The nodes can be linked in any way; presumably they're linked optimally for price/performance, and I'm sure that there's nothing special about the way the G5's were linked... you don't build a supercomputing cluster and let the linking drag the performance down. As for the optimization question, I'm not sure but I'll bet Linpack is optimized for every platform/architecture.
  • Reflex - Wednesday, October 27, 2004 - link

    I have not been arguing about superiority for the PC platform. My point is that they are not directly comparable as related to this particular review. The product being reviewed does not have an equivalent on the Mac side of things, so going on about how this article proves that the 'price' argument is wrong is ridiculous.

    My original question has not been answered: I am wondering who is building these, since I have an identical workstation here on my lab bench, but with an IBM label on it.
  • karlreading - Wednesday, October 27, 2004 - link

    enuff of the mac vs. pc B$ dudes!!!
    this is a comments section about an opteron workstation, not about how a g5 spanks / gets spanked by opterons ass.

    that said there is one part of this that gets me excited. Whilst coming across mac / pc arguments on forums, i have noticed one trend. AMD is now always the PC's defender. i never hear anybody citing the p4 / xeon as a mac comparison. it's always opteron / a64 vs. g5. this is excellent news from AMD's standpoint, as it cements the trend that AMD is a respectable company, and it is also impressive to see AMD as the lead PC saviour in the ongoing annoying "my pc's better than your mac" debate. Intel should be worried, very very worried. never thought i'd see the day when my beloved AMD were championing the pc / x86 cause :)
  • gromm - Wednesday, October 27, 2004 - link

    As far as cost, I'd like to see how much Apple (and other sponsors) subsidized X. The networking infrastructure it has alone would normally be massively expensive and I can't see how it fits into the $10M pricetag quoted by them along with all compute hardware.

    From benchmarks I have run, the G5 and the Athlon64 are neck and neck in performance (actually, the difference is small enough to be noise) in 'normal' codes (mostly FPU), and the Athlons are a little faster in integer performance. I haven't seen what Altivec/SSE2 optimizations would do for either.

    If you want some rough estimates, go to Ars Technica and look in the Battlefront forum under the Cinebench thread. There are lots of scores under there to compare for this benchmark (includes a raytracer and some other stuff).
  • michael2k - Wednesday, October 27, 2004 - link

    I'd like to see Anand run SpecViewPerf on his Dual G5, now :)
  • Reflex - Wednesday, October 27, 2004 - link

    I am aware of Apple's 'server' aspirations. That does not change anything at all. They do not have the kind of corporate support Sun or other large vendors provide, and as a result the Xserve is not a large player in the market. Furthermore, it's only proof that you're comparing the wrong product to the Sun product this review was about. The Xserve was designed to compete with workstations like this, not the PowerMac, which is a desktop system.

    My comment about Sun relates to the fact that for a long time they were a detriment to the industry as a whole, pushing concepts like Java PCs with no local storage, trying to keep prices very high, and generally siding with 'Big Iron' in the market rather than embracing the future. In the past year, as Microsoft/Intel/AMD have made the Sparc obsolete, they have had to get with the program; choosing AMD as their partner made perfect sense, as they had no motivation to improve Intel's position considering they are still competing with them in some markets.

    In summation, Sun is finally seeing the light; however, their past is one of high prices, legal shenanigans (especially in Europe, where price fixing has been a common charge against them), and a strategy of defining themselves as Microsoft/Intel's opposition rather than charting their own course. The future will tell where they go, and I'll cross my fingers and hope that x86 Solaris and Opteron workstations are a sign that they are finally producing products their customers demand, rather than locking them into a model and then telling them what they need...
