What Makes Server Applications Different?

The large caches and high integer core (cluster) count of one Orochi die (a four-module CMT Bulldozer die) made quite a few people suspect that the Bulldozer design was first and foremost created to excel in server workloads. Reviews like our own AMD FX-8150 launch article have revealed that single-threaded performance has (slightly) regressed compared to the previous AMD CPUs (Istanbul core), while the chip performs better in heavily multi-threaded benchmarks. However, high performance in multi-threaded workstation and desktop applications does not automatically mean that the architecture is server centric.

A more in-depth analysis of the Bulldozer architecture and its performance will be presented in a later article, as it is beyond the scope of this one. However, many of our readers are either hardcore hardware enthusiasts or IT professionals who love to delve a bit deeper than benchmarks showing whether something is faster or slower than the competition, so it is worth starting with an explanation of what makes an architecture better suited for server applications. Is the Bulldozer architecture a "server centric architecture"?

What makes a server application different anyway?

There have been extensive performance characterizations of the SPEC CPU benchmark, which contains real-world HPC (High Performance Computing), workstation, and desktop applications. Studies of commercial web and database workloads on real CPUs are less abundant, but we dug up quite a bit of interesting info. In summary, server workloads distinguish themselves from workstation and desktop workloads in the following ways.

They spend a lot more time in the kernel. Accessing the network stack, hitting the disk subsystem, handling user connections, synchronizing large numbers of threads, demanding more memory pages for expanding caches--server workloads make the OS sweat. Server applications spend about 20 to 60% of their execution time in the kernel or hypervisor, while in contrast most desktop applications rarely exceed 5% kernel time. Kernel code tends to have very low IPC (Instructions Per Clock cycle) with lots of dependencies.
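
To make the kernel-time point concrete, here is a minimal sketch (assuming Linux and glibc; the /dev/null write loop is just a stand-in for real network and disk I/O) that splits a process's CPU time into user and kernel components with getrusage():

```c
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/resource.h>

int main(void)
{
    int fd = open("/dev/null", O_WRONLY);
    char buf[4096] = {0};

    /* Syscall-heavy loop: each write() traps into the kernel, much like
     * a server shoveling data through the network and disk stacks. */
    for (int i = 0; i < 1000000; i++)
        write(fd, buf, sizeof(buf));
    close(fd);

    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);

    double user = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
    double sys  = ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
    printf("user: %.2fs  kernel: %.2fs  kernel share: %.0f%%\n",
           user, sys, 100.0 * sys / (user + sys));
    return 0;
}
```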

That is why, for example, SPECjbb, which performs no networking or disk access, is a decent CPU benchmark but a pretty bad server benchmark. Interestingly, SPECjbb, thanks to its lack of I/O subsystem interaction, typically has an IPC of 0.5-0.9, almost twice as high as other server workloads (0.3-0.6), even when those workloads are not bottlenecked by the storage subsystem.
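
For readers who want to measure IPC on their own workloads, below is a hedged sketch using the Linux perf_event_open() interface to count retired instructions and cycles around a piece of code. The busy loop is just a placeholder, and the call may require lowering /proc/sys/kernel/perf_event_paranoid:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* Open one hardware counter; events opened with a group_fd are scheduled
 * together with the group leader, so both counters cover the same window. */
static int perf_open(uint64_t config, int group_fd)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = config;
    attr.disabled = (group_fd == -1); /* only the group leader starts disabled */
    /* exclude_kernel stays 0: kernel time is exactly what servers stress */
    return (int)syscall(SYS_perf_event_open, &attr, 0, -1, group_fd, 0);
}

int main(void)
{
    int cyc = perf_open(PERF_COUNT_HW_CPU_CYCLES, -1);
    int ins = perf_open(PERF_COUNT_HW_INSTRUCTIONS, cyc);
    if (cyc < 0 || ins < 0) {
        perror("perf_event_open (try lowering perf_event_paranoid)");
        return 1;
    }

    ioctl(cyc, PERF_EVENT_IOC_ENABLE, 0);

    /* Placeholder workload; substitute the code you want to characterize. */
    volatile uint64_t x = 0;
    for (uint64_t i = 0; i < 100000000ULL; i++)
        x += i;

    ioctl(cyc, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t cycles = 0, instructions = 0;
    read(cyc, &cycles, sizeof(cycles));
    read(ins, &instructions, sizeof(instructions));
    printf("IPC = %.2f (%llu instructions / %llu cycles)\n",
           (double)instructions / (double)cycles,
           (unsigned long long)instructions, (unsigned long long)cycles);
    return 0;
}
```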

Another aspect of server applications is that they are prone to more instruction cache misses. Server workloads are more complex than most processing intensive applications. Processing intensive applications such as encoders are written in C++ using a few libraries; server workloads are developed on top of frameworks like .Net and make use of lots of DLLs--or in Linux terms, they have more dependencies. Not only is the "most used" instruction footprint a lot larger, but dynamically compiled software (such as .Net and Java) also tends to produce code that is more scattered across the memory space. As a result, server apps suffer far more L1 instruction cache misses than desktop applications, where instruction cache misses are much less common than data cache misses.

Similarly, server apps have more L2 cache misses. Modern desktop/workstation applications miss the L1 data cache frequently and need the L2 cache too, as their datasets are much larger than the L1 data cache. But once the data is in the L2, few desktop applications show significant L2 cache misses. Most server applications have higher L2 cache miss rates, as they tend to come with even larger memory footprints and huge datasets.

The larger memory footprint and the shrinking and expanding caches can cause more TLB misses too. Virtualized workloads in particular need large and fast TLBs, as they switch between contexts much more often.
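
As an illustration of one common mitigation, the sketch below (Linux-specific, and assuming huge pages have been reserved via /proc/sys/vm/nr_hugepages) backs a large allocation with 2MB pages so the same footprint needs far fewer TLB entries:

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define SIZE (512UL * 1024 * 1024) /* 512MB working set */

int main(void)
{
    /* One 2MB huge page replaces 512 4KB pages, so the same footprint
     * occupies 512x fewer TLB entries. */
    void *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB); falling back to 4KB pages");
        p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return 1;
    }
    memset(p, 1, SIZE); /* touch every page so the mapping is populated */
    munmap(p, SIZE);
    return 0;
}
```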

As most server applications are easier to multi-thread (for example, a thread for each connection) but are likely to work on the same data (e.g. a relational database), keeping the caches coherent generates much more coherency traffic, and locks are much more frequent.
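
The minimal sketch below (illustrative, not production code) shows why shared, frequently written data is expensive: four threads hammering one shared counter force the cache line holding it to bounce between cores on every atomic increment, while padded per-thread counters avoid the coherency traffic entirely.

```c
#include <pthread.h>
#include <stdio.h>

#define THREADS 4
#define ITERS   10000000L

long shared_counter;                                    /* one hot cache line */
struct { long v; char pad[64 - sizeof(long)]; } local[THREADS]; /* one line each */

void *contended(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERS; i++)
        __sync_fetch_and_add(&shared_counter, 1); /* line ping-pongs between cores */
    return NULL;
}

void *uncontended(void *arg)
{
    long id = (long)arg;
    for (long i = 0; i < ITERS; i++)
        local[id].v++;                  /* stays resident in one core's cache */
    return NULL;
}

int main(void)
{
    pthread_t t[THREADS];
    /* Time this run, then swap 'contended' for 'uncontended' and compare. */
    for (long i = 0; i < THREADS; i++)
        pthread_create(&t[i], NULL, contended, (void *)i);
    for (int i = 0; i < THREADS; i++)
        pthread_join(t[i], NULL);
    printf("shared counter: %ld\n", shared_counter);
    return 0;
}
```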

Some desktop workloads, such as compiling and games, have much higher branch misprediction ratios than server applications; server applications tend to be no more branch intensive than your average integer application.
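
The classic demonstration of branch misprediction cost, sketched below, is summing only the large elements of an array: once the array is sorted, the branch becomes almost perfectly predictable and the same loop runs far faster. (Build with a low optimization level such as gcc -O1, since an aggressive optimizer may replace the branch with a conditional move.)

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int *data = malloc(N * sizeof(int));
    srand(42);
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;

    /* Comment out this qsort() call to watch the mispredictions bite. */
    qsort(data, N, sizeof(int), cmp_int);

    clock_t t0 = clock();
    long sum = 0;
    for (int pass = 0; pass < 10; pass++)
        for (int i = 0; i < N; i++)
            if (data[i] >= 128)   /* nearly perfectly predictable once sorted */
                sum += data[i];
    printf("sum=%ld, time=%.2fs\n", sum,
           (double)(clock() - t0) / CLOCKS_PER_SEC);
    free(data);
    return 0;
}
```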

Quick Summary

The end result is that most server applications have low IPC. Quite a few workstation applications achieve an IPC of 1.0-2.0, while many server applications execute three to five times fewer instructions per cycle on average. Performance is dominated by Memory Level Parallelism (MLP), coherency traffic, and branch prediction, in that order, and to a lesser degree by integer processing power.
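
MLP deserves a quick illustration. In the hedged sketch below, a dependent pointer chase serializes cache misses (low MLP: each load waits for the previous one), while an independent streaming sum lets misses overlap (high MLP); the timings are illustrative and hardware dependent.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1UL << 24)

int main(void)
{
    size_t *next = malloc(N * sizeof(size_t));
    for (size_t i = 0; i < N; i++)
        next[i] = i;

    /* Sattolo's algorithm: a random permutation with one big cycle, so the
     * chase below really visits all N entries in random order. */
    srand(1);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    /* Low MLP: every load depends on the previous one, so misses serialize. */
    clock_t t0 = clock();
    size_t p = 0;
    for (size_t i = 0; i < N; i++)
        p = next[p];
    double chase = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* High MLP: independent, prefetch-friendly loads, so misses overlap. */
    t0 = clock();
    size_t sum = 0;
    for (size_t i = 0; i < N; i++)
        sum += next[i];
    double stream = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("dependent chase: %.2fs, independent sum: %.2fs (p=%zu, sum=%zu)\n",
           chase, stream, p, sum);
    free(next);
    return 0;
}
```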

So is "Bulldozer" a server centric architecture? We'll need a more in-depth analysis to answer this question properly, but from a high level perspective, yes, it does appear that way. Getting 16 threads and 32MB of cache inside a 115W TDP power consumption envelope is no easy feat. But let the hardware and benchmarks now speak.

Comments

  • duploxxx - Thursday, November 17, 2011 - link

    Very interesting review as usual Johan, thx. It is good to see that there are still people who want to do thorough reviews.

    While the message is clear on the MS OS side for both power and performance, I think it isn't on the VMware side. First of all, it is quite confusing what settings exactly have been used in the BIOS, and to me it doesn't reflect the real final conclusion. If it ain't right, then don't post it, in my opinion, and keep it for a further review....

    I have had a beta version of Interlagos for about a month now, and performance testing depending on BIOS settings has been very challenging.

    When I see your results, I have the following thoughts.

    performance: I don't think that the current vAPU2 was able to stress the 2x 16-core enough; what was the average CPU usage in ESXTOP during these runs? On top of that, looking at the result score and both response times, it is clear that the current BIOS settings aren't optimal in balanced mode. As you already mentioned, the system is behaving strangely.
    VMware themselves have posted a document for v5 regarding power best practices which clearly mentions that these settings need to be adapted. http://www.vmware.com/files/pdf/hpm-perf-vsphere5....

    To be more precise, balanced has never been the right setting on VMware; the preferred mode has always been high performance, and this is how we run, for example, a 400+ server VMware farm. We would rather use DPM to reduce power than reduce clock speed, since the latter affects total performance and response times much more, mainly on the virtualization platform and with OEM BIOS creations (let's say a lack of in-depth finetuning and options).

    Would like to see new performance and power results when running in high performance mode and according to the new vSphere settings....
  • JohanAnandtech - Thursday, November 17, 2011 - link

    "l it is quite confusing to what settings exactly have been used in BIOS and to me it doesn't reflect the real final conclusion"

    http://www.anandtech.com/show/5058/amds-opteron-in...
    You can see them here with your own eyes.
    + We configured the C-state mode to C6, as this is required to get the highest Turbo Core frequencies.

    "performance: I don't think that the current vAPU2 was able to stress the 2x16core enough, what was the avarage cpu usage in ESXTOP during these runs?"

    93-99%.

    "On top of that looking at the result score and both response times it is clear that the current BIOS settings aren't optimal in the balanced mode."

    Balanced and high performance gave more or less the same performance. It seems that the ESX power manager is much better at managing p-states than the Windows one.

    We are currently testing Balanced + c-states. Stay tuned.
  • duploxxx - Thursday, November 17, 2011 - link

    thx for the answers, I read the whole thread, just wasn't sure that you used the same settings for both Windows and virtual.

    according to VMware you shouldn't use balanced but rather OS controlled; I know my BIOS has that option, not sure about the Supermicro one.

    quite a strange result with ESXTOP above 90% yet the same performance; there just seems to be a further core scaling issue on the vAPU2, or it's just not using turbo..... we know that the module doesn't have the same performance, but the 10-15% turbo is more than enough to level that difference, which would still leave you with 8 more cores

    When you put the power mode on high performance, it should turbo all cores for the full length at 2.6GHz for the 6276. While you mention it results in the same performance, are you sure that the turbo was kicking in? Was ESXTOP CPU higher than 100%? It should provide more performance....
  • Calin - Friday, November 18, 2011 - link

    You're encrypting AES-256, and Anand seems to encrypt AES-128 in the article you linked to on the Other Tests: TrueCrypt and 7-zip page
  • taltamir - Friday, November 18, 2011 - link

    Conclusion: "Intel gives much better performance/watt and performance in general; BD gives better performance/dollar"

    Problem: Watts cost dollars, lots of them in the server space, because you need some pretty extreme cooling. Also, absolute performance per physical space matters a lot because that ALSO costs tons of money.
  • UberApfel - Sunday, November 20, 2011 - link

    A watt-year is about $2.

    The difference in cost between an X5670 and a 6276: $654

    On Page 7...
    X5670: 74.5 perf / 338 W
    6276: 71.2 perf / 363 W

    watts for the 6276 adjusted to equal performance: 363 * (74.5 / 71.2) ≈ 380

    difference in power consumption: 42W

    If a server manages an average of 50% load over all time, the extra draw is about 21W, i.e. 21 watt-years * $2 ≈ $42 per year, so the Xeon's supposedly superior power efficiency would pay for itself only after roughly 16 years.

    Of course, you're not taking into consideration that this test is pretty much irrelevant to the server market. Additionally, as the author failed to clarify when asked, AnandTech likely didn't use the newer compilers, which show up to a 100% performance increase in some applications ~ looky: http://www.phoronix.com/scan.php?page=article&...
  • Thermalzeal - Monday, November 21, 2011 - link

    Good job AMD, you had one thing to do: test your product and make sure it beat competitors at the same price, or gave comparable performance for a lower price.

    Seriously, wtf are you people doing?
  • UberApfel - Tuesday, November 22, 2011 - link

    Idiots like this are exactly why I say the review is biased. How can anyone with the ability to type skim this review and come to such a conclusion, let alone have the confidence to comment?
  • zappb - Tuesday, November 29, 2011 - link

    completely agree - some very strange comments along these lines over the last 11 pages
  • zappb - Tuesday, November 29, 2011 - link

    posted by Ars Technica - incredibly tainted in Intel's favour

    The title is enough:

    "AMD's Bulldozer server benchmarks are here, and they're a catastrophe"
