What Makes Server Applications Different?

The large caches and high integer core (cluster) count in one Orochi die (a Bulldozer die with four CMT modules) made quite a few people suspect that the Bulldozer design was created first and foremost to excel at server workloads. Reviews like our own AMD FX-8150 launch article have revealed that single-threaded performance has (slightly) regressed compared to the previous AMD CPUs (Istanbul core), while the chip performs better in heavily multi-threaded benchmarks. However, high performance in multi-threaded workstation and desktop applications does not automatically mean that the architecture is server centric.

A more in-depth analysis of the Bulldozer architecture and its performance will be presented in a later article, as it is outside the scope of this one. However, many of our readers are either hardcore hardware enthusiasts or IT professionals who love to delve a bit deeper than benchmarks simply showing whether something is faster or slower than the competition, so it's good to start with an explanation of what makes an architecture better suited for server applications. Is the Bulldozer architecture a “server centric architecture”?

What makes a server application different anyway?

There have been extensive performance characterizations of the SPEC CPU benchmark, which contains real-world HPC (High Performance Computing), workstation, and desktop applications. Studies of commercial web and database workloads on real CPUs are less abundant, but we dug up quite a bit of interesting information. In summary, server workloads distinguish themselves from workstation and desktop workloads in the following ways.

They spend a lot more time in the kernel. Accessing the network stack, hitting the disk subsystem, handling user connections, synchronizing large numbers of threads, demanding more memory pages for expanding caches--server workloads make the OS sweat. Server applications spend about 20 to 60% of their execution time in the kernel or hypervisor, while in contrast most desktop applications rarely exceed 5% kernel time. Kernel code tends to have very low IPC (Instructions Per Clock cycle) with lots of dependencies.
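For readers who want to get a feel for this on their own systems, here is a minimal sketch (assuming a Linux/Unix box with Python 3; it is not part of our benchmark suite, and the helper name is just an example) that runs a workload as a child process and reports what fraction of its CPU time was spent in the kernel rather than in user space:

import resource
import subprocess
import sys

def kernel_time_share(cmd):
    # Snapshot child-process CPU usage before and after running the workload.
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    subprocess.run(cmd, check=True)
    after = resource.getrusage(resource.RUSAGE_CHILDREN)

    user_s = after.ru_utime - before.ru_utime     # CPU time spent in user space
    kernel_s = after.ru_stime - before.ru_stime   # CPU time spent in the kernel
    total = user_s + kernel_s
    return kernel_s / total if total else 0.0

if __name__ == "__main__":
    # Example (hypothetical): python3 kernel_share.py tar czf /dev/null /usr
    share = kernel_time_share(sys.argv[1:])
    print("kernel time share: {:.1%}".format(share))

An I/O-heavy or connection-heavy workload will report a far higher kernel share than a pure number cruncher.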

That is why, for example, SPECjbb, which does not perform any networking or disk access, is a decent CPU benchmark but a pretty bad server benchmark. An interesting fact is that SPECjbb, thanks to the lack of I/O subsystem interaction, typically has an IPC of 0.5-0.9, which is almost twice as high as other server workloads (0.3-0.6), even when those server workloads are not bottlenecked by the storage subsystem.
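If you want to verify such IPC figures yourself, the sketch below shows one way to do it with the Linux perf tool (again just a rough example, assuming perf is installed and hardware counters are accessible; the CSV column layout of perf stat can differ slightly between versions):

import subprocess
import sys

def measure_ipc(cmd):
    # -x, switches perf stat to CSV-style output: value,unit,event,...
    perf_cmd = ["perf", "stat", "-x", ",", "-e", "instructions,cycles"] + cmd
    result = subprocess.run(perf_cmd, capture_output=True, text=True)

    counts = {}
    for line in result.stderr.splitlines():        # perf stat reports on stderr
        fields = line.split(",")
        if len(fields) >= 3 and fields[2] in ("instructions", "cycles"):
            try:
                counts[fields[2]] = float(fields[0])
            except ValueError:
                pass                               # counter was not available
    return counts["instructions"] / counts["cycles"]

if __name__ == "__main__":
    # Example (hypothetical): python3 ipc.py ./my_server_benchmark
    print("IPC: {:.2f}".format(measure_ipc(sys.argv[1:])))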

Another aspect of server applications is that they are prone to more instruction cache misses. Server workloads are more complex than most processing-intensive applications. Processing-intensive applications like encoders are written in C++ using a few libraries. Server workloads are developed on top of frameworks like .Net and make use of lots of DLLs--or in Linux terms, they have more dependencies. Not only is the "most used" instruction footprint a lot larger, dynamically compiled software (such as .Net and Java) tends to produce code that is more scattered across the memory space. As a result, server apps have many more L1 instruction cache misses than desktop applications, where instruction cache misses are much lower than data cache misses.

Similar to the above, server apps also have more L2 cache misses. Modern desktop/workstation applications miss the L1 data cache frequently and need the L2 cache too, as their datasets are much larger than the L1 data cache. But once there, few of them suffer significant L2 cache misses. Most server applications have higher L2 cache miss rates, as they tend to come with even larger memory footprints and huge datasets.

The larger memory footprint and the constantly shrinking and expanding caches can cause more TLB misses too. Virtualized workloads especially need large and fast TLBs, as they switch between contexts much more often.
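The instruction cache, last-level cache, and TLB behavior described in the last few paragraphs can be observed with the same perf approach. The sketch below reports misses per 1000 instructions (MPKI) for a few generic perf events; which events are actually available, and whether the last-level cache event maps to L2 or L3, depends on the CPU, so treat this purely as an illustration:

import subprocess
import sys

# Generic perf events; exact names and availability vary per CPU and kernel.
EVENTS = ["instructions", "L1-icache-load-misses",
          "LLC-load-misses", "dTLB-load-misses"]

def misses_per_kilo_instruction(cmd):
    perf_cmd = ["perf", "stat", "-x", ",", "-e", ",".join(EVENTS)] + cmd
    result = subprocess.run(perf_cmd, capture_output=True, text=True)

    counts = {}
    for line in result.stderr.splitlines():        # perf stat reports on stderr
        fields = line.split(",")
        if len(fields) >= 3 and fields[2] in EVENTS:
            try:
                counts[fields[2]] = float(fields[0])
            except ValueError:
                pass                               # event not counted on this CPU
    instructions = counts.get("instructions", 0.0)
    return {e: 1000.0 * counts[e] / instructions
            for e in counts if e != "instructions" and instructions}

if __name__ == "__main__":
    for event, mpki in misses_per_kilo_instruction(sys.argv[1:]).items():
        print("{}: {:.2f} misses per 1000 instructions".format(event, mpki))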

As most server applications are easier to multi-thread (for example, a thread for each connection) but are likely to work on the same data (e.g. a relational database), keeping the caches coherent tends to produce much more coherency traffic, and locks are much more frequent.
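A deliberately oversimplified sketch of that "thread per connection, shared data behind a lock" pattern is shown below (Python, nothing like a real database engine, but structurally the same): every connection is trivially handled by its own thread, yet all threads keep bouncing the same lock and the same data structure between their caches.

import socket
import threading

shared_counts = {}              # shared state every connection updates (think: hot rows)
shared_lock = threading.Lock()  # every connection thread contends for this lock

def handle_connection(conn):
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            key = data.split(b" ", 1)[0]
            with shared_lock:   # frequent lock acquire/release on shared cache lines
                shared_counts[key] = shared_counts.get(key, 0) + 1

def serve(host="127.0.0.1", port=8080):
    with socket.create_server((host, port)) as srv:
        while True:
            conn, _addr = srv.accept()
            # Trivially parallel: one thread per accepted connection.
            threading.Thread(target=handle_connection,
                             args=(conn,), daemon=True).start()

if __name__ == "__main__":
    serve()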

Some desktop workloads, such as compiling and games, have much higher branch misprediction ratios than server applications. Server applications tend to be no more branch intensive than your average integer application.

Quick Summary

The end result is that most server applications have low IPC. Quite a few workstation applications achieve an IPC of 1.0-2.0, while many server applications execute 3 to 5 times fewer instructions per cycle on average. Performance is dominated by Memory Level Parallelism (MLP), coherency traffic, and branch prediction, in that order, and to a lesser degree by integer processing power.

So is "Bulldozer" a server centric architecture? We'll need a more in-depth analysis to answer this question properly, but from a high level perspective, yes, it does appear that way. Getting 16 threads and 32MB of cache inside a 115W TDP power consumption envelope is no easy feat. But let the hardware and benchmarks now speak.


Comments


  • geoxx - Friday, December 9, 2011

    Sorry, but neotiger is totally right: the choice of benchmarks sucks. We are not helped *at all* by your review.
    What company uses a 32-core server for 3D rendering, Cinebench, file compression, or TrueCrypt encryption??
    You benchmarked it like it was a CPU of the nineties for a home enthusiast.

    You are probably right to point us to http://www.anandtech.com/show/2694 but your benchmarks don't reflect that AT ALL. Where are file compression, encryption, 3D rendering, and Cinebench in that chart?

    Even performance per watt is not very meaningful, because when one purchases a 2-socket or 4-socket server, electricity cost is not an issue. Companies want to simplify deployment with such a system; they want this computer to run as fast as a cluster, in order not to be bound to clustered databases, which are a PAIN. So people want to see how applications scale to the full core count on this kind of system, not so much performance per watt.

    Virtualization is the ONLY sensible benchmark you included.

    TPC, as suggested, is exactly the right benchmark; that's the backend and bottleneck for most of the things you see in your charts at http://www.anandtech.com/show/2694 , and the objection about storage is nonsense: just fit a database in a ramdisk (don't tell me you need a database larger than 64GB for a benchmark), export it as a block device, then run the test. And/or use one PCI-e based SSD, which you certainly have.

    http://www.anandtech.com/show/2694 mentions software development: how much effort does it require to set up a Linux kernel compile benchmark?

    http://www.anandtech.com/show/2694 mentions HPC: can you set up a couple of bioinformatics benchmarks such as BLAST (integer computation, memory compares), GROMACS (matrix FPU computations), and Fluent? Please note that none of your tests include memory compares and FPU, which are VERY IMPORTANT in HPC. GROMACS and Fluent would cover the hole. Bioinformatics is THE HPC of nowadays, and there are very few websites, if any, which help with the choice of CPUs for HPC computing.

    For email servers (37%!) and web servers (14%) I am also sure you can find some benchmarks.
  • Iketh - Tuesday, November 15, 2011

    I'm not sure how the discovery of cores running in their power-saving state for far too long is anything new. My 2600K refuses to ramp up its clocks while previewing video in a video editor, even though a core is pegged at 100%. If I intervene and force it to 3.4GHz, the preview framerate jumps from 8 fps to 16 fps.

    This has been happening for YEARS! My old 2.2GHz quad-core Phenom did the exact same thing!

    It's extremely annoying, and it pisses me off that I can't benefit from the power savings, let alone Turbo.
  • MrSpadge - Tuesday, November 15, 2011

    Sounds like you're running Linux or some other strange OS, then. Or you may need a BIOS update. Generally Intel has its power management quite under control. In the AMD camp, physical power state switches often take longer than the impatient OS expects, and thus the average frequency is hurt. This was pretty bad for Phenom 1.

    MrS
  • Iketh - Tuesday, November 15, 2011

    Win7 Home Premium x64, and the Phenom was with XP 32-bit... I haven't found another scenario that causes this, only streaming video that's rendered on-the-fly.
  • Zoomer - Wednesday, November 16, 2011

    You have a 2600k and aren't running it at 4+ GHz?
  • Iketh - Wednesday, November 16, 2011

    4.16GHz @ 1.32V when encoding, 3.02GHz @ 1.03V for gaming/internet
  • haplo602 - Wednesday, November 16, 2011

    You do know that Linux did not have any problems with Phenom I power management, unlike Windows? The same is true now with BD: Linux benchmarks look quite different from the Windows ones, and the gap is not that dramatic there.
  • BrianTho2010 - Tuesday, November 15, 2011

    Throughout this whole review, the only thought I have is that there are no Sandy Bridge chips in it. When SB-based Xeon chips come out, I bet that Interlagos will be completely dominated.
  • Beenthere - Tuesday, November 15, 2011

    Not really. SB chips don't fit in AMD sockets. AMD's installed customer base likes the significant performance increase and power savings from just plugging in a new Opteron 6200/4200.
  • C300fans - Tuesday, November 15, 2011

    It will. 2x 6174 (24 cores) performs quite similarly to 2x 6274 (32 cores). WTF
