What Makes Server Applications Different?

The large caches and high integer core (cluster) count of one Orochi die (a four-module CMT Bulldozer die) made quite a few people suspect that the Bulldozer design was first and foremost created to excel in server workloads. Reviews like our own AMD FX-8150 launch article have revealed that single-threaded performance has (slightly) regressed compared to the previous AMD CPUs (Istanbul core), while the chip performs better in heavily multi-threaded benchmarks. However, high performance in multi-threaded workstation and desktop applications does not automatically mean that the architecture is server centric.

A more in-depth analysis of the Bulldozer architecture and its performance will be presented in a later article, as it is outside the scope of this one. However, many of our readers are either hardcore hardware enthusiasts or IT professionals who love to delve a bit deeper than benchmarks that simply show whether something is faster or slower than the competition, so it's good to start with an explanation of what makes an architecture better suited for server applications. Is the Bulldozer architecture a “server centric architecture”?

What makes a server application different anyway?

There have been extensive performance characterizations of the SPEC CPU benchmark, which contains real-world HPC (High Performance Computing), workstation, and desktop applications. Studies of commercial web and database workloads on real CPUs are less abundant, but we dug up quite a bit of interesting info. In summary, we can say that server workloads distinguish themselves from workstation and desktop workloads in the following ways.

They spend a lot more time in the kernel. Accessing the network stack and the disk subsystem, handling user connections, synchronizing large numbers of threads, demanding more memory pages for expanding caches--server workloads make the OS sweat. Server applications spend about 20 to 60% of their execution time in the kernel or hypervisor, while in contrast most desktop applications rarely exceed 5% kernel time. Kernel code tends to have very low IPC (Instructions Per Clock cycle) with lots of dependencies.
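
If you want to see this split for yourself, the small C sketch below compares the user and system CPU time of an I/O-heavy loop with getrusage(). It is only an illustration under our own assumptions (a dummy write loop to a temporary file of our choosing), not one of the benchmarks used in this article; a real server process would show a similar user/system breakdown in tools like top or esxtop.

    /* Minimal sketch: compare user vs. system CPU time of a workload.
     * The write loop and the /tmp file path are illustrative only. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/resource.h>

    int main(void)
    {
        char buf[4096];
        memset(buf, 'x', sizeof(buf));

        /* I/O-heavy loop: most of the work happens inside the kernel. */
        int fd = open("/tmp/kerneltime.tmp", O_CREAT | O_WRONLY | O_TRUNC, 0600);
        if (fd < 0) { perror("open"); return 1; }
        for (int i = 0; i < 50000; i++) {
            if (write(fd, buf, sizeof(buf)) < 0) { perror("write"); return 1; }
        }
        close(fd);
        unlink("/tmp/kerneltime.tmp");

        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        double user = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
        double sys  = ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
        printf("user: %.3fs  system: %.3fs  (%.0f%% kernel time)\n",
               user, sys, 100.0 * sys / (user + sys));
        return 0;
    }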

That is why, for example, SPECjbb, which does not perform any networking or disk access, is a decent CPU benchmark but a pretty bad server benchmark. An interesting fact is that SPECjbb, thanks to the lack of I/O subsystem interaction, typically has an IPC of 0.5-0.9, which is almost twice as high as other server workloads (0.3-0.6), even if those server workloads are not bottlenecked by the storage subsystem.
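
For those curious how IPC figures like these are obtained: on Linux you can count retired instructions and core cycles around a code region with the perf_event_open interface, as in the sketch below. The busywork() function is a stand-in workload of our own invention, not how the SPECjbb or server IPC numbers above were measured, and counting kernel events may require a permissive perf_event_paranoid setting.

    /* Minimal sketch: measure IPC (instructions / cycles) of a code region
     * with Linux perf_event_open. busywork() is an arbitrary stand-in. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <linux/perf_event.h>

    static long perf_open(__u64 config)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = config;
        attr.disabled = 1;
        attr.exclude_kernel = 0;   /* include kernel time, as server code would */
        return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
    }

    static volatile long sink;
    static void busywork(void)
    {
        for (long i = 0; i < 50000000; i++)
            sink += i ^ (i >> 3);
    }

    int main(void)
    {
        int cycles = perf_open(PERF_COUNT_HW_CPU_CYCLES);
        int instrs = perf_open(PERF_COUNT_HW_INSTRUCTIONS);
        if (cycles < 0 || instrs < 0) { perror("perf_event_open"); return 1; }

        ioctl(cycles, PERF_EVENT_IOC_RESET, 0);
        ioctl(instrs, PERF_EVENT_IOC_RESET, 0);
        ioctl(cycles, PERF_EVENT_IOC_ENABLE, 0);
        ioctl(instrs, PERF_EVENT_IOC_ENABLE, 0);

        busywork();

        ioctl(cycles, PERF_EVENT_IOC_DISABLE, 0);
        ioctl(instrs, PERF_EVENT_IOC_DISABLE, 0);

        long long c = 0, n = 0;
        read(cycles, &c, sizeof(c));
        read(instrs, &n, sizeof(n));
        printf("instructions: %lld  cycles: %lld  IPC: %.2f\n", n, c, (double)n / c);
        return 0;
    }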

Another aspect of server applications is that they are prone to more instruction cache misses. Server workloads are more complex than most processing intensive applications. Processing intensive applications like encoders are written in C++ using a few libraries. Server workloads are developed on top of frameworks like .Net and make use of lots of DLLs--or in Linux terms, they have more dependencies. Not only is the "most used" instruction footprint a lot larger, but dynamically compiled software (such as .Net and Java) also tends to produce code that is more scattered across the memory space. As a result, server apps have many more L1 instruction cache misses than desktop applications, where instruction cache misses are much lower than data cache misses.

Similar to the above, server apps also suffer more L2 cache misses. Modern desktop/workstation applications miss the L1 data cache frequently and need the L2 cache too, as their datasets are much larger than the L1 data cache. But once the data is in the L2, few applications see significant L2 cache misses. Most server applications have higher L2 cache miss rates as they tend to come with even larger memory footprints and huge datasets.
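
The effect of a dataset outgrowing the caches is easy to reproduce. The sketch below, with sizes we picked arbitrarily for illustration, sums the same 128 MB array twice: once sequentially, where cache lines and hardware prefetchers do their job, and once in random order, where almost every access misses the caches and pays the full memory latency.

    /* Minimal sketch: the same work over a dataset far larger than L2,
     * touched cache-friendly and then cache-hostile. Sizes are arbitrary. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 25)   /* 32M ints = 128 MB, far larger than any L2/L3 */

    static double seconds(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        int *data = malloc((size_t)N * sizeof(int));
        unsigned *idx = malloc((size_t)N * sizeof(unsigned));
        if (!data || !idx) return 1;

        for (size_t i = 0; i < N; i++) { data[i] = (int)i; idx[i] = (unsigned)i; }
        /* Shuffle the index array so the second pass walks memory randomly. */
        srand(42);
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = (size_t)rand() % (i + 1);
            unsigned t = idx[i]; idx[i] = idx[j]; idx[j] = t;
        }

        long long sum = 0;
        double t0 = seconds();
        for (size_t i = 0; i < N; i++) sum += data[i];          /* cache friendly */
        double t1 = seconds();
        for (size_t i = 0; i < N; i++) sum += data[idx[i]];     /* cache hostile  */
        double t2 = seconds();

        printf("sequential: %.2fs  random: %.2fs  (sum=%lld)\n", t1 - t0, t2 - t1, sum);
        return 0;
    }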

The larger memory footprint and the shrinking and expanding caches can cause more TLB misses too. Virtualized workloads especially need large and fast TLBs, as they switch between contexts much more often.
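
TLB pressure can be illustrated in a similar way. The sketch below touches a random 4 KiB page of a large anonymous mapping over and over, once with normal pages and once after hinting the kernel to back the mapping with transparent huge pages (MADV_HUGEPAGE). The 1 GB size and access pattern are our own arbitrary choices, and the huge page request is only a hint, so the difference--mostly a TLB-reach effect--will vary from system to system.

    /* Minimal sketch: page-granular random touches with 4K pages vs. a
     * transparent huge page hint. Sizes are arbitrary illustrations. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <sys/mman.h>

    #define SIZE (1UL << 30)          /* 1 GB mapping */
    #define PAGE 4096UL
    #define PAGES (SIZE / PAGE)

    static double seconds(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    static double walk(int use_thp)
    {
        char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); exit(1); }
        if (use_thp)
            madvise(p, SIZE, MADV_HUGEPAGE);   /* hint only */
        memset(p, 1, SIZE);                    /* fault everything in first */

        volatile long sum = 0;
        unsigned long x = 12345;
        double t0 = seconds();
        for (unsigned long i = 0; i < PAGES * 8; i++) {
            x = x * 6364136223846793005UL + 1; /* cheap pseudo-random pages */
            sum += p[(x % PAGES) * PAGE];
        }
        double t = seconds() - t0;
        munmap(p, SIZE);
        return t;
    }

    int main(void)
    {
        printf("4K pages:   %.2fs\n", walk(0));
        printf("huge pages: %.2fs (if THP honored the hint)\n", walk(1));
        return 0;
    }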

As most server applications are easier to multi-thread (for example, a thread for each connection) but are likely to work on the same data (e.g. a relational database), keeping the caches coherent tends to produce much more coherency traffic, and locks are much more frequent.
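
A simple way to see coherency traffic at work is false sharing: two threads updating counters that happen to live on the same cache line force that line to bounce between cores. The sketch below (compile with -pthread; the iteration count and the assumed 64-byte line size are ours) compares that case against properly padded counters.

    /* Minimal sketch: counters sharing a cache line vs. padded counters.
     * The first case generates constant coherency traffic between cores. */
    #include <stdio.h>
    #include <pthread.h>
    #include <time.h>

    #define ITERS 200000000L

    struct { volatile long a; volatile long b; } shared_line;                /* same line      */
    struct { volatile long a; char pad[64]; volatile long b; } padded_line; /* separate lines */

    static void *bump_a_shared(void *arg) { (void)arg; for (long i = 0; i < ITERS; i++) shared_line.a++; return NULL; }
    static void *bump_b_shared(void *arg) { (void)arg; for (long i = 0; i < ITERS; i++) shared_line.b++; return NULL; }
    static void *bump_a_padded(void *arg) { (void)arg; for (long i = 0; i < ITERS; i++) padded_line.a++; return NULL; }
    static void *bump_b_padded(void *arg) { (void)arg; for (long i = 0; i < ITERS; i++) padded_line.b++; return NULL; }

    static double run(void *(*f1)(void *), void *(*f2)(void *))
    {
        struct timespec t0, t1;
        pthread_t x, y;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        pthread_create(&x, NULL, f1, NULL);
        pthread_create(&y, NULL, f2, NULL);
        pthread_join(x, NULL);
        pthread_join(y, NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void)
    {
        printf("false sharing: %.2fs\n", run(bump_a_shared, bump_b_shared));
        printf("padded:        %.2fs\n", run(bump_a_padded, bump_b_padded));
        return 0;
    }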

Some desktop workloads such as compiling and games have much higher branch misprediction ratios than server applications. Server applications tend to be no more branch intensive than your average integer applications.
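
The classic way to demonstrate branch misprediction is to sum only the large elements of an array, first in random order and then after sorting it. In the sketch below (array size and threshold are arbitrary, and an aggressive compiler may turn the branch into a conditional move and hide the effect), the sorted pass runs considerably faster purely because the branch becomes predictable.

    /* Minimal sketch: the sorted vs. unsorted branch. On random data the
     * branch is mispredicted roughly half the time; on sorted data almost never. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 24)

    static int cmp(const void *a, const void *b) { return *(const int *)a - *(const int *)b; }

    static double sum_big(const int *v, long long *out)
    {
        struct timespec t0, t1;
        long long sum = 0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int pass = 0; pass < 10; pass++)
            for (int i = 0; i < N; i++)
                if (v[i] >= 128)            /* the data-dependent branch */
                    sum += v[i];
        clock_gettime(CLOCK_MONOTONIC, &t1);
        *out = sum;
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void)
    {
        int *v = malloc(N * sizeof(int));
        if (!v) return 1;
        srand(1);
        for (int i = 0; i < N; i++) v[i] = rand() % 256;

        long long s;
        double unsorted = sum_big(v, &s);
        qsort(v, N, sizeof(int), cmp);
        double sorted = sum_big(v, &s);
        printf("unsorted: %.2fs  sorted: %.2fs  (sum=%lld)\n", unsorted, sorted, s);
        return 0;
    }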

Quick Summary

The end result is that most server applications have low IPC. Quite a few workstation applications achieve an IPC of 1.0-2.0, while many server applications execute 3 to 5 times fewer instructions per cycle on average. Performance is dominated by Memory Level Parallelism (MLP), coherency traffic, and branch prediction in that order, and to a lesser degree by integer processing power.
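
To make the MLP point concrete: the sketch below walks a random pointer chain where every load depends on the previous one (an MLP of roughly 1), and then walks four independent chains in lockstep so the core can overlap the cache misses. The list size and chain count are arbitrary illustrations; low-IPC server code behaves more like the single chain.

    /* Minimal sketch of Memory Level Parallelism: one dependent pointer
     * chain vs. four independent chains doing the same number of loads. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 24)          /* 16M nodes, ~128 MB of indices */
    #define STEPS (1 << 24)

    static double seconds(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        size_t *next = malloc(N * sizeof(size_t));
        if (!next) return 1;
        for (size_t i = 0; i < N; i++) next[i] = i;
        /* Sattolo's shuffle: produces one big random cycle, so every
         * step is a cache-hostile, dependent load. */
        srand(7);
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t t = next[i]; next[i] = next[j]; next[j] = t;
        }

        /* One dependent chain: only one miss outstanding at a time. */
        size_t p = 0;
        double t0 = seconds();
        for (long i = 0; i < STEPS; i++) p = next[p];
        double one_chain = seconds() - t0;

        /* Four independent chains in lockstep: the misses overlap. */
        size_t a = 0, b = 1, c = 2, d = 3;
        t0 = seconds();
        for (long i = 0; i < STEPS / 4; i++) {
            a = next[a]; b = next[b]; c = next[c]; d = next[d];
        }
        double four_chains = seconds() - t0;

        printf("1 chain: %.2fs   4 chains (same total loads): %.2fs   (%zu)\n",
               one_chain, four_chains, p + a + b + c + d);
        return 0;
    }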

So is "Bulldozer" a server centric architecture? We'll need a more in-depth analysis to answer this question properly, but from a high level perspective, yes, it does appear that way. Getting 16 threads and 32MB of cache inside a 115W TDP power consumption envelope is no easy feat. But let the hardware and benchmarks now speak.

Comments

  • UberApfel - Wednesday, November 16, 2011 - link

    If anyone finds me a madman; let me explain this simply by example. Benchmark choices aside...

    If this test were to compare any of the top or middle-tier processors on the "AMD vs. Intel 2-socket SKU Comparison" chart ( http://www.anandtech.com/show/5058/amds-opteron-in... ) with their matching competition, this article would tell a different story in essence. Which, regardless of how fair the written conclusion may be, does in fact make it biased.

    Examples:
    X5650 vs 6282 SE
    E5649 vs 6276
    E5645 vs 6272
  • JohanAnandtech - Thursday, November 17, 2011 - link

    "Yet handpicking the higher clocked Opteron 6276 (for what good reason?) seems to be nothing but an aim to make the new 6200 series seem un-remarkable in both power consumption and performance"

    Do you realize you are blaming AMD? That is the CPU they sent us.

    "The 6272 is cheaper, more common, and would beat the Xeon X5670 in power consumption which half this review is weighted on."

    The 6272 is nothing more than a lower speed bin of the 6276. It has the same power consumption but slightly lower performance. Performance/watt is thus worse.

    "PostgreSQL/SQLite? Facebook's HipHop? Node.js? Java? Something relevant to servers and not something obscure enough to sound professional? "

    We use Zimbra, phpBB, Apache, and MySQL. What is your point? That we don't include every piece of server software on the planet? If you look around, how many publications are running good, repeatable server benchmarks? If it were as easy as running Cinebench or TrueCrypt, I think everybody would be doing it.

    "Even the chart on Page 1 is designed to make Intel look superior all-around. For what reason would you exclude the Opteron 4274 HE (65W TDP) or the Opteron 4256 EE (35W TDP) from the 'Power Optimized' section?"

    To be honest, those CPUs were not even in AMD's presentation that we got. We were only briefed about Interlagos.
  • UberApfel - Thursday, November 17, 2011 - link

    Did they send you the Xeon X5670 also? I suppose whoever is handling media relations at AMD is either careless or disgruntled, e.g. sending a slightly overclocked processor with a 30% staple that happens to scale unusually badly in terms of power efficiency.

    Please just answer this honestly; if you had compared a Opteron 6272 w/ a E5645 ... would your article present a different story?

    Fair as you may have tried to be; you don't have to look far to find a comment here that came to the "BD is a joke" conclusion.

    ---

    Using a phpBB stress test is hardly useful or relevant as a server benchmark; never mind under a VM. Unless configured extensively, it's I/O bound. "Average Response Time" is also irrelevant; how is the reader to know whether your 'response time' doesn't favor processors that are better at single-threaded applications?

    Additionally; VM's on a better single-threaded processor will score higher in benchmarks due to the overhead as parallelism isn't optimized. Yet these results make zero sense in real-world usage. It contradicts the value of VM's; flexible scalability for low-usage applications.

    Finally; I'd estimate that less than 5% of servers are virtual (if that). VM's are most popular with web servers and even there they have a small market share as they only appeal to small clients. Large clients use clusters of dedicated; tiny clients use shared dedicated.

    Did you even use gcc 4.7 or Open64? In some applications; the new versions yield up to 300% higher performance for Bulldozer.
  • JohanAnandtech - Thursday, November 17, 2011 - link

    "if you had compared a Opteron 6272 w/ a E5645 ... would your article present a different story?"

    You want us to compare a $551 80W TDP Intel CPU with a $774 115W AMD CPU?

    "Unless configured extensively; it's I/O bound."
    We know how to monitor with esxtop. There is a reason why we have a disk system of 2 SSDs and 6 x 15k SAS disks.

    "Average Response Time" is also irrelevant
    Huh? That is like saying that 0-60 mph acceleration times are irrelevant to sports cars.

    "Finally; I'd estimate that less than 5% of servers are virtual (if that)"
    ....Your estimate unfortunately was true in 2006. It is 2011 now. Your estimate is 10x off, maybe more.
  • UberApfel - Thursday, November 17, 2011 - link

    "You want us to compare a $551 80W TDP Intel cpu with a $774 115 AMD CPU?"
    $539

    "The 6272 is nothing more than a lower speedbin of the 6276. It has the same power consumption but slightly lower performance. Performance/wat is thus worse."
    By your logic; the FX-8120 and FX-8150 have equal power consumption. They don't.

    "We know how to monitor with ESX top. There is a reason why we have a disk system of 2 SDDs and 6 x 15k SAS disks."
    It's still I/O bound unless configured extensively.

    "Huh? That is like saying that 0-60 mph acceleration times are irrelevant to sports cars."
    Yeah; it is if you're measuring the distance traveled by a number of cars. The Opteron is obviously slower in handling single requests but it can handle maybe twice as many at the same time. Unless your stress test made every request @ T=0 and your server successfully queued them all, dropped none, and included the queue time in the response time... it would favor the Xeon immensely. Perhaps it does do all this; which is why I said "how is the reader to know" when you could have just as easily done 'Average Requests Completed Per Second'.

    "....Your estimate unfortunately was true in 2006. We are 2011 now. Your estimate is 10x off, maybe more."
    Very funny. Did the salesman that told you that also recommend these benchmarks? Folklore tells that Google alone has over a million servers, 20X that of Rackspace or ThePlanet, and they aren't running queries on VM's.
  • boomshine - Thursday, November 17, 2011 - link

    I hope you included MS SQL 2008 performance just like in the Opteron 6174 review:

    http://www.anandtech.com/show/2978/amd-s-12-core-m...
  • JohanAnandtech - Thursday, November 17, 2011 - link

    Yes, that test failed to be repeatable for some weird reason. We will publish it as soon as we get some reliable numbers out of it.
  • JohanAnandtech - Thursday, November 17, 2011 - link

    "SMT can only execute a single thread at once. "

    The whole point of SMT is to have one thread in one execution slot and another thread in the other execution slot.

    In fact, the very definition of SMT is that two or more threads can execute in parallel on a superscalar execution engine.
  • TC2 - Thursday, November 17, 2011 - link

    another joke from AMD with their BD "server-centric" architecture - bla-bla! amd 8\16 against intel 6\12 and again can't win!
  • pcfxer - Thursday, November 17, 2011 - link

    " make of lots of DLLs--or in Linux terms, they have more dependencies"

    Libraries is the word you're looking for.

    I also see the mistake of mixing programming APIs/OS design/Hardware design...

    Good software has TLBs, asynchronous locking where possible, etc, as does hardware but they are INDEPENDENT. The glue as you know, is how compiled code is treated at the uCode level. IMO, AMD hardware is fully capable of outperforming Intel hardware, but AMD uCode is incredibly good.
