SAP S&D Benchmark

The SAP SD (Sales and Distribution, 2-tier internet configuration) benchmark is an interesting one, as it is a real-world client-server application. We looked at SAP's benchmark database for these results. All of the results below were run on Windows 2003 Enterprise Edition and the MS SQL Server 2005 database (both 64-bit). Every 2-tier Sales & Distribution benchmark was performed with SAP's latest ERP 6 Enhancement Package 4. These results are NOT comparable with any benchmark performed before 2009, as the 2009 version of the benchmark produces scores that are 25% lower. We analyzed the SAP benchmark in depth in one of our earlier articles. The profile of the benchmark has remained the same:

  • Very parallel, resulting in excellent scaling
  • Low to medium IPC, mostly due to "branchy" code
  • Somewhat limited by memory bandwidth
  • Likes large caches (memory latency!)
  • Very sensitive to sync ("cache coherency") latency (see the sketch below)
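
That last point is easy to underestimate. As a rough illustration (our own minimal sketch, not SAP code; the thread and iteration counts are arbitrary), the snippet below contrasts a counter that every thread updates, which forces its cache line to bounce between cores and sockets, with padded per-thread counters that stay in local caches. On a multi-socket machine the contended version is typically many times slower, and that is the kind of penalty a sync-heavy workload like SAP SD pays for every lock and shared data structure.

    /*
     * Minimal sketch (not SAP's workload): a shared, contended counter versus
     * padded per-thread counters, to show why frequent synchronization makes a
     * workload sensitive to cache-coherency latency.
     * Build: gcc -O2 -pthread coherency_sketch.c -o coherency_sketch
     */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <time.h>

    #define THREADS 8
    #define ITERS   10000000L

    static atomic_long shared_counter;                      /* one cache line touched by all threads */
    static struct { atomic_long v; char pad[56]; }
        private_counter[THREADS];                           /* padded: one cache line per thread */

    static void *contended(void *arg)
    {
        (void)arg;
        for (long i = 0; i < ITERS; i++)
            atomic_fetch_add(&shared_counter, 1);           /* line bounces between cores/sockets */
        return NULL;
    }

    static void *independent(void *arg)
    {
        long id = (long)arg;
        for (long i = 0; i < ITERS; i++)
            atomic_fetch_add(&private_counter[id].v, 1);    /* same atomic op, but the line stays local */
        return NULL;
    }

    static double run(void *(*fn)(void *))
    {
        pthread_t t[THREADS];
        struct timespec a, b;

        clock_gettime(CLOCK_MONOTONIC, &a);
        for (long i = 0; i < THREADS; i++)
            pthread_create(&t[i], NULL, fn, (void *)i);
        for (int i = 0; i < THREADS; i++)
            pthread_join(t[i], NULL);
        clock_gettime(CLOCK_MONOTONIC, &b);

        return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    }

    int main(void)
    {
        printf("contended, shared cache line:     %.2f s\n", run(contended));
        printf("independent, private cache lines: %.2f s\n", run(independent));
        return 0;
    }

The bigger the machine, the worse the contended case gets: on a quad-socket box that "sync" traffic has to cross the socket interconnects, which is exactly why SAP SD rewards low cache-coherency latency.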

SAP Sales & Distribution 2-Tier benchmark (results chart)

There is no doubt here: the Westmere-EX Xeon delivers 30% higher performance than the previous quad-socket x86 record. The 40-core, 80-thread quad Xeon server cannot beat the 32-core, 128-thread IBM Power 750, the RISC champion; however, the high-end IBM servers start at $100,000, two to three times more than a comparable Xeon system.

The 30% extra performance that the new 32 nm Xeon delivers over its predecessor also increases the gap with AMD: the best quad Xeon now offers 50% more performance than the best quad Opteron. The ERP market is one where RAS, scalability, and performance are the top priorities and hardware pricing is only a secondary consideration. There is little doubt in our mind that Intel will continue to dominate the x86 ERP server market.

Comments

  • extide - Monday, June 6, 2011 - link

    When you spend $100,000+ on the software running on it, the hardware costs don't matter. Recently I was in a board meeting for launching a new website that the company I work for is going to be running. These guys don't know or care about the detailed specs; they simply said, "Cost doesn't matter, just get whatever is the fastest."
  • alpha754293 - Thursday, May 19, 2011 - link

    Can you run the Fluent and LS-DYNA benchmarks on the system please? Thanks.
  • mosu - Thursday, May 19, 2011 - link

    A good presentation with honest conclusions; I like this one.
  • ProDigit - Thursday, May 19, 2011 - link

    What if you compared it to 2x Core i7 desktops running Linux and free server software? What keeps companies from doing that?
  • Orwell - Thursday, May 19, 2011 - link

    Most probably the lack of support for more than about 48GiB of RAM, the lack of ECC in Intel's case, and the lack of multi-socket support, just to name a few.
  • ganjha - Thursday, May 19, 2011 - link

    There is always the option of clusters...
  • L. - Friday, May 20, 2011 - link

    Err... like you're going to go cheap on the CPU and then put everything on InfiniBand...
  • DanNeely - Thursday, May 19, 2011 - link

    Many of the uses for this class of server involve software that won't scale across multiple boxes due to network latency or monolithic design. The VM farm test was one example that would, but the lack of features like ECC support would preclude it from consideration by 99% of the buyers of godbox servers.
  • erple2 - Thursday, May 19, 2011 - link

    I think more and more people are realizing that the issue is more about the lack of linear scaling than anything like ECC. Buying a bulletproof server is turning out to cost way too much money (I mean ACTUALLY bulletproof, not "so far, this server has been rock solid for me").

    I read an interesting article about "design for failure" (note: NOT the same thing as "design to fail") by Jeff Atwood the other day, and it really opened my eyes. Each extra 9 in 99.99% uptime starts costing exponentially more money. That raises the question: should you be investing more money in a server that shouldn't fail, or should you be investigating why your software is so fragile that it can't accommodate a hardware failure?

    I dunno. Designing and developing software that can work around hardware failures is a very difficult thing to do.
  • L. - Thursday, May 19, 2011 - link

    Well, obvious.

    Who has a ton of servers? Google.
    How do they manage availability?
    So much redundancy that resilience is implicit and "reduced service" isn't even all that reduced.

    And no, designing and developing software that works around hardware failures is not that hard at all; in fact it's quite common (load balancing, active/passive setups, virtualization, etc.).
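
erple2's point about the cost of each extra nine is easy to put into numbers. As a rough illustration (a standalone sketch, not tied to any particular server or cost model), the following computes the yearly downtime budget each common availability level allows; going from 99.99% to 99.999% shrinks the budget from roughly 53 minutes a year to about 5:

    /* Downtime budget per year for common availability levels ("nines").
     * Illustrates why each extra nine is a much tighter target than the last. */
    #include <stdio.h>

    int main(void)
    {
        const double minutes_per_year = 365.25 * 24 * 60;
        const double levels[] = { 0.99, 0.999, 0.9999, 0.99999 };

        for (int i = 0; i < 4; i++) {
            double downtime = (1.0 - levels[i]) * minutes_per_year;
            printf("%7.3f%% uptime -> %8.1f minutes of downtime per year\n",
                   levels[i] * 100.0, downtime);
        }
        return 0;
    }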
