vApus Mark II

vApus Mark II is our newest benchmark suite, designed to test how well servers cope with virtualizing "heavy duty applications". We explained the benchmark methodology here. We used vSphere 4.1 Update 1, based upon the 64-bit ESX 4.1.0 b348481 hypervisor.

vApus Mark II score
* with 128GB of RAM

Benchmarks cannot be interpreted easily, and virtualization adds another layer of complexity. As always, we need to explain quite a few details and nuances.

First of all, we tested most servers with 64GB of RAM. However, the memory subsystem of the quad Xeon needs 32 DIMMs before it can deliver maximum bandwidth. As some of these server systems will ship with 32 DIMMs while others will not, we tested with both 16 DIMMs (64GB) and 32 DIMMs (128GB). Our vApus Mark II test requires only 11GB per tile: 4GB for the OLTP database, 4GB for the OLAP database, and 1GB for each of the three web applications (3GB in total). Even a five-tile test therefore demands only 55GB. Thus, in this particular benchmark there is no real advantage to having 128GB of RAM other than the bandwidth boost for the quad Xeon platform. That is why we do not test the quad Opteron with more than 64GB: it makes no difference and would only make the graph more complex.
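The memory math above is simple enough to sanity-check in a few lines. The sketch below merely restates the article's per-VM figures; none of it is measured data:

```python
# Per-tile memory footprint of vApus Mark II, using the figures
# quoted in the article (GB per VM).
OLTP_GB = 4    # OLTP database VM
OLAP_GB = 4    # OLAP database VM
WEB_GB = 1     # each web application VM
WEB_VMS = 3    # three web application VMs per tile

def tile_memory_gb():
    """Memory needed by a single tile: 4 + 4 + 3*1 = 11GB."""
    return OLTP_GB + OLAP_GB + WEB_VMS * WEB_GB

def total_memory_gb(tiles):
    """Memory needed by a full run with the given tile count."""
    return tiles * tile_memory_gb()

print(tile_memory_gb())    # 11
print(total_memory_gb(5))  # 55
```

Even the largest (five-tile) run fits comfortably in 64GB, which is why the extra 64GB only matters for the quad Xeon's bandwidth.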

Then there is the problem that every virtualization benchmark encounters: choosing the number of tiles (a tile is a group of VMs). With VMmark, the benchmark folks add tiles until the total throughput begins to decline. The problem with this approach is that it favors throughput over response time, while in the real world response time is more important than throughput. We test with both four tiles (20 VMs, 72 vCPUs) and five tiles (25 VMs, 90 vCPUs). Which configuration gives you the most accurate number for a given system? Let us delve a little deeper and take response time into account.
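For reference, the four- and five-tile totals quoted above imply a fixed per-tile composition of 5 VMs and 18 vCPUs. A quick sketch, with the per-tile figures derived from the article's totals rather than an official spec:

```python
# Tile composition implied by the totals in the text:
# 4 tiles -> 20 VMs / 72 vCPUs, 5 tiles -> 25 VMs / 90 vCPUs.
VMS_PER_TILE = 5      # 1 OLTP + 1 OLAP + 3 web VMs
VCPUS_PER_TILE = 18   # 72 vCPUs / 4 tiles

def run_size(tiles):
    """VM and vCPU counts for a run with the given number of tiles."""
    return {"VMs": tiles * VMS_PER_TILE, "vCPUs": tiles * VCPUS_PER_TILE}

print(run_size(4))  # {'VMs': 20, 'vCPUs': 72}
print(run_size(5))  # {'VMs': 25, 'vCPUs': 90}
```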

SAP S&D Benchmark Virtualized Performance: Response Time
62 Comments

  • Casper42 - Thursday, May 19, 2011 - link

    HP makes the BL620c G7 blade server, which is a 2P Nehalem-EX (soon to offer Westmere-EX).
    And believe it or not, the massive HP DL980 G7 (8-processor Nehalem/Westmere-EX) is actually running 4 pairs of EX CPUs. HP has a custom ASIC bridge chip that ties them all together. This design MIGHT actually support running the 2P models, as each pair goes through the bridge chip.

    Dell makes the R810, and while it's a 4P server, the memory subsystem actually runs best when it's run as a 2P server. That would be a great platform for the 2P CPUs as well.
  • mczak - Friday, May 20, 2011 - link

    2P E7 looks like a product for a very, very small niche to me. In terms of pure performance, a 2P Westmere-EP pretty much makes up for the deficit in cores with the higher clock - sure it also has less cache, but for the vast majority of cases it is probably far more cost effective (not to mention at least idle power consumption will be quite a bit lower). Which relegates the 2P E7 to cases where you don't need more performance than 2P Westmere-EP, but depend on some of the extra features (like more memory possible, RAS, whatever) the E7 offers.
  • DanNeely - Thursday, May 19, 2011 - link

    If anyone wants an explanation of what changed between these types of memory, simmtester.com has a decent writeup and illustrations. Basically each LR-DIMM has a private link to the buffer chip, instead of each DIMM having a very high speed buffer daisy-chained to the next DIMM on the channel.

    http://www.simmtester.com/page/news/showpubnews.as...
  • jcandle - Saturday, May 21, 2011 - link

    I love the link but your comment is a bit misleading.

    The main difference isn't the removal of the point-to-point connections but the reversion to a parallel bus configuration similar to classic RDIMMs. The issues with FB-DIMMs stemmed from their absurdly clocked serial bus, which required an operating frequency about 4x the actual DRAM clock.

    So in dumb terms... FBDIMM = serial, LRDIMM = parallel
  • ckryan - Thursday, May 19, 2011 - link

    It must be very, very difficult to generate a review for server equipment. Once you get into this class of hardware it seems as though there aren't really any ways to test it unless you actually deploy the server. Anyway, kudos for the effort in trying to quantify something of this caliber.
  • Shadowmaster625 - Thursday, May 19, 2011 - link

    Correct me if I'm wrong, but isn't an Opteron 6174 just $1000? And it is beating the crap out of this "flagship" Intel chip by a factor of 3:1 in performance per dollar, and beats it in performance per watt as well? And this is the OLD AMD architecture? This means that Interlagos could pummel Intel by something like 5:1. At what point does any of this start to matter?

    You know it only costs $10,000 for a quad opteron 6174 server with 128GB of RAM?
  • alpha754293 - Thursday, May 19, 2011 - link

    That's only true IF the programs/applications that you're running on it aren't licensed by socket/processor/core count.

    How many of those Opteron systems would it take to match the performance? And the cost of the systems? And the cost of the software, if they're licensed on a per core basis?
  • tech6 - Thursday, May 19, 2011 - link

    Software licensing is a part of the overall picture (particularly if you have to deal with Oracle) but the point is well taken that AMD delivers much better bang for the buck than Intel. An analysis of performance/$ would be an interesting addition to this article.
  • erple2 - Thursday, May 19, 2011 - link

    The analysis isn't too hard. If you're licensing things on a per-core basis (hello, Oracle, I'm staring straight at you), then how much does the licensing cost have to be per core before you've made up that 20k difference in price (assuming AMD = 10k, Intel = 30k)? Simple: 20k divided by the AMD server's 8 extra cores = $2,500 per core. Now, if you factor in that on a per-core basis the Intel server is between 50 and 60% faster, things get worse for AMD. Assuming you could buy an AMD server that was 50% more powerful (via linearly increasing core count), that would be 50% more server, but remember each server already has 20% more cores. So it's really about 60% more cores, which works out to roughly a 76.8-core server. That's about 36 more cores than Intel. So what does the licensing cost have to be before AMD isn't worth it at this performance level? Well, 20k/36 = $555 per core.

    OK, fair enough. Maybe things are licensed per socket instead. You still need 50% more sockets to get equivalent performance, so that's 2 more sockets (give or take) for the AMD to equal the Intel. Assuming things scale linearly with price, that "server" would cost roughly 15k. Licensing costs now have to be more than 7.5k per socket (the 15k difference in price between the AMD and Intel servers, divided by 2 extra sockets) to make the Intel the "better deal" per performance. Do you know how much an Oracle suite of products costs? I'll give you a hint: 7.5k isn't that far off the mark.
  • JarredWalton - Thursday, May 19, 2011 - link

    There are so many factors in the cost of a server that it's difficult to compare on just price and performance. RAS is a huge one -- the Intel server targets that market far more than the AMD does. Show me two servers identical in all areas other than CPU type and socket count, and then compare the pricing. For example, here are two similar HP ProLiant setups:

    HP ProLiant DL580 G7
    http://h10010.www1.hp.com/wwpc/us/en/sm/WF06b/1535...
    2 x Xeon E7-4830
    64GB RAM
    64 DIMM slots
    Advanced ECC
    Online Spare
    Mirrored Memory
    Memory Lock Step Mode
    DIMM Isolation
    Total: $13,679
    +$4000 for 2x E7-4830


    HP ProLiant DL585 G7
    http://h10010.www1.hp.com/wwpc/us/en/sm/WF06b/1535...
    4 x Opteron 6174
    64GB RAM
    32 DIMM slots
    Advanced ECC
    Online Spare
    Total: $14,889

    Now I'm sure there are other factors I'm missing, but seriously, those are comparable servers and the Intel setup is only about $3000 more than the AMD equivalent once you add in the two extra CPUs. I'm not quite sure how much better/worse the AMD is relative to the Intel, but when people throw out numbers like "OMG it costs $20K more for the Intel server" by comparing an ultra-high-end Intel setup to a basic low-end AMD setup, it's misguided at best. By my estimates, for relatively equal configurations it's more like $3000 extra for Intel, which is less than a 20% increase in price--and not even factoring in software costs.
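The break-even arithmetic traded back and forth in this thread is easy to reproduce. A minimal sketch, using the commenters' rough dollar figures (these are forum estimates, not vendor quotes):

```python
# Break-even license cost: the per-core (or per-socket) fee at which
# extra AMD cores/sockets erase a hardware price advantage.
def break_even(price_delta, extra_units):
    """License fee per extra unit that cancels out a price gap."""
    return price_delta / extra_units

# erple2's per-core scenario: a $20k hardware price gap, with ~36 extra
# AMD cores needed to match the Intel box's performance.
print(round(break_even(20_000, 36)))  # ~556 per core

# erple2's per-socket scenario: a ~$15k gap against 2 extra sockets.
print(break_even(15_000, 2))          # 7500.0 per socket

# JarredWalton's like-for-like configs shrink the hardware gap to ~$3k,
# which pulls the break-even license fee down accordingly.
print(round(break_even(3_000, 36)))   # ~83 per core
```

In other words, the conclusion flips depending on which hardware price gap you believe, which is exactly the disagreement in the thread.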
