Low Level Measurements

Before we start with the rather complex virtualization benchmarks, it is good to perform a few low-level benchmarks. First, we measured memory bandwidth with our PathScale-compiled Stream binary on the latest Ubuntu Linux. For more details about our Stream binary, check here.

Stream Triad (Ubuntu Linux 12.04)

To get a better understanding, we tested the 8GB RDIMMs both at their rated speed (1600MHz) and configured (via the BIOS) as 1333MHz DIMMs. Comparing the 1333MHz RDIMMs with the LRDIMMs allows us to measure the impact of the iMB buffer. That impact is small but measurable: RDIMMs offer about 5% more bandwidth than LRDIMMs at the same speed, and at 1600MHz the RDIMMs offer 14% higher bandwidth than the LRDIMMs.
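For readers who want a feel for what the Triad figure actually measures: the kernel behind it is nothing more than a[i] = b[i] + scalar * c[i] over arrays far larger than the L3 cache. Below is a minimal, single-threaded C sketch of that loop. The array size and timing method are our own simplifications and there is no OpenMP threading, so it will not reproduce the numbers from our PathScale-compiled, multi-threaded Stream binary; it only illustrates the principle.

```c
/* Minimal single-threaded sketch of the STREAM Triad kernel.
 * Array size and iteration choices are illustrative only.
 * Build with something like: gcc -O2 triad.c -o triad */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (32 * 1024 * 1024)    /* 32M doubles per array (256 MB each) */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    const double scalar = 3.0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < N; i++)
        a[i] = b[i] + scalar * c[i];            /* the Triad loop itself */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    /* STREAM's convention counts 24 bytes per iteration: read b, read c, write a */
    double gb = 3.0 * N * sizeof(double) / 1e9;
    printf("Triad: %.2f GB/s (check: a[0] = %.1f)\n", gb / secs, a[0]);

    free(a); free(b); free(c);
    return 0;
}
```

The real Stream binary runs the kernel many times across all cores and reports the best pass, which is why single-threaded results like this sketch land well below the numbers in the chart above.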

Of course, bandwidth only matters when you run out of it. Latency always matters, although a 15MB (up to 20MB) L3 cache can hide a lot of memory latency. We tested memory latency with AIDA64.

AIDA64 Memory Latency

The iMB adds about 11% latency if we disable turbo and compare the LRDIMM and RDIMM at the same clock speed (1333MHz). Somewhat interesting is that the latency of the RDIMM at 1600MHz is higher: the memory chips are accessed with significantly higher CAS and RAS-to-CAS latencies at 1600MHz, which explains this counterintuitive result. Once we enable turbo, the latency differences become very small. The iMB causes only 2% extra latency, which is negligible.
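AIDA64 is a closed-source tool, so we cannot show its exact method, but the principle behind this kind of latency test is a dependent pointer chase: every load needs the result of the previous one, so the CPU cannot overlap the memory requests and the average time per step converges on the memory latency once the buffer is much larger than the 15-20MB L3 cache. The C sketch below is a minimal illustration under our own assumptions (a 256MB buffer and a random cyclic permutation); real tools add large pages, multiple access patterns and careful warm-up, so absolute numbers will differ.

```c
/* Minimal pointer-chasing latency sketch. Buffer size and step count
 * are illustrative only. Build with: gcc -O2 latency.c -o latency */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ELEMS (32 * 1024 * 1024)   /* 32M indices = 256 MB, far above the L3 */
#define STEPS 50000000L            /* number of dependent loads to time */

int main(void)
{
    size_t *chain = malloc(ELEMS * sizeof *chain);
    if (!chain) return 1;

    /* Build a random single-cycle permutation (Sattolo's algorithm) so
     * the hardware prefetchers cannot guess the next address. */
    for (size_t i = 0; i < ELEMS; i++) chain[i] = i;
    srand(42);
    for (size_t i = ELEMS - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;          /* j < i keeps it one big cycle */
        size_t tmp = chain[i]; chain[i] = chain[j]; chain[j] = tmp;
    }

    struct timespec t0, t1;
    size_t pos = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long s = 0; s < STEPS; s++)
        pos = chain[pos];                       /* each load depends on the last */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    /* printing pos keeps the compiler from discarding the loop */
    printf("average load-to-load latency: %.1f ns (pos=%zu)\n", ns / STEPS, pos);

    free(chain);
    return 0;
}
```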

The conclusion so far: the iMB decreases bandwidth by a measurable but small amount, while the latency impact is hardly measurable.

Comments

  • koinkoin - Friday, August 3, 2012 - link

    For HPC solutions I like the Dell C6220: dense, and with 2 or 4GB of memory per CPU core you get a good configuration in a 2U chassis for 4 servers.

    But for VMware, servers like the R720 give you more room to play with memory and IO slots.

    Not to mention that those dense servers don't offer the same level of management and user friendliness.
  • JohanAnandtech - Friday, August 3, 2012 - link

    A few thoughts:

    1. Do you still need lots of I/O slots now that we can consolidate a lot of gigabit Ethernet links into two 10GbE ports?

    2. Management: ok, a typical blade server can offer a bit more, but the typical remote management solutions that Supermicro now offers are not bad at all. We have been using them for several years now.

    Can you elaborate on what you expect from a management solution that you wouldn't expect to see in a dense server?
  • alpha754293 - Friday, August 3, 2012 - link

    re: network consolidation
    Network consolidation comes at a cost premium. You can still argue that IB QDR will give you better performance/bandwidth, but a switch is $6k, and for systems that don't have IB QDR built in, it's about $1k per NIC. Cables are at least $100 apiece.

    If you can use it and justify the cost, sure. But GbE is cheap. REALLY REALLY cheap now that it's been in the consumer space for quite some time.

    And there aren't too many cases when you might exceed GbE (even the Ansys guys suggest investing in better hardware rather than expensive interconnects). And that says a LOT.

    re: management
    I've never tried Supermicro's IPMI, but it looks to be pretty decent. Even if that doesn't work, you can also use third-party tools like LogMeIn, and that works quite well too! (Although that's not available for Linux, there are Linux/UNIX options out there as well.)

    Supermicro also has an even higher density version of this server (4x half-width, 1U DP blade nodes).
  • JonBendtsen - Monday, August 6, 2012 - link

    I have tried Supermicro's IPMI; it works nicely. I can power the machine on/off and let it boot from an .iso image I have on my laptop. This means that if I have to boot from a rescue CD, I do not even have to plug a CD drive into the machine. Everything can be done from my laptop, even when I am not in the office, or even in the country.
  • bobbozzo - Tuesday, August 7, 2012 - link

    Can you access boot screens and the BIOS from the IPMI?

    For Linux, I use SSH (or VNC server), but when you've got memory or disk errors, etc., it's nice to see the BIOS screens.

    Bob
  • phoenix_rizzen - Thursday, August 9, 2012 - link

    Using either the web interface on the IPMI chip itself or the IPMIView software from SuperMicro, you get full keyboard, mouse, and console redirection. Meaning you can view the POST, BIOS, pre-boot, boot, and console of the system.

    You can also configure the system to use a serial console, and configure the installed OS to use a serial console, and then connect to the serial console remotely using the ipmitool program.

    The IPMI implementation in SuperMicro motherboards (at least the H8DG6/H8DGi series, which we use) is very nice. And stable. And useful. :)
  • ForeverAlone - Friday, August 3, 2012 - link

    Only 128GB RAM? Unacceptable!
  • Guspaz - Monday, August 20, 2012 - link

    It starts to matter more when you're pouring on the VMs. With two sockets there, you're talking 16 cores, or 32 threads. That's the kind of machine that can handle a rather large number of VMs, and with only 128GB of RAM, that would be the limitation regarding how many VMs you could stick on there. For example, if you wanted to have a dedicated thread per VM, you're down to only 4GB per VM, which is kind of low for a server.
  • darking - Friday, August 3, 2012 - link

    I think the price on the webpage is wrong, or at least it differs by market.

    I just checked the Danish and the British webstores, and the 32GB LRDIMMs are priced at around $2,200, not the $3,800 that the US webpage has.
  • JohanAnandtech - Friday, August 3, 2012 - link

    They probably changed it in the last few days, as HP had lowered their price to $2,000 a while ago. But when I checked, it was $3,800.
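Editorial note on the IPMI discussion above (JonBendtsen's remote ISO boot and phoenix_rizzen's serial-console tip): the two most common remote tasks, checking chassis power and attaching to the Serial-over-LAN console, map to stock ipmitool commands. The tiny C wrapper below is a hedged sketch of that workflow; the BMC address and the ADMIN/ADMIN credentials are placeholders, not values from any system in this review.

```c
/* Hedged sketch: drive two common BMC tasks with the stock ipmitool CLI.
 * The address and ADMIN/ADMIN credentials are placeholders only. */
#include <stdio.h>
#include <stdlib.h>

static int run(const char *cmd)
{
    printf(">> %s\n", cmd);
    return system(cmd);                 /* relies on ipmitool being in PATH */
}

int main(void)
{
    const char *bmc  = "10.0.0.42";     /* placeholder BMC address   */
    const char *user = "ADMIN";         /* placeholder IPMI user     */
    const char *pass = "ADMIN";         /* placeholder IPMI password */
    char cmd[256];

    /* 1) Check the chassis power state over the LAN interface. */
    snprintf(cmd, sizeof cmd,
             "ipmitool -I lanplus -H %s -U %s -P %s chassis power status",
             bmc, user, pass);
    run(cmd);

    /* 2) Attach to the Serial-over-LAN console (the remote BIOS/OS console
     *    described above); end the session with the "~." escape sequence. */
    snprintf(cmd, sizeof cmd,
             "ipmitool -I lanplus -H %s -U %s -P %s sol activate",
             bmc, user, pass);
    return run(cmd);
}
```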
