The Purpose of this Test

We will not claim that we had the optimal testing configuration for our objectives. The most interesting LRDIMMs are the 32GB ones, and we had the cheaper (and thus easier to borrow) 16GB parts. The most interesting Supermicro server is the 6027TR-D70RF+; we had the slightly older but very similar 6027TR-D71FRF. As always, we try to make the best of what we have in the lab. We believe that with this testing configuration we can still answer the questions that pop up when you consider the different server configurations, such as:

  • How much bandwidth and latency will you sacrifice when buying LRDIMMs instead of RDIMMs?
  • Does investing in expensive high capacity DIMMs pay off?
  • Can you really host twice as many VMs in twice as much memory?
  • How much performance do you gain by giving your VMs more physical memory?

The last two questions may seem silly at first sight, but they are not. In most cases, virtual machines get (much) more memory than they really need. Most administrators prefer to give a VM quite a bit of memory headroom. That is good practice, but it also means it is not necessarily a bad thing if the total amount of virtual memory is quite a bit higher than the total amount of physical memory. Unless all VMs are working hard, a modern hypervisor can make sure that the "needy" VMs get what they need while the "lazy" ones get only the bare minimum of physical RAM.
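
To make the overcommitment arithmetic concrete, here is a minimal sketch; the VM count and granted sizes are made-up numbers for illustration, not our test configuration:

    # Made-up example: memory overcommitment arithmetic
    host_physical_gb = 128      # physical RAM in the host
    vm_count = 24               # number of VMs (hypothetical)
    vm_granted_gb = 8           # memory granted to each VM (hypothetical)

    total_virtual_gb = vm_count * vm_granted_gb        # 192 GB promised
    overcommit = total_virtual_gb / host_physical_gb   # 1.5x

    print(f"{total_virtual_gb} GB granted on {host_physical_gb} GB physical: "
          f"{overcommit:.2f}x overcommitted")
    # As long as the combined *active* working set of all VMs stays below
    # the physical 128 GB, the hypervisor can keep every VM happy without
    # resorting to swapping.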

Indeed, an advanced hypervisor (ESXi in particular) has a lot of tricks up its sleeve to make sure that even if you do not have enough physical memory, your VMs will still run fine. Physical memory use is optimized by:

  • Transparent page sharing (TPS): the hypervisor claims only one physical page for several pages of different VMs with identical content (e.g. the Windows kernel and HAL in several Windows-based VMs); see the sketch after this list
  • Ballooning: the hypervisor reclaims memory that a VM does not use and gives it to more "needy" VMs
  • Memory compression: pages that would otherwise be swapped out (see the next item) are checked for their compression ratio. If they compress well, they are compressed and kept in a memory cache, which is much faster to access than a swap file on disk
  • Hypervisor swapping: memory that is not active and not compressible can be swapped to disk, much like a "normal" OS swaps out process memory. This does not necessarily result in a large performance hit, as the pages that get swapped out are rarely accessed
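
The sketch below illustrates the page-sharing idea only; it is not VMware's implementation, which works on real 4KB pages, uses a hash merely as a hint, and compares candidate pages byte for byte before collapsing them:

    import hashlib

    # Toy model of transparent page sharing: identical "pages" from
    # different VMs end up backed by a single physical copy.
    guest_pages = {
        ("vm1", 0): b"windows kernel + HAL" * 100,
        ("vm2", 0): b"windows kernel + HAL" * 100,   # identical to vm1's page
        ("vm2", 1): b"unique application data" * 100,
    }

    physical_pages = {}   # content hash -> the single shared copy
    page_table = {}       # (vm, page number) -> content hash

    for key, content in guest_pages.items():
        digest = hashlib.sha256(content).hexdigest()
        physical_pages.setdefault(digest, content)   # store identical content once
        page_table[key] = digest

    print(f"{len(guest_pages)} guest pages -> {len(physical_pages)} physical pages")

In a real hypervisor the shared page is marked copy-on-write: the moment one VM writes to it, that VM transparently gets its own private copy again.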

So we thought it would be interesting to design a scenario in which we could measure the performance difference between a system with lots of memory and a more budget-limited one.


Comments

  • koinkoin - Friday, August 3, 2012 - link

    For HPC solutions I like the Dell C6220: dense, and with 2 or 4GB of memory per CPU core you get a good configuration in a 2U chassis for 4 servers.

    But for VMware, servers like the R720 give you more room to play with memory and IO slots.

    Not to mention that those dense servers don't offer the same level of management and user-friendliness.
  • JohanAnandtech - Friday, August 3, 2012 - link

    A few thoughts:

    1. Do you still need lots of I/O slots now that we can consolidate a lot of gigabit Ethernet links into two 10GbE ports?

    2. Management: ok, a typical blade server can offer a bit more, but the typical remote management solutions that Supermicro now offers are not bad at all. We have been using them for several years now.

    Can you elaborate on what you expect from a management solution that you don't expect to see in a dense server?
  • alpha754293 - Friday, August 3, 2012 - link

    re: network consolidation
    Network consolidation comes at a cost premium. You can still argue that IB QDR will give you better performance/bandwidth, but a switch is about $6k, and for systems that don't have IB QDR built in, it's about $1k per NIC. Cables are at least $100 a piece.

    If you can use it and justify the cost, sure. But GbE is cheap. REALLY REALLY cheap now that it's been in the consumer space for quite some time.

    And there aren't too many cases when you might exceed GbE (even the Ansys guys suggest investing in better hardware rather than expensive interconnects). And that says a LOT.

    re: management
    I've never tried Supermicro's IPMI, but it looks to be pretty decent. Even if that doesn't work, you can also use third-party tools like LogMeIn, and that works quite well too! (Although it is not available for Linux, there are Linux/UNIX options available out there as well.)

    Supermicro also has an even higher density version of this server (4x half-width 1U DP blade nodes).
  • JonBendtsen - Monday, August 6, 2012 - link

    I have tried Supermicro's IPMI, and it works nicely. I can power the machine on and off and let it boot from an .iso image on my laptop. This means that if I have to boot from a rescue CD, I do not even have to plug a CD drive into the machine. Everything can be done from my laptop, even when I am not in the office, or even in the country.
  • bobbozzo - Tuesday, August 7, 2012 - link

    Can you access boot screens and the BIOS from the IPMI?

    For Linux, I use SSH (or VNC server), but when you've got memory or disk errors, etc., it's nice to see the BIOS screens.

    Bob
  • phoenix_rizzen - Thursday, August 9, 2012 - link

    Using either the web interface on the IPMI chip itself or the IPMIView software from Supermicro, you get full keyboard, mouse, and console redirection, meaning you can view the POST, BIOS, pre-boot, boot, and console of the system.

    You can also configure the system to use a serial console, and configure the installed OS to use a serial console, and then connect to the serial console remotely using the ipmitool program.
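
    For example, assuming the BMC is reachable over the LAN, a session can be opened with something like the following (the address and credentials are placeholders, and it is wrapped in Python purely for illustration):

        import subprocess

        # Attach to the Serial-over-LAN console of a BMC via ipmitool.
        # Host, user, and password below are placeholders for your own BMC.
        subprocess.run([
            "ipmitool", "-I", "lanplus",   # IPMI v2.0 RMCP+ over the network
            "-H", "192.168.1.100",         # BMC address
            "-U", "ADMIN",
            "-P", "ADMIN",
            "sol", "activate",             # open the serial console session
        ])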

    The IPMI implementation in SuperMicro motherboards (at least the H8DG6/H8DGi series, which we use) is very nice. And stable. And useful. :)
  • ForeverAlone - Friday, August 3, 2012 - link

    Only 128GB RAM? Unacceptable!
  • Guspaz - Monday, August 20, 2012 - link

    It starts to matter more when you're pouring on the VMs. With two sockets there, you're talking 16 cores, or 32 threads. That's the kind of machine that can handle a rather large number of VMs, and with only 128GB of RAM, that would be the limitation regarding how many VMs you could stick on there. For example, if you wanted to have a dedicated thread per VM, you're down to only 4GB per VM, which is kind of low for a server.
  • darking - Friday, August 3, 2012 - link

    I think the price on the webpage is wrong, or at least it differs by market.

    I just checked the Danish and the British webstores, and the 32GB LRDIMMs are priced at around $2200, not the $3800 that the US webpage has.
  • JohanAnandtech - Friday, August 3, 2012 - link

    They probably changed it in the last few days, as HP lowered their price to $2000 a while ago. But when I checked, it was $3800.
