Motherboard

A number of vendors exist in the dual processor workstation motherboard market. At the time of the build, LGA 2011 Xeons had already been introduced, and we decided to focus on boards supporting those processors. Since we wanted to devote one physical disk and one network interface to each VM, it was essential that the board have enough PCIe slots for multiple quad-port server NICs as well as enough native SATA ports. For our build, we chose the Asus Z9PE-D8 WS motherboard in the SSI EEB form factor.

Based on the C602 chipset, this dual LGA 2011 motherboard supports 8 DIMMs and has 7 PCIe 3.0 slots. The lanes can be organized as (2 x16 + 1 x16 + 1 x8) or (4 x8 + 1 x16 + 1 x8), and all the slots are physically 16 lanes wide. The Intel C602 chipset provides two SATA 6 Gbps ports and eight SATA 3 Gbps ports, and a Marvell 9230 PCIe controller adds four more 6 Gbps ports, for a total of 14 SATA ports. This allows us to devote two ports to the host OS of the workstation and one port to each of the twelve planned VMs. The Z9PE-D8 WS also has two GbE ports based on the Intel 82574L. Two Gigabit LAN controllers are not sufficient for all our VMs; we address this issue further down in the build.
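
A quick back-of-the-envelope check of this budget, as a minimal Python sketch. The assumption that the two onboard GbE ports stay with the host, and the quad-port NIC count derived from it, are ours; the networking side is covered later in the build.

    import math

    NUM_VMS = 12

    # SATA budget: 2 x 6 Gbps + 8 x 3 Gbps from the C602, plus 4 x 6 Gbps
    # from the Marvell 9230 controller.
    total_sata = 2 + 8 + 4          # 14 ports
    host_sata = 2                   # reserved for the host OS
    assert total_sata - host_sata >= NUM_VMS   # one physical disk per VM

    # Network budget: assume the two onboard 82574L ports stay with the host,
    # so every VM needs a GbE port from an add-in quad-port server NIC.
    quad_port_nics_needed = math.ceil(NUM_VMS / 4)
    print(f"quad-port NICs needed: {quad_port_nics_needed}")   # -> 3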

The motherboard also has four USB 3.0 ports, courtesy of an ASMedia USB 3.0 controller. The Marvell SATA-to-PCIe bridge and the ASMedia USB 3.0 controller are connected to the eight PCIe 2.0 lanes provided by the C602; all the PCIe 3.0 lanes come from the processors. Asus also provides SSD caching support on the motherboard (any installed SSD can be used as a cache for frequently accessed data, without any size limitations), and the Z9PE-D8 WS has a Realtek ALC898 HD audio codec, but neither of these features is relevant to our build.

CPUs

One of the main goals of the build was to ensure low power consumption. At the same time, we wanted to run twelve VMs simultaneously. In order to ensure smooth operation, each VM needs at least one vCPU allocated exclusively to it. The Xeon E5-2600 family (Sandy Bridge-EP) has CPUs with core counts ranging from 2 to 8 and TDPs from 60 W to 150 W, with each core supporting two threads via Hyper-Threading. Keeping in mind the number of VMs we wanted to run, we specifically looked at the 6- and 8-core variants, as two of those processors would give us 12 or 16 cores. Within these, we restricted ourselves to the low-power variants: the hexa-core E5-2630L (60 W TDP) and the octa-core E5-2648L / E5-2650L (70 W TDP).

CPU decisions for machines meant to run VMs usually have to be made after taking the requirements of the workload into consideration. In our case, the workload for each VM involved IOMeter and Intel NASPT (more on these in the software infrastructure section). Both of these programs tend to be I/O-bound rather than CPU-bound, and can run reliably even on Pentium 4-class processors. Therefore, the per-core performance of the three processors was not a factor we were worried about.

Out of the three processors, we decided to go ahead with the hexa-core Xeon E5-2630L. The cores run at 2 GHz, but can Turbo up to 2.5 GHz when just one core is active. Each core has a 256 KB L2 cache, and all cores share a 15 MB L3 cache. With a TDP of just 60 W, it let us keep the focus on energy efficiency, and two Xeon E5-2630Ls (120 W of combined TDP) gave us the twelve physical cores needed to run 12 VMs concurrently.
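
The arithmetic behind that choice, as a minimal sketch; the framing of the spare hardware threads as host headroom is our own and not spelled out above.

    NUM_VMS = 12
    SOCKETS = 2
    CORES_PER_SOCKET = 6        # Xeon E5-2630L is a hexa-core part
    THREADS_PER_CORE = 2        # Hyper-Threading

    physical_cores = SOCKETS * CORES_PER_SOCKET             # 12
    logical_processors = physical_cores * THREADS_PER_CORE  # 24 seen by the hypervisor

    # One vCPU dedicated to each VM fits within the physical core count,
    # leaving the second hardware thread of each core as headroom.
    assert physical_cores >= NUM_VMS
    print(f"{physical_cores} cores / {logical_processors} threads for {NUM_VMS} VMs")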

Coolers

The choice of coolers for the processors is dictated by the chassis used for the build. At the start of the build, we decided to go with a tower desktop configuration. Asus recommended the Dynatron R17 for use with the Z9PE-D8 WS, and we went ahead with their suggestion.

The R17 coolers are meant for LGA 2011 sockets in 3U and larger rackmount form factors as well as tower desktop and workstation builds. They use aluminium fins with four copper heat pipes, and thermal compound comes pre-applied on the base. Installation of the R17s was quite straightforward, but care had to be taken to ensure that the side on which the cooler's fan mounts didn't face the DIMM slots on the Z9PE-D8 WS.

The fans on the R17 operate between 1000 and 2500 RPM and consume between 0.96 W and 3 W across that range. Noise levels are respectable, ranging from 17 dBA to 32 dBA. The R17 is rated to cool CPUs with TDPs of up to 160 W, and it kept the 60 W E5-2630Ls between 45°C and 55°C even under our full workloads.

Comments

  • dertechie - Wednesday, September 5, 2012 - link

    That is a helluva test bench.

    I'd love to see what an HP N40L Microserver does with 4 disks in it if you throw that at it (use the on-motherboard USB port for the OS). It's certainly not a plug-and-play solution like most NAS boxes, but assuming the performance is there, it should be a far more flexible one for the money if you throw a *nix-based OS on it.
  • bsd228 - Wednesday, September 5, 2012 - link

    I've taken advantage of the 5th internal port of the N36L to add an SSD that is used by ZFS for both read and write caching. Strictly speaking, mirrored write caches are advised, but it's connected to a UPS to eliminate much of that risk.

    I think HP has given us the perfect platform for low power, high performance with flexibility.
  • extide - Thursday, September 6, 2012 - link

    Cache? or L2ARC?

    Mirrored L2ARC (cache) devices are NOT suggested for ZFS, but mirrored log (SLOG/ZIL) devices are.
  • coder543 - Wednesday, September 5, 2012 - link

    running Windows Server.........
  • ganeshts - Wednesday, September 5, 2012 - link

    What alternatives do you have in mind?

    We needed a platform which was well supported by the motherboard. To tell the truth, I found Hyper-V and the virtualization infrastructure to be really good and easy to use compared to VMware's offerings.
  • ender8282 - Wednesday, September 5, 2012 - link

    I assume coder543 was going for a Linux based host, and possibly Linux based clients as well. If you had gone with Linux, you wouldn't have needed extra software for SSH or the RAM disk. It even looks like IOMeter is supported on Linux. Had you gone that route, you likely could have automated the whole task so that it was just a matter of typing go on the host and coming back hours later to collect the results. OTOH, most of your audience is probably using Windows clients, so it makes more sense to provide information clearly relevant to the average reader.

    I found the article interesting. The one thing that I'd be curious about is whether or not there were any major performance differences using Samba/CIFS type shares vs NFS, or a mixture of the two.

    I'd love to see more Linux coverage in general, but I respect that you know your audience and write the articles that they generally want to read.

    Great Job keep it up!
  • Ratman6161 - Thursday, September 6, 2012 - link

    It should run on that platform just great. On the other hand, when all is said and done, as nice as this setup is, to me it is basically a full-blown server/virtualization platform, not really a "NAS" at all. I would typically think of a NAS as being a dedicated storage device - possibly used as an iSCSI target - with the brains of the operation living elsewhere.
  • ganeshts - Thursday, September 6, 2012 - link

    This is a testbed for evaluating NAS units, not a NAS. Not sure why readers are getting an impression that this is a NAS by itself.
  • bsd228 - Wednesday, September 5, 2012 - link

    Ganesh- I think this test bed sets up very well for testing the $500-1000 4 bay type NAS devices we've been seeing of late that could actually serve a small office. However, I'm less sure that it delivers meaningful data to the home crowd. Like with your SSD tests, I see a place for a "light" load versus the heavy. I think testing against 4 VMs with, for sake of example, the following load types would work:
    1- 2 VMs streaming video - 1 DVD, 1 H.264 HDTV - are there any interruptions?
    2- 1 VM streaming audio off an mt-daapd server (or actual iTunes, since you're using Windows as the server) - again, are there any dropoffs?
    3- same VM as #2 is also doing content creation - like importing 1000 RAW images into Lightroom using this storage space
    4- last VM is copying large files (or small) to the storage server.

    The Thecus 4800 should handle this with ease, but there are many cheaper solutions out there that may or may not meet this level of need. I got so tired of poorly performing consumer units that 4 years ago I switched to an AMD X2 4800 running Solaris, and more recently to the HP N36L and N40L. At $300 plus $60 for 8 gigs of ECC, I think this is a better value than the Thecus for those who can run Solaris or even Windows Home Server. You're not reliant on the release of modules to support a particular service.

    Also, it seems that all of these benchmarks are based on SMB transfers. It's worth checking to see if NFS and iSCSI performance (when made available by the NAS) shows different numbers. In the past, it certainly did, especially on the consumer devices where NFS smoked SMB1. But perhaps this is a moot point with SMB2/Windows 7, where it seems like the NIC or the hard drives are the limiting factors, not the transfer protocol.
  • Rick83 - Thursday, September 6, 2012 - link

    I agree, test the different protocols provided by the devices.
    iSCSI, SMB, NFS as well as the media streaming protocols, FTP and whatever else it offers.
    If encrypted transfers are offered, test those as well (e.g. sshfs / scp).

    Additionally, have a look at one of the cluster-ssh solutions that allow simultaneous connections/commands to all machines.
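
As an illustration of the kind of fan-out suggested above, here is a minimal Python sketch that runs the same command on every client VM over SSH in parallel. The hostnames and the IOMeter command line are hypothetical placeholders, and key-based SSH access to each client is assumed.

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    CLIENTS = [f"nas-client-{i:02d}" for i in range(1, 13)]      # hypothetical hostnames
    COMMAND = "iometer /c fileserver.icf /r results.csv"         # placeholder test command

    def run_on(host):
        # BatchMode makes ssh fail immediately instead of prompting for a password.
        proc = subprocess.run(["ssh", "-o", "BatchMode=yes", host, COMMAND],
                              capture_output=True, text=True)
        return host, proc.returncode

    with ThreadPoolExecutor(max_workers=len(CLIENTS)) as pool:
        for host, rc in pool.map(run_on, CLIENTS):
            status = "ok" if rc == 0 else f"exit code {rc}"
            print(f"{host}: {status}")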
