Motherboard

A number of vendors exist in the dual processor workstation motherboard market. At the time of the build, LGA 2011 Xeons had already been introduced, and we decided to focus on boards supporting those processors. Since we wanted to devote one physical disk and one network interface to each VM, it was essential that the board have enough PCIe slots for multiple quad-port server NICs as well as enough native SATA ports. For our build, we chose the Asus Z9PE-D8 WS motherboard with an SSI EEB form factor.

Based on the C602 chipset, this dual LGA 2011 motherboard supports 8 DIMMs and has 7 PCIe 3.0 slots. The lanes can be organized as (2 x16 + 1 x16 + 1 x8) or (4 x8 + 1 x16 + 1 x8), and all the slots are physically 16 lanes wide. The Intel C602 chipset provides two SATA 6 Gbps ports and eight SATA 3 Gbps ports. A Marvell 9230 PCIe SATA controller provides four extra 6 Gbps ports, making for a total of 14 SATA ports. This allows us to devote two ports to the host OS of the workstation and one port to each of the twelve planned VMs. The Z9PE-D8 WS motherboard also has two GbE ports based on the Intel 82574L. Two Gigabit LAN controllers are not going to be sufficient for all our VMs; we will address this issue further down in the build.
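To make the numbers concrete, here is a back-of-the-envelope sketch (in Python) of the port budget described above. The port counts come straight from the board's specifications; the script itself is purely illustrative.

    # Illustrative port-budget check for the Z9PE-D8 WS testbed.
    # All counts are taken from the specifications quoted above.
    planned_vms = 12
    host_sata_ports = 2                   # SATA ports reserved for the host OS

    sata_ports = 2 + 8 + 4                # C602 6 Gbps + C602 3 Gbps + Marvell 9230 6 Gbps
    onboard_gbe = 2                       # Intel 82574L GbE controllers on the board

    assert sata_ports >= host_sata_ports + planned_vms   # 14 ports cover the host + 12 VMs

    # Networking does not fit natively: 12 VM ports are needed, but only 2 are onboard,
    # so the shortfall has to come from add-in quad-port server NICs.
    shortfall = planned_vms - onboard_gbe
    quad_port_nics = -(-shortfall // 4)   # ceiling division
    print(f"{shortfall} extra GbE ports needed -> {quad_port_nics} quad-port NICs")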

The motherboard also has four USB 3.0 ports, courtesy of an ASMedia USB 3.0 controller. The Marvell SATA-to-PCIe bridge and the ASMedia USB 3.0 controller hang off the eight PCIe lanes in the C602, while all the PCIe 3.0 lanes come from the processors. Asus also provides SSD caching support on the motherboard (any installed SSD can be used as a cache for frequently accessed data, without any size limitations), and the Z9PE-D8 WS has a Realtek ALC898 HD audio codec, but neither of these features is relevant to our build.

CPUs

One of the main goals of the build was to ensure low power consumption. At the same time, we wanted to run twelve VMs simultaneously. In order to ensure smooth operation, each VM needs at least one vCPU allocated exclusively to it. The Xeon E5-2600 family (Sandy Bridge-EP) has CPUs with core counts ranging from 2 to 8, with TDPs from 60 W to 150 W, and each core supports two threads. Keeping in mind the number of VMs we wanted to run, we specifically looked at the 6- and 8-core variants, as two of those processors would give us 12 or 16 cores. Within these, we restricted ourselves to the low power variants: the hexa-core E5-2630L (60 W TDP) and the octa-core E5-2648L / E5-2650L (70 W TDP).

CPU decisions for machines meant to run VMs usually have to be made after taking the requirements of the workload into consideration. In our case, the workload for each VM involved IOMeter and Intel NASPT (more on these in the software infrastructure section). Both of these programs tend to be I/O-bound rather than CPU-bound, and can run reliably on even Pentium 4 processors. Therefore, the per-core performance of the three processors was not a factor we were worried about.

Out of the three processors, we decided to go ahead with the hexa-core Xeon E5-2630L. The cores run at 2 GHz, but can Turbo up to 2.5 GHz when just one core is active. Each core has a 256 KB L2 cache, and the six cores share a 15 MB L3 cache. With a TDP of just 60 W, it kept the focus on energy efficiency, and two Xeon E5-2630Ls (120 W of TDP in total) enabled us to proceed with our plan of running 12 VMs concurrently.
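As a quick sanity check on the core budget, here is a minimal sketch of the logical-processor math behind this choice. The one-vCPU-per-VM figure is the minimum stated above; the two logical processors reserved for the host are an assumption for illustration only.

    # Logical-processor budget for two Xeon E5-2630L CPUs (illustrative sketch).
    sockets, cores_per_socket, threads_per_core = 2, 6, 2
    logical_cpus = sockets * cores_per_socket * threads_per_core   # 24 with Hyper-Threading

    vms = 12
    vcpus_per_vm = 1             # minimum needed for smooth operation, as noted above
    host_reserve = 2             # assumed headroom for the host OS and its scripts

    spare = logical_cpus - (vms * vcpus_per_vm + host_reserve)
    print(f"{logical_cpus} logical CPUs, {spare} spare after the host and {vms} VMs")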

Coolers

The choice of coolers for the processors is dictated by the chassis used for the build. At the start of the build, we decided to go with a tower desktop configuration. Asus recommended the Dynatron R17 for use with the Z9PE-D8 WS, and we went ahead with their suggestion.

The R17 coolers are meant for LGA 2011 sockets in 3U and larger rackmount form factors as well as tower desktop and workstation builds. They use aluminium fins with four copper heat pipes, and thermal compound comes pre-applied at the base. Installation of the R17s was quite straightforward, but care had to be taken to ensure that the side meant to mount the cooler’s fans didn’t face the DIMM slots on the Z9PE-D8 WS.

The fans on the R17 operate between 1000 and 2500 rpm, and consume between 0.96 W and 3 W across that range. Noise levels are respectable, ranging from 17 dBA to 32 dBA. The R17 is rated to cool CPUs with TDPs of up to 160 W; the 60 W E5-2630Ls were kept between 45°C and 55°C by the Dynatron R17s even under our full workloads.

Comments

  • Zarquan - Thursday, September 6, 2012 - link

    I might be missing something really obvious here .. but if the highest power consumption was 146.7 W (IOMeter 100% Seq 100% Reads [ 12 VMs ]), then why did you need an 850W power supply?

    Either the system is using a lot more than the 146.7 W you quoted in your power consumption figures, or the power supply is way over specified.
    http://www.anandtech.com/show/6241/building-the-20...
  • ganeshts - Thursday, September 6, 2012 - link

    This is not the only workload we plan to run on the machine.

    We were ready to put up with some inefficiency just to make sure we didn't have to open up the machine and put in a more powerful PSU down the road. The 850W PSU should serve the testbed well for future workloads which might be more stressful.
  • ydafff - Thursday, September 6, 2012 - link

    I’m VCP:5 / 4 and MCSE and MCITP:VA / EA
    This setup for 12 VMs is way overkill.
    The best fit for this test bed would be VMware vSphere Hypervisor (free ESXi) – much better memory, vCPU and storage management – or the free MS Hyper-V Server 2008 R2; the free Hyper-V 2008 server needs much less HD space and fewer compute resources.
    Regarding VM density, you could easily run all 12 VMs (1-2 GB of memory each) from a single Sandy Bridge-E CPU or an 1155 Xeon (i7) CPU with really good performance. For storage, 2x Intel 320 series 600GB SSDs in RAID 1 (you will need redundancy) with thin provisioning will do the trick.
  • ganeshts - Thursday, September 6, 2012 - link

    ydaff, Thanks for the inputs.

    We are working towards increasing the VM density in the current testbed itself. As another reader pointed out, 12 VMs were not enough to stress the Thecus N4800.

    I decided not to go with the free Hyper-V Server 2008 R2 because I needed to run some programs / scripts in the host OS, and the Z9PE-D8 WS had drivers specifically for Win Server 2008 R2.
  • eanazag - Thursday, September 6, 2012 - link

    Seems like a lot of people are talking about it being over the top. I agree with the route Anandtech took - they could have even gone farther. How far can they be pushed is my question. I want to see when they start smoking NAS's. The article and concept are great. I like to know how the site sets up its test scenarios and equipment. It lets me know if my use case is higher or lower and what the device being reviewed can do. I look at your testing methods to decide if your data is worth considering. I continue to be an avid reader here because of the effort put in. If you had one PC with one NIC, anyone in their house could test it like that. Why even write reviews about NAS's if that is how far you are going to test? Great job, Anandtech.

    I have some applications at work I would like to create repeatable tests for. An article about how to automate applications for testing would be helpful. I saw that we got a little in this article. I would also like to see more enterprise equipment being tested if you can swing it.
  • KingHerod - Friday, September 7, 2012 - link

    NAS devices are convenient and generally low-power, but it would be nice to see a comparison to some real metal with a real server OS like Server 2k8R2. Maybe a repurposed older computer with a couple drives mirrored and an actual, low end server with some SAS drives.
  • dbarth1409 - Friday, September 7, 2012 - link

    Ganesh,

    Good work. I'm looking forward to seeing some future test results.
  • dijuremo - Monday, September 10, 2012 - link

    This Asus motherboard is not truly ACPI compliant; ASUS knows it and they do not want to fix it. Their tech support has given stupid excuses in response to posts from users trying to run Windows 8 and Server 2012 on it.

    If you boot either Windows 8 or 2012 server RTM on it, it blue screens with error:
    0xA5: ACPI_BIOS_ERROR

    You just need to check the reviews at the egg to confirm.

    http://www.newegg.com/Product/Product.aspx?Item=N8...
  • ganeshts - Monday, September 10, 2012 - link

    Looks like Asus has updated support files for Windows 8.
  • VTArbyP - Monday, September 10, 2012 - link

    I wonder what would happen if you did use Linux for the host and VM OSes? I suppose that would become a test of Linux vs Windows! Heh.
    More seriously, why not add at least one VM of "the current popular distro" of Linux and a Mac OS X machine? Use them with NTFS drivers and / or reformat a NAS partition to native ext# and another to HFS+. Point being, how does the NAS react to mixed client loads that are not all SMB, as someone commented above. The other test this beast seems ideal for is comparisons of several non-local storage solutions - someone mentioned iSCSI, and I can imagine trying some types of SANs - might add an InfiniBand adapter - being of interest. The point of that would simply be to see what form of non-local storage was fastest, best value, easiest to maintain, etc., for us mortals who want to connect 6 - 12 machines. We being the folks who DON'T run LANs for a living and are not up to speed on what IT people already know.
