Many thanks to...

We must thank the following companies for kindly donating hardware for our test bed:

OCZ for donating the Power Supply and USB testing SSD
Micron for donating our SATA testing SSD
Kingston for donating our ECC Memory
ASUS for donating AMD GPUs
ECS for donating NVIDIA GPUs

Test Setup

Processor: 2x Intel Xeon E5-2690 (8 cores / 16 threads each, 2.9 GHz, 3.8 GHz Turbo)
Motherboard: Gigabyte GA-7PESH1
Cooling: Intel AIO liquid cooler / Corsair H100
Power Supply: OCZ 1250W Gold ZX Series
Memory: Kingston 1600 C11 ECC 8x4GB kit
Memory Settings: 1600 C11
Video Cards: ASUS HD7970 3GB / ECS GTX 580 1536MB
Video Drivers: Catalyst 12.3 / NVIDIA 296.10 WHQL
Hard Drive: Corsair Force GT 60GB (CSSD-F60GBGT-BK)
Optical Drive: LG GH22NS50
Case: DimasTech V2.5 Easy open test bed
Operating System: Windows 7 64-bit
SATA Testing: Micron RealSSD C300 256GB
USB 2/3 Testing: OCZ Vertex 3 240GB with SATA-to-USB adaptor

Power Consumption

Power consumption was tested on the system as a whole with a wall meter connected to the OCZ 1250W power supply, with a single 7970 GPU installed.  This power supply is Gold rated; as I am in the UK on a 230-240 V supply, that works out to roughly 75% efficiency below 50 W load and over 90% efficiency at 250 W, which covers both idle and multi-GPU loading.  Reading power at the wall lets us compare how the UEFI and the board manage power delivery to components under load, and it includes typical PSU losses due to efficiency.  These are the real-world values that consumers can expect from a typical system (minus the monitor) using this motherboard.

Power Consumption - One 7970 @ 1250W Gold

Using two E5-2690 processors means a combined TDP of 270W (135W each).  If we make the broad assumption that the two processors together draw that 270W under load, this places the rest of the system at around 110-130W, which is consistent with our idle numbers once PSU efficiency is taken into account.
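
As a back-of-the-envelope sketch of that arithmetic: the wall reading below is a hypothetical placeholder rather than one of our measured values, and the efficiency figure is the approximate Gold-rating number quoted above.

    #include <cstdio>

    int main() {
        const double wall_watts = 450.0;      // hypothetical wall reading under CPU load
        const double psu_efficiency = 0.90;   // ~90% at this load for a Gold unit on 230-240 V
        const double cpu_tdp = 2 * 135.0;     // two Xeon E5-2690s, 135 W TDP each

        // DC power actually delivered to the system by the PSU.
        const double dc_watts = wall_watts * psu_efficiency;
        // Broad assumption: the CPUs draw their full combined TDP under load,
        // leaving the remainder for the board, memory, drives and idle GPU.
        const double rest_of_system = dc_watts - cpu_tdp;

        std::printf("DC power: %.0f W, non-CPU draw: %.0f W\n", dc_watts, rest_of_system);
        return 0;
    }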

POST Time

Different motherboards run different POST sequences before the operating system starts to load.  POST time depends heavily on the board itself, and is largely determined by the controllers on board (and the order in which those extras are initialized).  As part of our testing, we now measure POST time: the time from pressing the power button to the moment Windows starts loading. (We exclude Windows loading itself, as it is highly variable given Windows-specific features.)  These results are subject to human error, so please allow +/- 1 second.

POST (Power-On Self-Test) Time

The boot time on this motherboard is far longer than anything I have previously experienced.  First, when the power supply is switched on, there is a 30-second wait (indicated by a solid green light that changes to a flashing green light) before the motherboard can be powered on; this delay allows the management software to activate.  Then, after pressing the power switch, it takes around 60 seconds before anything appears on screen.  Due to the Intel NICs, the LSI SAS RAID chips and other onboard controllers, there is another 53 seconds before the OS actually starts loading.  In total, that is about a 2.5-minute wait from enabling power at the wall to a finished POST.  Stripping the BIOS by disabling the extra controllers gives a sizeable boost, cutting POST time by 35 seconds.
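
Tallying the phase timings measured above (the stripped-BIOS figure is simply the 35-second saving applied to the full total):

    #include <cstdio>

    int main() {
        const int psu_to_ready    = 30;  // wall power on -> board ready (management software)
        const int power_to_video  = 60;  // power button -> first video output
        const int controllers     = 53;  // Intel NICs, LSI SAS RAID and other option ROMs
        const int stripped_saving = 35;  // from disabling the extra controllers

        const int total = psu_to_ready + power_to_video + controllers;
        std::printf("Full POST: %d s (~%.1f minutes)\n", total, total / 60.0);
        std::printf("Stripped BIOS: %d s\n", total - stripped_saving);
        return 0;
    }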

Comments

  • Hulk - Saturday, January 5, 2013 - link

    I had no idea you were so adept with mathematics. "Consider a point in space..." Reading this brought me back to Finite Element Analysis in college! I am very impressed. Being an ME, I would have preferred some flow models using the Navier-Stokes equations, but hey, I like chemistry as well.
  • IanCutress - Saturday, January 5, 2013 - link

    I never did any FEM so wouldn't know where to start. The next angle of testing would have been using a C++ AMP Fluid Dynamics Simulation and adjusting the code from the SDK example like with the n-Body testing. If there is enough interest, I could spend a few days organising it for the normal motherboard reviews :)

    Ian
  • mayankleoboy1 - Saturday, January 5, 2013 - link

    How the frick did you get the i7-3770K to *5.4GHZ* ? :shock:
  • IanCutress - Saturday, January 5, 2013 - link

    A few members of the Overclock.net HWBot team helped with testing by running my benchmark while they were using DICE/LN2/Phase Change for overclocking contests (i.e. not 24/7 runs). The i7-3770K will go over 7 GHz if (a) you get a good chip, (b) cool it down enough, and (c) know what you are doing. If you're interested in competitive overclocking, head over to HWBot, Xtreme Systems or Overclock.net - there are plenty of people with info to help you get started.

    Ian
  • JlHADJOE - Tuesday, January 8, 2013 - link

    The incredible performance of those overclocked Ivy Bridge systems really hammers home the importance of raw IPC. You can spend a lot of time optimizing code, but IPC is free speed when it's available.
  • jd_tiger - Saturday, January 5, 2013 - link

    http://www.youtube.com/watch?v=Ccoj5lhLmSQ
  • smonsees - Saturday, January 5, 2013 - link

    You might try modifying your algorithm to pin the data to a specific core (therefore cache) to keep the thrashing as low as possible. Google "processor affinity c++". I will admit this adds complexity to your straightforward algorithm. In C#, I would use a parallel loop with a range partition to do it as a starting point: http://msdn.microsoft.com/en-us/library/dd560853.a...
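
    In C++ on Linux, the core-pinning idea looks roughly like this (a minimal sketch; pthread_setaffinity_np is a GNU extension, and the per-thread work is left hypothetical):

        #include <pthread.h>   // pthread_setaffinity_np (GNU extension)
        #include <sched.h>     // cpu_set_t, CPU_ZERO, CPU_SET
        #include <cstdio>
        #include <thread>
        #include <vector>

        // Pin the calling thread to one core so its working set stays in
        // that core's cache instead of being migrated by the scheduler.
        static int pin_to_core(unsigned core) {
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET(core, &set);
            return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        }

        int main() {
            const unsigned n = std::thread::hardware_concurrency();
            std::vector<std::thread> workers;
            for (unsigned core = 0; core < n; ++core) {
                workers.emplace_back([core] {
                    if (pin_to_core(core) != 0)
                        std::fprintf(stderr, "failed to pin core %u\n", core);
                    // Hypothetical work: process only the slice of the data
                    // owned by this core, keeping it cache-resident.
                });
            }
            for (auto &t : workers) t.join();
        }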
  • nickgully - Saturday, January 5, 2013 - link

    Mr. Cutress,
    Do you think, with all the virtualized CPUs available, researchers will still build their own systems, as something concrete to put into a grant application, versus the power-by-the-hour of cloud computing?

    Thanks.
  • IanCutress - Saturday, January 5, 2013 - link

    We examined both scenarios. Our university had cluster time to buy, and there is always the Amazon cloud. By our calculation, a 16-thread machine from Dell paid for itself in under six months of continuous running, would not require a large adjustment in the way people were currently coding (i.e. staying in Windows rather than moving to Linux), and could also be passed down the research group when newer hardware is released.

    If you are using production-level code and manipulating it each time to get results, and you can guarantee the results will be good each time, then power-by-the-hour could work. As we were constantly writing and testing new code for different scenarios, building our own workstation won out. Having your own system also helps with GPU codes: if you want a better GPU, it is easier to swap the card out than to rely on a cloud computing upgrade.

    Ian
  • jtv - Sunday, January 6, 2013 - link

    One big consideration is who the researchers are. I work in x-ray spectroscopy (as a computational theorist). Experimentalists in this field use some of our codes without wanting to bother with big computational resources. We have looked at providing some of our codes through a cloud-based service so that they can be used on demand.

    Otherwise I would agree with Ian's reply. When I'm improving code, debugging code, or trying to implement new theoretical approaches I absolutely want my own hardware to do it on.
