USB Speed

For this benchmark, we run CrystalDiskMark to determine the peak sequential read and write speeds of the USB port, using our 240 GB OCZ Vertex3 SSD with a SATA 6 Gbps to USB 3.0 converter.  We then transfer a fixed set of files from the SSD to the USB drive using DiskBench, which records the time taken for the transfer.  The set is 1.52 GB across 2867 files in 320 folders – 95% of the files are small, typical website files, while the rest (90% of the total size) are the videos used in the Sorenson Squeeze test.

[Charts: USB 2.0 Sequential Read Speed; USB 2.0 Sequential Write Speed; USB 2.0 Copy Test]

USB speed is dictated by the chipset and its BIOS implementation, and the GA-7PESH1's performance is comparable to our Z77/X79 results.

DPC Latency

Deferred Procedure Calls (DPCs) are part of the way Windows handles interrupt servicing.  Rather than having the processor acknowledge every request immediately, the system queues interrupt requests by priority: critical interrupts are handled as soon as possible, whereas lower-priority requests, such as audio, sit further down the line.  So if the audio device requires data, it has to wait for its request to be processed before the buffer is filled.  Poorly implemented drivers for higher-priority components can delay request scheduling and processing, leaving the audio buffer empty – this leads to the characteristic audible pauses, pops and clicks.  A bigger buffer and correctly implemented system drivers obviously help in this regard.  The DPC Latency Checker measures how much time is spent processing DPCs from driver invocation – lower values mean better audio transfer at smaller buffer sizes.  Results are measured in microseconds and taken as the peak latency while cycling through a series of short HD videos; under 500 microseconds usually gets the green light, but the lower the better.
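
To make the buffering argument concrete, below is a minimal toy simulation – not Windows internals, and every rate in it (a 48 kHz stream, a 1 ms service interval, an occasional 1.5 ms latency spike) is an invented illustrative number – counting how often a small versus a large buffer runs dry:

```cpp
// Toy model: an audio buffer drains at a fixed rate and is refilled each
// time the "driver" is serviced. If a latency spike delays servicing for
// longer than the buffer can cover, the stream underruns (an audible pop).
#include <cstdio>
#include <initializer_list>

int main() {
    const double sample_rate_hz = 48000.0; // audio consumption rate
    const double service_us     = 1000.0;  // nominal refill interval (1 ms)
    const double spike_us       = 1500.0;  // occasional extra DPC delay
    for (double buffer_frames : {48.0, 480.0}) { // ~1 ms vs ~10 ms of audio
        const double headroom_us = buffer_frames / sample_rate_hz * 1e6;
        int underruns = 0;
        for (int tick = 0; tick < 1000; ++tick) {
            // every 100th service hits a latency spike
            const double gap_us = service_us + (tick % 100 == 0 ? spike_us : 0.0);
            if (gap_us > headroom_us) ++underruns; // buffer ran dry
        }
        std::printf("%5.0f-frame buffer (%7.1f us headroom): %d underruns\n",
                    buffer_frames, headroom_us, underruns);
    }
    return 0;
}
```

With these assumed numbers the 1 ms buffer underruns on every spike while the 10 ms buffer absorbs them all – the same reason a larger buffer and well-behaved drivers mask high DPC latency.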

[Chart: DPC Latency Maximum]

For a workstation motherboard, low DPC latency is critical when recording or analyzing time-sensitive information.  A peak of 110 microseconds is a great result for this board.

Comments

  • Hulk - Saturday, January 5, 2013 - link

    I had no idea you were so adept with mathematics. "Consider a point in space..." Reading this brought me back to Finite Element Analysis in college! I am very impressed. Being an ME I would have preferred some flow models using the Navier-Stokes equations, but hey, I like chemistry as well.
  • IanCutress - Saturday, January 5, 2013 - link

    I never did any FEM so wouldn't know where to start. The next angle of testing would have been using a C++ AMP Fluid Dynamics Simulation and adjusting the code from the SDK example like with the n-Body testing. If there is enough interest, I could spend a few days organising it for the normal motherboard reviews :)

    Ian
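
As a rough, hypothetical sketch of the C++ AMP pattern Ian describes – a trivial saxpy kernel stands in here for the SDK's fluid-dynamics example, and all names are invented – adjusting such code centres on the parallel_for_each kernel:

```cpp
// Minimal C++ AMP skeleton (MSVC): a placeholder saxpy kernel, not the
// actual fluid-dynamics SDK sample. Sizes and names are invented.
#include <amp.h>
#include <vector>
using namespace concurrency;

int main() {
    const int n = 1024;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);
    array_view<const float, 1> xv(n, x); // read-only view for the GPU
    array_view<float, 1> yv(n, y);       // read-write view
    parallel_for_each(yv.extent, [=](index<1> i) restrict(amp) {
        yv[i] += 2.0f * xv[i];           // the kernel you would swap out
    });
    yv.synchronize();                    // copy results back to y
    return 0;
}
```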
  • mayankleoboy1 - Saturday, January 5, 2013 - link

    How the frick did you get the i7-3770K to *5.4 GHz*? :shock:
  • IanCutress - Saturday, January 5, 2013 - link

    A few members of the Overclock.net HWBot team helped with testing by running my benchmark while they were using DICE/LN2/phase change for overclocking contests (i.e. not 24/7 runs). The i7-3770K will go over 7 GHz if (a) you get a good chip, (b) cool it down enough, and (c) know what you are doing. If you're interested in competitive overclocking, head over to HWBot, Xtreme Systems or Overclock.net - there are plenty of people with info to help you get started.

    Ian
  • JlHADJOE - Tuesday, January 8, 2013 - link

    The incredible performance of those overclocked Ivy Bridge systems here really hammers home the importance of raw IPC. You can spend a lot of time optimizing code, but IPC is free speed when it's available.
  • jd_tiger - Saturday, January 5, 2013 - link

    http://www.youtube.com/watch?v=Ccoj5lhLmSQ
  • smonsees - Saturday, January 5, 2013 - link

    You might try modifying your algorithm to pin the data to a specific core (therefore cache) to keep the thrashing as low as possible. Google "processor affinity c++". I will admit this adds complexity to your straightforward algorithm. In C#, I would use a parallel loop with a range partition to do it as a starting point: http://msdn.microsoft.com/en-us/library/dd560853.a...
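
A hedged sketch of the affinity idea smonsees describes, in C++ on Windows – the data layout and per-thread work are illustrative assumptions, with SetThreadAffinityMask as the relevant Win32 call:

```cpp
// Pin each worker thread to one core so its slice of the data stays in
// that core's cache (Windows-only; names here are illustrative).
#include <windows.h>
#include <thread>
#include <vector>

static void process_chunk(std::vector<double>& data, size_t begin, size_t end) {
    for (size_t i = begin; i < end; ++i)
        data[i] *= 2.0; // stand-in for the real per-point computation
}

int main() {
    const unsigned cores = std::thread::hardware_concurrency();
    std::vector<double> data(1 << 20, 1.0);
    std::vector<std::thread> pool;
    const size_t chunk = data.size() / cores;
    for (unsigned c = 0; c < cores; ++c) {
        pool.emplace_back([&, c] {
            // restrict this thread to core c before touching its chunk
            SetThreadAffinityMask(GetCurrentThread(), DWORD_PTR(1) << c);
            const size_t begin = c * chunk;
            const size_t end = (c + 1 == cores) ? data.size() : begin + chunk;
            process_chunk(data, begin, end);
        });
    }
    for (auto& t : pool) t.join();
    return 0;
}
```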
  • nickgully - Saturday, January 5, 2013 - link

    Mr. Cutress,
    Do you think, with all the virtualized CPUs available, researchers will still build their own systems, since it is something concrete to put into a grant application, versus the power-by-the-hour of cloud computing?

    Thanks.
  • IanCutress - Saturday, January 5, 2013 - link

    We examined both scenarios. Our university had cluster time available to buy, and there is always the Amazon cloud. By our calculation, a 16-thread machine from Dell paid for itself in under six months of continuous running, did not require a large adjustment in the way people were currently coding (i.e. staying in Windows rather than moving to Linux), and could be passed down the research group when newer hardware is released.

    If you are using production-level code and manipulating it each time to get results, and you can guarantee the results will be good each time, then power-by-the-hour could work. As we were constantly writing and testing new code for different scenarios, building your own workstation won out. Having your own system also helps when building GPU code – if you want a better GPU, it is easier to swap the card out yourself than to rely on a cloud provider upgrading.

    Ian
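
For a rough feel of that breakeven point, here is a back-of-the-envelope sketch – the workstation price and the hourly instance rate are purely illustrative assumptions, not figures from the comment above:

```cpp
// Back-of-the-envelope buy-vs-rent breakeven. Both prices are made-up
// illustrative assumptions, not figures from the discussion.
#include <cstdio>

int main() {
    const double workstation_usd  = 2800.0; // assumed 16-thread Dell build
    const double cloud_usd_per_hr = 0.68;   // assumed comparable instance rate
    const double breakeven_hr = workstation_usd / cloud_usd_per_hr;
    std::printf("breakeven after %.0f hours of 24/7 use (~%.1f months)\n",
                breakeven_hr, breakeven_hr / (24.0 * 30.4));
    return 0;
}
```

Under these assumed prices the machine pays for itself after roughly 4100 hours, about 5.6 months of continuous use, consistent with the "under six months" figure.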
  • jtv - Sunday, January 6, 2013 - link

    One big consideration is who the researchers are. I work in x-ray spectroscopy (as a computational theorist). Experimentalists in this field use some of our codes without wanting to bother with big computational resources, so we have looked at providing some of our codes through a cloud-based service so they can be used on demand.

    Otherwise I would agree with Ian's reply. When I'm improving code, debugging code, or trying to implement new theoretical approaches I absolutely want my own hardware to do it on.
