SATA Controller Performance

Both NVIDIA and Intel offer support for NCQ in their SATA controllers, and given our recently renewed interest in NCQ performance, we decided to find out whether there were any performance differences between the two SATA controllers. However, as we've found in the past, coming up with tests that stress NCQ is quite difficult. Luckily, there is a tool that works perfectly for controlling the type of disk accesses you want to test: Iometer.

An Intel-developed tool, Iometer allows you to control the size, randomness and frequency of disk accesses, among other things, and to measure performance using workloads generated according to those specifications. Given that NCQ truly optimizes performance when disk accesses are random in nature, we decided to look at how performance varied according to what percentage of the disk accesses were random. At the same time, we wanted the tests to be modeled on a multitasking desktop system, so we did some investigation by setting up a computer and running through some of our multitasking scenarios on it.
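To make those knobs concrete, here is a rough sketch of the kind of access specification we're talking about, written out in Python purely for illustration - the parameter names below are ours, not Iometer's actual configuration terms:

    # Illustrative only: the parameters an Iometer-style access specification exposes.
    # The key names are made up for clarity; Iometer's spec editor uses its own terms.
    access_spec = {
        "transfer_size_kb": 64,   # size of each request
        "percent_read": 75,       # read/write mix
        "percent_random": 50,     # random vs. sequential accesses
        "outstanding_ios": 8,     # number of IOs kept in flight (queue depth)
        "run_time_seconds": 120,  # how long to run the pattern
    }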

What we found is that on modern hard drives, the number of outstanding IOs (the IO queue depth) rarely climbs above 10 on even a moderately taxed system. Only when you approach extremely heavy multitasking loads (heavier than anything we've ever tested) do you break into queue depths beyond 32. So, we put together two scenarios, one with a queue depth of 8 and one with a queue depth of 32 - the latter being more of an extreme condition.
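For readers unfamiliar with the term, "outstanding IOs" simply means requests that have been issued to the drive but haven't completed yet. The following is a minimal Python sketch of how a fixed queue depth can be sustained against a file - it only approximates what Iometer's engine does, and the file name is a placeholder rather than part of our test setup:

    # Minimal sketch: keep QUEUE_DEPTH requests in flight against a file on a
    # POSIX system. Iometer's real engine is far more sophisticated; this only
    # illustrates the idea of a sustained IO queue depth.
    import os
    from concurrent.futures import ThreadPoolExecutor

    QUEUE_DEPTH = 8           # our moderate scenario; 32 for the extreme one
    REQUEST_SIZE = 64 * 1024  # 64KB requests
    TARGET = "testfile.bin"   # placeholder test file, assumed to exist and be large

    def issue_read(fd, offset):
        # One outstanding IO: a positioned read that returns when the drive answers.
        return os.pread(fd, REQUEST_SIZE, offset)

    fd = os.open(TARGET, os.O_RDONLY)
    with ThreadPoolExecutor(max_workers=QUEUE_DEPTH) as pool:
        # At most QUEUE_DEPTH of these reads execute concurrently, so the drive
        # sees (roughly) that many requests outstanding at any given time.
        futures = [pool.submit(issue_read, fd, i * REQUEST_SIZE) for i in range(256)]
        for f in futures:
            f.result()
    os.close(fd)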

In each scenario, we sent the drives a series of 64KB requests, 75% of which were reads and 25% writes - a split once again derived from monitoring our own desktop usage patterns.
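That request mix is simple enough to sketch in code. The snippet below is purely illustrative - it issues requests one at a time rather than keeping a queue of them in flight as above, and the test file is again a placeholder; the actual numbers in our charts come from Iometer itself:

    # Illustrative sketch of the access pattern: 64KB requests, 75% reads / 25%
    # writes, with PERCENT_RANDOM controlling how many requests hit random
    # offsets. Reports average IOs per second and average response time.
    import os, random, time

    REQUEST_SIZE = 64 * 1024     # 64KB requests
    PERCENT_READ = 75            # 75% reads, 25% writes
    PERCENT_RANDOM = 50          # swept from 0% to 100% in the actual tests
    NUM_REQUESTS = 2000
    TARGET = "testfile.bin"      # placeholder: a large, pre-created test file

    fd = os.open(TARGET, os.O_RDWR)
    blocks = os.fstat(fd).st_size // REQUEST_SIZE
    write_buf = b"\0" * REQUEST_SIZE

    latencies = []
    seq_block = 0
    start = time.perf_counter()
    for _ in range(NUM_REQUESTS):
        if random.randrange(100) < PERCENT_RANDOM:
            block = random.randrange(blocks)   # random access
        else:
            block = seq_block % blocks         # sequential access
            seq_block += 1
        offset = block * REQUEST_SIZE

        t0 = time.perf_counter()
        if random.randrange(100) < PERCENT_READ:
            os.pread(fd, REQUEST_SIZE, offset)   # read
        else:
            os.pwrite(fd, write_buf, offset)     # write
        latencies.append(time.perf_counter() - t0)

    elapsed = time.perf_counter() - start
    os.close(fd)

    print("Average IOs per second: %.1f" % (NUM_REQUESTS / elapsed))
    print("Average response time:  %.2f ms" % (1000 * sum(latencies) / len(latencies)))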

We then varied the randomness of the disk accesses from 0% (i.e. 100% sequential) up to 100% (0% sequential reads/writes). In theory, the stronger NCQ implementation will show better performance as the percentage of random accesses increases. We report both average IOs per second and average IO response time (how long accesses took, on average, to complete):

With a queue depth of 8, the two SATA controllers offer virtually identical performance.

Looking at latency, Intel actually offers a very slight performance advantage here - nothing huge, but it's definitely there.

The results get much more interesting as we increase the queue depth to 32:

Here, NVIDIA starts to pull away, offering close to a 20% increase in average IOs per second as the access patterns get more random (i.e. as more applications running at the same time start loading down the hard disk).

What's truly impressive, however, is the reduction in average response time - up to a 90ms decrease, thanks to NVIDIA's superior NCQ implementation.

But stepping back into reality, how big of a difference NVIDIA's NCQ implementation makes depends greatly on your usage patterns. Heavy multitaskers who are very IO-bound will notice a performance difference, while more casual multitaskers would be hard-pressed to find any difference. For example, Intel was actually faster than NVIDIA in the gaming multitasking scenarios from our dual-core investigation.

96 Comments

  • mkruer - Thursday, April 14, 2005 - link

    The only reason why Intel allowed Nvidia to make a chipset for them was for SLI. Intel is worried, and rightfully so, that Nvidia's SLI solution for AMD would give AMD an advantage.
  • Questar - Thursday, April 14, 2005 - link

    "Honestly, Intel processors and even the platform haven’t been interesting since the introduction of Prescott. They have been too hot and poor performers, not to mention that the latest Intel platforms forced a transition to technologies that basically offered no performance benefits (DDR2, PCI Express)."

    Your opinion only, don't make this out to be fact.

    "at the end of the day, Intel would still be happier if there was no threat from companies like NVIDIA"

    nVidia (please print it correctly) is not a "threat" to Intel in the chipset market. They couldn't make a P4 chipset without a license from Intel. If Intel were threatened by them, they wouldn't sell them a license. The purpose of licensing is to give system builders more choice in design features.

    "However Intel’s chipset team has reason to worry; motherboard manufacturers weren’t happy with the 925/915 chipsets, and with a viable alternative in NVIDIA, we may very well have an opportunity for NVIDIA to start eating into Intel’s own chipset market share in a way that no other company has in the past"

    Intel probably makes as much net profit off the licensing of the nVidia chipset as they do selling their own - after all, they don't have to design, build, ship or sell anything. So why would they be worried?

    Really Anand, you have to begin thinking these things through.
  • Houdani - Thursday, April 14, 2005 - link

    Grrr, I should have noted that I was referring to the NCQ testing.
  • Houdani - Thursday, April 14, 2005 - link

    Anand: For the Intel DC Preview, what would you say was the queue depth during the various multitasking tests? I'm curious how today's test compares with how you tested the Intel DC in the preview.

    Also, is the relation between a depth of 8 and a depth of 32 linear? Would there be any value in testing a depth somewhere in the middle, such as 16 and/or 24?

    Thanks yet again for the quality work!
  • ChineseDemocracyGNR - Thursday, April 14, 2005 - link

    "NVIDIA does not support Intel’s HD Audio spec, so you’re stuck with AC’97 on the nForce4 SLI. "

    That's inexcusable for an $80 chipset, IMO.
  • ksherman - Thursday, April 14, 2005 - link

    cool and all, but is there any variation between the Intel-based SLI and the AMD-based SLI?
