Mac Benchmarks: QuickBench, AJA & Photoshop Installation

Since the XP941 is currently only bootable in Macs, I decided to run some benchmarks with the drive inside a Mac Pro. The specs of the system are as follows:

Test Setup
Model: Mac Pro 4,1 (Early 2009)
Processor: Intel Xeon W3520 (2.66/2.93GHz, 4 cores/8 threads, 8MB L3)
Graphics: NVIDIA GeForce GT 120 512MB GDDR3
RAM: 12GB (2x4GB + 2x2GB) DDR3-1066 ECC
OS: OS X 10.9.2

We would like to thank RamCity for providing the Mac Pro, which allowed us to run these tests and confirm boot support.

I installed OS X 10.9.2 on all drives, and each was the boot drive when benchmarked, just as it would be for most end users. As I mentioned on page one, RamCity actually sent us two 512GB XP941s, so I also put them in a RAID 0 configuration. A Mac can easily boot from a software RAID 0 array, so all I had to do was create the array in Disk Utility and select it as the boot volume. I placed the drives in PCIe slots 2 and 4 to ensure that both were getting full PCIe bandwidth and we wouldn't run into any bottlenecks there. I picked Intel's 480GB SSD 730 as the comparison point because it was lying on my table and is among the fastest SATA 6Gbps SSDs on the market. Note that the 2009 Mac Pro only supports SATA 3Gbps, so there's obviously some performance penalty from that, as the benchmarks show.
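For reference, the same striped boot volume can also be created from the command line. Below is a minimal sketch using Python to drive `diskutil appleRAID`; the member identifiers disk1 and disk2 are assumptions, so check `diskutil list` for yours first:

```python
import subprocess

# Assumed identifiers for the two XP941s; verify with `diskutil list` first.
members = ["disk1", "disk2"]

# "stripe" = RAID 0. This creates a journaled HFS+ RAID set named XP941RAID,
# equivalent to the Disk Utility steps described above.
subprocess.run(
    ["diskutil", "appleRAID", "create", "stripe", "XP941RAID", "JHFS+", *members],
    check=True,
)
```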

QuickBench

QuickBench is one of the more sophisticated drive benchmark tools for OS X. It's shareware and retails for $15, but compared to the freeware tools available, it's worth it. While QuickBench lacks the option to increase queue depth, it supports transfer sizes from 4KB up to 100MB (or more through a custom test). For this test I ran the standard test, where the IO sizes range from 4KB to 1MB. Additionally, I ran the extended test, which focuses on very large IOs (20-100MB) in order to get the maximum performance out of the drives. In both cases the tests ran for 10 cycles to ensure sustained results.
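To make the transfer-size scaling concrete, here is a minimal Python sketch of the same idea: time sequential reads of a pre-made test file (the name testfile.bin is an assumption) at increasing block sizes. This is not how QuickBench itself is implemented; the Darwin-specific F_NOCACHE fcntl is used so OS X's unified buffer cache doesn't serve the reads out of RAM:

```python
import fcntl
import os
import time

F_NOCACHE = 48  # from <sys/fcntl.h> on Darwin; bypasses the buffer cache

def sequential_read_mbps(path, block_size, total_bytes=256 * 1024 * 1024):
    """Read `total_bytes` from `path` sequentially at `block_size` and
    return throughput in MB/s, rewinding if the file is smaller."""
    fd = os.open(path, os.O_RDONLY)
    fcntl.fcntl(fd, F_NOCACHE, 1)
    done = 0
    start = time.perf_counter()
    while done < total_bytes:
        chunk = os.read(fd, block_size)
        if not chunk:                      # hit EOF: rewind and keep reading
            os.lseek(fd, 0, os.SEEK_SET)
            continue
        done += len(chunk)
    elapsed = time.perf_counter() - start
    os.close(fd)
    return done / elapsed / 1e6

for size in (4, 32, 128, 1024):            # KB, roughly the standard test's range
    print(f"{size}KB: {sequential_read_mbps('testfile.bin', size * 1024):.0f} MB/s")
```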

QuickBench - 4KB Random Read

QuickBench - 4KB Random Write

The random results don't reveal anything particularly interesting. The RAID 0 array is slightly slower due to the overhead of the software RAID configuration, but overall the results make sense when compared with our Iometer scores. Bear in mind that QuickBench only uses a queue depth of 1, whereas our Iometer tests are run at a queue depth of 3, hence the difference, which is roughly proportional to the queue depth.
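Queue depth here simply means the number of IOs outstanding at once. As a rough illustration (this is not how Iometer works internally), the sketch below emulates a given queue depth with one synchronous reader thread per outstanding IO; on a fast SSD, 4KB random read IOPS scale close to linearly at these low depths. The test file name is again an assumption:

```python
import os
import random
import threading
import time

def random_read_iops(path, queue_depth, block=4096, duration=3.0):
    """Emulate `queue_depth` outstanding 4KB random reads with threads and
    return aggregate IOPS. Cache effects are ignored; this is a sketch."""
    blocks = os.path.getsize(path) // block
    counts = []

    def worker():
        fd = os.open(path, os.O_RDONLY)
        n, deadline = 0, time.perf_counter() + duration
        while time.perf_counter() < deadline:
            os.pread(fd, block, random.randrange(blocks) * block)
            n += 1
        os.close(fd)
        counts.append(n)

    threads = [threading.Thread(target=worker) for _ in range(queue_depth)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(counts) / duration

print(random_read_iops("testfile.bin", 1))   # ~QD1, what QuickBench does
print(random_read_iops("testfile.bin", 3))   # ~QD3, what our Iometer runs use
```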

QuickBench - 128KB Sequential Read

QuickBench - 128KB Sequential Write

The sequential tests show the XP941 to be slightly slower in the Mac Pro than its sequential performance in Iometer. In this case both tests are at a queue depth of 1 and should thus be comparable, but it's certainly possible that some other differences cause the slightly lower performance. Either way, we are still looking at much, much higher performance than any drive could provide over the Mac Pro's native SATA 3Gbps interface.

QuickBench - 90MB Sequential Read

QuickBench - 20MB Sequential Write

Since QuickBench doesn't allow increasing the queue depth, the only way to increase performance is to scale the transfer size. QuickBench's preset tests allow IO sizes of up to 100MB, so I ran the preset that tests from 20MB to 100MB and picked the highest performing IO sizes, which in this case were 90MB for reads and 20MB for writes. There wasn't all that much variation, but these were the highest performing IO sizes for all three configurations.

Now the XP941, and especially the RAID 0 array, shows its teeth. With two XP941s in RAID 0, I was able to reach a throughput of nearly 2.5GB/s (!), and half of that with a single drive. Compared to the SSD 730 on the SATA 3Gbps bus, you are getting over four times the performance, and to match the XP941 RAID 0 you would need at least ten SSDs in a SATA 3Gbps RAID 0 configuration.
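The "at least ten SSDs" figure is just arithmetic, assuming roughly 250MB/s as the real-world ceiling for a good SSD on SATA 3Gbps:

```python
raid0_xp941 = 2.5e9      # B/s, measured for two XP941s in RAID 0 above
sata2_ssd = 0.25e9       # B/s, assumed real-world ceiling on SATA 3Gbps
print(raid0_xp941 / sata2_ssd)   # 10.0 drives to match, before RAID overhead
```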

AJA System Test

In addition to QuickBench, I decided to run the AJA System Test, as it's a freeware tool that is quite widely used to test disk performance. It's mainly designed to test video throughput, but as the results are reported in megabytes per second, it works for general IO testing as well. I set the settings to the maximum (4096x2160 10-bit RGB, 16GB file size) to produce the results below.
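For context on what those settings mean in terms of raw data rate, here is a quick back-of-the-envelope calculation, assuming tightly packed 10-bit RGB at 30 bits per pixel (AJA's actual on-disk format may pad differently):

```python
width, height = 4096, 2160
bits_per_pixel = 3 * 10                        # 10-bit RGB
frame_bytes = width * height * bits_per_pixel / 8

print(f"{frame_bytes / 1e6:.1f} MB per frame")         # ~33.2 MB
print(f"{1.2e9 / frame_bytes:.0f} fps at 1.2 GB/s")    # ~36 fps of 4K video
```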

AJA System Test - Read Speed

AJA System Test - Write Speed

The results are fairly similar to the QuickBench ones, though performance seems to be slightly lower. This is likely due to differences in the data the two tools use for testing, but the speeds are still well over 1GB/s for a single drive and 2GB/s for the RAID 0 array.

Adobe Photoshop CS6 Installation

One of the most common criticisms I hear of our tests is that we don't run any real world tests. I've been playing around with real world testing a lot lately in order to build a suite of benchmarks that meet our criteria, but for this review I decided to run a quick installation benchmark to see what kind of differences can be expected in the real world. I grabbed the latest Photoshop CS6 trial from Adobe's website and installed it on all three drives while measuring the time with a stopwatch.
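A stopwatch works, but for anyone reproducing this, the timing is easy to script so that human reaction time drops out of the measurement. A trivial wrapper (point it at whatever command launches your copy of the installer):

```python
import subprocess
import sys
import time

# Usage: python timecmd.py <command> [args...]
start = time.perf_counter()
subprocess.run(sys.argv[1:], check=True)
print(f"elapsed: {time.perf_counter() - start:.1f} s")
```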

Photoshop CS6 Installation

Obviously the gains are much smaller in typical real world applications. That's because other bottlenecks come into play that are absent when testing IO performance alone. Still, the extra performance is always appreciated, especially in IO heavy workloads, even if the gains aren't as substantial as the synthetic benchmarks suggest.

Comments

  • McTeags - Thursday, May 15, 2014

    I think there is a spelling mistake in the first sentence. Did you mean SATA instead of PATA? I don't know all of the tech lingo so maybe I'm mistaken.
  • McTeags - Thursday, May 15, 2014

    Please disregard my comment. I googled it...
  • BMNify - Thursday, May 15, 2014

    SATA Express [serial], SATA [serial], PATA [parallel], SCSI [several variants, chainable to 15+ drives on one cable; we should have used that as the generic], Shugart: these are all drive interfaces, and there are more too, going back in the day...
  • metayoshi - Thursday, May 15, 2014

    "It's simply much faster to move electrons around a silicon chip than it is to rotate a heavy metal disk."

    While SSD performance blows HDDs out of the water, the quoted statement is technically not correct. If you take a single channel NAND part and put it up against today's mechanical HDDs, the HDD will probably blow the NAND part out of the water in everything except for random reads.

    What really kills HDD performance isn't just the rotational speed as much as it is the track-to-track seek + rotational latency of a random workload. A sequential workload reduces the seek and rotational latency so much that the areal density of today's 5 TB HDDs will give you pretty good numbers. In a random workload, however, the next block of data you want to read is most likely on a different track, different platter, and different head. Now it has to seek the heads to the correct track, perform a head switch because only one head can be active at a time, and then wait for the disk to rotate until that data block is under the head.

    A NAND part with a low number of channels will give you pretty crappy performance. Just look at the NAND in smartphones and tablets of today, and in the SD cards and USB thumb drives of yesteryear. What really makes SSDs shine is that they have multiple NAND parts on these things, and that they stripe the data across a huge number of channels. Just think RAID 0 with HDDs, except this time, it's done by the SSD controller itself, so the motherboard only needs 1 SATA (or other like PCIe) interface to the SSD. That really put SSDs on the map, and if a single NAND chip can do 40 MB/s writes, think about 16 of them doing it at the same time.

    So yes, there's no question that the main advantage of SSDs vs HDDs is an electrical vs mechanical thing. It's just simply not true that reading the electrical signals off of a single NAND part is faster than reading the bits off of a sequential track in an HDD. It's a lot of different things working together.
  • iwod - Friday, May 16, 2014

    I skim read it. A few things I noticed: no power usage testing, but 0.05W idle is pretty amazing. Since the PCIe slot supplies the power as well, I guess the power states could be much better fine grained? Although active was 5.6W. So at the same time we want more performance == a faster controller while using much lower power. It seems there could be more work to do.

    I wonder if the relatively slow random I/O is due to Samsung betting its use on NVMe instead of AHCI.
  • iwod - Friday, May 16, 2014

    It also proves my point about random I/O. We see the random I/O of the XP941 sitting at the bottom of the chart while it gets much better benchmark results overall. Seq I/O matters! And it matters a lot. The PCIe x4 interface will once again become the bottleneck until we move to PCIe 3.0, which I hope we do in 2015.
    Although I have this suspicious feeling Intel is delaying or slowing down our progression.
  • nevertell - Friday, May 16, 2014

    Can't you place the bootloader on a hard drive, yet have it load the OS from the SSD?
  • rxzlmn - Friday, May 16, 2014

    'Boot Support: Mac? Yes. PC? Mostly No.'

    Uh, a Mac is a PC. On a serious tech site I don't expect lines like that.
  • Penti - Friday, May 16, 2014

    Firmware differences.
  • Haravikk - Friday, May 16, 2014

    It still surprises me that PCs can have so many hurdles when it comes to booting from various devices; for years now Macs have been able to boot from just about anything you plug into them (that can store data of course). I have one machine already that uses an internal drive combined with an external USB drive as a Fusion Drive, and it not only boots just fine, but the Fusion setup really helps eliminate the USB performance issues.

    Anyway, it's good to see PCIe storage properly reaching general release; it's probably going to be a while before I adopt it on PCs, as I'm still finding regular SATA or M.2 flash storage runs just fine for my needs, but having tried Apple's new Mac Pros, the PCIe flash really is awesome. Hopefully the next generation of Mac Pros will have connectors for two, as combined in a RAID-0 or RAID-1 the read performance can be absolutely staggering.
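As a footnote to metayoshi's comment above, the seek-plus-rotation math works out roughly as follows, with assumed typical desktop HDD figures:

```python
rpm = 7200
avg_rotation_ms = 0.5 * 60_000 / rpm   # half a revolution on average: ~4.2 ms
avg_seek_ms = 8.5                      # assumed typical desktop HDD seek time

per_io_ms = avg_seek_ms + avg_rotation_ms
print(f"{per_io_ms:.1f} ms per random read")               # ~12.7 ms
print(f"{1000 / per_io_ms:.0f} IOPS")                      # ~79 random IOPS
print(f"{16 * 40} MB/s from 16 NAND channels at 40 MB/s")  # the striping point
```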
