The Pegasus: Performance

A single 2TB Hitachi Deskstar 7K3000 is good for sequential transfer rates of up to ~150MB/s. With six in a RAID-5 configuration, we should be able to easily hit several Gbps in bandwidth to the Pegasus R6. The problem is, there's no single drive source that can come close to delivering that sort of bandwidth.
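As a back-of-envelope check (my own arithmetic, not Promise's spec sheet), a six-drive RAID-5 read can stripe across every spindle, so the ideal aggregate bandwidth approaches the per-drive rate times the drive count:

```python
# Rough ceiling for the six-drive array, using the ~150MB/s per-drive
# figure above. Real controllers fall well short of this ideal.
def raid5_sequential_read_ceiling(n_drives, per_drive_mb_s):
    # Parity is rotated across all members, so large reads touch every
    # spindle; sequential writes lose roughly one drive's worth to parity.
    return n_drives * per_drive_mb_s

mb_s = raid5_sequential_read_ceiling(6, 150)  # 900 MB/s
gbps = mb_s * 8 / 1000                        # 7.2 Gbps, i.e. "several Gbps"
```

Even discounting controller overhead, that ceiling sits comfortably above anything a single source drive can feed it.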

Apple sent over a 15-inch MacBook Pro with a 256GB Apple SSD. This was the first MacBook Pro I've ever tested with Apple's own SSD, so I was excited to give it a try. The model number implies a Toshiba controller, and I'll get to its performance characteristics in a separate article. But as a relatively modern 3Gbps SSD, this drive should be good for roughly 200MB/s. Copying a large video file from the SSD to the Pegasus R6 over Thunderbolt proved this to be true.

Apple's SSD maxed out at 224MB/s to the Thunderbolt array, likely the peak sequential read speed from the SSD itself. Average performance was around 209MB/s.

That's a peak of nearly 1.8Gbps, leaving 8.2Gbps of headroom on the upstream Thunderbolt channel. I needed another option.
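The unit conversion is simple enough to show (decimal megabytes assumed throughout):

```python
# 1 MB/s = 8 Mbps in decimal units, so the 224MB/s peak works out to
# roughly 1.8Gbps, leaving ~8.2Gbps of the 10Gbps channel unused.
def mb_s_to_gbps(mb_s):
    return mb_s * 8 / 1000

peak = mb_s_to_gbps(224)   # 1.792 Gbps
headroom = 10 - peak       # 8.208 Gbps
```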

Without a second Thunderbolt source that could feed the array at closer to the interface's maximum speed, I had to generate data instead. I turned to Iometer to perform 2MB sequential accesses across the first 1TB of the Pegasus R6's RAID-5 array. I ran the test for 5 minutes; the results are below:

Promise Pegasus R6 12TB (10TB RAID-5) Performance
                              Sequential Read   Sequential Write   4KB Random Read (QD16)   4KB Random Write (QD16)
Promise Pegasus R6 (RAID-5)   673.7 MB/s        683.9 MB/s         1.24 MB/s                0.98 MB/s

The best performance I saw was 683.9MB/s from our sequential write test, or 5471Mbps. Note that I played with higher queue depths but couldn't get beyond these numbers on the stock configuration. Obviously these are hard drives, so random performance is pretty disappointing.
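To put the random numbers in perspective, converting throughput to operations per second (my own arithmetic, decimal units) shows just how seek-bound the array is:

```python
# 4KB random throughput expressed as IOPS: even six spindles at QD16
# only manage a few hundred operations per second.
def iops(mb_s, block_kb=4):
    return mb_s * 1000 / block_kb

read_iops = iops(1.24)    # ~310 IOPS
write_iops = iops(0.98)   # ~245 IOPS
```

Compare that to the sequential tests, which at 2MB per access complete hundreds of much larger transfers per second without ever moving the heads far.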

That's best-case sequential performance; what about worst case? To find out, I wrote a single 10TB file across the entire RAID-5 array, then had Iometer measure read/write performance to the portion of that file in the last 1TB of the array's capacity:

Promise Pegasus R6 12TB (10TB RAID-5) Performance
                              Sequential Read (Beginning)   Sequential Write (Beginning)   Sequential Read (End)   Sequential Write (End)
Promise Pegasus R6 (RAID-5)   673.7 MB/s                    683.9 MB/s                     422.7 MB/s              463.0 MB/s

Minimum sequential read performance dropped to 422.7MB/s, or about 3.4Gbps. This is of course the downside to any platter-based storage array: performance on the outer tracks is much better than on the inner tracks, so the more you've written to the drive, the slower subsequent transfers will be.
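That falloff follows directly from the geometry. The platters spin at constant RPM, so the data rate under the head scales roughly with track radius (a simplified model that ignores the discrete per-zone density steps of real drives):

```python
# Constant angular velocity means throughput scales ~linearly with
# track radius in a zoned-recording drive.
def zone_rate(outer_rate_mb_s, radius_fraction):
    # radius_fraction: 1.0 at the outermost zone, smaller toward the hub
    return outer_rate_mb_s * radius_fraction

# The measured drop from 673.7 to 422.7 MB/s puts the array's last
# terabyte at roughly 63% of the effective outer-zone data rate:
fraction = 422.7 / 673.7   # ~0.63
```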

At over 5Gbps we're getting decent performance, but I still wanted to see how far I could push the interface. I deleted the RAID-5 array, created a 12TB RAID-0 array, and ran the same tests as above:

Promise Pegasus R6 12TB Performance
                              Sequential Read   Sequential Write   4KB Random Read (QD16)   4KB Random Write (QD16)
Promise Pegasus R6 (RAID-5)   673.7 MB/s        683.9 MB/s         1.24 MB/s                0.98 MB/s
Promise Pegasus R6 (RAID-0)   782.2 MB/s        757.8 MB/s         1.27 MB/s                5.86 MB/s

Sequential read performance jumped to 782.2MB/s, or 6257Mbps. We're now operating at just over 60% of the peak theoretical bandwidth of a single upstream Thunderbolt channel. For an HDD-based array, this is likely the best we'll get.
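Expressed as a fraction of the 10Gbps upstream channel (simple arithmetic on the table's figures):

```python
# Link utilization for each array's best sequential read result.
def link_utilization(mb_s, link_gbps=10):
    return mb_s * 8 / 1000 / link_gbps

raid5_util = link_utilization(673.7)  # ~54%
raid0_util = link_utilization(782.2)  # ~63%, the "just over 60%" above
```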

To see how far we could push things, I pulled out all six drives and swapped in four SF-2281 based SSDs. To really test the limits of the interface I created a 4-drive RAID-0 array sized at only 25GB. This would keep drive performance as high as possible and reduce the time required to fill and test the drives.

Unlike the hard drive based arrays, I had to take the queue depth up to 16 in order to get peak performance out of these SSDs. The chart below shows all of my performance attempts:

Promise Pegasus R6 Sequential Read Performance

With highly compressible data, I managed just over 1000MB/s (8021Mbps, to be exact) to the 4-drive SF-2281 Pegasus R6. Note that this isn't a shipping configuration, but it does show us the limits of the platform. I'm not entirely convinced that we're limited by Thunderbolt here either - it could very well be the Pegasus' internal controller that's holding performance back. Until we get some other Thunderbolt RAID devices in house it's impossible to tell, but at around 8Gbps this is clearly an interface that has legs.
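An aside on methodology: "queue depth" is simply the number of I/O requests kept in flight at once, and it's why the SSD array needed QD16 to reach its peak. Iometer manages this natively; the hypothetical sketch below mimics the idea with one worker thread per outstanding request (my own illustration, not how Iometer is implemented):

```python
import os
import threading

def read_at_queue_depth(path, queue_depth=16, block=4096, reads_per_worker=64):
    """Issue reads with `queue_depth` requests outstanding at once."""
    totals = [0] * queue_depth

    def worker(i):
        fd = os.open(path, os.O_RDONLY)
        try:
            for n in range(reads_per_worker):
                # Each worker strides through its own region of the file,
                # keeping queue_depth requests in flight overall.
                offset = (i * reads_per_worker + n) * block
                totals[i] += len(os.pread(fd, block, offset))
        finally:
            os.close(fd)

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(queue_depth)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(totals)  # total bytes read across all workers
```

A deeper queue gives an SSD's controller more parallel work to spread across its flash channels; spinning drives, bound by a single actuator, see far smaller gains, which matches what I saw on the stock configuration.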


89 Comments


  • enthios - Thursday, July 28, 2011 - link

    "Ironically isolated from the world around them?" How about wifi and iCloud? With limited storage capacity, there's no need for anything more. iOS devices are simply thin clients - and they work wonderfully as such.
  • NirXY - Friday, July 08, 2011 - link

    Which is $0.
  • Exelius - Friday, July 08, 2011 - link

    Yes, but in this instance, it's not just a dumb wire; there is an IC at each end of the cable. So theoretically, if Intel comes up with an upgraded Light Peak spec, there may well be cables that are capable of faster speeds. I also imagine longer cables become more difficult, and may in fact require fiber optic transceivers built into the cable. This cable also likely costs significantly more to manufacture than a crimped cable, since there's a tiny IC and micro soldering that needs to be done on each cable.

    But yes, in one sense you are right that in a digital bus, higher quality cables do not provide better performance (though I have run into very low-quality HDMI cables that work fine at 720p but refuse to carry a 1080p signal.)
  • repoman27 - Sunday, July 10, 2011 - link

    At the moment, I'm not sure the Thunderbolt host controller has any real legs on the cable. That host controller is already pumping data to each port at the same speeds as the DMI link between the CPU and PCH. Everything in the chain that is connected to it needs to get faster for Thunderbolt to get faster, i.e. PCIe 3.0, DMI 3.0, DP 1.2.

    Also, I’m pretty sure it’s not those little ICs that are making Apple’s Thunderbolt cables cost $49. Like many retailers, Apple knows that cables and accessories (and RAM upgrades) are a great place to stretch profit margins. A quick search of the Apple store for cables and adapters will turn up dozens of genuine Apple offerings ranging in price from $19 on up to $99. So, taken in context, the Thunderbolt cable is actually a mid-priced cable from Apple, not an expensive one. It also means that cheap unbranded alternatives could quite realistically be sold for around $15, ICs and all.

    Regarding your HDMI cable experience, Category 1 or “Standard” HDMI cables are only rated for 1080i60, whereas Category 2 or “High Speed” cables will do 1080p60, 4K, 3D, Deep Color, etc. Older cables weren’t marked as such, and thus YMMV.
  • snakeInTheGrass - Friday, July 15, 2011 - link

    Yeah, the SCSI cables were easily $75+, you needed terminators, and remember the subtle SCSI-1/2/3 connector differences? You needed adapters or cables with different ends depending on the devices. I still have probably what WAS $500 of cables in my closet.

    Inflation adjusted, these $50 cables are about the equivalent of $15 cables back in those days, so frankly they don't sound too bad, especially considering the fact that it's industry leading performance right now.

    As for comparing to Monster cables, these Thunderbolt cables have controllers built into them and presumably do have to meet tighter tolerances than USB, particularly as they carry 2 x 10Gbps data streams. But you're right that Monster cables are a rip off.
  • flowynn - Friday, July 08, 2011 - link

    I remember those days well. My need-for-speed SCSI habit was insanely expensive.
  • MonkeyPaw - Friday, July 08, 2011 - link

    It reminds me of FB-DIMMs: an expensive solution that uses additional energy, with components that cost more than the devices they replace. Controllers in the cables AND on the motherboard and peripherals? Maybe the optical solution will make more sense.
  • CrimsonFury - Monday, July 11, 2011 - link

    I thought the initial copper cables didn't need any controllers in them? The impression I got from earlier articles was that Intel said their Thunderbolt implementation could scale to optical in the future for greater speeds, by releasing optical cables with a copper-to-optical controller in each end of the cable once controller costs had come down from mass production.
  • MobiusStrip - Tuesday, July 12, 2011 - link

    No, they need optical in the ports and wires.

    By reneging on the light in "Light Peak", Intel effectively killed it. Who knows why they're playing dumb in pretending that they're going to get the industry to adopt Thunderbolt and then turn around and adopt an optical solution right afterward. Just idiotic.
  • André - Friday, July 08, 2011 - link

    I find it especially funny considering that all current Thunderbolt solutions (A/V equipment or storage enclosures) are in excess of $999 to begin with.
