Enterprise SATA

So the question becomes: will SATA conquer the enterprise market with the SAS Trojan horse, killing off SCSI disks? Is there any reason to pay 4 times more for a SCSI-based disk that has barely one third of the capacity of a comparable SATA disk, just because the former is about twice as fast? It seems ridiculous to pay roughly 10 times more for the same capacity.
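To put that price gap in perspective, a quick back-of-the-envelope calculation helps; the prices and capacities in the sketch below are purely illustrative assumptions, not figures from this article.

```python
# Rough price-per-gigabyte comparison. The numbers below are illustrative
# assumptions: a SCSI disk costing 4x as much as a SATA disk that offers
# roughly 3x the capacity.
sata_price, sata_capacity_gb = 100, 300    # assumed SATA drive
scsi_price, scsi_capacity_gb = 400, 100    # assumed SCSI drive: 4x price, ~1/3 capacity

sata_per_gb = sata_price / sata_capacity_gb
scsi_per_gb = scsi_price / scsi_capacity_gb

print(f"SCSI costs ~{scsi_per_gb / sata_per_gb:.0f}x more per gigabyte")  # ~12x
```

Under those assumed numbers the gap works out to roughly an order of magnitude per gigabyte, which is where the "10 times" figure above comes from.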

Just like with servers, the Reliability, Availability and Serviceability (RAS) of enterprise disks must be better than that of desktop disks to keep the TCO under control. Enterprise disks are simply much more reliable. They use stiffer covers, highly rigid head assemblies, and more expensive, more reliable spindle motors combined with smart servo algorithms. But that is not all: the drive electronics of SCSI disks can and do perform far more data integrity checks.



The failure rate increases quickly as SATA drives are subjected to server workloads. Source: Seagate

The difference in reliability between typical SATA and real enterprise disks was demonstrated in a recent test by Seagate, which exposed three groups of 300 desktop drives to high-duty-cycle sequential and random workloads. On paper, enterprise disks list a similar or even slightly higher failure rate than desktop drives, but that does not mean the two are equally reliable: enterprise disks are rated against heavy-duty, highly random workloads, while desktop drives are rated against desktop workloads. Seagate's tests revealed that desktop drives failed twice as often under the sequential server tests as under normal desktop use, and when running random server or transactional workloads, the SATA drives failed four times as often![²] In other words, it is not wise to use SATA drives for transactional database environments; you need real SCSI/SAS enterprise disks, which are built for demanding server loads.

Even the so-called "Nearline" (Seagate) or "RAID Edition" (RE, Western Digital) SATA drives, which are designed to operate in enterprise storage racks and are more reliable than desktop disks, are not made for mission-critical, random transactional applications. Their MTBF (Mean Time Between Failures) is still at least 20% lower than that of typical enterprise disks, and they will show failure rates similar to desktop drives when subjected to highly random server workloads.
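To make that MTBF difference more tangible, here is a minimal sketch that converts an MTBF rating into an approximate annualized failure rate (AFR), assuming a constant failure rate; the 1.5 million and 1.2 million hour figures are illustrative assumptions, not specifications quoted in this article.

```python
# Approximate annualized failure rate (AFR) from MTBF, assuming a constant
# failure rate (exponential model): AFR = 1 - exp(-hours_per_year / MTBF).
import math

HOURS_PER_YEAR = 24 * 365

def afr(mtbf_hours: float) -> float:
    """Probability that a drive fails within one year of continuous operation."""
    return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

# Illustrative (assumed) MTBF ratings: a nearline SATA drive rated ~20% lower
# than an enterprise SCSI/SAS drive.
enterprise_mtbf = 1_500_000   # hours (assumption)
nearline_mtbf   = 1_200_000   # hours (assumption, ~20% lower)

print(f"Enterprise AFR: {afr(enterprise_mtbf):.2%}")  # ~0.58%
print(f"Nearline AFR:   {afr(nearline_mtbf):.2%}")    # ~0.73%
```

Even at these assumed ratings the absolute AFR looks small; the point of the paragraph above is that a nearline rating only holds for nearline workloads, not for highly random transactional ones.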

Also, current SATA drives on average experience an unrecoverable error once every 12.5 terabytes written or read (a UER of 1 in 10^14 bits). Thanks to their more sophisticated drive electronics, SAS/SCSI disks experience these kinds of errors 100 (!) times less often. These UER numbers may seem so small as to be completely negligible, but consider the situation where one of your hard drives fails in a RAID-5 or RAID-6 configuration. Rebuilding a RAID-5 array of five 200 GB SATA drives means reading 0.8 terabytes and writing 0.2 terabytes, 1 terabyte in total. That gives you a 1/12.5, or 8%, chance of hitting an unrecoverable error on this SATA array. A similar enterprise SCSI array would have only a 0.08% chance of one unrecoverable error. Clearly, an 8% chance of data loss is a pretty bad gamble for a mission-critical application.
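As a sanity check on that 8% figure, the sketch below redoes the arithmetic, assuming only the 1-in-10^14-bit unrecoverable error rate quoted above and the five-drive, 200 GB rebuild scenario.

```python
# Probability of hitting at least one unrecoverable error during a RAID-5
# rebuild, given a per-bit unrecoverable error rate (UER).
def rebuild_error_probability(uer_per_bit: float, terabytes_touched: float) -> float:
    bits = terabytes_touched * 1e12 * 8          # terabytes -> bits (decimal TB)
    return 1 - (1 - uer_per_bit) ** bits         # chance of at least one error

# Five 200 GB drives: the rebuild reads 4 x 200 GB and writes 200 GB = 1 TB total.
sata_uer = 1e-14          # 1 error per 10^14 bits (quoted for SATA above)
scsi_uer = 1e-16          # 100x better (quoted for enterprise SCSI/SAS above)

print(f"SATA rebuild: {rebuild_error_probability(sata_uer, 1.0):.1%}")   # ~7.7%, i.e. ~8%
print(f"SCSI rebuild: {rebuild_error_probability(scsi_uer, 1.0):.2%}")   # ~0.08%
```

With the same 1 TB rebuild and an error rate 100 times lower, the enterprise array lands at roughly 0.08%, matching the number above.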

Another good point that Seagate made in the same study concerns vibration. When many disk spindles and actuators perform very random I/O operations in a big storage rack, a fair amount of rotational vibration is the result. In the best case the actuator simply needs a bit more time to reach the right sector (higher seek time), but in the worst case the read operation has to be retried. Such a retry is only detected higher up, by the software driver, which means the performance of the disk suffers badly. Enterprise disks can tolerate about 50% more vibration than SATA desktop drives before 50% higher seek times kill the random disk performance.

Comments

  • slashbinslashbash - Thursday, October 19, 2006 - link

    Sounds great, thanks. If possible it'd be great to see full schematics of the setup, pics of everything, etc. This is obviously outside the realm of your "everyday PC" stuff where we all know what's going on. I administer 6 servers at a colo facility and our servers (like 90% of the other servers that I see) are basically PC hardware stuck in a rackmount box (and a lot of the small-shop webhosting companies at the colo facility use plain towers! In the rack across from ours, there are 4 Shuttle XPC's! Unbelievable!).

    We use workstation motherboards with ECC RAM, Raptor drives, etc. but still it's basically just a PC. These external enclosures, SAS, etc. are a whole new realm. I know that it'd be better than the ad-hoc storage situation we have now, but I'm kind of scared because I don't know how it works and I don't know how much it would cost. So now I know more about how it works, but the cost is still scary. ;)

    I guess the last thing I'd want to know is the OS support situation. Linux support is obviously crucial.
