Original Link: http://www.anandtech.com/show/4604/the-sandforce-roundup-corsair-patriot-ocz-owc-memoright-ssds-compared
The SandForce Roundup: Corsair, Kingston, Patriot, OCZ, OWC & MemoRight SSDs Compared
by Anand Lal Shimpi on August 11, 2011 12:01 AM EST
It's a depressing time to be covering the consumer SSD market. Although performance is higher than it has ever been, we're still seeing far too many compatibility and reliability issues from all of the major players. Intel used to be our safe haven, but even the extra reliable Intel SSD 320 is plagued by a firmware bug that may crop up unexpectedly, limiting your drive's capacity to only 8MB. Then there are the infamous BSOD issues that affect SandForce SF-2281 drives like the OCZ Vertex 3 or the Corsair Force 3. Despite OCZ and SandForce believing they were on to the root cause of the problem several weeks ago, there are still reports of issues. I've even been able to duplicate the issue internally.
It's been three years since the introduction of the X25-M and SSD reliability is still an issue, but why?
For the consumer market it ultimately boils down to margins. If you're a regular SSD maker then you don't make the NAND and you don't make the controller.
A 120GB SF-2281 SSD uses 128GB of 25nm MLC NAND. The NAND market is volatile, but a 64Gb 25nm NAND die will set you back somewhere between $10 and $20. If we assume the best case scenario, that's $160 for the NAND alone. Add another $25 for the controller and you're up to $185 without the cost of the other components, the PCB, the chassis, packaging and vendor overhead. Let's figure another 15% for everything else needed for the drive, bringing us up to $222. You can buy a 120GB SF-2281 drive in e-tail for $250, putting the gross profit on a single SF-2281 drive at $28, or 11%.
Even if we assume I'm off in my calculations and the profit margin is 20%, that's still not a lot to work with.
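The math above is easy to sanity check. Here's the same back-of-the-envelope calculation as a short Python sketch; every input is an estimate from the paragraphs above, not actual vendor pricing, and the 15% "everything else" is read as a share of the selling price, which is what reproduces the $222 figure:

```python
# Back-of-the-envelope BOM math for a 120GB SF-2281 drive.
# All figures are the article's estimates, not real vendor pricing.
DIES = 16               # 16 x 64Gb (8GB) 25nm MLC dies = 128GB of raw NAND
PRICE_PER_DIE = 10.00   # best case of the $10-$20 per-die estimate
CONTROLLER = 25.00      # estimated SF-2281 controller cost
RETAIL = 250.00         # typical e-tail price for a 120GB drive

nand = DIES * PRICE_PER_DIE                  # $160 of NAND
everything_else = 0.15 * RETAIL              # PCB, chassis, packaging, overhead
cost = nand + CONTROLLER + everything_else   # ~$222 all-in
profit = RETAIL - cost
margin = profit / RETAIL

print(f"cost ${cost:.2f}, gross profit ${profit:.2f}, margin {margin:.0%}")
```

Nudge PRICE_PER_DIE toward the top of the $10-$20 range and the margin goes negative, which is the point: there isn't much left over to fund extensive validation.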
Things aren't that much easier for the bigger companies either. Intel has the luxury of (sometimes) making both the controller and the NAND. But the amount of NAND you need for a single 120GB drive is huge. Let's do the math.
8GB IMFT 25nm MLC NAND die - 167mm2
The largest 25nm MLC NAND die you can get is an 8GB capacity. A single 8GB 25nm IMFT die measures 167mm2. That's bigger than a dual-core Sandy Bridge die and 77% the size of a quad-core SNB. And that's just for 8GB.
A 120GB drive needs sixteen of these die for a total area of 2672mm2. Now we're at over 12 times the wafer area of a single quad-core Sandy Bridge CPU. And that's just for a single 120GB drive.
This 25nm NAND is built on 300mm wafers just like modern microprocessors, giving us 70,685mm2 of area per wafer. Assuming you can use every single square mm of the wafer (which you can't), that works out to 26 120GB SSDs per 300mm wafer. Wafer costs are somewhere in the four-digit range - let's assume $3000. That's $115 worth of NAND for a drive that will sell for $230, and we're not including controller costs, the other components on the PCB, the PCB itself, the drive enclosure, shipping or profit margins. Intel, as an example, likes to maintain gross margins north of 60%. For its consumer SSD business to not be a drain on the bottom line, sacrifices have to be made. While Intel's SSD validation is believed to be the best in the industry, it's likely not as good as it could be as a result of pure economics. So mistakes are made and bugs slip through.
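The wafer arithmetic can be sketched the same way. The $3000 wafer cost and perfect area utilization are the article's assumptions; real yields and edge loss only make the economics worse:

```python
import math

# Wafer-level NAND cost for a 120GB drive, using the article's assumptions.
WAFER_DIAMETER_MM = 300
DIE_AREA_MM2 = 167        # 8GB IMFT 25nm MLC die
DIES_PER_DRIVE = 16       # 120GB drive = 16 x 8GB dies
WAFER_COST = 3000.0       # assumed; real pricing varies

wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2           # ~70,686 mm^2
drives_per_wafer = int(wafer_area / (DIE_AREA_MM2 * DIES_PER_DRIVE))  # ignores edge loss
nand_cost_per_drive = WAFER_COST / drives_per_wafer

print(f"{wafer_area:.0f} mm^2/wafer -> {drives_per_wafer} drives -> "
      f"${nand_cost_per_drive:.0f} of NAND per drive")
```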
I hate to say it but it's just not that attractive to be in the consumer SSD business. When these drives were selling for $600+ things were different, but it's not too surprising to see that we're still having issues today. What makes it even worse is that these issues are usually caught by end users. Intel's microprocessor division would never stand for the sort of track record its consumer SSD group has delivered in terms of show stopping bugs in the field, and Intel has one of the best track records in the industry!
It's not all about money though. Experience plays a role here as well. If you look at the performance leaders in the SSD space, none of them had any prior experience in the HDD market. Three years ago I would've predicted that Intel, Seagate and Western Digital would be duking it out for control of the SSD market. That obviously didn't happen and as a result you have a lot of players that are still fairly new to this game. It wasn't too long ago that we were hearing about premature HDD failures due to firmware problems, I suspect it'll be a few more years before the current players get to where they need to be. Samsung may be one to watch here going forward as it has done very well in the OEM space. Apple had no issues adopting Samsung controllers, while it won't go anywhere near Marvell or SandForce at this point.
The SF-2281 BSOD Bug
A few weeks ago I was finally able to reproduce the SF-2281 BSOD bug in house. In working on some new benchmarks for our CPU Bench database I built an updated testbed using OCZ's Agility 3. All of the existing benchmarks in CPU Bench use a first generation Intel X25-M and I felt like now was a good time to update that hardware. My CPU testbeds need to be stable given their importance in my life so if I find a particular hardware combination that works, I tend to stick to it. I've been using Intel's DH67BL motherboard for this particular testbed since I'm not doing any overclocking - just stock Sandy Bridge numbers using Intel's HD 3000 GPU. The platform worked perfectly and it has been crash free for weeks.
A slew of tablet announcements pulled me away from CPUs for a bit, but I wanted to get more testing done while I worked on other things. With my eye off the ball I accidentally continued CPU testing using an ASUS P8Z68-V Pro instead of my Intel board. All of a sudden I couldn't complete a handful of my benchmarks. I never did see a blue screen but I'd get hard locks that required a power cycle/reset to fix. It didn't take me long to realize that I had been testing on the wrong board, but it also hit me that I may have finally reproduced the infamous SandForce BSOD issue. The recent Apple announcements once more kept me away from my CPU/SSD work but with a way to reproduce the issue I vowed to return to the faulty testbed when my schedule allowed.
Even on the latest drive firmware, I still get hard locks on the ASUS P8Z68-V Pro. They aren't as frequent as before with the older firmware revision, but they still happen. What's particularly interesting is that the problem doesn't occur on Intel's DH67BL, only on the ASUS board. To make matters worse, I switched power supplies on the platform and my method for reproducing the bug no longer seems to work. I'm still digging to try and find a good, reproducible test scenario but I'm not quite there yet. It's also not a Sandy Bridge problem as I've seen the hard lock on ASRock's A75 Extreme6 Llano motherboard, although admittedly not as frequently.
Those who have reported issues have done so from a variety of platforms including Alienware, Clevo and Dell notebooks. Clearly the problem isn't limited to a single platform.
At the same time there are those who have no problems at all. I've got a 240GB Vertex 3 in my 2011 MacBook Pro (15-inch) and haven't seen any issues. The same goes for Brian Klug, Vivek Gowri and Jason Inofuentes. I've sent them all SF-2281 drives for use in their primary machines and none of them have come back to me with issues.
I don't believe the issue is entirely due to a lack of testing/validation. SandForce drives are operating at speeds that just a year ago no one even thought of hitting on a single SATA port. Prior to the SF-2281 I'm not sure that a lot of these motherboard manufacturers ever really tested if you could push more than 400MB/s over their SATA ports. I know that type of testing happens during chipset development, but I'd be surprised if every single motherboard manufacturer did the same.
Regardless, the problem does still exist and it's a valid reason to look elsewhere. My best advice is to look around and see if other users with a system setup similar to yours have had issues with these drives. If you do own one of these drives and are having issues, I don't know that there's a good solution out today. Your best bet is to get your money back and try a different drive from a different vendor.
Update: I'm still working on a sort of litmus test to get this problem to appear more consistently. Unfortunately even with the platform and conditions narrowed down, it's still an issue that appears rarely, randomly and without any sort of predictability. SandForce has offered to fly down to my office to do a trace on the system as soon as I can reproduce it regularly.
OCZ's exclusive on the SF-2281 has finally lifted and now we're beginning to see an influx of second generation SandForce drives from other vendors. Make no mistake, although these drives may look different, have creative new names and different PCBs - they should all perform the same if you know what you're comparing.
OCZ sells three lines of SF-2281 based drives: Vertex, Agility and Solid. The Vertex 3 uses synchronous NAND (read: faster), while the Agility 3 and Solid 3 use slower asynchronous NAND.
Other companies are following suit. Asynchronous NAND is more readily available and cheaper, so don't be surprised to find it on drives. So far everyone seems to be following OCZ's footsteps and creating separate lines for the synchronous vs. asynchronous stuff.
The table below should break it down for you:
SandForce SF-2281 SSD Comparison

| Product | NAND Type | Capacities Available | Latest FW Available | Price for 120GB Drive |
|---|---|---|---|---|
| Corsair Force GT | Sync IMFT | 60GB, 120GB, 240GB | Corsair v1.3 (SF v3.20?) | $248.49 |
| Kingston HyperX | Sync IMFT | 120GB, 240GB | SF v3.20 | $269.99 |
| MemoRight FTM Plus | Sync IMFT | 60GB, 120GB, 240GB, 480GB | MR v1.3.1 | N/A |
| OCZ Agility 3 | Async IMFT | 60GB, 120GB, 240GB | OCZ v2.11 | |
| OCZ Vertex 3 | Sync IMFT | 60GB, 120GB, 240GB, 480GB | OCZ v2.11 | |
| OCZ Vertex 3 MAX IOPS | Toggle | 120GB, 240GB | OCZ v2.11 | |
| OWC ME Pro 6G | Toggle | 120GB, 240GB, 480GB | SF v3.19 | $279.99 |
| OWC ME Pro 6G | Sync IMFT | 120GB, 240GB, 480GB | SF v3.19 | $279.99 |
| Patriot Pyro | Async IMFT | 60GB, 120GB, 240GB | SF v3.19 | $209.99 |
| Patriot Wildfire | Toggle | 120GB, 240GB, 480GB | SF v3.19 | $284.99 |
The four drives we're looking at today are a mixture of synchronous and asynchronous. The newly announced Patriot Pyro uses async NAND while everything else uses some form of synchronous Flash memory. OWC recently switched to using 32nm Toggle NAND in an effort to boost performance to OCZ MAX IOPS levels.
SandForce delivers firmware to all of its partners; however, the schedules don't always match up. OCZ gets first dibs and renames its firmware to avoid direct comparisons to other drives. There are cases where OCZ may have some unique features in its firmwares, but I don't believe that's the case with its latest revision.
Corsair and MemoRight do the same renaming, while the rest of the players stick to the default SandForce firmware version numbers. Everyone that shipped us drives seems to be sticking to either the latest firmware revision (3.20) or the one prior to it (3.19). Performance hasn't changed too much between all of these revisions; the majority of the modifications are there to squash the BSOD issue.
Corsair Force GT
The Force GT is Corsair's answer to OCZ's Vertex 3. The GT ships with a custom PCB design and is outfitted with IMFT (Micron branded) NAND running in synchronous mode. The 120GB sample we were sent for review has 16 NAND packages each with one 8GB MLC NAND die. Corsair just released its first firmware update for the Force GT, bringing it up to version 1.3 in Corsair-speak. Given the timing of the release, I'm going to assume this is Corsair's rebrand of the SandForce 3.20 firmware.
The Force GT is the cheapest 120GB SF-2281 based drive with synchronous NAND in the roundup. The drive comes with a 3 year warranty from Corsair.
Kingston HyperX
As Kingston's first SandForce SSD, the HyperX looks great. Although it doesn't really matter once you get it in the system, the HyperX enclosure has the most heft to it out of all of the SF-2281 drives we've reviewed. Part of the added weight comes from the two thermal pads that line the top and bottom of the case:
Kingston annoyingly uses 1.5mm hollow hex screws to keep the drive together, our biggest complaint from a physical standpoint. As a standalone drive the HyperX is our second most expensive here at $269.99 for 120GB.
The drive ships with a custom PCB and the latest SandForce firmware (3.20). Kingston sent along the upgrade kit which includes a 3.5" drive sled, USB enclosure, SATA cable and Kingston screwdriver:
MemoRight FTM Plus
We haven't had a MemoRight drive in for review since the early days of SSDs on AnandTech. The FTM Plus is a vanilla SF-2281 drive with SandForce's 3.19 firmware. The PCB design is custom although the NAND configuration is pretty standard. We got a 240GB drive in for review with 16 NAND devices and two 8GB 25nm IMFT die per chip.
OWC Mercury Extreme Pro 6G
The last time we reviewed OWC's Mercury Extreme Pro 6G we were a bit surprised by the presence of a rework on a shipping SSD. Since then a couple of things have changed. For starters, the rework is gone as you can see from the images above. While these drives use standard 25nm IMFT synchronous NAND, OWC has recently switched over to 32nm Toggle NAND - giving the Mercury Extreme Pro 6G the same performance as the MAX IOPS drives from OCZ or the Patriot Wildfire. We don't have the Toggle NAND drives in yet but I'd expect performance similar to the MAX IOPS for the updated drives. The branding and pricing haven't changed, just the performance.
The Mercury Extreme Pro 6G is offered with a 5 year warranty from OWC. It's also the most expensive SF-2281 drive based on IMFT NAND, with a 120GB drive costing $279.99.
Patriot Pyro & Wildfire
While the Wildfire is Patriot's high-end SF-2281 offering, the Pyro is its more cost effective solution. The Pyro's savings come courtesy of its asynchronous NAND, putting it on par with an Agility 3. The Wildfire by comparison is akin to OCZ's MAX IOPS drives. Patriot uses the same PCB layout as OWC. The drive ships with firmware revision 3.19 and a 3-year warranty.
| Component | Configuration |
|---|---|
| CPU | Intel Core i7 2600K running at 3.4GHz (Turbo & EIST Disabled) - for AT SB 2011, AS SSD & ATTO |
| Motherboard | Intel H67 Motherboard |
| Chipset Drivers | Intel 188.8.131.525 + Intel RST 10.2 |
| Memory | Corsair Vengeance DDR3-1333 2 x 2GB (7-7-7-20) |
| Video Card | eVGA GeForce GTX 285 |
| Video Drivers | NVIDIA ForceWare 190.38 64-bit |
| Desktop Resolution | 1920 x 1200 |
| OS | Windows 7 x64 |
Random Read/Write Speed
The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.
Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
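The reason the two data patterns matter is that SandForce controllers compress and deduplicate writes in flight, so throughput tracks payload entropy. The sketch below shows the two extremes; the helper functions are purely illustrative and are not how Iometer actually generates its data:

```python
import os
import zlib

def repeating_buffer(size: int, pattern: bytes = b"anandtech") -> bytes:
    """Highly compressible payload: one short pattern repeated."""
    return (pattern * (size // len(pattern) + 1))[:size]

def random_buffer(size: int) -> bytes:
    """Effectively incompressible payload: high-entropy random bytes."""
    return os.urandom(size)

# Compare how well a 4KB write of each type compresses.
for name, buf in (("repeating", repeating_buffer(4096)),
                  ("random", random_buffer(4096))):
    ratio = len(zlib.compress(buf)) / len(buf)
    print(f"{name}: compresses to {ratio:.1%} of original size")
```

A buffer of repeated text shrinks to a tiny fraction of its size while random bytes barely compress at all - roughly the spread between the "max" and "min" SandForce results in the graphs.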
Random read performance is pretty consistent across all of the SF-2281 drives. The Patriot drives lose a bit of performance thanks to their choice in NAND (asynchronous IMFT in the case of the Pyro and Toggle NAND in the case of the Wildfire).
Most random writes are highly compressible and thus all of the SF-2281 drives do very well here. There's no real advantage to synchronous vs. asynchronous NAND here since most of the writes never make it to NAND in the first place. The Agility 3 and Vertex 3 here both use their original firmware while the newer drives are running the latest firmware updates from SandForce. The result is a slight gain in performance, but all things equal you won't see a difference in performance between these drives.
The MemoRight FTM Plus is the only exception here. Its firmware caps peak random write performance over an extended period of time. This is a trick you may remember from the SF-1200 days. It's almost entirely gone from the SF-2281 drives we've reviewed. The performance cap here will almost never surface in real world performance. Based on what we've seen, if you can sustain more than 50MB/s in random writes you're golden for desktop workloads. The advantage SandForce drives have is they tend to maintain these performance levels better than other controllers thanks to their real-time compression/dedupe logic.
Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0 - 5, higher depths are possible in heavy I/O (and multi-user) workloads:
All of the SF-2281 drives do better with a heavier load. The MemoRight drive is still capped at around 112MB/s here.
Sequential Read/Write Speed
To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.
The older SF-2281 firmwares did a bit better in some tests than the newer versions, hence the Vertex 3 being at the top here. All of the newer drives perform pretty similarly in our sequential read test.
The same goes for our sequential write test - all of the SF-2281 drives perform very similarly.
AS-SSD Incompressible Sequential Performance
The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.
It's in dealing with incompressible writes that you notice the difference between asynchronous and synchronous NAND. Total drive capacity starts to matter here as well. The more NAND die you have available, the more the controller can stripe in parallel. Again, within a particular NAND type/capacity there's no difference between these SF-2281 drives.
AnandTech Storage Bench 2011
Last year we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of what we have today in terms of SSD benchmarks.
Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.
Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.
Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.
1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.
2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.
The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:
|AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown|
|IO Size||% of Total|
Only 42% of all operations are sequential, the rest range from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.
Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
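The two views are related by simple arithmetic: disk busy time sums the service time of every I/O (idle gaps excluded), and average MB/s is total bytes moved divided by that busy time. A minimal sketch, using a three-entry trace invented purely to show the calculation:

```python
# Each trace record is (bytes transferred, service time in seconds).
# The values below are made up solely to illustrate the arithmetic.
trace = [
    (4096,   0.00010),   # 4KB random read
    (131072, 0.00050),   # 128KB sequential write
    (4096,   0.00012),   # 4KB random write
]

total_bytes = sum(size for size, _ in trace)
busy_time = sum(t for _, t in trace)          # idle time is not counted
avg_rate_mbps = total_bytes / busy_time / 1e6

print(f"busy for {busy_time * 1000:.2f} ms, average {avg_rate_mbps:.1f} MB/s")
```

A faster drive finishes the same byte total in less busy time, which is exactly what the disk busy graphs visualize.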
There's also a new light workload for 2011. This is a far more reasonable, typical every day use case benchmark. Lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB but it's still multiple times more write intensive than what we were running last year.
As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.
The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests. All of the older tests are still run on our X58 platform.
AnandTech Storage Bench 2011 - Heavy Workload
We'll start out by looking at average data rate throughout our new heavy workload test:
Our Storage Bench suite groups performers according to die count/drive capacity. The 240GB drives are faster than the 120GB counterparts. There's also not much of a difference between the drives with synchronous vs. asynchronous NAND.
The next three charts just represent the same data, but in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy during this entire test. Note that disk busy time excludes any and all idles; this is just how long the SSD was busy doing something:
AnandTech Storage Bench 2011 - Light Workload
Our new light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio better mimics a typical light workload (although even lighter workloads would be far more read centric).
The I/O breakdown is similar to the heavy workload at small IOs, however you'll notice that there are far fewer large IO transfers:
|AnandTech Storage Bench 2011 - Light Workload IO Breakdown|
|IO Size||% of Total|
Despite the reduction in large IOs, over 60% of all operations are perfectly sequential. Average queue depth is a lighter 2.2029 IOs.
As the only 240GB drive running 3.20, the Kingston HyperX is the fastest drive in our test here (only the Vertex 3 and possibly the Corsair Force GT even have 3.20 available for update). The advantage is still pretty small though; at 6% above its closest competitor in an I/O bound test, you'll never see that advantage in the real world.
There really shouldn't be any surprises here. Given the same controller, the same NAND and the same firmware there's no difference between SandForce SSDs. They may look different and they may be priced differently, but they are effectively the same. So how do you pick between otherwise identical drives?
Firmware availability is an important feature. OCZ's exclusive arrangement with SandForce still gives it the advantage here. No competing SandForce drive gets firmware updates as quickly as OCZ owners do. That's not always a good thing, however. In the interest of pushing out fixes and improvements as quickly as possible, you don't just get good firmware updates; sometimes you get updates that are best skipped. If you're the type of user that wants the latest updates as quickly as possible, OCZ is your only option.
For everyone else it looks like all of the competing drives are at least running with fairly up to date firmware revisions (either the absolute latest or one revision older than the latest). However it's unclear what the update cadence will be like going forward for these guys. The hope is that you won't need to update the firmware on your drive after you buy it, but it's clear at this point that there's still some room for compatibility improvements in the latest SandForce firmware.
The BSOD issue continues to be a significant blemish on SandForce's track record. I don't have nearly enough systems deployed with SF-2281 hardware to make any accurate statements about how widespread the issue is, but even if it is limited - it's a problem that should not exist. SandForce based drives continue to offer the best performance out of anything on the market today, not just in peak performance but in performance over time. For systems without TRIM support (e.g. Macs without the TRIM enabler) the hallmark SandForce resiliency is even more important. Unfortunately the unresolved BSOD issue makes all of these drives a risk if you don't know for sure that your system won't be affected.
The safest route without sacrificing significant performance continues to be Intel's SSD 510.