Back to Article

  • bobbyh - Thursday, August 11, 2011 - link

    Are you going to talk about synchronous vs asynchronous NAND and the benefits of one vs the other?
  • bobbyh - Thursday, August 11, 2011 - link

    nevermind lol!
  • bobbyh - Thursday, August 11, 2011 - link

    very nice roundup A+ would read again
  • Arnulf - Thursday, August 11, 2011 - link

    FIRST what? FIRST idi0t to tag himself? You got that right!
  • ARoyalF - Thursday, August 11, 2011 - link

    The estimated cost breakdown sure gave me an appreciation of what goe$ on behind the scenes.
  • Sagath - Thursday, August 11, 2011 - link

    Firstly, I'd like to say I've always appreciated you bringing these issues to the front page, letting consumers see them in a public venue while also berating manufacturers for selling us junk. Thank you, Anand.

    That being said: I fully understand that the new SandForce chips allow SATA 6Gbps connectivity, and are thus the fastest drives on the market...yet I have to ask, is it worth it? I don't see you mentioning these issues with last gen's drives like the aforementioned X25-M, or SandForce v1.

    Any SSD sold today is plainly 'fast', and orders of magnitude faster than magnetic-based storage. Is the incremental upgrade (of microseconds at best?) really worth sacrificing the reliability associated with last generation's drives?

    My X25-M and Vertex 2's across multiple computers, laptops and friends' computers are all running flawlessly. I have had zero complaints about random BSODs or lockups. I also have 2 friends who purchased Vertex 3's on their own, and both are experiencing the famous SandForce v2 issues...

    I'll stick with my 'slower' (lol?) X25-M's and V2's rather than deal with these issues.
  • bobbyh - Thursday, August 11, 2011 - link

    I have an older X25-M and it still works flawlessly; this generation of drives has had an insane number of problems.
  • tbanger - Thursday, August 11, 2011 - link

    Can anyone shed some more light on the Intel 320 series firmware problem that Anand mentions?

    I've experienced it recently myself with my work machine's 300GB model resetting itself to an 8MB partition with all data lost. Not a huge problem (good backup scheme) but still annoying. At least Intel kindly replaced my drive with a new one fairly quickly. However, given I had already ordered a bunch more drives for the company (before the failure), I would like to see a firmware update that fixes this problem. I'm getting nervous that we're going to experience a bunch of failures.

    Is there any official plan to fix this from Intel? I haven't found much from Googling other than user complaints with little response from Intel.
  • Nickel020 - Thursday, August 11, 2011 - link

    Just follow the link in the article ;)

    They've reproduced the issue and are validating the firmware fix. I've got no clue how long their validation could take; a new FW could be out any day, or maybe it'll take another month. They might find some issues during validation, which would need further fixes and then further validation, so not even someone from Intel could give you a definite ETA.
  • tbanger - Thursday, August 11, 2011 - link

    That'll teach me to only skim the article :)

    Thanks for the link. Nice to see Intel offer a little official feedback.
  • V3ctorPT - Thursday, August 11, 2011 - link

    Exactly what I think. I have an X25-M 160GB and that thing is still working flawlessly at the advertised speeds; every week it gets the Intel Optimizer and it's good...

    Even my G.Skill Falcon 1 64GB is doing great: no BSODs, no unexpected problems. The only "bad" thing that I saw was in SSD Life Free, when it says my SSD is at 80% of NAND wear and tear; my Intel is at 100%.

    CrystalDiskInfo confirms those readings (that SSD Life reports). Anand, do you think these "tools" are trustworthy? Or are they some sort of scam?
  • SjarbaDarba - Sunday, August 14, 2011 - link

    Where I work - we have had 265 Vertex II drives come back since June 2010.

    That's one every day or two since then for our one store; hardly reliable tech.
  • Ikefu - Thursday, August 11, 2011 - link

    "a 64Gb 25nm NAND die will set you back somewhere from $10 - $20. If we assume the best case scenario that's $160 for the NAND alone"

    I think you meant to say an 8Gb NAND die will set you back $10-$20, not 64Gb.

    Yay math typos. Those are always hard to catch.
  • bobbozzo - Thursday, August 11, 2011 - link

    No, 64Gb = 8GB

    Note the capitalization/case.
  • Ryan Smith - Thursday, August 11, 2011 - link

    We're using gigaBITs (little b), not gigaBYTEs (big B).

    64Gb x 16 modules / 8 bits-to-bytes = 128GBytes.
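Ryan's arithmetic can be sketched out explicitly (the $10 per-die figure below is the article's best-case number from its quoted $10-$20 range, not an exact price):

```python
# Bit/byte arithmetic from the thread above: NAND die capacities are quoted
# in gigaBITs (Gb), drive capacities in gigaBYTEs (GB); 8 bits = 1 byte.

def drive_capacity_gb(die_gbits, num_dies):
    """Drive capacity in GB given per-die capacity in Gb."""
    return die_gbits * num_dies // 8

def nand_cost(num_dies, cost_per_die):
    """Best-case NAND bill-of-materials cost in dollars."""
    return num_dies * cost_per_die

# Sixteen 64Gb dies make a 128GB drive; at $10/die that's $160 of NAND alone.
print(drive_capacity_gb(64, 16))  # 128
print(nand_cost(16, 10))          # 160
```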
  • Ikefu - Thursday, August 11, 2011 - link

    Ah Capitalization for the loss, I see my error now. Thank you =)

    Later in the article they refer to 8GB, so the switch from gigabits to gigabytes threw me.
  • philosofool - Thursday, August 11, 2011 - link

    I made the same mistake at first.

    Can I request that, in the future, we write either in terms of bytes or bits for the same type of part? There's no need to switch from bits to bytes when talking about storage capacity and you just confuse a reader or two when you do.
  • nbrenner - Thursday, August 11, 2011 - link

    I understand the GB vs Gb argument, but even if it takes 8 modules to make up 64Gb, it was stated that a 64Gb die would set you back $10-$20, so saying a 128GB drive would cost $160 didn't make any sense until 3 paragraphs later when it said the largest die you could get is 8GB.

    I think most of us read that if 64Gb is $10-$20, then why in the world would it cost $160 to get to 128GB?
  • Death666Angel - Friday, August 12, 2011 - link

    Unless he edited it, it clearly states "128GB". I think the b=bit and B=byte is quite clear, though I would not complain if they stuck with one and didn't change in between. :-)
  • Mathieu Bourgie - Thursday, August 11, 2011 - link

    Once again, a fantastic article from you Anand on SSDs.

    I couldn't agree more on the state of consumer SSDs and their reliability (or lack of...).

    The problem, as you mentioned, is the small margins that manufacturers are getting (if they are actually manufacturing it...), which results in less QA than required and products that launch with too many bugs. The issue is, this won't go away, because many customers do want the price per GB to go down before they'll buy. They're probably waiting for that psychological $1 per GB, the same $1 per GB that HDDs reached many years ago.

    With prices per GiB (actual capacity in Windows) dropping below $1.50, reliability is one of the last barriers to SSDs actually becoming mainstream. Most power users now have one or are considering one, but SSDs are still very rare in most desktops/laptops sold by HP, Dell and the like. Sometimes they are offered as an option (at additional cost), but rarely as a standard drive (only a handful or two of exceptions come to mind for laptops).

    I can only hope that the reliability situation improves, because I do wish to see a major computing breakthrough: SSDs replacing HDDs entirely one day. As you said years ago in an early SSD article, once you've had an SSD, you can't go without one.

    My desktop used to have two Samsung F3 1TB drives in RAID 0. Switching to it from my laptop (which had an Intel 120GB X25-M G2) was almost painful. Being accustomed to the speed of the SSD, the HDDs felt awfully slow. And I'm talking about two top-of-the-line (besides Raptors) HDDs in RAID 0, not a five-year-old IDE HDD.

    It's always a pleasure to read your articles Anand, keep up the outstanding work!
  • secretanchitman - Thursday, August 11, 2011 - link

    Thanks for the great review Anand! I'm rocking a Patriot Wildfire 240GB in my 2011 15" mbp (2.2ghz, 8GB, 6750m 1GB, 1680x1050 anti-glare) and it's been 100% perfect. I haven't seen any errors whatsoever in snow leopard, lion, and windows 7 via boot camp.

    These benchmarks are pretty consistent with what I see on my own drive, although the 240GB is a bit higher all around. :)
  • Movieman420 - Thursday, August 11, 2011 - link

    Here is a good summary of the issue to date:

    From Ocz:

    '...I think the ultimate fix will come with a FW coupled with Orom change and new RST/IME driver and possibly UEFI update for the motherboards, the issue needs to be nailed down, at this time its floating around with Orom changes etc and what ever SF do can be countered by what the Orom is doing...and yes SF are talking to Intel so i would hope between them they can get it worked out....

    Full Post:
  • Nickel020 - Thursday, August 11, 2011 - link

    Was gonna post this as well, as the likely cause of the problems with the Asus board.

    Then again, if the Intel H67 is your testbed, Anand, have you even updated the BIOS, or are you staying with an older one for comparability? With an older BIOS it might have an older OROM as well, and thus the issue might not be solely caused by the OROM.
  • xijox - Thursday, August 11, 2011 - link

    Thank you for another great write-up, Anand!

    I'm curious why you left the Corsair out of several of the benchmark results (4KB Random Read, 128KB Sequential Read and Write)?
  • beginner99 - Thursday, August 11, 2011 - link

    Maybe because it performed worse than expected and the site got a little bribe from Corsair not to publish, but instead put a nice commercial on the last page?
  • philosofool - Thursday, August 11, 2011 - link

    Don't be a jerk. If you're going to accuse someone of something like this, have some evidence.
  • Anand Lal Shimpi - Thursday, August 11, 2011 - link

    Or because I accidentally put the wrong graphs in the piece :) It has been fixed.

    Take care,
  • Beenthere - Thursday, August 11, 2011 - link

    Sorry, but current SSDs are unreliable at this point in time, and it's unscrupulous to continue selling them when a mfg. doesn't know the root cause, doesn't have a resolution for the operational/compatibility issues, and cannot tell consumers which systems can use these SSDs without issue.

    It's good to see Anandtech substantiate what I have been saying for some time. Now consumers need to stop purchasing these SSDs until they are properly revised so they function without issues for everyone.
  • gevorg - Thursday, August 11, 2011 - link

    SSDs offer amazing performance, but too many of them are cursed with reliability problems. "A is faster than B at price point C" is not sufficient to make buying decisions with SSDs. When and how can benchmarks examine SSD quality issues?
  • Axonn - Thursday, August 11, 2011 - link

    Why is the Corsair Force 3 in only 1 of the benchmarks @ Random/sequential speed? And I can't see the Corsair GT anywhere on the first page?
  • imaheadcase - Thursday, August 11, 2011 - link

    I was wondering the same thing...this seems to happen a lot lately with roundups.
  • Anand Lal Shimpi - Thursday, August 11, 2011 - link

    My apologies! An older version of the graphs made its way live, I've updated all of the charts :)

    Take care,
  • Nickel020 - Thursday, August 11, 2011 - link

    I always thought the difference in price between a 25nm SF1200 drive and a synchronous SF2200 was mainly due to the cost of the controller, but since you put the controller at $25, it's the NAND in the SF1200 that must be cheaper.

    A Corsair F115 with synchronous 25nm NAND (G08CAMDB)* costs $170, a Force 3 with asynchronous NAND costs $185, and a Force GT with synchronous NAND costs $245. The synchronous NAND in the F115 must thus be way cheaper than the synchronous NAND in the Force GT.

    I'm guessing the SF2200 is more expensive than the SF1200, so following your cost breakdown, the asynchronous NAND in drives such as the Force 3 or Agility 3 must be priced similarly to the synchronous NAND in the 25nm SF1200 drives.

    Why is the synchronous NAND in the SF1200 drives so much cheaper than the one in the SF2200 drives? Could you decipher the whole part number?

    *I'm assuming the F115 uses the same NAND as the first 25nm Vertex 2s.
  • Coup27 - Thursday, August 11, 2011 - link

    If the current state of affairs is due to the reasons you outlined in the first couple of paragraphs, then this has been brought on by the manufacturers themselves.

    All the manufacturers have tried to bring costs down as much as possible, for obvious reasons, but they should not have brought them down so low that they sacrificed validation and testing to get there.

    The benefits SSDs have over HDDs are enormous, and I am sure I am not alone when I say that I would quite happily pay 15-25% more than current prices for my drive knowing that it works, full stop.
  • QChronoD - Thursday, August 11, 2011 - link

    I understand sync and async, but I'm not really sure what toggle means. Is it safe to assume it can switch between the two modes? Or is there something else special about it?
  • Nickel020 - Thursday, August 11, 2011 - link

    It's a different NAND standard. Intel/Micron NAND follows the ONFI standard (which they developed, afaik); Toggle is another standard developed by Samsung and others. The Toggle NAND in SF2281 SSDs is 34nm from Toshiba.

    If I understand it correctly, the difference is mainly the interface with which the MLC cells are connected to the controller. Both are MLC though; the basic principle they are based on is the same.

    The Toggle NAND SSDs are generally faster because 34nm means less density, more NAND dies, and thus more interleaving. The same thing causes bigger SSDs to be faster than smaller ones (read Anand's other recent articles if you want to know more).
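The interleaving point above can be sketched with a toy model (purely illustrative; the per-die and interface numbers below are made-up assumptions, and no vendor's actual controller behaves this simply): aggregate throughput grows with die count until the host interface becomes the bottleneck.

```python
# Toy interleaving model: each NAND die sustains some throughput on its own;
# a controller reading many dies in parallel is capped by the host interface.

def aggregate_read_mbps(per_die_mbps, num_dies, interface_cap_mbps):
    """Idealized read speed with perfect interleaving across dies."""
    return min(per_die_mbps * num_dies, interface_cap_mbps)

# Hypothetical figures: 40 MB/s per die, ~550 MB/s usable on SATA 6Gbps.
print(aggregate_read_mbps(40, 4, 550))   # 160 -- few dies: die-limited
print(aggregate_read_mbps(40, 16, 550))  # 550 -- many dies: interface-limited
```

This is also why larger drives in the same family tend to benchmark faster: more dies to interleave across.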
  • Conscript - Thursday, August 11, 2011 - link

    is there a reason the same products aren't in every graph? Corsair GT seems to be missing from quite a few?
  • Anand Lal Shimpi - Thursday, August 11, 2011 - link

    Fixed :)

    Take care,
  • Shadowmaster625 - Thursday, August 11, 2011 - link

    Is there a way you can force the drive to run at SATA2 speeds to see if that eliminates the lockups?
  • irev210 - Thursday, August 11, 2011 - link

    You open this SandForce article with the Intel 320 SSD firmware bug.

    I love how the BSOD is a page two reference.

    Anand, your OCZ/sandforce bias bleeds through pretty hard. I hope you can be a bit more objective with your reports moving forward.

    The speed difference between SSDs at this point is pretty trivial. As much as you hammer on about reliability, you never even reviewed the Samsung 470, rarely talk about the Crucial C300/M4, and Toshiba seems to be an afterthought.

    At least tomshardware made an attempt to look at SSD reliability.

    Bottom line, it seems like SandForce-driven SSDs have the biggest number of issues, yet you still recommend them. You say "well, I never really experienced the issues", but just because you don't doesn't mean it's the most reliable drive.

    I think you should work a little harder at focusing on reliability studies instead of performance metrics. For most users, whether it takes 1.53 seconds or 1.54 seconds to open an application is pretty irrelevant if SSD A is 10x more likely to fail than SSD B.
  • DarkKnight_Y2K - Thursday, August 11, 2011 - link

    "Bottom line, it seems like sandforce-driven ssds have the biggest number of issues, yet you still recommend them."

    Did you read the last sentence of Anand's review?

    "The safest route without sacrificing significant performance continues to be Intel's SSD 510."
  • Socratic - Thursday, August 11, 2011 - link

    Yeah, I don't know what planet you have been living on, but in MULTIPLE articles Anand has basically ended with the phrase: the only logical choice is Intel.

    How is that being a SandForce fanboy??

    You need to keep YOUR bias in line and re-read the article and past articles!!
  • Anand Lal Shimpi - Thursday, August 11, 2011 - link

    Given the continued issues with SF drives I'm quickly looking at other alternatives. Toshiba and Crucial have never been top-end performers, which is why I've focused most of my recommendations on the Intel SSD 510. The biggest advantage SandForce continues to have is better performance over the long run thanks to its live dedupe/compression. I've been working on a way to quantify that for a while; unfortunately I don't have a good test I'm happy with...yet.

    Going forward I believe Samsung may be a bigger player. Take note of the recently announced PM830, expect full coverage of that drive upon its arrival.

    Take care,
  • melgross - Thursday, August 11, 2011 - link

    Well, dedupe itself is subject to a lot of controversy. It isn't necessarily a good thing.
  • Anand Lal Shimpi - Thursday, August 11, 2011 - link

    I'd argue for most mainstream uses it's a very good thing for long term performance. If the SF-2281 had Intel's track record it'd be the best option in my mind.

    Take care,
  • name99 - Thursday, August 11, 2011 - link

    Hi Anand,

    Rather than beating up on you for not stressing reliability more in the past, I'm going to ask, AGAIN, that you take power more seriously.

    My experience has been

    - replaced the hard drive in my 2nd gen MacBook Air with a RunCore IV. The thing would crash about once a week, as far as I could tell NOT from logic errors but because its power draw during a long train of writes spiked higher than the interface was specced for. If this coincided with a high power draw elsewhere in the system (fan, CPU, etc.), game over

    - an OCZ Enyo USB3 drive, which works just fine as a READ drive, but is once again somewhat flaky if too many back-to-back writes occur

    - a Kingston SSDNow V which I have as the boot/VM drive for my iMac, running off USB. My original plan was to run it off FW800 (which is in theory 7W of power), but I got the same thing as with the two previous drives: crashes with too many back-to-back writes. It's now running successfully because I stuck it in a Kingwin USB<->SATA bridge that is meant for 3.5" drives, and thus has a separate power supply and the ability to provide a lot of juice.

    All this basically mirrors (along a different dimension) what you have said: these drives are ABSOLUTE CRAP for the naive consumer. You buy them, things seem great, and then randomly and with no obvious pattern to the naive user, your system hangs.

    You seem to be trying really hard to get the manufacturers to get their act together; my point is to remind you that an IMPORTANT part of getting their act together is that these things are ALWAYS within spec with respect to power. Right now, we seem to have a lot of drives (at least three different brands, in three different market segments) that are simply not within spec: they can run on the power that the system is specced to deliver for most command sequences, but there are always those few command sequences that over-draw power. Heck, at the very least, it is the responsibility of the drive to recognize this situation and throttle itself, just like any modern x86 CPU.
  • Coup27 - Thursday, August 11, 2011 - link


    I have been feeling similar sentiments lately as well.

    I have posted in the forums asking what happened to the 470 review, but no official comment from anybody. Considering all of the reliability issues flying about, you would think that if the 470 were as reliable as word suggests, it would have had a featured review.

    Some guy actually bought an Agility 3 based off the AT review and the forum list of recommended drives, and neither mentioned the BSOD. When he got the BSOD, he went into the forums and kicked off. Rightly so.

    Unfortunately issues drag on, sometimes for months, before AT even updates their article to make people aware that the product they might be buying could be seriously flawed.

    No other website offers the depth of detail which AT does, and for that the editors are applauded, but unfortunately the playing field does not seem level.
  • Lord 666 - Thursday, August 11, 2011 - link

    Before this article, previous reviews of Vertex problems did not address the issues. This one hits it head on.
  • jo-82 - Thursday, August 11, 2011 - link

    The Kingston HyperX clearly stands out with consistently high performance. Why no words on that? Clearly the drive to buy. And Kingston has, imho, a much higher reputation for component reliability and better QA in general than the rest of the pack, except Intel.
  • Roland00Address - Thursday, August 11, 2011 - link

    And I ain't sure you can apply the logic of Kingston being rock-solid when Kingston purposefully makes their SSD line confusing, using similar names with completely different controllers:

    Kingston E series, Intel X25-E controller
    Kingston M series, Intel X25-M G2 controller
    SSDNow V 100, JMicron JMF618 controller
    SSDNow V+, Samsung S3C29RBB01 controller
    SSDNow V+ 100, Toshiba T6UG1XBG controller
    SSDNow V+ 180, Toshiba T6UG1XBG controller
    SSDNow V Series, Toshiba TC58NCF602GAT controller, which is based off the stuttering JMicron JMF602
    30GB SSDNow V Series Boot Drive, Toshiba T6UG1XBG controller

    I may be forgetting a couple of models, but as I pointed out above, Kingston has used 2 different controllers from Intel, 1 from Samsung, and 2 different ones from Toshiba (and all these controllers have similar names), not counting their most recent drive, which uses a SandForce controller.
  • Ipatinga - Thursday, August 11, 2011 - link

    So, the Corsair Force GT is really going against the OCZ Vertex 3? I thought it was against the Vertex 3 Max IOPS.

    In this case, the Corsair Force 3 is going after Agility 3?
    And Corsair Performance 3 is going after Solid 3?

    Thanks :)

    Would like to hear more about NAND Flash that is Async, Sync and Toggle.
  • bob102938 - Thursday, August 11, 2011 - link

    There are some factors that were not considered on the first page of the article. The number of dies per wafer is important, but you are forgetting the cost of producing a flash memory wafer vs a VLSI wafer. Flash memory is a ~20-layer process with margins for error that can be worked around; VLSI is a 60+ layer process with zero margin for error. Producing flash memory wafers is more than an order of magnitude cheaper than producing a same-size VLSI wafer. Additionally, turnaround on a flash wafer can be achieved in ~20 days, whereas a VLSI wafer can require 3 months.

    Also the internal cost of a 300mm flash memory wafer is more like $1000. A VLSI wafer is around $8000.
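The per-die economics implied above can be roughed out like this (a back-of-the-envelope sketch: the $1000 wafer cost is the commenter's figure, while the die area and yield below are made-up illustrative values, not fab data):

```python
import math

# Back-of-the-envelope cost per die: spread the wafer cost over the good dies.

def gross_dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Crude gross-die count: wafer area / die area (ignores edge loss)."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

def cost_per_good_die(wafer_cost, gross_dies, yield_fraction):
    """Wafer cost divided over the dies that pass test."""
    return wafer_cost / (gross_dies * yield_fraction)

dies = gross_dies_per_wafer(300, 167)  # hypothetical ~167mm^2 die, 300mm wafer
print(dies)                                           # 423
print(round(cost_per_good_die(1000, dies, 0.85), 2))  # 2.78
```

Even with crude numbers like these, the point stands: a ~$1000 flash wafer yields dies cheap enough that NAND, not the wafer process, dominates SSD bill-of-materials cost.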
  • philosofool - Thursday, August 11, 2011 - link

    I don't want to blame the victims, end users. Obviously, manufacturers have a responsibility to QA.

    Still, when you look at the market forces here, it seems obvious that market forces are driving the problem.

    A manufacturer makes the COOL drive that gets the best performance marks of any drive out there. One year later, the COOLER drive is released. No one wants a COOL drive anymore. Plus, the margin making COOL drives is so small that you can't drop your price on a COOL drive to make it an attractive "midrange" option. So you have to start developing a new controller to make something downright freezing.

    Because there's such an emphasis on performance, controllers and the drives they run become obsolete before a watertight, reliable version of the controller can be made. Of course, they're not really obsolete (there's nothing wrong with the X25-M controller), but they can't compete in a market with drives that show twice the random read performance of an unreliable competitor.

    Constant R&D on new controllers and the demand for performance mean that reliability takes a back seat. You can't sell COOL drives as long as someone makes a COOLER drive, even if COOLER drives have reliability problems. Think about it yourself: would you buy an X25-M knowing that you could get a Vertex 3 instead?
  • Bannon - Thursday, August 11, 2011 - link

    I built a system on an Asus P8Z68 Deluxe motherboard and used two Intel 510 250GB drives with it: one is the system drive and the other the data drive, with firmwares PWG2 and PWG4 respectively. To date I have not experienced a BSOD, BUT my system drive will drop from 6Gbps to 3Gbps for no apparent reason and stay there until I power the system off. My data drive is rock solid at 6Gbps and stays there. I've just started working with Intel, so I don't know where that is going to lead. Hopefully it ends up with a new drive with the latest firmware and 6Gbps performance. Given my druthers I'd rather have this problem than the SandForce BSODs, but I wanted to point out that everything isn't perfect in Intel-land.
  • Coup27 - Thursday, August 11, 2011 - link


    Can we ever expect a 470 review?
  • nish0323 - Thursday, August 11, 2011 - link

    Am I the only one excited about the fact that the OWC drive is the ONLY one with a 5-year warranty on it?! That's nuts...they actually back up the claim of their SSDs' longevity by giving you such a long warranty. I love SSDs.
  • OWC Grant - Friday, August 12, 2011 - link

    Glad you noticed that warranty term, because it's somewhat related to the topic of this article. I've been in direct contact with Anand on this, as the tone of the article is all-encompassing and I wanted to shed some light on that from our perspective.

    While many SF-based SSDs share firmware, not all hardware is the same. Our SSDs have subtle design and/or component differences, which is what we feel reduces or eliminates our products' susceptibility to the BSOD issue.

    The honest truth is we have not been able to create a BSOD issue here with our SSDs using the same procedures that caused other brands' SSDs to experience BSOD. Nor have we received or read one direct report of such an occurrence using our drives.

    And while we cut our teeth, so to speak, in the Mac industry, PLENTY of PC users have our SSDs in their systems as well, and we do extensive testing on a variety of motherboards/system configs to ensure long-term reliable operation.

    More supportive perhaps is the fact that we've had other brand users who experienced BSOD, but after buying our SSD, they reported back that it eliminated any issues they were experiencing.
  • ckryan - Thursday, August 11, 2011 - link

    SSDs should be getting more reliable, not less. As profit margins get slimmer and slimmer, shouldn't manufacturers be producing more reliable drives? Also, Intel might be making less money per drive, but surely their enterprise sales require the same levels of validation as previously.
  • Conscript - Thursday, August 11, 2011 - link

    Am I nuts, after reading multiple reviews from Anand as well as elsewhere, for thinking I'm best off with a 256GB Crucial M4? I've had my 160GB X25-M for a while now, and think I'm going to hand it down to the wifey.
  • Bannon - Thursday, August 11, 2011 - link

    I had a 256GB M4 which worked fine, except it would BSOD if I let my system sleep.
  • arklab - Thursday, August 11, 2011 - link

    A pity you didn't get the new...err, revised OWC 240GB Mercury EXTREME™ Pro 6G SSD.

    It now uses the SandForce 2282 controller.
    While said to be similar to the troubled 2281, I'm wondering if it's different enough to sidestep the BSOD bug.

    It may well also be faster - at least by a bit.

    Only the 240GB has the new controller, not their 120GB, though the 480GB will also be getting it "soon".

    PLEASE get one, test, and add to this review!
  • cigar3tte - Thursday, August 11, 2011 - link

    Anand mentioned that he didn't see any BSODs with the 240GB drives he passed out. AFAIK, only the 120GB drives have the problem.

    Also, the BSOD only happens when you are running the OS on the drive. So if you have the drive as an add-on, you'd just lose the drive but get no BSOD, I believe.

    I returned my 120GB Corsair Force 3 and got a 64GB Micro Center SSD (the first SandForce controller) instead.
  • jcompagner - Sunday, August 14, 2011 - link

    ehh, I have one of the first 240GB Vertex 3s in my Dell XPS 17 Sandy Bridge laptop.

    With the first firmware (2.02) I didn't get BSODs after I got Windows 7 64-bit installed right (using the Intel drivers, fixing the LPM settings in the registry);
    everything was working quite right.

    Then we got 2 firmware versions which were horrible: BSODs almost every other day. Then we got 2.09, which OCZ says is a bit of a debug/intermediate release, not really a final release. And what is the end result? ROCK STABLE!! No BSODs at all anymore.

    But then came the 2.11 release; they stressed that everybody should upgrade, and also upgrade to the latest 10.6 Intel drivers. I thought, OK, let's do it then.

    In 2 weeks: 3 BSODs, at least 2 of them were those F6 errors again...

    Now I think it is possible to go back to 2.09 again, which I am planning to do if I get 1 more hang/BSOD...
  • geek4life!! - Thursday, August 11, 2011 - link

    I thought OCZ's purchase of Indilinx was to have their own drives made "in house".

    To my knowledge they already have some drives out that use the Indilinx controller, with more to come in the future.

    I would like your take on this, Anand.
  • zepi - Thursday, August 11, 2011 - link

    How about digging deeper into SSD behavior in server usage?

    What kind of penalties can be expected if daring admins use a couple of SSDs in a RAID for database/Exchange storage? Or should one expect problems if you run a truckload of virtual machines from a reasonably priced RAID-5 of MLC SSDs?

    Does the lack of TRIM support in RAID kill the performance, and which drivers are the best, etc.?
  • cactusdog - Thursday, August 11, 2011 - link

    Great review, but why wouldn't you use the latest RST driver? It's supposed to fix some issues.
  • Bill Thomas - Thursday, August 11, 2011 - link

    What's your take on the new EdgeTech Boost SSDs?
  • ThomasHRB - Thursday, August 11, 2011 - link

    Thanks for another great article Anand, I love reading all the articles on this site. I noticed that you have also managed to see the BSOD issues that others are having.

    I don't know if my situation is related, but from personal experience and a bit of trial and error I found that unstable power seems to be related to the frequency of these BSOD events. I recently built a new system while I was on holiday in Brisbane, Australia.
    Basic Specs:
    Mainboard - Gigabyte GA-Z68X-UD3R-B3
    Graphics - Gigabyte GV-N580UD-15I
    CPU - Intel Core i7-2600K (stock clock)
    Cooler - Corsair H60 (great for a computer running in countries where ambient temps regularly reach 35 degrees Celsius)
    PSU - Corsair TX750

    In Brisbane my machine ran stable for 2 solid weeks (no shutdowns, only restarts during software installations, OS updates, etc.).

    However, when I got back to Fiji and powered up my machine, I had these BSODs every day or 2 (I shut down my machine during the day when I am at work and at night when I am asleep). CPU temp never exceeded 55 degrees C (measured with CoreTemp and RealTemp), and GPU temp also never went above 60 degrees C (measured with the nvidia gadget).

    All my computers sit behind an APC Back-UPS RS (BR1500). I also have an Onkyo TX-NR609 hooked up to the HDMI-mini port, so I disconnected that for a few days, but I saw no difference.

    However, last Friday a major power spike caused my broadband router (D-Link DIR-300) to crash, and I had to reset the unit to get it working. My machine also had a BSOD at that exact same moment, so I thought it possible that a power spike was being transmitted through the Ethernet cable from my ISP (the only thing I do not have an isolation unit for).

    So the next day I bought and installed an APC ProtectNET (PNET1GB), and I have not had a single BSOD in almost a full week of running (no shutdowns, and my Onkyo has been hooked back up).

    Although this narrative is long and reflects nothing more than my personal experience, I at least found it strange that my BSODs seem to have had nothing to do with the Vertex 3 and more to do with random power fluctuations in my living environment.

    And it may be that other people are having the same problem I had, attributing it to a particular piece of hardware simply because other people have made the same attribution.

    Kind Regards.
    Thomas Rodgers
  • etamin - Thursday, August 11, 2011 - link

    Great article! The only thing holding me back from buying an SSD is that secure data erasure is difficult on an SSD, and a full rewrite of the drive is neither time-efficient nor good for the longevity of the drive, or so I have heard from a few other sources. What is your take on this secure-deletion dilemma (if it actually exists)?
  • lyeoh - Friday, August 12, 2011 - link

    AFAIK, erasing a "conventional" 1 TB drive is not very practical either (a full overwrite takes about 3 hours).

    a) Use encryption (refer to the "incompressible" benchmarks), use the more reliable SSDs, and use hardware acceleration or a fast CPU.
    b) Use physical destruction - e.g. thermite, throwing it into lava, etc :).
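[Editor's note: the "about 3 hours" figure above is easy to sanity-check with back-of-the-envelope arithmetic. The numbers below are illustrative round figures (1 TB capacity, ~100 MB/s sustained sequential write, plausible for a 2011-era consumer hard drive), not measurements from any specific drive:]

```python
# Estimate the time for a full-drive overwrite ("secure" rewrite).
# Assumed numbers: 1 TB capacity, ~100 MB/s sustained sequential write.

def overwrite_hours(capacity_gb, write_mb_per_s):
    """Hours needed to write every byte of the drive once."""
    total_mb = capacity_gb * 1000            # decimal GB -> MB
    seconds = total_mb / write_mb_per_s      # bytes written / write rate
    return seconds / 3600

print(round(overwrite_hours(1000, 100), 1))  # ~2.8 hours for 1 TB at 100 MB/s
```

Which lands right on the ~3 hours quoted; a real drive's sustained rate drops toward the inner tracks, so in practice it is a bit slower.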
  • bernardl - Thursday, August 11, 2011 - link

    I am pretty surprised by how little mention the OWC SSD gets in your introduction and conclusion. It seems to be among the top 3 performers in every single one of your tests, and their products have proven extremely reliable and durable over the years.

    I am using 3 of their SSDs (Mac Pro boot, Mac mini boot, and external storage for a music server) and have experienced zero issues and stable, fast performance.

  • arntc - Friday, August 12, 2011 - link

    I'm not sure if I read correctly between the lines: should one stick to the previous generation of consumer SSDs when on the prowl for a system disk for a notebook?

    If a 3-dimensional comparison of price/performance/reliability were charted, which SSD would currently come out on top (subjective comments allowed)?
  • 86waterpumper - Friday, August 12, 2011 - link

    I just bought the 120 GB version of the Mercury Extreme 6G for my Sandy Bridge build a few weeks ago. I sure wish I had known they were coming out with faster drives :( Oh well, no BSOD issues so far, and I hope I don't see one!
  • FunBunny2 - Friday, August 12, 2011 - link

    My understanding is that NAND is measured in bits because the controllers see the data as bits, not bytes, leveling across available (addressable) dies. Yes?
  • vashtyphoon - Friday, August 12, 2011 - link

    Thanks for the article, very informative, but it made me cringe.

    I just placed an order for a PC build based on the Sandy Bridge guide, with the OCZ Vertex 2, but I changed the motherboard to an ASUS P8Z68-V LE, the same base model as the setup that caused the BSODs with a Vertex 3 on page 2... Is this really going to mess me up, or do the Vertex 2s have a better track record?

    Any thoughts?
  • brakhage - Friday, August 12, 2011 - link

    I just got 3 OCZ drives, 2 vertex3 60's, and a Solid3 120. I quickly encountered the BSOD/freeze issue on the 2 60s (OS drives). After extensive research and thread-chasing, it seems like OCZ has a solid solution, though it's not simple.

    Basically, you update RST and INF drivers (and throw in a BIOS update if possible), then flash the firmware (2.11), clear cmos and fire it up. I've been BSOD-free ever since... EXCEPT when I use the Solid3, which I got a few days later, and haven't flashed yet. (I'm using it for additional programs, so when I play a game that's on the Solid3, I freeze up. Maybe. It may be an overclocking issue there, I haven't had time to figure it out. I just got that Solid 3 a couple days ago - the same day I OC'd the new machine.)

    Full details on this fix can be found on the OCZ forums; once in a terse post, once in a more verbose one.

    So: the firmware flash is a bit of a problem. The tool they provide didn't detect my drives, and it isn't recommended to flash a drive from the OS stored on that drive. I installed windows 7 on a second (spinner) hdd, and tried the tool; it still didn't work. (Maybe because they're in RAID 0?) So I put Ubuntu on the spinner and flashed them through that with no problem.

    (The HDD has since been disconnected, and I haven't gotten around to hooking it up again to flash the solid3, but I'll try to do that this weekend - hopefully that will fix this one too.)

    All this said, the above posters are absolutely right - this should never have happened. However, I'm WAAYYY too impatient to wait for Sandforce to solve the problem, and that impatience extends to waiting for programs to load or for the system to boot. SSDs are like Linux - freaking awesome, but, yes, they aren't the plug-n-play, fire-up-and-forget, McDonalds-style components we've come to expect when running big name OS's. Frustrating, yes, but totally worth it.
  • KPOM - Saturday, August 13, 2011 - link

    Given the ongoing reliability issues with the SandForce drives, perhaps Apple is justified in using "slower" Toshiba and Samsung SSDs. I've had SSDs since my 2008 Rev B MacBook Air and haven't had a problem with them (the 2008 had a Samsung, my 2010 a Toshiba, and my 2011 a Samsung).
  • Ao1 - Saturday, August 13, 2011 - link

    Lal, can you please provide some statistics to back up your claim that the 8MB bug is a plague? How many occurrences of the bug have been reported, and how many 310s have been sold?

    Can you also please confirm why you suspect that Intel has cut corners, resulting in deficiencies in quality-control procedures? Perhaps half of the validation team was made redundant; or is that statement just outrageous speculation?
  • mikeyd55 - Saturday, August 13, 2011 - link

    In 2011, for a consumer to even have to be concerned about technology issues like this is very disconcerting and bad for everyone. Don't release a product when it's not stable and/or hasn't been thoroughly tested, even if it has to cost more as a result. It's cheaper in the end for all! It reminds me of my experiences with cell/smart phones that are continuously released to consumers despite their software/hardware/firmware not being ready for prime time.

    Regarding my recent (June '11) SSD build: OCZ Vertex 3 MAX IOPS 120 GB (updated to firmware 2.06), no hard drive, Intel DZ68DB motherboard (updated to the second BIOS revision), and Windows 7 Home Premium 64-bit. I've been fortunate, so far at least, to not have experienced any BSODs, although under this cloud of uncertainty I'm especially leery of updating the motherboard BIOS, firmware, or any drivers until SandForce gets a true handle on this problem.
  • 86waterpumper - Saturday, August 13, 2011 - link

    Well, I have had two BSODs so far just this weekend :( The system is a 2500K, non-overclocked, running in the normal temperature ranges. The first BSOD happened during a Windows update, so I chalked it up to that, but the second one happened a while ago with the system just sitting there idle. Looks like the OWC drives are affected too, for sure. Now my question is: how do I prove it is the drive causing the BSOD, lol. Also, is there any firmware newer than 3.19 out yet to install, or what is the fix?
  • rigged - Sunday, August 14, 2011 - link

    Are you using the SF-2281 or SF-2282 based OWC drive?

    Only the new 240GB and 480GB drives from OWC use the SF-2282.

    Under Specs:

    Controller: SandForce 2282 Series
  • Justin Case - Sunday, August 14, 2011 - link

    It's not just the BSOD. Even systems that don't crash have frequent freezes lasting anything up to 90 seconds. That's enough to make network transfers abort, connections to game servers drop, etc.

    I've tried three Corsair drives on multiple platforms, and I know people who have used those and also OCZ. Not a single drive was 100% stable on any platform. They tended to crash more on Intel chipsets and freeze more on AMD chipsets (sometimes recoverable, sometimes a hard lock), but NOT A SINGLE ONE was problem-free for more than 2 or 3 days in a row (often you'll get two or three freezes within the same hour).

    The first job of a drive is to reliably hold your data. People use SSDs to install their OS and applications. It takes days to reinstall and recover from errors. It's irrelevant if some drive gives you 500 happybytes in some benchmark when the same drive keeps losing your thesis or getting you killed in Team Fortress. I have systems with Raptors that have been running for 5 years without a single error.

    If you have any problems (which you will), don't let them string you along with nonsense about your cables or obscure BIOS options or promises about future fixes. Return the drive and demand a refund. Both OCZ and Corsair are still selling drives that they KNOW to be defective, and removing any reference to those problems from the support section of their sites (you'll still find thousands of complaints in their user forums, though). Demanding refunds (or starting a class action suit) seems to be the only language they understand.
  • SjarbaDarba - Sunday, August 14, 2011 - link

    I experience some hard locks too, mainly during gaming, and only since upgrading to a 120GB Vertex 3.

    System is an X58-UD7 + i7-960, 2 GTX570 OC Sli, Seasonic X-850 80+ GOLD, 6GB Corsair DDR3 1600C8.

    Was originally using 2 x 300GB Velociraptors in RAID 0 with WD1002FAEX and Seagate 2TB XT, stable for 1-2 months before upgrading to the V3. Storage configuration since upgrade is 120GB Vertex3, WD1002FAEX and Raptors RAID 0.

    System perfectly stable under 600GB RAID 0 OS with Crysis 2, CS:S, LoL, L4D, L4D2 and Borderlands all playing stable at all loads with hardware monitoring active, no problems found with any hardware or software at this point, system performed flawlessly for all tasks.

    Dropped the V3 in with the typical "It's fast - but it could be faster" attitude we all know and love and instantly started experiencing ... whackness. System works flawlessly 99% of the time, however, a few times a week I will lock up and need to power cycle - I have the SSD running in AHCI with TRIM etc. enabled, page file, defrag etc. turned off and pretty much every detail of the drive perfectly specced for optimum performance.

    If I lock up and have to cycle, upon restart the SATA controller the SSD is attached to will hang at BIOS and not detect the V3 - however - cycling again at this point allows the SSD to be detected within ~1 second and Windows boots normally.

    At this point, however (and using nVidia 275.33 drivers) returning to desktop boots me in 800x600 resolution with no nVidia control panel and a further power cycle is required again to reset the resolution.

    I have yet to test this problem with the nVidia 280.16 drivers, but I haven't had stability problems since then.

    Sorry for any tl;dr, I just thought Anand might like to hear about a strange error I've encountered with the SF controller.

    P.S: System is 3DMark, Furmark and Prime stable, it just has some whack locks randomly and the SSD disappears completely for a power cycle.
  • readyrover - Monday, August 15, 2011 - link

    I was going to dive into my first SSD with a Bulldozer build on the upcoming horizon... until this all shakes out, absolutely no way. My usage is processing large music files on a digital audio workstation with multiple time-based effects and multiple tracks of instruments. I have been experiencing some latency bottlenecks and thought, "wow," an SSD is an instant fix!

    If they have ironed out the problems and the reviews' negative percentages drop back below the astounding 20% from my recent research... then perhaps a year from now. Bulldozers should be less expensive then as well.

    Just my humble opinion, but I can't roll the dice on a hit-and-miss crash... "Please, Mr. $120-an-hour guitarist, would you wait an hour for me to fix the computer and replay that absolutely inspired, one-of-a-kind improvisation... AGAIN?"
  • Gothmoth - Friday, August 19, 2011 - link

    I have a few ASUS Z68-V Pro boards (three, to be exact).

    All of them have a Vertex 3 120 GB SSD as the C drive.
    All have 16 GB of G.Skill RAM and run Win 7 64-bit SP1.

    I have not had a single issue with the Vertex 3s since I bought them (13 April 2011).

    I am still running the first firmware.
    Thank god I avoided updating to firmware v2.06 or v2.09.

    I put a friend's Vertex 3 240 GB, with firmware 2.06, in my system.
    We could reproduce the BSOD after 1 hour.
    He has constant crashes on his Gigabyte motherboard-based system.

    We put one of my Vertex 3 120 GB SSDs in his system and it ran flawlessly for 2 days.
  • twindragon6 - Friday, August 26, 2011 - link

    I know the market sucks! But I would rather pay more for something that actually works than pay less for something that doesn't and be stuck with an expensive paperweight!
  • alpha754293 - Friday, September 02, 2011 - link


    Does that BSOD bug only affect drives that are boot drives? I.e., what would happen if the test drives were slave/data/non-OS drives? Do they still BSOD the same way?
  • Keith2468 - Monday, December 12, 2011 - link

    Digital people tend to think of digital issues when looking for the causes of computer hardware and software failures. But sometimes the failures are not digital in origin.

    The power supply may well be critical to SSD failures.

    What causes SSD failures? Largely power disturbances to the SSD.

    Why are SSDs with smaller IOPS and smaller caches less likely to fail?
    Less data to move from volatile RAM cache to Flash when power disturbances occur.

    Why should you not use a notebook SSD in a desktop?
    A notebook SSD designer will typically assume that the notebook's battery means he doesn't have to design for power disturbances.

    "The design of an SSD's power down management system is a fundamental characteristic of the SSD which can determine its suitability and compatibility with user operational environments. Systems integrators must take this into account when qualifying SSDs in new applications - because subtle differences in OS timings, rack power loading and rack logic affect some types of SSDs more than others. Users should be aware that power management inside the SSD (a factor which doesn't get much space in most product datasheets) is as important to reliable operation as management of endurance, IOPS, cost and other headline parameters."
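[Editor's note: the "less data to move from volatile cache to flash" point above can be made concrete with simple arithmetic. The numbers below are purely illustrative, not figures from any drive's datasheet: the hold-up time a drive's capacitors must bridge on power loss scales directly with the amount of dirty cache that has to reach NAND.]

```python
# How long an SSD must stay powered after power is cut in order to flush
# its volatile write cache to NAND. All numbers are hypothetical.

def flush_time_ms(cache_mb, flash_write_mb_per_s):
    """Worst-case milliseconds to commit a full dirty cache to flash."""
    return cache_mb / flash_write_mb_per_s * 1000

# A small 32 MB cache vs. a large 512 MB cache, both at an assumed
# 200 MB/s sustained flash-write rate:
print(flush_time_ms(32, 200))   # 160.0 ms of hold-up needed
print(flush_time_ms(512, 200))  # 2560.0 ms -- 16x the energy the capacitors must supply
```

On this rough model a drive with a 16x larger cache needs 16x the stored energy to survive the same power cut, which is consistent with the comment's observation that smaller-cache drives fail less often under power disturbances.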
  • jfraser7 - Friday, November 14, 2014 - link

    This article is very useful because Mac OS X 10.10 Yosemite dropped all support for third-party Solid State Drives, except for those which use SandForce controllers.
  • jfraser7 - Friday, November 14, 2014 - link

    Also, all three of Kingston's recent Solid State Drive lines (V300, KC300 & HyperX) use SandForce controllers.
