SandForce was first to announce and preview its 2011 SSD controller technology. We first talked about the controller late last year, got a sneak peek at its performance at CES this year, and then just a couple of months ago brought you a performance preview based on pre-production hardware and firmware from OCZ. Although Vertex 3 shipments were originally targeted for March, thanks to a lot of testing and four new firmware revisions since I previewed the drive, the official release got pushed back to April.

What I have in my hands is a retail 120GB Vertex 3 with what OCZ is calling its final, production-worthy client firmware. The Vertex 3 Pro has been pushed back a bit as the controller/firmware still have to make it through more testing and validation.

I'll get to the 120GB Vertex 3 and how its performance differs from the 240GB drive we previewed not too long ago, but first there are a few somewhat-related issues I have to get off my chest.

The Spectek Issue

Last month I wrote that OCZ had grown up after announcing the acquisition of Indilinx, an SSD controller manufacturer that was quite popular in 2009. The Indilinx deal has now officially closed, and OCZ is the proud owner of the controller company for a relatively paltry $32M in OCZ stock.

The Indilinx acquisition doesn't mean much for OCZ today, however in the long run it should give OCZ at least a fighting chance at being a player in the SSD space. Keep in mind that OCZ is now fighting a battle on two fronts. Above OCZ in the chain are companies like Intel, Micron and Samsung. These are all companies with their own foundries that produce the NAND that goes into their SSDs, their own controllers, or both. Below OCZ are companies like Corsair, G.Skill, Patriot and OWC. These are more of OCZ's traditional competitors, mostly acting as assembly houses or just rebadging OEM drives (Corsair is a recent exception as it has its own firmware/controller combination with the P3 series).

By acquiring Indilinx OCZ takes one more step up the ladder towards the Intel/Micron/Samsung group. Unfortunately at that level, there's a new problem: NAND supply.

NAND Flash is a commodity like any other: its price is subject to variation based on a myriad of factors. If you control the fabs, then you generally have a good idea of what's coming. There's still a great deal of volatility even for a fab owner (process technologies are very difficult to roll out and there is always the risk of manufacturing issues), but generally speaking you've got a better chance of steady supply and controlled costs if you're making the NAND yourself. If you don't control the fabs, you're at their mercy. While buying Indilinx gave OCZ the ability to be independent of any controller maker if it wanted to, OCZ is still at the mercy of the NAND manufacturers.


Intel NAND

Currently OCZ ships drives with NAND from four different companies: Intel, Micron, Spectek and Hynix. The Intel and Micron stuff is available in both 34nm and 25nm flavors, Spectek is strictly 34nm and Hynix is 32nm.

Each NAND supplier has its own list of parts with their own specifications. While they're generally comparable in terms of reliability and performance, there is some variance, not just on the NAND side but also in how controllers interact with the aforementioned NAND.

Approximately 90% of what OCZ ships in the Vertex 2 and 3 uses Intel or Micron NAND. Those two tend to be the most interchangeable as they physically come from the same plant. Intel/Micron have also been at the forefront of driving new process technologies, so it makes sense to ship as much of that stuff as you can given the promise of lower costs.

Last month OWC published a blog accusing OCZ of shipping inferior NAND on the Vertex 2. OWC requested a drive from OCZ and it was built using 34nm Spectek NAND. Spectek, for those of you who aren't familiar, is a subsidiary of Micron (much like Crucial is a subsidiary of Micron). IMFT manufactures the NAND and the Micron side takes and packages it - some of it is used or sold by Micron, some of it is "sold" to Crucial and some of it is "sold" to Spectek. The difference is that Spectek adds its own branding to the NAND.

OWC published this photo of the NAND used in their Vertex 2 sample:

I don't know the cause of the bad blood between OWC and OCZ nor do I believe it's relevant. What I do know is the following:

The 34nm Spectek parts pictured above are rated at 3000 program/erase cycles. I've already established that 3000 cycles is more than enough for a desktop workload with a reasonably smart controller. Given the extremely low write amplification I've measured on SandForce drives, I don't believe 3000 cycles is an issue. It's also worth noting that 3000 cycles is at the lower end of what's industry standard for 25nm/34nm NAND. Micron branded parts are also rated at 3000 cycles, however I've heard that's a conservative rating.
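The endurance claim above is easy to sanity-check with back-of-the-envelope arithmetic. The Python sketch below estimates how long a 3000-cycle rating lasts on a 120GB drive; the daily-write figure and the sub-1.0 write amplification are illustrative assumptions (SandForce's compression can push write amplification below 1.0 on typical desktop data), not measured values for any specific drive:

```python
# Back-of-the-envelope NAND endurance estimate.
# The workload figures below are illustrative assumptions, not measurements.

capacity_gb = 120            # drive capacity
pe_cycles = 3000             # rated program/erase cycles (Spectek 34nm -AL grade)
write_amplification = 0.6    # assumed; SandForce can dip below 1.0 via compression
host_writes_gb_per_day = 10  # assumed heavy desktop workload

# Total host writes the NAND can absorb before exhausting its p/e rating
total_host_writes_gb = capacity_gb * pe_cycles / write_amplification

years = total_host_writes_gb / host_writes_gb_per_day / 365
print(f"~{years:.0f} years at {host_writes_gb_per_day} GB/day")  # ~164 years
```

Even if you triple the daily writes and double the write amplification, the drive outlives any reasonable desktop upgrade cycle, which is why the 3000-cycle rating by itself isn't a red flag.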

If you order NAND from Spectek you'll know that the -AL on the part number is the highest grade that Spectek sells; it stands for "Full spec w/ tighter requirements". I don't know what Spectek's testing or validation methodology is, but the NAND pictured above is the highest grade Spectek sells and it's rated at 3000 p/e cycles. That is exactly as much as I know about Intel NAND and Micron NAND. It's quite possible that the Spectek branded stuff is somehow worse, I just don't have any information that shows me it is.

OCZ insists that there's no difference between the Spectek stuff and standard Micron 34nm NAND. Given that the NAND comes out of the same fab and carries the same p/e rating, the story is plausible. Unless OWC has done some specific testing on this NAND to show that it's unfit for use in an SSD, I'm going to call this myth busted.

The Real Issue
Comments

  • GrizzledYoungMan - Thursday, April 7, 2011 - link

    Thank you Anand for your vigilance and consumer advocacy. OCZ's disorganization remains a problem for their customers (and I'm one of them, running OCZ SSDs in all my systems).

    Still, I am disappointed by the fact that your benchmarks continue to exaggerate the difference between SSDs, instead of realistically portraying the difference between SSDs that a user might notice in daily operation. Follow my thinking:

    1. The main goal of buying an SSD, or upgrading from one SSD to another, is to improve system responsiveness as it appears to the user.
    2. No user particularly cares about the raw performance of their drives as much as how much performance is really available in real-world use.
    3. Thus, tests should focus on timing and comparing common operations, in both single-tasking and multitasking scenarios (like booting, application loading, large catalog/edit file/database loading and manipulation for heavy-duty desktop content creation applications and so on).
    4. In particular, Sandforce is a huge concern when comparing benchmarks to real world use. Sure, they kill in the benchmarks everyone uses, but many of the most resource intensive (and especially disk intensive) desktop tasks are content creation related (photo and video, primarily) which use incompressible files. How is it that no one has investigated the performance of Sandforce in these situations?

    Users here have complained that if we did only #3, only a small difference between SSDs would be apparent. But to my eyes, THAT IS EXACTLY WHAT WE NEED TO KNOW. If the performance delta between generations of SSDs is not really significant, and the price isn't moving, then this is a problem for the industry and consumers alike.

    However, creating the perception with unrealistically heavy trace programs that SSDs have significant performance differences (or that different flash types and processes have significant performance differences) when you haven't yet demonstrated that there are real world performance differences in terms of system responsiveness (if anything, you've admitted the opposite on a few occasions) strikes me as a well intentioned but ultimately irresponsible testing method.

    I'm sure it's exciting to stick it to OCZ. But really, they are one manufacturer among many, and not the core issue. The core issue is this charade we're all participating in, in which we pretend to understand how SSDs really improve the user experience when we have barely scratched the surface of this issue (or are even heading in the wrong direction).
  • GrizzledYoungMan - Thursday, April 7, 2011 - link

    Wow, typos galore there. Too early, too much going on, too little coffee. Sorry.
  • kmmatney - Thursday, April 7, 2011 - link

    The Anand Storage Bench 2010 "Typical workload" is about as close as you can get (IMO) to a real-world test. Maybe it's a heavier multitasking scenario than most of us would use, but I think it's the best test out there to give a real-world assessment of SSDs. Just read the description of the test - I think it already has what you are asking for:

    "The first in our benchmark suite is a light/typical usage case. The Windows 7 system is loaded with Firefox, Office 2007 and Adobe Reader among other applications. With Firefox we browse web pages like Facebook, AnandTech, Digg and other sites. Outlook is also running and we use it to check emails, create and send a message with a PDF attachment. Adobe Reader is used to view some PDFs. Excel 2007 is used to create a spreadsheet, graphs and save the document. The same goes for Word 2007. We open and step through a presentation in PowerPoint 2007 received as an email attachment before saving it to the desktop. Finally we watch a bit of a Firefly episode in Windows Media Player 11."
  • GrizzledYoungMan - Friday, April 8, 2011 - link

    Actually, the storage bench is the opposite of what I'm asking for. I've written about this a couple of times, but my complaint is basically that benchmarks exaggerate the difference between SSDs, and that in real-world use it might be impossible to tell one apart from another.

    The Anand Storage Benches might be the worst offenders in this regard, since they dutifully exaggerate the difference between SSD generations while giving the appearance of a highly precise way to test "real world" workloads.

    In particular, the Sandforce architecture is an area of concern. Sure, it blows away everyone in the benchmarks, but the fact that it becomes HDD-slow when given an incompressible workload really has to be explored further. After all, the most disk-intensive desktop workloads all involve manipulating highly compressed (i.e., not further compressible) image files, video files and to a lesser degree audio files. On more than one occasion, I've seen people use Sandforce drives as scratch disks for this sort of thing (given their high sequential writes, it would seem ideal) and be deeply disappointed by the resulting performance.

    No response yet from Anand on this. But I'll keep posting. It's nothing personal - if anything, I'm posting here out of respect for Anand's leadership in testing.
  • KenPC - Thursday, April 7, 2011 - link

    Nice write up. And - excellent results getting OCZ to grow up a little bit more.

    As a consumer, the solution of SKUs based on NAND will be confusing and complicated. How the heck am I supposed to know if the xxx.34 or the xxx.25 or some future xxxx.Hyn34 or xxxx.IMFT25 is the one that will meet one of the many performance levels offered?
    A complicating factor that you mentioned in the article, is that for a specific manufacturer and process size, there can be varying levels of NAND performance.

    I strongly urge you to consider working with OCZ to 'bin' the drives with established benchmarks that focus on BOTH random and TRUE non-compressible data rates. SKU suffixes would then describe the binned performance.

    You also have the opportunity to help set SSD 'industry standard benchmarks' here!

    Then give OCZ the license to meet those binned performance levels with the best/lowest cost methods they can establish.

    But until OCZ comes up with some 'assured performance level', OCZ is just off of my SSD map.

    KenPC
  • KenPC - Thursday, April 7, 2011 - link

    Yes, a reply to my own post......

    But how about a unique and novel idea?

    What if.. a Vertex 2 is a Vertex 2 is a Vertex 2, as measured by ALL of the '4 pillars' of SSD performance?

    Vertex 3's are Vertex 3's, and so on......

    If different nand/fw/controller results in any of the parameters 'out of spec', then that version never ships as a 'Vertex 2'.

    After all, varying levels of performance is why there is a vertex, a vertex 2, and an onyx and an agility, and an onyx2, and an agility2, and etc etc within the OCZ SSD line.

    Why should the consumer have to look at a second tier of detail to know the product's performance?

    KenPC
  • strikeback03 - Friday, April 8, 2011 - link

    So any time Sandforce/OCZ upgrades the firmware you need a new product name? If something happened in the IMFT process and they had to buy up Samsung NAND instead, new product? And of course everyone wants to wait for reviews of the new drives before buying.

    I personally don't mind them changing stuff as necessary so long as they maintain some minimum performance that they advertise. The real-world benchmarks in the Storage Review articles showed a 2-5% difference; to me that is within margin of error and not a problem for anyone not benchmarking for fun. The Hynix NAND performing at only ~70% of the old parts is a problem, not so much the 25nm ones.
  • semo - Thursday, April 7, 2011 - link

    You've done well. I hope you continue to do this kind of work as it benefits the general public and in this particular case, keeps the bad PR away from a very promising technology.

    The OCZ Core and other JMicron drives did plenty to slow down the progress of SSD adoption into the mainstream. You caught the problem earlier than anyone else and fixed it. This time around it took you longer because of other high-priority projects. I think your detective and lobbying work is what keeps us techies checking AT daily. In my opinion, the Vertex 2 section of this article deserves home page space and a catchy title!

    Finally, let's not forget that OCZ have not yet fixed this issue. People may still have 25nm drives without knowing it, or without understanding the problem. OCZ must issue a recall of all mislabeled drives.
  • Shadowmaster625 - Thursday, April 7, 2011 - link

    It is ridiculous to expect a company to release so many SKUs based on varying NAND types. It costs a company big money to release and keep track of all those SKUs. When you look at the actual real world differences between the different NAND types, it only comes down to a few percentage points of difference. It is like comparing different types of motherboard RAM. It is a waste of time and money to even bother looking at one vs another. OCZ should just tell you all to go pound sand. I suspect they will eventually, if you keep nitpicking like this. The 25nm Vertex 2 is virtually identical to the 34nm version. If you run a complete battery of real world and synthetic tests, you clearly see that they are within a few % of each other. There is no reason for OCZ to waste any more time or money trying to placate a nitpicking nerd mob.
  • semo - Thursday, April 7, 2011 - link

    The real issue was that it wasn't just a few % difference. Some V2 drives were nowhere near the rated capacity with the 25nm NAND, so if you bought two V2 drives and they happened to be different versions, RAID wouldn't work. There is still no way to confirm whether the V2 you are trying to buy is one of the affected drives, as OCZ haven't issued a recall or pulled affected drives from retail shelves. The best way to avoid unnecessary hassle is not to buy OCZ at all. Corsair did a much better job of informing customers about the transition:
    http://www.corsair.com/blog/force25nm/

    The performance difference was higher than a few % as well.
