Original Link: http://www.anandtech.com/show/4256/the-ocz-vertex-3-review-120gb

SandForce was first to announce and preview its 2011 SSD controller technology. We first talked about the controller late last year, got a sneak peek at its performance this year at CES and then just a couple of months ago brought you a performance preview based on pre-production hardware and firmware from OCZ. Although Vertex 3 shipments were originally scheduled for March, thanks to a lot of testing and four new firmware revisions since I previewed the drive, the official release got pushed back to April.

What I have in my hands is a retail 120GB Vertex 3 with what OCZ is calling its final, production-worthy client firmware. The Vertex 3 Pro has been pushed back a bit as the controller/firmware still have to make it through more testing and validation.

I'll get to the 120GB Vertex 3 and how its performance differs from the 240GB drive we previewed not too long ago, but first there are a few somewhat-related issues I have to get off my chest.

The Spectek Issue

Last month I wrote that OCZ had grown up after announcing the acquisition of Indilinx, a SSD controller manufacturer that was quite popular in 2009. The Indilinx deal has now officially closed and OCZ is the proud owner of the controller company for a relatively paltry $32M in OCZ stock.

The Indilinx acquisition doesn't mean much for OCZ today; in the long run, however, it should give OCZ at least a fighting chance at being a player in the SSD space. Keep in mind that OCZ is now fighting a battle on two fronts. Above OCZ in the chain are companies like Intel, Micron and Samsung. These are all companies with their own foundries that produce the NAND that goes into their SSDs, and in some cases the controllers as well. Below OCZ are companies like Corsair, G.Skill, Patriot and OWC. These are more of OCZ's traditional competitors, mostly acting as assembly houses or just rebadging OEM drives (Corsair is a recent exception as it has its own firmware/controller combination with the P3 series).

By acquiring Indilinx OCZ takes one more step up the ladder towards the Intel/Micron/Samsung group. Unfortunately at that level, there's a new problem: NAND supply.

NAND flash is a commodity like any other: its price is subject to variation based on a myriad of factors. If you control the fabs, then you generally have a good idea of what's coming. There's still a great deal of volatility even for a fab owner; process technologies are very difficult to roll out and there is always the risk of manufacturing issues, but generally speaking you've got a better chance of steady supply and controlled costs if you're making the NAND. If you don't control the fabs, you're at their mercy. While buying Indilinx gave OCZ the ability to be independent of any controller maker if it wanted to, OCZ is still at the mercy of the NAND manufacturers.

Intel NAND

Currently OCZ ships drives with NAND from four different companies: Intel, Micron, Spectek and Hynix. The Intel and Micron stuff is available in both 34nm and 25nm flavors, Spectek is strictly 34nm and Hynix is 32nm.

Each NAND supplier has its own list of parts with their own list of specifications. While they're generally comparable in terms of reliability and performance, there is some variance not just on the NAND side but also in how controllers interact with that NAND.

Approximately 90% of what OCZ ships in the Vertex 2 and 3 is using Intel or Micron NAND. Those two tend to be the most interchangeable as they physically come from the same plant. Intel/Micron have also been on the forefront of driving new process technologies so it makes sense to ship as much of that stuff as you can given the promise of lower costs.

Last month OWC published a blog accusing OCZ of shipping inferior NAND on the Vertex 2. OWC requested a drive from OCZ and it was built using 34nm Spectek NAND. Spectek, for those of you who aren't familiar, is a subsidiary of Micron (much like Crucial is a subsidiary of Micron). IMFT manufactures the NAND, the Micron side of it takes and packages it - some of it is used or sold by Micron, some of it is "sold" to Crucial and some of it is "sold" to Spectek. Only Spectek adds its own branding to the NAND.

OWC published this photo of the NAND used in their Vertex 2 sample:

I don't know the cause of the bad blood between OWC and OCZ nor do I believe it's relevant. What I do know is the following:

The 34nm Spectek parts pictured above are rated at 3000 program/erase cycles. I've already established that 3000 cycles is more than enough for a desktop workload with a reasonably smart controller. Given the extremely low write amplification I've measured on SandForce drives, I don't believe 3000 cycles is an issue. It's also worth noting that 3000 cycles is at the lower end of what's industry standard for 25nm/34nm NAND. Micron branded parts are also rated at 3000 cycles, however I've heard that's a conservative rating.

If you order NAND from Spectek you'll know that the -AL suffix on the part number denotes the highest grade that Spectek sells; it stands for "Full spec w/ tighter requirements". I don't know what Spectek's testing and validation methodologies are, but the NAND pictured above is the highest grade Spectek sells and it's rated at 3000 p/e cycles. That's the same amount of information I have about Intel NAND and Micron NAND. It's quite possible that the Spectek branded stuff is somehow worse; I just don't have any information that shows me it is.

OCZ insists that there's no difference between the Spectek stuff and standard Micron 34nm NAND. Given that the NAND comes out of the same fab and carries the same p/e rating, the story is plausible. Unless OWC has done some specific testing on this NAND to show that it's unfit for use in an SSD, I'm going to call this myth busted.

The Real Issue

While I was covering MWC a real issue with OCZ's SSDs erupted back home: OCZ aggressively moved to high density 25nm IMFT NAND and as a result was shipping product under the Vertex 2 name that was significantly slower than it used to be. Storage Review did a great job jumping on the issue right away.

Let's look at what caused the issue first.

When IMFT announced the move to 25nm it mentioned a doubling in NAND capacity per die. At 25nm you could now fit 64Gbit of MLC NAND (8GB) on a single die, twice what you could get at 34nm. With twice the density in the same die area, costs could come down considerably.

An IMFT 25nm 64Gbit (8GB) MLC NAND die

Remember that NAND manufacturing is no different from microprocessor manufacturing. Cost savings aren't realized on day one because yields are usually higher on the older process. Newer wafers are usually more expensive as well. So although you get a ~2x density improvement going to 25nm, your yields are lower and wafers are more expensive than they were at 34nm. Even Intel was only able to cut prices by at most $110 in going from the X25-M G2 to the SSD 320.

OCZ was eager to shift to 25nm. Last year SandForce was the first company to demonstrate 25nm Intel NAND on an SSD at IDF, clearly the controller support was there. As soon as it had the opportunity to, OCZ began migrating the Vertex 2 to 25nm NAND.

SSDs are a lot like GPUs, they are very wide, parallel beasts. While a GPU has a huge array of parallel cores, SSDs are made up of arrays of NAND die working in parallel. Most controllers have 8 channels they can use to talk to NAND devices in parallel, but each channel can often have multiple NAND die active at once.
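The channel/die relationship described above can be sketched with a toy model. All of the numbers here (per-die bandwidth, interleave depth) are illustrative assumptions, not SandForce's actual figures; the point is simply that aggregate bandwidth tracks the number of die the controller can keep busy:

```python
# Toy model of SSD parallelism: aggregate write bandwidth scales with the
# number of NAND die the controller can keep active at once, capped at
# channels * interleave. Per-die speed and interleave depth are assumptions.

def aggregate_write_mbps(total_die, channels=8, interleave=2, per_die_mbps=25):
    """Bandwidth is limited by how many die can be active concurrently."""
    active = min(total_die, channels * interleave)
    return active * per_die_mbps

# 16 die fill all 8 channels with 2-way interleave...
full = aggregate_write_mbps(16)
# ...but double the density per die and an equal-capacity drive has only
# 8 die: half the parallelism, half the bandwidth in this model.
half = aggregate_write_mbps(8)
```

This is exactly why halving the die count at a given capacity point, as happened in the 25nm transition discussed below, costs performance even though the controller itself is unchanged.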

A Corsair Force F120 using 34nm IMFT NAND

Double the NAND density per die and you can guess what happened next - performance went down considerably at certain capacity points. The most impacted were the smaller capacity drives, e.g. the 60GB Vertex 2. Remember the SF-1200 is only an 8-channel controller so it only needs eight devices to technically be fully populated. However within a single NAND device, multiple die can be active concurrently and in the first 25nm 60GB Vertex 2s there was only one die per NAND package. The end result was significantly reduced performance in some cases, however OCZ failed to change the speed ratings on the drives themselves.

The matter is complicated by the way SandForce's NAND redundancy works. The SF-1000 series controllers have a feature called RAISE that allows your drive to keep working even if a single NAND die fails. The controller accomplishes this redundancy by writing parity data across all NAND devices in the SSD. Should one die fail, the lost data is reconstructed from the remaining data + parity and mapped to a new location in NAND. As a result, total drive capacity is reduced by the size of a single NAND die. With twice the density per NAND die in these early 25nm drives, usable capacity was also reduced when OCZ made the switch with Vertex 2.
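SandForce doesn't publish RAISE's internals, but the behavior described above is analogous to RAID-5 style parity. As a rough sketch (assuming simple XOR parity across die, which is my assumption, not a documented detail), losing one die's data is recoverable from the survivors plus parity:

```python
# Minimal sketch of RAISE-style redundancy, assuming XOR parity across die
# (analogous to RAID-5). Die/stripe sizes here are illustrative only.

def make_parity(stripe):
    """XOR all data chunks in a stripe to produce the parity chunk."""
    parity = bytearray(len(stripe[0]))
    for chunk in stripe:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

def rebuild(stripe, parity, failed_index):
    """Reconstruct the failed die's chunk from the survivors + parity."""
    survivors = [c for i, c in enumerate(stripe) if i != failed_index]
    return make_parity(survivors + [parity])

stripe = [b"die0data", b"die1data", b"die2data"]
parity = make_parity(stripe)
assert rebuild(stripe, parity, 1) == b"die1data"  # lost die recovered
```

The cost is visible in the sketch too: the parity chunk consumes one die's worth of capacity, which is why usable capacity drops by one die size, and why doubling die density doubles that overhead.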

The end result was that you could buy a 60GB Vertex 2 with lower performance and less available space without even knowing it.

A 120GB Vertex 2 using 25nm Micron NAND

After a dose of public retribution OCZ agreed to allow end users to swap 25nm Vertex 2s for 34nm drives, they would simply have to pay the difference in cost. OCZ realized that was yet another mistake and eventually allowed the swap for free (thankfully no one was ever charged), which is what should have been done from the start. OCZ went one step further and stopped using 64Gbit NAND in the 60GB Vertex 2, although drives still exist in the channel since no recall was issued.

OCZ ultimately took care of those users who were left with a drive that was slower (and had less capacity) than they thought they were getting. But the problem was far from over.

The NAND Matrix

It's not common for SSD manufacturers to give you a full list of all of the different NAND configurations they ship. Regardless of how much we appreciate transparency, it's rarely offered in this industry. Manufacturers love to package all information into nice marketable nuggets and the truth doesn't always have the right PR tone to it. Despite what I just said, below is a table of every NAND device OCZ ships in its Vertex 2 and Vertex 3 products:

OCZ Vertex 2 & Vertex 3 NAND Usage
  Process Node Capacities
Intel L63B 34nm Up to 240GB
Micron L63B 34nm Up to 480GB
Spectek L63B 34nm 240GB to 360GB
Hynix 32nm Up to 120GB
Micron L73A 25nm Up to 120GB
Micron L74A 25nm 160GB to 480GB
Intel L74A 25nm 160GB to 480GB

The data came from OCZ and I didn't have to sneak around to get it; it was given to me by Alex Mei, Executive Vice President of OCZ.

You've seen the end result, now let me explain how we got here.

OCZ accidentally sent me a 120GB Vertex 2 built with 32nm Hynix NAND. I say it was an accident because the drive was supposed to be one of the new 25nm Vertex 2s, but there was a screwup in ordering and I ended up with this one. Here's a shot of its internals:

You'll see that there are a ton of NAND devices on the board. Thirty two to be exact; that's four per channel. Do the math and you'll see we've got 32 x 4GB of 32nm MLC NAND die on the PCB, 128GB of raw NAND for a 120GB drive. This drive has the same number of NAND die per package as the new 25nm 120GB Vertex 2, so in theory performance should be the same. It isn't, however:

Vertex 2 NAND Performance Comparison
  AT Storage Bench Heavy 2011 AT Storage Bench Light 2011
34nm IMFT 120.1 MB/s 155.9 MB/s
25nm IMFT 110.9 MB/s 145.8 MB/s
32nm Hynix 92.1 MB/s 125.6 MB/s

Performance is measurably worse. You'll notice that I also threw in some 34nm IMFT numbers to show just how far performance has fallen since the old launch NAND.

Why not just keep using 34nm IMFT NAND? Ultimately that product won't be available. It's like asking for 90nm CPUs today; the whole point of Moore's Law is to transition to smaller manufacturing processes as quickly as possible.

Why is the Hynix 32nm NAND so much slower? That part is a little less clear to me. For starters we're only dealing with one die per package, which we've established can have a negative performance impact. On top of that, SandForce's firmware may only be optimized for a handful of NAND types. OCZ admitted that around 90% of all Vertex 2 shipments use Intel or Micron NAND, and as a result SandForce's firmware optimization focus is likely targeted at those NAND types first and foremost. There are also differences in NAND interfaces and signaling speeds which could contribute to performance differences unless a controller takes them into account.

25nm Micron NAND

The 25nm NAND is slower than the 34nm offerings for a number of reasons. For starters, page size increased from 4KB to 8KB with the transition to 25nm. Intel used this transition as a way to extract more performance out of the SSD 320; for the SF-1200, however, it may have actually impeded performance as the firmware architecture wasn't designed around 8KB page sizes. I suspect SandForce just focused on compatibility here and not performance.
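To see why a firmware designed around 4KB pages can struggle with 8KB pages, consider a stream of 4KB host writes. This is a purely illustrative sketch (the coalescing behavior is my assumption, not SandForce's documented design): if the controller can pair 4KB writes into full 8KB page programs it needs half as many programs; if it can't, every 4KB write burns a whole page program:

```python
# Illustrative only: page programs needed for a stream of 4KB host writes,
# with and without coalescing into full 8KB pages. Not SandForce's design.

def programs_needed(host_kb, page_kb=8, coalesce=True):
    """Count NAND page programs for host_kb of 4KB writes."""
    if coalesce:
        return -(-host_kb // page_kb)  # ceiling division: full pages only
    return host_kb // 4                # one (partial) program per 4KB write

efficient = programs_needed(64)                  # 8 full-page programs
wasteful = programs_needed(64, coalesce=False)   # 16 partial-page programs
```

A firmware that was architected when pages were 4KB may sit closer to the wasteful end until it's reworked, which is consistent with SandForce prioritizing compatibility over performance here.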

Secondly, 25nm NAND is physically slower than 34nm NAND:

NAND Performance Comparison
  Intel 34nm NAND Intel 25nm NAND
Read 50 µs 50 µs
Program 900 µs 1200 µs
Block Erase 2 ms 3 ms

Program and erase latency are both higher, although admittedly you're working with much larger page sizes (it's unclear whether Intel's 1200 µs figure is for a full page program or a partial program).
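That caveat about page size is worth quantifying. Taking the table's figures at face value (and assuming the 1200 µs is for a full 8KB page program, which Intel doesn't confirm), per-die program throughput actually goes up at 25nm even though per-operation latency is worse:

```python
# Per-die program throughput from the table above. Assumes the quoted
# program time covers a full page, which is unconfirmed in Intel's specs.

def program_mbps(page_kb, program_us):
    """Sequential program bandwidth of a single die in MB/s."""
    return (page_kb / 1024) / (program_us / 1e6)

g34 = program_mbps(4, 900)    # 34nm: 4KB page in 900us  -> ~4.3 MB/s
g25 = program_mbps(8, 1200)   # 25nm: 8KB page in 1200us -> ~6.5 MB/s
```

So the 25nm die isn't uniformly slower; it trades worse latency (which hurts small writes) for more data per program. A firmware that can't exploit the bigger page only sees the latency regression.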

The bad news is that eventually all of the 34nm IMFT drives will dry up. The worse news is that the 25nm IMFT drives, even with the same number of NAND devices on board, are lower in performance. And the worst news is that the drives that use 32nm Hynix NAND are the slowest of them all.

I have to mention here that this issue isn't exclusive to OCZ. All other SF drive manufacturers are faced with the same potential problem as they too must shop around for NAND and can't guarantee that they will always ship the same NAND in every single drive.

The Problem With Ratings

You'll notice that although the three NAND types I've tested perform differently in our Heavy 2011 workload, a quick run through Iometer reveals that they perform identically:

Vertex 2 NAND Performance Comparison
  AT Storage Bench Heavy 2011 Iometer 128KB Sequential Write
34nm IMFT 120.1 MB/s 214.8 MB/s
25nm IMFT 110.9 MB/s 221.8 MB/s
32nm Hynix 92.1 MB/s 221.3 MB/s

SandForce's architecture works by reducing the amount of data that actually has to be written to the NAND. When writing highly compressible data, not all NAND devices are active and we're not bound by the performance of the NAND itself since most of it is actually idle. SandForce is able to hide even significant performance differences between NAND implementations. This is likely why SandForce is more focused on NAND compatibility than performance across devices from all vendors.
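The masking effect can be captured in a one-line model. All numbers below are illustrative assumptions (the NAND speeds loosely echo the incompressible results above, the controller cap echoes the Iometer numbers), not SandForce specifications:

```python
# Sketch of why compressible writes hide NAND speed differences: the
# controller only stores the compressed payload, so host-visible speed is
# NAND speed divided by the stored fraction, capped by the controller and
# interface. Illustrative numbers, not SandForce's.

def host_write_mbps(nand_mbps, stored_fraction, controller_cap_mbps=220):
    """stored_fraction: bytes actually written to NAND per host byte."""
    return min(nand_mbps / stored_fraction, controller_cap_mbps)

# Highly compressible data (only 40% hits the NAND): both configs hit
# the controller cap and look identical...
fast = host_write_mbps(137, 0.4)   # 34nm-class NAND
slow = host_write_mbps(96, 0.4)    # Hynix-class NAND
# ...incompressible data (stored_fraction = 1.0) exposes the raw gap.
```

Once the stored fraction rises to 1.0, the NAND itself becomes the bottleneck, which is exactly what the incompressible Iometer results below show.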

Let's see what happens if we write incompressible data to these three drives however:

Vertex 2 NAND Performance Comparison
  Iometer 128KB Sequential Write (Incompressible Data) Iometer 128KB Sequential Write
34nm IMFT 136.6 MB/s 214.8 MB/s
25nm IMFT 118.5 MB/s 221.8 MB/s
32nm Hynix 95.8 MB/s 221.3 MB/s

It's only when you force SandForce's controller to write as much data in parallel as possible that you see the performance differences between NAND vendors. As a result, the label on the back of your Vertex 2 box isn't lying - whether you have 34nm IMFT, 25nm IMFT or 32nm Hynix the drive will actually hit the same peak performance numbers. The problem is that the metrics on the spec sheets aren't comprehensive enough to tell the whole story.

A quick survey of all SF-1200 based drives shows the same problem. Everyone rates according to maximum performance specifications and no one provides any hint of what you're actually getting inside the drive.

SF-1200 Drive Rating Comparison
120GB Drive Rated Sequential Read Speed Rated Sequential Write Speed
Corsair Force F120 285 MB/s 275 MB/s
G.Skill Phoenix Pro 285 MB/s 275 MB/s
OCZ Vertex 2 Up to 280 MB/s Up to 270 MB/s

I should stop right here and mention that specs are rarely all that honest on the back of any box. Whether we're talking about battery life or SSD performance, if specs told the complete truth then I'd probably be out of a job. If one manufacturer is totally honest, its competitors will just capitalize on the aforementioned honesty by advertising better looking specs. And thus all companies are forced to bend the truth because if they don't, someone else will.

OCZ Listens, Again

I promised you all I would look into this issue when I got back from MWC. As is usually the case, a bunch of NDAs showed up, more product releases happened and testing took longer than expected. Long story short, it took me far too long to get around to the issue of varying NAND performance in SF-1200 drives.

What put me over the edge was the performance of the 32nm Hynix drives. For the past two months everyone has been arguing over 34nm vs 25nm however the issue isn't just limited to those two NAND types. In fact, SSD manufacturers have been shipping varying NAND configurations for years now. I've got a stack of Indilinx drives with different types of NAND, all with different performance characteristics. Admittedly I haven't seen performance vary as much as it has with SandForce on 34nm IMFT vs. 25nm IMFT vs. 32nm Hynix.

I wrote OCZ's CEO, Ryan Petersen, and Executive Vice President, Alex Mei, an email outlining my concerns last week:

Here are the drives I have:

34nm Corsair F120 (Intel 34nm NAND, 64Gbit devices, 16 devices total)
32nm OCZ Vertex 2 120GB (Hynix 32nm NAND, 32Gbit devices, 32 devices total)
25nm OCZ Vertex 2 120GB (Intel 25nm NAND, 64Gbit devices, 16 devices total)

Here is the average data rate of the three drives through our Heavy 2011 Storage Bench:

34nm Corsair F120 - 120.1 MB/s
32nm OCZ Vertex 2 120GB - 91.1 MB/s
25nm OCZ Vertex 2 120GB - 110.9 MB/s

It's my understanding that both of these drives (from you all) are currently shipping. We have three different drives here, based on the same controller, rated at the same performance running through a real-world workload that are posting a range of performance numbers. In the worst case comparison the F120 we have here is 30% faster than your 32nm Hynix Vertex 2.

How is this at all acceptable? Do you believe that this is an appropriate level of performance variance your customers should come to expect from OCZ?

I completely understand variance in NAND speed and that you guys have to source from multiple vendors in order to remain price competitive. But something has to change here.

Typically what happens in these situations is that there's a lot of arguing back and forth, with the company in question normally repeating some empty marketing line because admitting the truth and doing the right thing is usually too painful. Thankfully while OCZ may be a much larger organization today than just a few years ago, it still has a lot of the DNA of a small, customer-centric company.

Don't get me wrong - Ryan and I argued back and forth like we normally do. But the resolution arrived far quicker and it was far more agreeable than I expected. I asked OCZ to commit to the following:

1) Are you willing to commit, publicly and within a reasonable period of time, to introducing new SKUs (or some other form of pre-purchase labeling) when you have configurations that vary in performance by more than 3%?

2) Are you willing to commit, publicly and within a reasonable period of time, to using steady state random read/write and steady state sequential read/write using both compressible and incompressible data to determine the performance of your drives? I can offer suggestions here for how to test to expose some of these differences.

3) Finally, are you willing to commit, publicly and within a reasonable period of time, to exchanging any already purchased product for a different configuration should our readers be unhappy with what they've got?

Within 90 minutes, Alex Mei responded and gave me a firm commitment on numbers 1 and 3 on the list. Number two would have to wait for a meeting with the product team the next day. Below are his responses to my questions above:

1) Yes, I've already talked to the PM and Production team and we can release new SKUs that are labeled with a part number denoting the version. This can be implemented on the label on the actual product that is clearly visible on the outside of the packaging. As mentioned previously we can also provide more test data so that customers can decide based on all factors which drive is right for them.

2) Our PM team will be better able to answer this question since they manage the testing. They are already using an assortment of tests to rate drives and I am sure they are happy to have your feedback in regards to suggestions. Will get back to you on this question shortly.

3) Yes, we already currently do this. We want all our customers to be happy with the products and any customer that has a concern about their drives is welcome to come to us, and we always look to find the best resolution for the customer whether that is an exchange to another version or a refund if that is what the customer prefers.

I should add that this conversation (and Alex's agreement) took place between the hours of 2 and 5AM.

I was upset that OCZ allowed all of this to happen in the first place. It's a costly lesson and a pain that we have to even go through this. But blanket acceptance of the right thing to do is pretty impressive.

The Terms and Resolution

After all of this back and forth here's what OCZ is committing to:

In the coming weeks (it'll take time to filter down to etailers) OCZ will introduce six new Vertex 2 SKUs that clearly identify the process node used inside: Vertex 2.25 (80GB, 160GB, 200GB) and Vertex 2.34 (60GB, 120GB, 240GB). The actual SKUs are below:

OCZ's New SKUs
OCZ Vertex 2 25nm Series OCZ Vertex 2 34nm Series

These drives will only use IMFT NAND - Hynix is out. The idea is that you should expect all Vertex 2.25 drives to perform the same at the same capacity point, and all Vertex 2.34 drives will perform the same at the same capacity as well. The .34 drives may be more expensive than the .25 drives, but they may also offer higher performance. Not all capacities are present in the new series; OCZ is starting with the most popular ones.

OCZ will also continue to sell the regular Vertex 2. This will be the same sort of grab-bag drive that you get today. There's no guarantee of the NAND inside the drive, just that OCZ will always optimize for cost in this line.

OCZ also committed to always providing us with all available versions of their drives so we can show you what sort of performance differences exist between the various configurations.

If you purchased a Vertex 2 and ended up with lower-than-expected performance or are unhappy with your drive in any way, OCZ committed to exchanging the drive for a configuration that you are happy with. Despite not doing the right thing early on, OCZ ultimately committed to doing what was right by its customers.

As far as ratings go - OCZ has already started publishing AS-SSD performance scores for their drives, however I've been pushing OCZ to include steady state (multiple hour test runs) incompressible performance using Iometer to provide a comprehensive, repeatable set of minimum performance values for their drives. I don't have a firm commitment on this part yet but I expect OCZ will do the right thing here as well.

I should add that this will be more information than any other SandForce drive maker currently provides with their product specs, but it's a move that I hope will be mirrored by everyone else building drives with varying NAND types.

The Vertex 2 is going to be the starting point for this sort of transparency, but should there be any changes in the Vertex 3 lineup OCZ will take a similar approach.

The Vertex 3 120GB

Whenever we review a new SSD many of you comment asking for performance of lower capacity drives. While we typically publish the specs for all of the drives in the lineup, we're usually sampled only a single capacity at launch. It's not usually the largest, but generally the second largest, and definitely an indicator of the best performance you can expect to see from the family.

Just look at the reviews we've published this year alone:

Intel SSD 510 (240GB)
Intel SSD 320 (300GB)
Crucial m4 (256GB)

While we always request multiple capacities, it normally takes a little while for us to get those drives in.

When OCZ started manufacturing Vertex 3s for sale the first drives off of the line were 120GB, and thus the first shipping Vertex 3 we got our hands on was a more popular capacity. Sweet.

Let's first look at the expected performance differences between the 120GB Vertex 3 and the 240GB drive we previewed earlier this year:

OCZ Vertex 3 Lineup
Specs (6Gbps) 120GB 240GB 480GB
Max Read Up to 550MB/s Up to 550MB/s Up to 530MB/s
Max Write Up to 500MB/s Up to 520MB/s Up to 450MB/s
4KB Random Read 20K IOPS 40K IOPS 50K IOPS
4KB Random Write 60K IOPS 60K IOPS 40K IOPS
MSRP $249.99 $499.99 $1799.99

There's a slight drop in peak sequential performance and a big drop in random read speed. Remember our discussion of ratings from earlier? The Vertex 3 was of course rated before my recent conversations with OCZ, so we may not be getting the full picture here.

Inside the 120GB Vertex 3 are 16 Intel 25nm 64Gbit (8GB) NAND devices. Each device has a single 25nm 64Gbit die inside it, with the capacity of a single die reserved for RAISE in addition to the typical ~7% spare area.
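The capacity accounting works out as follows. This is my arithmetic, not OCZ's official breakdown, and it assumes the usual GB (decimal) vs. GiB (binary) convention for advertised capacity:

```python
# Rough capacity accounting for the 120GB Vertex 3 (my arithmetic, not
# OCZ's official breakdown): 16 x 64Gbit die = 128GiB raw, one die's
# worth reserved for RAISE, advertised capacity quoted in decimal GB.

raw_gib = 16 * 8                            # 128 GiB of raw NAND
raise_gib = 8                               # one 8GiB die for RAISE
user_gib = 120e9 / 2**30                    # advertised 120GB -> ~111.8 GiB
spare_gib = raw_gib - raise_gib - user_gib  # ~8.2 GiB left as spare area
```

That leftover ~8.2GiB is about 6-7% of the raw NAND, which lines up with the "typical ~7% spare area" mentioned above.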

The 240GB pre-production drive we previewed by comparison had twice as many 25nm die per package (2 x 64Gbit per NAND device vs. 1 x 64Gbit). If you read our SF-2000 launch article you'll remember that one of the major advantages the SF-2000 controller has over its predecessor is the ability to activate twice as many NAND die at the same time. What does all of this mean for performance? We're about to find out.

RC or MP Firmware?

When the first SF-1500/1200 drives shipped last year they actually shipped with SandForce's release candidate (RC) firmware. Those who read initial coverage of the Corsair Force F100 drives learned that the hard way. Mass production (MP) firmware followed with bug fixes and threatened to change performance on some drives (the latter was resolved without anyone losing any performance thankfully).

Before we get to the Vertex 3 we have to talk a bit about how validation works with SandForce and its partners. Keep in mind that SandForce is still a pretty small company, so while it does a lot of testing and validation internally the company leans heavily on its partners to also shoulder the burden of validation. As a result drive/firmware validation is split between SandForce and its partners. This approach allows SF drives to be validated more heavily than if only one of the two sides did all of the testing. While SandForce provides the original firmware, it's the partner's decision whether or not to ship drives based on how comfortable they feel with their validation. SandForce's validation suite includes both client and enterprise tests, which lengthens the validation time.

The shipping Vertex 3s are using RC firmware from SandForce, the MP label can't be assigned to anything that hasn't completely gone through SandForce's validation suite. However, SF assured me that there are no known issues that would preclude the Vertex 3 from being released today. From OCZ's perspective, the Vertex 3 is fully validated for client use (not enterprise). Some features (such as 0% over provisioning) aren't fully validated and thus are disabled in this release of the firmware. OCZ and SandForce both assure me that the SF-2200 has been through a much more strenuous validation process than anything before it.

Apparently the reason for OCZ missing the March launch timeframe for the Vertex 3 was a firmware bug that was discovered in validation that impacted 2011 MacBook Pro owners. Admittedly this has probably been the smoothest testing experience I've encountered with any newly launched SandForce drive, but there's still a lot of work to be done. Regardless of the performance results, if you want to be safe you'll want to wait before pulling the trigger on the Vertex 3. SandForce tells me that the only difference between RC and MP firmware this round is purely the amount of time spent in testing - there are no known issues for client drives. Even knowing that, these are still unproven drives - approach with caution.

The Test


CPU: Intel Core i7 965 running at 3.2GHz (Turbo & EIST Disabled)
CPU: Intel Core i7 2600K running at 3.4GHz (Turbo & EIST Disabled) - for AT SB 2011, AS SSD & ATTO
Motherboard: Intel DX58SO (Intel X58)
Motherboard: Intel H67 Motherboard
Chipset: Intel X58 + Marvell SATA 6Gbps PCIe
Chipset: Intel H67
Chipset Drivers: Intel + Intel IMSM 8.9
Chipset Drivers: Intel + Intel RST 10.2
Memory: Qimonda DDR3-1333 4 x 1GB (7-7-7-20)
Video Card: eVGA GeForce GTX 285
Video Drivers: NVIDIA ForceWare 190.38 64-bit
Desktop Resolution: 1920 x 1200
OS: Windows 7 x64


Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
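The access pattern described above (aligned 4KB IOs scattered over an 8GB span) can be sketched as a toy generator. This only simulates the offsets a test like ours would touch; it is not the actual Iometer configuration and doesn't model queue depth or timing:

```python
# Stand-in for the random-write pattern described in the text: aligned 4KB
# offsets drawn uniformly from an 8GB LBA span. Simulation of the access
# pattern only; not a benchmark and not the real Iometer setup.
import random

def random_write_offsets(span_bytes=8 * 2**30, io_size=4096, count=10):
    """Generate io_size-aligned random byte offsets within span_bytes."""
    max_blocks = span_bytes // io_size
    return [random.randrange(max_blocks) * io_size for _ in range(count)]

offsets = random_write_offsets()
assert all(o % 4096 == 0 and o < 8 * 2**30 for o in offsets)
```

Restricting the test to an 8GB span is deliberate: it keeps the workload closer to what an OS drive actually sees, while still being harder on the drive than typical desktop use.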

Iometer - 4KB Random Write, 8GB LBA Space, QD=3

Peak performance on the 120GB Vertex 3 is just as impressive as the 240GB pre-production sample as well as the m4 we just tested. Write incompressible data and you'll see the downside to having fewer active die, the 120GB drive now delivers 84% of the performance of the 240GB drive. In 3Gbps mode the 240 and 120GB drives are identical.

Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0 - 5, higher depths are possible in heavy I/O (and multi-user) workloads:

Iometer - 4KB Random Write, 8GB LBA Space, QD=32

At high queue depths the gap between the 120 and 240GB Vertex 3s grows a little bit when we're looking at incompressible data.

Iometer - 4KB Random Read, QD=3

Random read performance is what suffered the most with the transition from 240GB to 120GB. The 120GB Vertex 3 is slower than the 120GB Corsair Force F120 (SF-1200, similar to the Vertex 2) in our random read test. The Vertex 3 is actually about the same speed as the old Indilinx based Nova V128 here. I'm curious to see how this plays out in our real world tests.

Sequential Read/Write Speed

To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.

Iometer - 128KB Sequential Write

Highly compressible sequential write speed is identical to the 240GB drive, but use incompressible data and the picture changes dramatically. The 120GB has far fewer NAND die to write to in parallel and in this case manages 76% of the performance of the 240GB drive.

Iometer - 128KB Sequential Read

Sequential read speed is also lower than the 240GB drive. Compared to the SF-1200 drives there's still a big improvement as long as you've got a 6Gbps controller.

AnandTech Storage Bench 2011

I didn't expect to have to debut this so soon, but I've been working on updated benchmarks for 2011. Last year we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of what we have today in terms of SSD benchmarks.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.

Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.

Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.

I'll be sharing the full details of the benchmark in some upcoming SSD articles but here are some details:

1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.

2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.

Update: As promised, some more details about our Heavy Workload for 2011.

The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown
IO Size % of Total
4KB 28%
16KB 10%
32KB 10%
64KB 4%

Only 42% of all operations are sequential; the rest range from pseudo-random to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place at a queue depth of 1.
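Statistics like these fall out of a single pass over a recorded trace. The sketch below assumes a simplified trace format of (offset, size, queue_depth) tuples, which is my own invention for illustration, not the actual trace format used here:

```python
from collections import Counter

def trace_stats(ops):
    """ops: list of (offset, size, queue_depth) tuples.
    Returns the IO-size mix, the fraction of sequential operations,
    and the average queue depth."""
    sizes = Counter(size for _, size, _ in ops)
    sequential = sum(
        1 for prev, cur in zip(ops, ops[1:])
        if cur[0] == prev[0] + prev[1]   # starts where the last one ended
    )
    avg_qd = sum(qd for _, _, qd in ops) / len(ops)
    return sizes, sequential / len(ops), avg_qd

ops = [(0, 4096, 1), (4096, 4096, 1), (65536, 16384, 4)]
sizes, seq_frac, avg_qd = trace_stats(ops)
```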

Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
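The two headline numbers derive from per-operation records in a straightforward way; here's a sketch, assuming each replayed operation logs the bytes it moved and its service time (a format I've assumed for illustration):

```python
def playback_summary(ops):
    """ops: list of (bytes_moved, service_seconds) per replayed IO.
    Busy time counts only time the drive spent servicing IOs,
    excluding any idle gaps in the trace."""
    total_bytes = sum(b for b, _ in ops)
    busy = sum(t for _, t in ops)
    avg_mbps = total_bytes / busy / 1e6   # average MB/s while busy
    return avg_mbps, busy

rate, busy = playback_summary([(128 * 1024, 0.001), (4096, 0.0005)])
```

A faster drive finishes the same trace with a smaller busy figure, which is exactly what the disk busy graphs show.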

There's also a new light workload for 2011. This is a far more reasonable, typical every day use case benchmark. Lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB but it's still multiple times more write intensive than what we were running last year.

As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.

The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests. All of the older tests are still run on our X58 platform.

AnandTech Storage Bench 2011 - Heavy Workload

We'll start out by looking at average data rate throughout our new heavy workload test:

AnandTech Storage Bench 2011 - Heavy Workload

In our heavy test for 2011 the 120GB Vertex 3 is noticeably slower than the 240GB sample we tested a couple of months ago. Fewer available die are the primary explanation. We're still waiting on samples of the 120GB Intel SSD 320 and the Crucial m4 but it's looking like this round will be more competitive than we originally thought.

The breakdown of reads vs. writes tells us more of what's going on:

AnandTech Storage Bench 2011 - Heavy Workload

Surprisingly enough, it's not read speed that holds the 120GB Vertex 3 back; it's ultimately the lower (incompressible) write speed:

AnandTech Storage Bench 2011 - Heavy Workload

The next three charts represent the same data in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy during this entire test. Note that disk busy time excludes any and all idle time; this is just how long the SSD was busy doing something:

AnandTech Storage Bench 2011 - Heavy Workload

AnandTech Storage Bench 2011 - Heavy Workload

AnandTech Storage Bench 2011 - Heavy Workload

AnandTech Storage Bench 2011 - Light Workload

Our new light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio better mimics a typical light workload (although even lighter workloads would be far more read centric).

The I/O breakdown is similar to the heavy workload at small IOs, however you'll notice that there are far fewer large IO transfers:

AnandTech Storage Bench 2011 - Light Workload IO Breakdown
IO Size % of Total
4KB 27%
16KB 8%
32KB 6%
64KB 5%

Despite the reduction in large IOs, over 60% of all operations are perfectly sequential. Average queue depth is a lighter 2.2029 IOs.

AnandTech Storage Bench 2011 - Light Workload

While our heavy 2011 workload may be a little heavy on the writes, the light workload is a bit more balanced. As a result we see the 120GB Vertex 3 move up the charts a bit. It's still not as quick as the 240GB drive, but it's quicker than nearly everything else.

AnandTech Storage Bench 2011 - Light Workload

AnandTech Storage Bench 2011 - Light Workload

AnandTech Storage Bench 2011 - Light Workload

AnandTech Storage Bench 2011 - Light Workload

AnandTech Storage Bench 2011 - Light Workload

Performance vs. Transfer Size

All of our Iometer sequential tests happen at a queue depth of 1, which is indicative of a light desktop workload. That said, it isn't too far fetched to see much higher queue depths on the desktop. The performance of these SSDs also varies greatly with the size of the transfer. For this next test we turn to ATTO and run a sequential write over a 2GB span of LBAs at a queue depth of 4, varying the size of the transfers.

With highly compressible data, which does make up most of what you find on a desktop (outside of media storage), the Vertex 3 really can't be beat on a 6Gbps interface.
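An ATTO-style sweep is just the sequential test repeated at each transfer size. Here's a sketch with a caller-supplied measurement function standing in for the real benchmark; the geometric size progression mirrors ATTO's typical 0.5KB to 8MB range:

```python
def sweep_transfer_sizes(measure, sizes=None):
    """Run `measure(block_size)` (expected to return MB/s) at each
    transfer size and collect the results."""
    if sizes is None:
        sizes = [512 * 2 ** i for i in range(15)]   # 512B .. 8MB
    return {size: measure(size) for size in sizes}

# Dummy measurement function for illustration only:
results = sweep_transfer_sizes(lambda size: size / 1024)
```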

AS-SSD Incompressible Sequential Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.
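The compressible vs. incompressible distinction is easy to demonstrate with any general-purpose compressor; zlib here merely stands in for SandForce's engine, whose internals aren't public:

```python
import os, zlib

compressible = b"\x00" * 131072        # patterned data shrinks drastically
incompressible = os.urandom(131072)    # random data barely shrinks at all

ratio_c = len(zlib.compress(compressible)) / len(compressible)
ratio_i = len(zlib.compress(incompressible)) / len(incompressible)
# A SandForce drive writes far less to NAND for the first buffer, which
# is why AS-SSD (random data) exposes the controller's worst-case speed.
```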

AS-SSD Incompressible Sequential Write Speed

Incompressible write performance is particularly disappointing on the 120GB drive as we've seen. The pre-production 240GB drive is nearly 70% faster in this test. It remains to be seen how close the final production firmware on the 240GB drive will be to what we previewed a couple of months ago.

AS-SSD Incompressible Sequential Read Speed

Overall System Performance using PCMark Vantage

Next up is PCMark Vantage, another system-wide performance suite. For those of you who aren’t familiar with PCMark Vantage, it ends up being the most real-world-like hard drive test I can come up with. It runs things like application launches, file searches, web browsing, contacts searching, video playback, photo editing and other completely mundane but real-world tasks. I’ve described the benchmark in great detail before but if you’d like to read up on what it does in particular, take a look at Futuremark’s whitepaper on the benchmark; it’s not perfect, but it’s good enough to be a member of a comprehensive storage benchmark suite. Any performance impacts here would most likely be reflected in the real world.

PCMark Vantage

If we use PCMark as an indication of light system performance, the Vertex 3 120GB does pretty well here.

PCMark Vantage - Memories Suite

PCMark Vantage - TV & Movies Suite

PCMark Vantage - Gaming Suite

PCMark Vantage - Music Suite

PCMark Vantage - Communications Suite

PCMark Vantage - Productivity Suite

PCMark Vantage - HDD Suite

AnandTech Storage Bench 2010

To keep things consistent we've also included our older Storage Bench. Note that the old storage test system doesn't have a SATA 6Gbps controller, so we only have one result for the 6Gbps drives.

The first in our benchmark suite is a light/typical usage case. The Windows 7 system is loaded with Firefox, Office 2007 and Adobe Reader among other applications. With Firefox we browse web pages like Facebook, AnandTech, Digg and other sites. Outlook is also running and we use it to check emails, create and send a message with a PDF attachment. Adobe Reader is used to view some PDFs. Excel 2007 is used to create a spreadsheet, graphs and save the document. The same goes for Word 2007. We open and step through a presentation in PowerPoint 2007 received as an email attachment before saving it to the desktop. Finally we watch a bit of a Firefly episode in Windows Media Player 11.

There’s some level of multitasking going on here but it’s not unreasonable by any means. Generally the application tasks proceed linearly, with the exception of things like web browsing which may happen in between one of the other tasks.

The recording is played back on all of our drives here today. Remember that we’re isolating disk performance, all we’re doing is playing back every single disk access that happened in that ~5 minute period of usage. The light workload is composed of 37,501 reads and 20,268 writes. Over 30% of the IOs are 4KB, 11% are 16KB, 22% are 32KB and approximately 13% are 64KB in size. Less than 30% of the operations are absolutely sequential in nature. Average queue depth is 6.09 IOs.

The performance results are reported in average I/O Operations per Second (IOPS):

AnandTech Storage Bench - Typical Workload

If there’s a light usage case there’s bound to be a heavy one. In this test we have Microsoft Security Essentials running in the background with real time virus scanning enabled. We also perform a quick scan in the middle of the test. Firefox, Outlook, Excel, Word and Powerpoint are all used the same as they were in the light test. We add Photoshop CS4 to the mix, opening a bunch of 12MP images, editing them, then saving them as highly compressed JPGs for web publishing. Windows 7’s picture viewer is used to view a bunch of pictures on the hard drive. We use 7-zip to create and extract .7z archives. Downloading is also prominently featured in our heavy test; we download large files from the Internet during portions of the benchmark, as well as use uTorrent to grab a couple of torrents. Some of the applications in use are installed during the benchmark, Windows updates are also installed. Towards the end of the test we launch World of Warcraft, play for a few minutes, then delete the folder. This test also takes into account all of the disk accesses that happen while the OS is booting.

The benchmark is 22 minutes long and it consists of 128,895 read operations and 72,411 write operations. Roughly 44% of all IOs were sequential. Approximately 30% of all accesses were 4KB in size, 12% were 16KB in size, 14% were 32KB and 20% were 64KB. Average queue depth was 3.59.

AnandTech Storage Bench - Heavy Multitasking Workload

The gaming workload is made up of 75,206 read operations and only 4,592 write operations. Only 20% of the accesses are 4KB in size, nearly 40% are 64KB and 20% are 32KB. A whopping 69% of the IOs are sequential, meaning this is predominantly a sequential read benchmark. The average queue depth is 7.76 IOs.

AnandTech Storage Bench - Gaming Workload

TRIM Performance

In our Vertex 3 preview I mentioned a bug/performance condition/funnythingthathappens with SF-1200 based drives. If you write incompressible data to all LBAs on the drive (e.g. fill the drive up with H.264 videos) and then fill the spare area with incompressible data (do it again without TRIMing the drive), you'll actually put your SF-1200 based SSD into a performance state that it can't TRIM its way out of. Completely TRIM the drive and you'll notice that while compressible writes are nice and speedy, incompressible writes happen at a max of 70 - 80MB/s. In our Vertex 3 Pro preview I mentioned that it seemed as if SandForce had nearly fixed the issue. The worst performance I ever recorded on the 240GB drive after the aforementioned fill procedure was 198MB/s - a pretty healthy level.

The 120GB drive doesn't mask the drop nearly as well. The same process I described above drops performance to the 100 - 130MB/s range. This is better than what we saw with the Vertex 2, but still a valid concern if you plan on storing/manipulating a lot of highly compressed data (e.g. H.264 video) on your SSD.

The other major change since the preview? The 120GB drive can definitely get into a pretty fragmented state (again only if you pepper it with incompressible data). I filled the drive with incompressible data, ran a 4KB (100% LBA space, QD32) random write test with incompressible data for 20 minutes, and then ran AS-SSD (another incompressible data test) to see how low performance could get:

OCZ Vertex 3 120GB - Resiliency - AS SSD Sequential Write Speed - 6Gbps
                        Clean        After Torture   After TRIM
OCZ Vertex 3 120GB      162.1 MB/s   38.3 MB/s       101.5 MB/s

Note that the Vertex 3 does recover pretty well after you write to it sequentially. A second AS-SSD pass shot performance up to 132MB/s. As I mentioned above, after TRIMing the whole drive I saw performance in the 100 - 130MB/s range.
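For reference, the fill portion of this torture procedure reduces to streaming random (incompressible) data. The sketch below targets an ordinary file for safety; the real test writes to the raw device, twice over, with no TRIM in between so the second pass consumes the spare area:

```python
import os

def fill_incompressible(path, total_bytes, block=1024 * 1024):
    """Write `total_bytes` of random (incompressible) data to `path`.
    Against a raw device, one pass fills the user LBAs; a second pass
    without TRIM consumes the spare area -- the SandForce worst case."""
    written = 0
    with open(path, "wb") as f:
        while written < total_bytes:
            chunk = min(block, total_bytes - written)
            f.write(os.urandom(chunk))
            written += chunk
    return written
```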

This is truly the worst case scenario for any SF based drive. Unless you deal in a lot of truly random data or plan on storing/manipulating a lot of highly compressed files (e.g. compressed JPEGs, H.264 videos, etc...), I wouldn't be too concerned about this worst-case scenario performance. What does bother me however is how much lower the 120GB drive's worst case is vs. the 240GB.

Power Consumption

Unusually high idle power consumption was a bug in the early Vertex 3 firmware; that seems to have been fixed with the latest firmware revision. Overall power consumption is pretty good for the 120GB drive: it's in line with other current generation SSDs we've seen, although we admittedly haven't tested many similar capacity drives this year.

Idle Power - Idle at Desktop

Load Power - 128KB Sequential Write

Load Power - 4KB Random Write, QD=32

Final Words

This is going to be a pretty frustrating conclusion to write. I'm still waiting for the final, shipping 240GB Vertex 3 to arrive before passing judgement on it but from what I've seen thus far it looks like that may be the drive to get if you're torn between the two. I feel like if you're going to be working with a lot of incompressible data (e.g. pictures, movies) on your drive then you'll want to either go for the 240GB version or perhaps consider a more traditional controller. The performance impact the 120GB sees when working with incompressible data just puts it below what I would consider the next-generation performance threshold.

The bigger question is how does the 120GB Vertex 3 stack up against similar capacity drives from the competition? Unfortunately with only a 300GB Intel SSD 320, a 250GB Intel SSD 510 and a 256GB Crucial m4 on hand it's really tough to tell. I suspect that the drive will still come out on top given that the rest incur a performance penalty as well when going to smaller capacities, but I don't know that the performance drop is proportional across all of the controllers. I hate to say it but you may want to wait a few more weeks for us to get some of these smaller capacity drives in house before making a decision there.

It's clear to me that these SF-2000 based drives are really best suited for use with a 6Gbps interface. Performance over 3Gbps is admirable but it's just not as impressive. If you've got an existing SF-1200 drive (or similar performing drive) on a 3Gbps system, I don't believe the upgrade to a SF-2200 is worth it until you get a good 6Gbps controller.
