Three months ago we previewed the first client-focused SF-2200 SSD: OCZ's Vertex 3. The 240GB sample OCZ sent for the preview was four firmware revisions older than what ended up shipping to retail last month, but we hoped that the preview numbers would be indicative of final performance.

The first drives off the line when OCZ went to production were 120GB capacity models. These drives have 128GiB of NAND on board and 111.8GiB of user accessible space; the remaining 12.7% is used for redundancy in the event of NAND failure and as spare area for bad block allocation and block recycling.
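
To put numbers on that spare area, here's a quick back-of-the-envelope check in Python (a sketch of the arithmetic only - the variable names are mine, and all it does is convert the marketed decimal 120GB into binary GiB and compute the leftover fraction):

    # 128GiB of raw NAND vs. the 120GB (decimal) the drive exposes
    raw_gib = 128                          # 16 devices x 8GiB each
    user_gib = 120 * 1000**3 / 1024**3     # 120GB decimal is ~111.8GiB

    spare = 1 - user_gib / raw_gib

    print(f"user space: {user_gib:.1f}GiB")  # 111.8GiB
    print(f"spare area: {spare:.1%}")        # 12.7%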

Unfortunately the 120GB models didn't perform as well as the 240GB sample we previewed. To understand why, we need to understand a bit about basic SSD architecture. SandForce's SF-2200 controller has 8 channels that it can access concurrently; it looks sort of like this:

Each arrowed line represents a single 8-bit channel. In reality, SF's NAND channels are routed from one side of the chip, so on shipping hardware you'll see all NAND devices to the right of the controller.

Even though there are 8 NAND channels on the controller, you can put multiple NAND devices on a single channel. Two NAND devices on the same channel can't be actively transferring data at the same time; instead, one chip is accessed while another is either idle or busy with internal operations.

When you read from or write to NAND you don't access the pages directly; you instead deal with an intermediate register that holds the data as it comes from or goes to a page in NAND. Reading or programming is a multi-step process that doesn't complete in a single cycle. Thus you can hand off a read request to one NAND device and, while it's fetching the data from an internal page, go off and program a separate NAND device on the same channel.

Because of this parallelism, which is akin to pipelining, with the right workload and a controller that's smart enough to interleave operations across NAND devices, an 8-channel drive with 16 NAND devices can outperform the same drive with 8 NAND devices. Note that the advantage won't be a full doubling, since ultimately you can only transfer data to/from one device at a time per channel, but there's room for significant improvement. Confused?

Let's look at a hypothetical SSD where a read operation takes 5 cycles and returns 8 bytes of data. With a single die per channel and no interleaving, that gives us peak bandwidth of 8 bytes every 5 clocks. With a large workload, after 15 clock cycles we could get at most 24 bytes of data from this NAND device.


Hypothetical single channel SSD, 1 read can be issued every 5 clocks, data is received on the 5th clock

Let's take the same SSD, with the same latency but double the number of NAND devices per channel and enable interleaving. Assuming we have the same large workload, after 15 clock cycles we would've read 40 bytes, an increase of 66%.


Hypothetical single channel SSD, 1 read can be issued every 5 clocks, data is received on the 5th clock, interleaved operation

This example is overly simplified and it makes a lot of assumptions, but it shows you how you can make better use of a single channel through interleaving requests across multiple NAND die.
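
If you'd rather see the bookkeeping than the diagrams, below is a tiny Python sketch of the same hypothetical. The scheduling model is my own simplification, not SandForce's actual logic: each die starts a new read as soon as its previous one completes, data arrives on the 5th clock, and the second die is staggered by two clocks so its transfers land on clocks the first die leaves idle.

    READ_LATENCY = 5    # clocks from issue to data-out
    BYTES_PER_READ = 8  # per the hypothetical above
    WINDOW = 15         # clocks simulated

    def bytes_read(die_start_clocks):
        total = 0
        for start in die_start_clocks:
            t = start
            # a read issued at clock t delivers its data on clock t + 4,
            # then the die immediately begins its next read
            while t + READ_LATENCY - 1 <= WINDOW:
                total += BYTES_PER_READ
                t += READ_LATENCY
        return total

    print(bytes_read([1]))     # 24 bytes: one die, reads finish at clocks 5, 10, 15
    print(bytes_read([1, 3]))  # 40 bytes: second die staggered, finishes at 7 and 12

With the two-clock stagger the transfers never collide on the shared bus (clocks 5/10/15 vs. 7/12), which is why the second die adds two extra reads rather than doubling throughput.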

The same sort of parallelism applies within a single NAND device. The whole point of the move to 25nm was to increase NAND density; you can now get a 64Gbit NAND device with only a single 64Gbit die inside. If you need more than 64Gbit per device, however, you have to bundle multiple die in a single package. Just as we saw at the 34nm node, it's possible to offer configurations with 1, 2 and 4 die in a single NAND package. With multiple die in a package, it's possible to interleave read/program requests within the individual package as well. Again, you don't get a 2x or 4x performance improvement since only one die can be transferring data at a time, but interleaving requests across multiple die does help fill any bubbles in the pipeline, resulting in higher overall throughput.
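
A trivial sketch makes the package options concrete (using the 64Gbit die size and the 1/2/4 die counts from the paragraph above):

    # 25nm package options: one 64Gbit (8GB) die, stacked 1, 2, or 4 high
    for die in (1, 2, 4):
        print(f"{die} die -> {64 * die}Gbit ({8 * die}GB) per package")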


Intel's 128Gbit 25nm MLC NAND features two 64Gbit die in a single package

Now that we understand the basics of interleaving, let's look at the configurations of a couple of Vertex 3s.

The 120GB Vertex 3 we reviewed a while back has sixteen NAND devices, eight on each side of the PCB:


OCZ Vertex 3 120GB - front

These are Intel 25nm NAND devices; the part number tells us a little bit about them.

You can ignore the first three characters of the part number; they tell you that you're looking at Intel NAND. Characters 4 - 6 (counting from 1) indicate the density of the package - in this case 64G means 64Gbit, or 8GB. The next two characters indicate the device bus width (08 for an 8-bit bus). The ninth character is the important one - it tells you the number of die inside the package. These parts are marked A, which corresponds to one die per device. The second to last character is also important; here E stands for 25nm.
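
If you want to decode these markings yourself, here's a small sketch based only on the fields described above. The die-count letters are A for one die plus the C and J codes covered below; the example strings are illustrative stand-ins shaped like these markings, not actual Intel part numbers.

    DIE_PER_PACKAGE = {"A": 1, "C": 2, "J": 4}  # 9th character
    PROCESS = {"E": "25nm"}                     # second-to-last character

    def decode(part):
        return {
            "maker":   part[0:3],           # first three characters: Intel NAND
            "density": part[3:6],           # e.g. 64G = 64Gbit, 16B = 16GByte
            "bus":     part[6:8] + "-bit",  # e.g. 08 = 8-bit bus
            "die":     DIE_PER_PACKAGE.get(part[8], "?"),
            "process": PROCESS.get(part[-2], "?"),
        }

    print(decode("29F64G08AXXXE1"))  # like the 120GB drive's NAND: 64Gbit, 1 die, 25nm
    print(decode("29F16B08CXXXE1"))  # like the 240GB drive's NAND: 16GByte, 2 die, 25nm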

Now let's look at the 240GB model:


OCZ Vertex 3 240GB - Front


OCZ Vertex 3 240GB - Back

Once again we have sixteen NAND devices, eight on each side. OCZ standardized on Intel 25nm NAND for both capacities initially. The density string on the 240GB drive is 16B for 16Gbytes (128 Gbit), which makes sense given the drive has twice the capacity.

Look at the ninth character on these chips and you'll see the letter C, which in Intel NAND nomenclature stands for 2 die per package (J is 4 die per package, if you were wondering).

While OCZ's 120GB drive can interleave read/program operations across two NAND die per channel, the 240GB drive can interleave across a total of four NAND die per channel. The end result is a significant performance advantage for the 240GB drive, as we noticed in our review of the 120GB model.

OCZ Vertex 3 Lineup

Specs (6Gbps)             120GB           240GB           480GB
Raw NAND Capacity         128GB           256GB           512GB
Spare Area                ~12.7%          ~12.7%          ~12.7%
User Capacity             111.8GB         223.5GB         447.0GB
Number of NAND Devices    16              16              16
Number of Die per Device  1               2               4
Max Read                  Up to 550MB/s   Up to 550MB/s   Up to 530MB/s
Max Write                 Up to 500MB/s   Up to 520MB/s   Up to 450MB/s
4KB Random Read           20K IOPS        40K IOPS        50K IOPS
4KB Random Write          60K IOPS        60K IOPS        40K IOPS
MSRP                      $249.99         $499.99         $1799.99
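
The die math behind that table works out neatly. Here's a quick sketch using the article's own numbers - 8 controller channels, 16 NAND devices per drive, and 8GB (64Gbit) per die:

    CHANNELS, DEVICES, GB_PER_DIE = 8, 16, 8

    for capacity, die_per_device in ((120, 1), (240, 2), (480, 4)):
        raw_gb = DEVICES * die_per_device * GB_PER_DIE
        per_channel = DEVICES * die_per_device // CHANNELS
        print(f"{capacity}GB: {raw_gb}GB raw NAND, {per_channel} die per channel")

    # 120GB: 128GB raw NAND, 2 die per channel
    # 240GB: 256GB raw NAND, 4 die per channel
    # 480GB: 512GB raw NAND, 8 die per channel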

The big question we had back then was how much of the 120/240GB performance delta was due to final firmware vs. the 120GB drive's lower physical die count. With a final, shipping 240GB Vertex 3 in hand I can say that its performance is identical to our preview sample's - in other words, the 240GB drive's advantage is purely due to intra-device die interleaving.

If you want to skip ahead to the conclusion, feel free to; the results on the following pages are nearly identical to what we saw in our preview of the 240GB drive. I won't be offended :)

The Test

CPU:                 Intel Core i7 965 running at 3.2GHz (Turbo & EIST Disabled)
                     Intel Core i7 2600K running at 3.4GHz (Turbo & EIST Disabled) - for AT SB 2011, AS SSD & ATTO
Motherboard:         Intel DX58SO (Intel X58)
                     Intel H67 Motherboard
Chipset:             Intel X58 + Marvell SATA 6Gbps PCIe
                     Intel H67
Chipset Drivers:     Intel 9.1.1.1015 + Intel IMSM 8.9
                     Intel 9.1.1.1015 + Intel RST 10.2
Memory:              Qimonda DDR3-1333 4 x 1GB (7-7-7-20)
Video Card:          eVGA GeForce GTX 285
Video Drivers:       NVIDIA ForceWare 190.38 64-bit
Desktop Resolution:  1920 x 1200
OS:                  Windows 7 x64

Comments

  • explorer007 - Monday, May 23, 2011 - link

    I got exactly the same problem even after firmware 2.06! Anyone have better luck?
  • beelzebub253 - Saturday, May 28, 2011 - link

    For those of you using Vertex 3 in AHCI mode with the Intel RST drivers, have you tried the LPM Disable fix discussed below?

    http://forums.crucial.com/t5/Solid-State-Drives-SS...

    Check the Event Viewer (System Logs) for an error about iaStor: "did not respond within the timeout period". The entries will be at the exact time of your freeze-ups. Mine had the same problem until I applied the reg fix.
  • sequoia464 - Wednesday, May 11, 2011 - link

    I wonder how this matches up against the Samsung 470.

    I guess we will never know as it appears that the Samsung 470 has still not been reviewed here at AnandTech.

    Hint.
  • tannebil - Thursday, May 12, 2011 - link

    I just installed a 120GB IOPS and I'm seeing ~240MB/s in AS SSD for sequential write. That's 50% higher than you got in last month's test of the regular 120GB model. My understanding is that sequential performance should be quite similar between the IOPS and regular model, so there's something odd going on. My sequential read results match yours.

    If you look across all the different tests, the AS SSD write results seem to be an outlier since the drive is a great performer in all the rest of the benchmarks. The OWC drive had the same odd results, so maybe it's something specific to the SF-2200 controller and your test platform.

    My system is a Biostar TH67+ H67 motherboard with an i5-2400 processor, with the SSD connected to a SATA III port as the boot drive (Windows 7 HP).
  • johnnydolk - Monday, May 16, 2011 - link

    Here in Denmark the Vertex 3 retails for slightly less than the Intel 510, but at 25% more than the Crucial M4, which puts the latter on par with the older Vertex 2. This is for the 240/250/256GB versions.
    I guess the M4 is the one to get if value for money matters?
  • werewolf2000 - Friday, May 27, 2011 - link

    Hi, I'm a bit disappointed. I read all the reviews and was excited about the Vertex 3.
    Got it recently ... but, I'm sorry to say, it is not good. It may be fast, but it freezes. Only for a little while and only sometimes, but it does. That has consequences; for example, when you play MP3s you hear annoying sounds every 1 - 2 minutes. It seems I'm not the only one having these issues; the tricks from here http://geekmontage.com/texts/ocz-vertex-3-freezes-... help partially, but not fully.
    I had an Intel G1 some time ago and it behaved similarly after several months of usage; the Intel G2 didn't have such issues at all. Now the "best" Vertex 3 has these problems from the beginning (I had two of them - one failed immediately, the second still lives but freezes).
    Maybe, Anand, you could test things like that? Speed is not everything; freezing is VERY annoying.
  • zilab - Saturday, June 4, 2011 - link

    While OCZ's 120GB drive can interleave read/program operations across two NAND die per channel, the 240GB drive can interleave across a total of four NAND die per channel.


    Hi Anand,

    Great article, just a question:
    Will having a pair of 120GB drives in RAID 0 make this a non-issue, in terms of speed and resiliency?
  • Mohri - Tuesday, August 23, 2011 - link

    Thanks Anand for the nice reviews,

    I've got a 2011 MacBook Pro - Intel i7 2.2, 8GB RAM, 1GB GDDR5 video -
    and I want to install an SSD in it. Could you recommend which brand would be best for me?

    Thank you very much
  • samehvirus - Saturday, August 27, 2011 - link

    To make it short for you all: OCZ rushed the drive to market and they want you to beta test it. You are stuck with them the moment you buy it - they will keep you in their "RMA or wait for firmware" loop. This SSD is a big "AVOID IT" for a main drive (OS + programs, etc.); it will annoy you to no end, and as of May 26 a solution does not exist. OCZ won't admit it yet either, even when plenty of people are reporting the same issue; they will always try to blame your other hardware or software for the BSODs and never their own SSD.

    If you want to be a beta tester, go ahead and buy it. OCZ has no solution to offer at the moment; they are basically selling a product that works 90% of the time, so some users won't notice it (those who use their computer 1-2 hours a day). If you leave your computer on 5+ hours a day, or on all the time, prepare for a daily BSOD until OCZ offers a solution.
  • paul-p - Saturday, October 22, 2011 - link

    After 6 months of waiting for OCZ and Sandforce to fix the freezes and BSODs in their firmware, I can finally say it is fixed. No more freezes, no more BSODs, and performance is what you'd expect. And just to make sure all of the other suggestions were 100% a waste of time, I updated the firmware and DID NOT DO anything else except reboot my machine, and magically everything became stable. So, after all these months of OCZ and Sandforce blaming everything under the sun, including:

    the CMOS battery, OROMs, Intel drivers, Intel chipsets, Windows, LPM, hotswap, and god knows what else, it turns out that none of those had anything to do with the real problem, which was the firmware.

    While I'm happy that this bug is finally fixed, Sandforce and OCZ have irreparably damaged their reputation for a lot of users on this forum.

    Here is a list of terrible business practices that OCZ and Sandforce have done over the last year...

    OCZ did not stand behind their product when it was clearly malfunctioning, which is horrible.
    OCZ did not allow refunds KNOWING that the product was defective, which is ridiculous.
    Neither OCZ nor Sandforce even acknowledged that this was a problem; they steadfastly maintained it affected less than 1% of users.
    The claim that this bug affected 1% of users is ridiculous. We now know it affected 100% of the drives out there; most users just aren't aware enough to know why their computer froze or blue screened.
    OCZ made their users beta test the firmware to save money on their own testing.
    OCZ did not have a solution but expected users to wipe drives, restore from backups, secure erase, and do a million other things in order to "tire out" the user into giving up.
    OCZ deletes and moves threads in order to do "damage control and PR spin".

    But the worst sin of all is the fact that it took almost a year to fix such a MAJOR bug.

    I really hope that OCZ learns from this experience, because I'm certain that users will be wary of Sandforce and OCZ for some time. It's a shame, because now that the drive works, I actually like it.
