Today: Toshiba 32nm Toggle NAND, Tomorrow: IMFT 25nm

The Vertex 3 Pro sample I received is a drive rated at 200GB with 256GB of NAND on-board. The SF-2682 controller is still an 8-channel architecture, and OCZ populates all 8 channels with a total of 16 NAND devices. OCZ selected Toshiba 32nm Toggle Mode MLC NAND for these early Vertex 3 Pro samples; however, final shipping versions may transition to IMFT 25nm. The consumer version (the Vertex 3) will definitely use IMFT 25nm.

Each of the 16 NAND devices on board is 16GB in size. Each package is made up of four die (4GB apiece), with two planes per die (2GB per plane). Page sizes have changed: the older 34nm Intel NAND used a 4KB page size and a 1MB block size, while Toshiba's 32nm Toggle NAND moves to 8KB pages with block size unchanged. The move to 25nm will finally double block size as well.
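The geometry above is easy to sanity-check with some quick arithmetic; the figures below are taken straight from the description, just multiplied out:

```python
# Sketch of the NAND geometry described above (figures from the article).
packages = 16           # NAND devices on the PCB
dies_per_package = 4    # 4GB apiece
planes_per_die = 2      # 2GB per plane
plane_gb = 2

die_gb = planes_per_die * plane_gb        # 4GB per die
package_gb = dies_per_package * die_gb    # 16GB per package
total_gb = packages * package_gb          # raw NAND on board
print(die_gb, package_gb, total_gb)       # 4 16 256

# Page/block geometry for Toshiba 32nm Toggle NAND: 8KB pages,
# block size unchanged from the 1MB used at 34nm.
page_kb = 8
block_kb = 1024
pages_per_block = block_kb // page_kb
print(pages_per_block)                    # 128
```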

Remember from our earlier description of SandForce's architecture that its data redundancy requires a single die's worth of capacity. In this case 4GB of the 256GB of NAND is reserved for data parity, and the remaining 52GB of spare area is used for block replacement (either cleaning or bad block replacement). The 200GB drive has a 186GB formatted capacity in Windows.

This is a drive with an enterprise focus, so the 27.2% spare area is not unusual. Expect the consumer versions to set aside less spare area, likely with little impact on performance.
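The capacity accounting works out as follows; note that the 27.2% figure falls out when the formatted capacity in binary gigabytes (GiB) is compared against the raw NAND:

```python
# Reproducing the capacity accounting above. Raw NAND is 256GB, of
# which one die's worth (4GB) is reserved for SandForce's parity.
raw_gb = 256
parity_gb = 4                     # one die for data redundancy
user_gb = 200                     # advertised capacity (decimal GB)

spare_gb = raw_gb - user_gb       # total spare area
gc_gb = spare_gb - parity_gb      # left for cleaning/bad-block replacement
print(spare_gb, gc_gb)            # 56 52

# Windows reports binary gigabytes: 200 * 10^9 bytes / 2^30
formatted_gib = user_gb * 10**9 / 2**30
print(round(formatted_gib, 1))    # ~186.3

# The 27.2% spare-area figure: user capacity in GiB vs. 256GiB raw
spare_pct = (256 - formatted_gib) / 256 * 100
print(round(spare_pct, 1))        # ~27.2
```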


The 0.09F supercapacitor, a feature of the enterprise-level SF-2500 controller. This won't be present on the client Vertex 3.

The Vertex 3 Pro is still at least a month or two away from shipping so pricing could change, but right now OCZ is estimating pricing at between $3.375 and $5.25 per GB. The client-focused Vertex 3 will be significantly cheaper - I'd estimate somewhere north of (but within range of) what you can currently buy Vertex 2 drives for.

OCZ Vertex 3 Pro Pricing
              100GB       200GB       400GB
MSRP          $525.00     $775.00     $1350.00
Cost per GB   $5.25/GB    $3.875/GB   $3.375/GB
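The cost-per-GB row is just MSRP divided by capacity, which is worth double-checking:

```python
# Cost per GB derived from the MSRPs in the table above.
msrp = {100: 525.00, 200: 775.00, 400: 1350.00}
for gb, price in msrp.items():
    print(f"{gb}GB: ${price / gb:.3f}/GB")  # 5.250, 3.875, 3.375
```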

Both the Vertex 3 and Vertex 3 Pro are expected to be available as early as March; however, as always, I'd be cautious about jumping on a brand new controller with brand new firmware without giving both some time to mature.

144 Comments

  • jwilliams4200 - Friday, February 18, 2011 - link

    In that case, it would be helpful to print two after-TRIM benchmarks: (1) immediately after TRIM and (2) steady-state after TRIM (i.e., TRIM, let the drive sit idle long enough for GC to complete, then benchmark again).
  • jcompagner - Thursday, February 17, 2011 - link

    What I never understood (or maybe I should go back and read the previous articles) is: how can an SSD write many times faster than it can read?
    It seems to me that reading should be way easier than writing...
  • vol7ron - Friday, February 18, 2011 - link

    I originally thought that too, but SSDs first write to the controller, which organizes the data before storing it to the NAND. The major point is that the data can go anywhere in the NAND array, and the next available page in the stack is known almost immediately, whereas a read requires a lookup of where the data is stored, which means the access could take longer to accomplish.

    I, as well, am not certain that's true, but that's my best guess.
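The intuition in the comment above can be illustrated with a toy flash-translation-layer model. This is purely a sketch (no real controller is this simple, and the class below is invented for illustration): writes append to the next known-free page, while reads must first consult a mapping table.

```python
# Toy flash-translation-layer (FTL) model, for illustration only.
class ToyFTL:
    def __init__(self, pages):
        self.flash = [None] * pages   # physical pages
        self.map = {}                 # logical block -> physical page
        self.next_free = 0            # head of the free-page list

    def write(self, lba, data):
        # The next free page is known immediately; no lookup needed.
        page = self.next_free
        self.flash[page] = data
        self.map[lba] = page          # remap; the old page becomes stale
        self.next_free += 1

    def read(self, lba):
        # Reads go through the mapping table first.
        return self.flash[self.map[lba]]

ftl = ToyFTL(pages=8)
ftl.write(5, "hello")
ftl.write(5, "world")                 # overwrite lands in a new page
print(ftl.read(5))                    # world
print(ftl.flash[:3])                  # ['hello', 'world', None]
```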
  • AnnihilatorX - Saturday, February 19, 2011 - link

    Only for SandForce controllers.
    SandForce compresses the incoming data in real time. If the incoming data is highly compressible - in a very extreme example, writing a 500MB blank text file - the write will be nearly instantaneous. So you see 500MB/ms or something ridiculous.

    It is also possible for write speeds to exceed read speeds in bursts on other controllers, when a small amount of data is written to DRAM.
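The compression effect described above is easy to demonstrate. SandForce's actual algorithm is proprietary, so zlib here is only a stand-in, but it shows why compressible data writes so fast: far fewer bytes actually hit the NAND.

```python
import os
import zlib

# zlib as a stand-in for SandForce's proprietary compression.
blank = b"\x00" * (1 << 20)        # 1MB of zeros ("blank text file")
noise = os.urandom(1 << 20)        # 1MB of incompressible data

print(len(zlib.compress(blank)))   # ~1KB: almost nothing hits the NAND
print(len(zlib.compress(noise)))   # ~1MB: no savings, full write cost
```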
  • Soul_Master - Thursday, February 17, 2011 - link

    To eliminate any impact from the source drive's performance, I suggest copying data from a RAM drive to the test drive.
  • Anand Lal Shimpi - Thursday, February 17, 2011 - link

    That's a great suggestion. I ran out of time before I left the country but I'll be playing with it some more upon my return :)

    Take care,
    Anand
  • MrBrownSound - Thursday, February 17, 2011 - link

    I think the Intel X25-M was a pretty good control group to send the data from. I would actually like to see how the results change when sending the data from a RAM drive; that would be interesting.
  • Hacp - Thursday, February 17, 2011 - link

    Anand,
    You still direct your readers to your Vertex 2 article, but OCZ has changed the performance of those drives. Your results are no longer valid, and it would be dishonest to link the old Vertex 2 performance numbers in this new article when they do not reflect the new, slower performance of the Vertex 2 today.
  • Anand Lal Shimpi - Thursday, February 17, 2011 - link

    I've seen the discussion and based on what I've seen it sounds like very poor decision making on OCZ's behalf. Unfortunately my 25nm drive didn't arrive before I left for MWC. I hope to have it by the time I get back next week and I'll run through the gamut of tests, updating as necessary. I also plan on speaking with OCZ about this. Let me get back to the office and I'll begin working on it :)

    As far as old Vertex 2 numbers go, I didn't actually use a Vertex 2 here (I don't believe any older numbers snuck in here). The Corsair Force F120 is the SF-1200 representative of choice in this test.

    Take care,
    Anand
  • Quindor - Thursday, February 17, 2011 - link

    Good to hear that you are addressing the problems surrounding the Vertex 2 drives. There aren't many websites out there that deliver well-thought-through reviews and benchmarks the way AnandTech does, although some are getting better.

    I did some benchmarks of my own, and with the new 25nm NAND the new 180GB OCZ Vertex 2 can actually be slower than my more than year-old 120GB OCZ Vertex 1.

    If anyone is interested, they can find an overview of the benchmarks performed on the following page: https://picasaweb.google.com/quindor/Benchmarks#

    Still, I would love to see the kind of in-depth comparison you are famous for. ;)

    For my personal usage scenario (my own ESXi server), the speed decrease will have minimal effect: with multiple template-cloned guests running, the dedup and compression should be able to do their work just fine. ;)
