Last week OCZ pulled the trigger and introduced the world’s first SF-2000 based SSD: the Vertex 3 Pro. Not only was it the world’s first drive to use SandForce’s 2nd generation SSD controller, the Vertex 3 Pro was also the first SATA drive we’ve tested that’s able to break 500MB/s on both reads and writes. Granted, that’s with highly compressible data, but the figures are impressive nonetheless. What wasn’t impressive, however, was the price. The Vertex 3 Pro is an enterprise-class drive, complete with features that aren’t exactly in high demand on a desktop. As a result the V3P commands a premium - the drive starts at $525 for a 100GB capacity.

Just as we saw last round, however, if there’s a Vertex 3 Pro there’s bound to be a more reasonably priced non-Pro version without some of the enterprisey features. Indeed there is. Contained within this nondescript housing is the first beta of OCZ’s Vertex 3, based on a SandForce SF-2200 series controller. The price point? Less than half that of the V3P:

Pricing Comparison
                      128GB            256GB            512GB
OCZ Vertex 3 Pro      $525 (100GB)     $775 (200GB)     $1,350 (400GB)
OCZ Vertex 3          $249.99          $499.99          N/A

At an estimated $250 for a 120GB drive the Vertex 3 is more expensive than today’s Vertex 2, but not by much, nor do I expect that price premium to last for long. The Vertex 2 is on its way out and will ultimately be replaced by the V3. And SSD prices will continue to fall.

What sets a Vertex 3 apart from a Vertex 3 Pro? Not all that much, but SandForce has grown a lot in the past year and instead of just a couple of SKUs, this time around there are no fewer than seven members of the SF-2000 family.

You should first know that SandForce only produces a single die; the differences between all of the members of the SF-2000 family come down to packaging, firmware and testing.

The main categories here are SF-2100, SF-2200, SF-2500 and SF-2600. The 2500/2600 parts are focused on the enterprise. They’re put through more aggressive testing, their firmware supports enterprise-specific features and they support the use of a supercap to minimize data loss in the event of a power failure. The difference between the SF-2582 and the SF-2682 boils down to one feature: support for non-512B sectors. Whether or not you need support for this really depends on the type of system the drive is going into. Some SANs demand non-512B sectors, in which case the SF-2682 is the right choice.

You may remember that our Vertex 3 Pro sample used a SF-2682 controller. That’s because initially all SandForce made were SF-2682s. Final versions of the V3P will likely use the cheaper SF-2582.

The SF-2200/2100 series are more interesting because of their lower target price points. You lose support for the supercap, but that’s not as big of a deal on the desktop since you’re not working with mission critical data. The big difference between the 2200 and 2100 is 6Gbps SATA support: the former has it while the latter doesn’t. This is a pretty big difference because, as we’ve seen, when paired with a 3Gbps controller the SF-2000 isn’t too much better than what we had with the SF-1000.
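
To put the interface gap in perspective, here’s a quick back-of-the-envelope calculation of usable SATA bandwidth (a rough sketch; it only accounts for 8b/10b encoding, so real-world ceilings sit a bit lower once other protocol overhead is factored in):

```python
# Rough usable bandwidth of a SATA link. The line rate is 8b/10b encoded,
# so only 8 of every 10 bits on the wire carry data.
def sata_usable_mb_per_s(line_rate_gbps: float) -> float:
    data_bits_per_s = line_rate_gbps * 1e9 * (8 / 10)
    return data_bits_per_s / 8 / 1e6  # bits -> bytes -> MB/s

print(sata_usable_mb_per_s(3.0))  # ~300 MB/s: a 3Gbps port bottlenecks the SF-2000
print(sata_usable_mb_per_s(6.0))  # ~600 MB/s: enough headroom for 500MB/s+ transfers
```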

The other big difference is the number of byte lanes supported by the controller. The SF-2181 and above all support 8 NAND flash channels, however only the SF-2282 supports 16 byte lanes. Each NAND device presents an 8-bit (one byte wide) interface, so supporting 16 byte lanes means that each channel can be populated by two NAND devices. This lets a single SF-2282 controller talk to twice as many NAND devices as a SF-2281.

There’s no performance difference between the 8 and 16-byte-lane versions of the chip; it’s purely a matter of capacity. Thankfully with 25nm NAND you can get 8GB of MLC NAND on a single die, so both the 2281 and 2282 should be able to hit 512GB capacities (the 2281 simply needs higher density NAND packages).
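
A quick sketch of the capacity math may help. The die size comes from the 8GB 25nm MLC figure above, but the dies-per-package counts are illustrative assumptions, not SandForce specs:

```python
# Maximum raw NAND capacity = channels x devices per channel x dies per device x die size.
def raw_capacity_gb(channels: int, devices_per_channel: int,
                    dies_per_device: int, die_gb: int = 8) -> int:
    return channels * devices_per_channel * dies_per_device * die_gb

# SF-2281: 8 byte lanes, one device per channel -> needs denser 8-die packages for 512GB
print(raw_capacity_gb(channels=8, devices_per_channel=1, dies_per_device=8))  # 512
# SF-2282: 16 byte lanes allow two devices per channel -> 4-die packages get there too
print(raw_capacity_gb(channels=8, devices_per_channel=2, dies_per_device=4))  # 512
```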

The Vertex 3 sample we have here today uses the SF-2281. Our sample came configured with sixteen 16GB Micron 25nm ONFI 2.0 NAND devices. Remember that while both Intel and Micron own the 25nm fabs, the two companies are providing different specs/yields on 25nm NAND. The 25nm Micron stuff is rated at around 3,000 p/e cycles from what I’ve heard, while the Intel 25nm is rated at 5,000. The main difference here is that the Micron is available in great quantities today while the Intel 25nm isn’t.

RAISE: Optional

One other difference between the SF-2500/2600 and the SF-2100/2200 is the optional nature of RAISE. You'll remember that in order to allow for lower quality NAND, SandForce stripes a small amount of redundant data across the array of NAND in a SF-1000/2000 drive. SandForce never stores your actual data, but rather a smaller hash/representation of it. When your data is compressed/deduped for storage, SandForce's controller also generates parity data equal to the size of a single NAND die in the array. This process is known as RAISE (Redundant Array of Independent Silicon Elements) and it allows you to lose as much as a full NAND die's worth of data and still never see any data loss from the user's standpoint. At 25nm however a single die can be as large as 8GB, which on a lower capacity drive can be a significant percentage of the total drive capacity.

With the SF-2100/2200, SandForce allows the manufacturer to disable RAISE entirely. At that point you're left with the new 55-bit BCH ECC engine to do any error correcting. According to SandForce the new BCH ECC engine is sufficient for dealing with errors you'd see on 25nm NAND, and RAISE isn't necessary for desktop workloads. Drive makers are currently contemplating what to do with RAISE, but as of now the Vertex 3 is set to ship with it enabled. The drive we have here today has 256GB of NAND; it'll be advertised as a 240GB drive and appear as a 223.5GB drive in Windows.
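
For reference, here's the arithmetic behind those three capacity figures (a simple sketch; the 240GB number reflects the NAND OCZ keeps back as spare area, and Windows reports capacity in binary gigabytes):

```python
# 256GB of raw NAND, of which a slice is reserved for the controller's spare area.
raw_nand_gb = 256
advertised_gb = 240  # decimal gigabytes (10^9 bytes)
spare_pct = (raw_nand_gb - advertised_gb) / raw_nand_gb * 100
print(f"Spare area: {spare_pct:.2f}%")  # 6.25% of the NAND kept back

# Windows counts capacity in binary gigabytes (2^30 bytes), hence the smaller number.
windows_gb = advertised_gb * 1e9 / 2**30
print(f"Capacity reported by Windows: {windows_gb:.1f}GB")  # ~223.5GB
```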

Here We Go Again: 4KB Random Write IOP Caps

With the SF-1200, SandForce capped the peak 4KB random write speed of certain drives while negotiating exclusive firmware deals with other companies to enable higher performance. It was all very confusing: SandForce shipped initial firmware revisions with higher performance and then attempted to take that performance away through subsequent firmware updates.

If you pay attention to the table above you’ll notice that there are two specs for 4KB random write IOPS: burst and sustained. The burst value covers roughly the first 15 seconds of operation; the sustained value is what happens once the firmware-initiated performance cap kicks in. By default the SF-2100/2200 drives have a cap of 20,000 IOPS for 4KB random writes, so after a period of about 15 seconds the max performance on these drives will drop to 20K. The SF-2500/2600 controllers are uncapped; max performance can remain at up to 60K IOPS.
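
To translate those IOPS numbers into bandwidth, here’s the quick arithmetic (a sketch assuming every transfer is exactly 4KB):

```python
# Convert a 4KB random write IOPS figure into throughput.
def throughput_mb_per_s(iops: int, transfer_kb: int = 4) -> float:
    return iops * transfer_kb / 1024  # KB/s -> MB/s (binary)

print(throughput_mb_per_s(20_000))  # ~78 MB/s  - the sustained, capped SF-2100/2200 rate
print(throughput_mb_per_s(60_000))  # ~234 MB/s - the uncapped SF-2500/2600 ceiling
```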

The beta Vertex 3 review sample I have here today manages around 45K IOPS in our 4KB random write test. That test runs for 3 minutes straight, so the cap obviously should’ve kicked in. However, it didn’t.

I asked SandForce why this was. SandForce told me that the initial pre-release firmware on SF-2200 drives doesn’t have the cap enabled, but the final release will put the cap in place. I also asked SandForce if it was possible for one of its partners to ship with a special firmware build that didn’t have the cap in place. SandForce replied that anything was possible.

I asked OCZ if this meant the drive I was testing wasn’t representative of final, shipping performance. OCZ stated very clearly that performance will not change between the drive I have today and the drive that goes on sale in the next 2 months. To me this sounds like SF and OCZ have struck another exclusive firmware deal to ensure slightly higher performance on the Vertex 3 compared to a standard SF-2200 based drive.

SandForce wouldn’t comment on any existing agreements and OCZ said it couldn’t get SandForce to confirm that the V3’s performance wouldn’t change between now and its eventual release. Based on what we saw last time I expect SandForce to offer the 60K IOPS firmware to all partners that meet certain order size commitments. Order enough controllers and you get a special firmware, otherwise you’re stuck with the stock SF-2200 firmware.

Of course this makes things very confusing for those of you looking to shop around when buying a SF-2200 drive. I do wish SandForce would just stick to a single spec and not play these sorts of games, but that’s just how business works, unfortunately.

The good news is that for most desktop workloads you don’t really benefit from being able to execute more than 20K IOPS, at least in today’s usage models.

Comments

  • ErikO - Wednesday, March 9, 2011

    It has been over two years since I took the plunge on SSDs, and I'm now on my third one (Intel SLC). I'm loving it, as McDonald's marketing team would surely love everyone to say.

    I started with that woeful OCZ SSD, (that created more noise than the Mohammad Cartoons in the world).

    They sent me a second one by way of a peace offering (I wrote to them from the address of a small tech site I own), but public reviews said they were just as bad. Sold it on eBay with no reserve (almost got my money back too).

    Then a year ago, my wallet allowed for a 160GB X25-M. That was -worlds- better, but still every now and then (once an hour?) my music would skip - and I recognised that as MLC behaviour. This would be the high-latency small-file-size writes then. But as an audiophile who does actually connect his computer to his hi-fi (via a dedicated sound card, of course), this meant noticeable breaks on a high-output system. Not cool in front of guests.

    So a couple of weeks back, I bought the 64GB X25-E. This was everything I had hoped for. This is what I expected SSDs would give us from day one. The size of the disk is painful, but I digress.

    My question is...with all these much higher results out there, am I really going to perceive much of a difference in system-wide performance? Intel seems to be dropping SLC as far as I can see, and based on my experience, I don't think I can / could turn my back on SLC technology again...!

    So where is all the talk of the X25-E lately? They are for sale everywhere, but the internet seems almost dead to their existence. Too small? Too expensive?

    Even if I can be convinced into a Vertex, I will definitely have to pay the high admission cost for the Pro... I'd happily sacrifice some speed for consistency of performance.

    Gents, let me know what you know?

    Thanks.
  • SeattleGeek - Thursday, March 10, 2011

    Will the 128GB model perform the same as the 256GB model that is reviewed?

    Also, do they have the same number of channels?
  • sean.crees - Friday, March 11, 2011

    Where are the benchmarks for the Vertex 2? I'd really like to see how exactly the new Vertex 3 compares to the drive it's supposed to replace.
  • Dssguy1 - Monday, April 18, 2011

    Because we know OCZ is infamous for trickery, I would like to know if you got some kind of "super juiced-up" version of the drive for review.

    It would make me feel a lot better about dropping $550 (basically because I just did) if I knew that the review SSD you tested matches what we get when we buy the retail version.
  • davele - Friday, December 30, 2011

    Based on what you describe I wonder if this testing is really a representative snapshot of any common use pattern.
    i.e. application installs - typically this is a one-off activity. Any one machine only installs a finite number of applications. (The exception being a SCOM-based virtual desktop where the entire system is installed each time a user logs on. But that is enterprise, and they are very unlikely to put SSDs in user workstations.)

    For enterprise SSD benchmarks, please consider profiling a SQL Server system running both a normal OLTP app & a SharePoint app.
    The OLTP app would give you huge numbers of 4K & 64K random writes, while the logs would give you a largely sequential write stream. The SharePoint app is a heavy BLOB store, so this would give you a random set of large sequential I/Os, with much more read than write.

    The advantage of this is that you can back up the database and capture the I/O requests. Then each test is just a restore & replay of the requests. Easy, repeatable & extremely representative of a heavy I/O workload, especially an enterprise-class load.
