Once More, With Feeling

Ryan said we’d lose some sequential write performance. The drive would no longer be capable of 230MB/s writes; sequential speeds might drop to 80 or 90MB/s. I told him it didn’t matter, that write latency needed to come down, and if that came at the sacrifice of sequential throughput then it’d be fine. He asked me if I was sure; I said yes. I still didn’t think he could do it.

A couple of days later I got word that OCZ had just received a new firmware revision from Korea with the changes I’d asked for. They were going to do a quick test, and if the results made me happy they’d overnight a drive to me for Saturday delivery.

He sent me these Iometer results:


The New Vertex was looking good, but was it too good to be true?

I couldn’t believe it. There was no way. “Sure,” I said, “send the drive over.” He asked if I’d be around on Saturday to receive it. I would be; I’m always around.

This was the drive I got:

No markings, no label, no packaging - just a black box that supposedly fixed all of my problems. I ran the Iometer test first...it passed. I ran PCMark; performance improved. There was no way this thing was actually fixed. I skipped all of the other tests and threw it in my machine, once again cloning my system drive. Not a single pause. Not a single stutter.

The drive felt slower than the Intel or Summit drives, but that was fine: it didn’t pause. My machine was usable. Slower is fine; invasive with regard to my user experience is not.

I took the Vertex back out and ran it through the full suite of tests. It worked. Look at the PCMark Vantage results to see just what re-focusing on small file random write latency did to the drive’s performance:

The Vertex went from performing like the OCZ Apex (dual JMicron JMF602B controllers) to performing more like an Intel X25-M or OCZ Summit. I’ll get to the full performance data later on in this article, but let’s just say that we finally have a more affordable SSD option. It’s not the fastest drive in the world, but it passes the test of giving you the benefits of an SSD without being worse in some way than a conventional hard drive.

As the Smoke Cleared, OCZ Won Me Over

Now let’s be very clear about what happened here. OCZ took the feedback I gave them and, despite it resulting in a product with fewer marketable features, implemented the changes. It’s a lot easier to say that your drive is capable of speeds of up to 230MB/s than it is to say it won’t stutter; the assumption is that it won’t stutter anyway.

As far as I know, this is one of the only reviews (if not the only one) at the time of publication that’s using the new Vertex firmware. Everything else is based on the old firmware, which did not make it to production. Keep that in mind if you’re looking to compare numbers or wondering why the drives behave differently across reviews. The old firmware never shipped thanks to OCZ’s quick action, so if you own one of these drives, you have a fixed version.

While I didn’t really see eye to eye with any of the SSD makers that got trashed in the X25-M review, OCZ was at least willing to listen. On top of that, OCZ was willing to take my feedback and go back to Indilinx to push for a different version of the firmware, despite it resulting in a drive that may be harder to sell to the uninformed. The entire production of Vertex drives was held up until they ended up with a firmware revision that behaved as it should. It’s the sort of agility you can only have in a smaller company, and it’s a trait that OCZ chose to exercise.

They were the first to bring an Indilinx drive to market, the first to produce a working drive based on Samsung’s latest controller, and the company that fixed the Indilinx firmware. I’ve upset companies in the past, and while tempers flared after the X25-M review, OCZ at least made it clear this round that their desire was to produce the best drive they could. After the firmware was finalized, OCZ even admitted to me that they felt they had a much better drive; they weren’t just trying to please me, they felt that their customers would be happier.

I should also point out that the firmware that OCZ pushed for will now be available to all other manufacturers building Indilinx based drives. It was a move that not only helped OCZ but could help every other manufacturer who ships a drive based on this controller.

None of this really matters when it comes to the drive itself, but I felt that the backstory was at least as important as the benchmarks, perhaps showing you a different side of what goes on behind the scenes of one of these reviews.

Comments

  • punjabiplaya - Wednesday, March 18, 2009 - link

    Great info. I'm looking to get an SSD but was put off by all these setbacks. Why should I put away my HDDs and get something a million times more expensive that stutters?
    This article is why I visit AT first.
  • Hellfire26 - Wednesday, March 18, 2009 - link

    Anand, when you filled up the drives to simulate a full drive, did you also write to the extended area that is reserved? If you didn't, wouldn't the Intel SLC drive (as an example) show less of a performance drop versus the MLC drive? As you stated, Intel has reserved more flash memory on the SLC drive, above the stated SSD capacity.

    I also agree with GourdFreeMan that the physical block size needs to be reduced. Due to the constant erasing of blocks, the Trim command is going to reduce the life of the drive. Of course, drive makers could increase the size of the cache and delay using the Trim command until the number of blocks to be erased equals the cache available (a sketch of this idea follows this comment). This would more efficiently rearrange the valid data still present in the blocks that are being erased (fewer writes). Microsoft would have to design the Trim command so it would know how much cache was available on the drive, and drive makers would have to specifically reserve a portion of their cache for use by the Trim command.

    I also like Basilisk's comment about increasing the cluster size, although if you increase it too much you are likely to waste space and increase overhead. Surely, even if MS only doubles the cluster size for NTFS partitions to 8KB, write cycles to SSDs would be reduced. There is also the difference between 32-bit and 64-bit operating systems to consider. However, I don't have the knowledge to say whether Microsoft can make these changes without running into serious problems with other aspects of the operating system.
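    A minimal Python sketch of the delayed-TRIM idea above. This is purely illustrative; the threshold, names, and structure are hypothetical and not how any real controller firmware is organized:

        # Hypothetical sketch: batch TRIMs until enough invalid blocks
        # accumulate to justify one consolidation pass (not real firmware).
        CACHE_BLOCKS = 1024  # assumed: blocks' worth of reserved cache

        class TrimBatcher:
            def __init__(self, flush_threshold=CACHE_BLOCKS):
                self.pending = set()  # LBAs marked invalid but not yet erased
                self.flush_threshold = flush_threshold

            def on_trim(self, lba):
                # The OS reported this LBA as deleted; just remember it for now.
                self.pending.add(lba)
                if len(self.pending) >= self.flush_threshold:
                    self.flush()

            def flush(self):
                # A real controller would group pending LBAs by physical block,
                # copy the still-valid data out, then erase whole blocks.
                print(f"erasing blocks covering {len(self.pending)} invalid LBAs")
                self.pending.clear()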
  • Anand Lal Shimpi - Wednesday, March 18, 2009 - link

    I only wrote to the LBAs reported to the OS. So on the 80GB Intel drive that's from 0 - 74.5GB.

    I didn't test the X25-E as extensively as the rest of the drives so I didn't look at performance degradation as closely just because I was running out of time and the X25-E is sooo much more expensive. I may do a standalone look at it in the near future.

    Take care,
    Anand
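    For anyone wondering where the 74.5GB figure comes from: drives are sold in decimal gigabytes (10^9 bytes) while the OS counts binary gigabytes (2^30 bytes). A quick check in Python:

        # 80GB advertised (10^9 bytes each) expressed in binary GB (2^30 bytes)
        print(80 * 10**9 / 2**30)  # ~74.5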
  • gss4w - Wednesday, March 18, 2009 - link

    Has anyone at Anandtech talked to Microsoft about when the "Trim" command will be supported in Windows 7? Also, it would be great if you could include some numbers from the Windows 7 beta when you do a follow-up.

    One reason I ask is that I searched for "Windows 7 ssd trim" and I saw a presentation from WinHEC that made it sound like support for the trim command would be a requirement for SSD drives to meet the Windows 7 logo requirements. I would think if this were the case then Windows 7 would have support for trim. However, this article made it sound like support for Trim might not be included when Windows 7 is initially released, but would be added later.

  • ryedizzel - Thursday, March 19, 2009 - link

    I think it is obvious that Windows 7 will support TRIM. The bigger question this article points out is whether or not the current SSDs will be upgradeable via firmware, which is more important for consumers wanting to buy one now.
  • Martimus - Wednesday, March 18, 2009 - link

    It took me an hour to read the whole thing, but I really enjoyed it. It reminded me of the time I spent testing circuitry and doing root cause analysis.
  • alpha754293 - Wednesday, March 18, 2009 - link

    I think that it would be interesting if you were able to test the drives on the "desktop/laptop/consumer" front by writing an 8 GiB file using 4 kiB block sizes, etc. for the desktop pattern, and also to test the drive with larger file and block sizes for the server/workstation pattern (a rough sketch of this follows this comment).

    You present some very very good arguments and points, and I found your article to be thoroughly researched and well put.

    So I do have to commend you on that. You did an excellent job. It is thoroughly enjoyable to read.

    I'm currently looking at 4x 256 GB Samsung MLC drives on Solaris 10/ZFS for apps/OS (for PXE boot), and this article covers a lot of the testing; but I would be interested to see how it would handle more server-type workloads.
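    A rough Python sketch of what such a test might look like: the same total volume written with a small "desktop" block size versus a large "server" block size. The file name and sizes are arbitrary placeholders (256 MiB here for brevity where the comment suggests 8 GiB), and this is no substitute for Iometer:

        import os, time

        def write_test(path, total_bytes, block_size):
            # Unbuffered binary writes so each block goes straight to the OS.
            buf = os.urandom(block_size)
            start = time.time()
            with open(path, "wb", buffering=0) as f:
                for _ in range(total_bytes // block_size):
                    f.write(buf)
                os.fsync(f.fileno())
            return total_bytes / (time.time() - start) / 2**20  # MB/s

        # "desktop" pattern: 4 kiB blocks; "server" pattern: 1 MiB blocks
        for bs in (4 * 1024, 1024 * 1024):
            print(bs, round(write_test("testfile", 256 * 2**20, bs), 1), "MB/s")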
  • korbendallas - Wednesday, March 18, 2009 - link

    If the implementation of the Trim command is as you described here, it would actually kind of suck.

    "The third step was deleting the original 4KB text file. Since our drive now supports TRIM, when this deletion request comes down the drive will actually read the entire block, remove the first LBA and write the new block back to the flash:"

    First of all, it would create a new phenomenon called Erase Amplification. This would negatively impact the lifetime of a drive.

    Secondly, you now have worse delete performance.


    Basically, an SSD 4kB block can be in 3 different states: erased, data, garbage. A block enters the garbage state when it is "overwritten" or the Trim command marks its contents as invalid.

    The way I would imagine it working, marking block content as invalid is all the Trim command does.

    Instead, the drive would spend idle time finding the 512kB pages with the most garbage blocks. Once such a page is found, all the data blocks from that page would be copied to another page, and the page would be erased. Doing it this way maximizes the number of garbage blocks being converted to erased.
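    A toy Python model of the scheme described above, keeping the comment's terminology (4kB blocks grouped into 512kB erase pages, each block erased/data/garbage). Everything here is hypothetical and only meant to make the copy-then-erase step concrete:

        BLOCKS_PER_PAGE = 128  # 512kB page / 4kB block
        ERASED, DATA, GARBAGE = range(3)

        class Page:
            def __init__(self):
                self.blocks = [ERASED] * BLOCKS_PER_PAGE

            def count(self, state):
                return self.blocks.count(state)

        def idle_gc(pages):
            # Reclaim the page with the most garbage: copy its live data
            # into erased blocks elsewhere, then erase the whole page.
            victim = max(pages, key=lambda p: p.count(GARBAGE))
            if victim.count(GARBAGE) == 0:
                return
            live = victim.count(DATA)
            for page in pages:
                if page is victim:
                    continue
                while live and ERASED in page.blocks:
                    page.blocks[page.blocks.index(ERASED)] = DATA
                    live -= 1
            victim.blocks = [ERASED] * BLOCKS_PER_PAGE  # one whole-page erase

        # demo: a page that is half garbage, half live data gets reclaimed
        pages = [Page() for _ in range(4)]
        pages[0].blocks = [GARBAGE] * 64 + [DATA] * 64
        idle_gc(pages)
        print(pages[0].count(ERASED))  # 128: page fully recovered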
  • alpha754293 - Wednesday, March 18, 2009 - link

    BTW...you might be able to simulate the drive as well using Cygwin where you go to the drive and run the following:

    $ dd if=/dev/random of=testfile bs=1024k count=76288

    I'm sure that you can come up with fancier shell scripts and stuff that use the random number generator for the offsets. (If you really want it to work well, partition the drive so the test takes up the entire initial 74.5 GB partition; then, when you're done "dirtying" the data using dd and random offsets, grow the partition to take up the entire disk again.) One possible version is sketched after this comment.

    Just as a suggestion for future reference.

    I use parts of that to varying degrees when I do my file/disk I/O subsystem tests.
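    One way the random-offset "dirtying" pass might look in Python rather than dd; the file name and 1 GiB span are placeholders standing in for the 74.5 GB partition in the comment above:

        import os, random

        SPAN = 1 * 2**30  # stand-in for the full 74.5 GB partition
        BLOCK = 4096

        # Preallocate the span, then overwrite random 4kB blocks within it so
        # the flash ends up with scattered valid data, like a used drive.
        with open("dirtyfile", "wb") as f:
            f.truncate(SPAN)

        with open("dirtyfile", "r+b") as f:
            for _ in range(SPAN // BLOCK):  # roughly one full pass of writes
                f.seek(random.randrange(SPAN // BLOCK) * BLOCK)
                f.write(os.urandom(BLOCK))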
  • nubie - Wednesday, March 18, 2009 - link

    I should think that most "performance" laptops will come with a Vertex drive in the near future.

    Finally a performance SSD that comes near mainstream pricing.

    Things are looking up; if more manufacturers get their heads out of the sand, we should see prices drop as competition finally starts breeding excellence.
