Sequential Read/Write Speed

To measure sequential performance I ran a one-minute 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are the average MB/s over the entire test length.
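
The access pattern is easy to approximate yourself. The sketch below is a rough Python illustration, not our actual Iometer configuration: the target path is a placeholder, and a real benchmark would also bypass the OS cache with direct I/O, which is omitted here for brevity.

    import os
    import time

    # One 128KB read outstanding at a time (QD1); report average MB/s.
    TARGET = "/path/to/testfile"   # placeholder: a test file or device you can read
    BLOCK = 128 * 1024
    DURATION = 60  # seconds

    fd = os.open(TARGET, os.O_RDONLY)
    total = 0
    start = time.monotonic()
    while time.monotonic() - start < DURATION:
        data = os.read(fd, BLOCK)   # QD1: wait for each read before issuing the next
        if not data:                # reached the end of the span; start over
            os.lseek(fd, 0, os.SEEK_SET)
            continue
        total += len(data)
    os.close(fd)
    print("Average: %.1f MB/s" % (total / DURATION / 1e6))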

As impressive as the random read/write speeds were, at low queue depths the Vertex 4's sequential read speed is problematic:

Desktop Iometer - 128KB Sequential Read (4K Aligned)

Curious as to what's going on, I ran AS-SSD and came away with much better results:

Incompressible Sequential Read Performance - AS-SSD

Finally I turned to ATTO, which gave me the answer I was looking for. The Vertex 4's sequential read speed is slow at low queue depths with certain workloads; move to larger transfer sizes or higher queue depths and the problem resolves itself:

ATTO Sequential Read - QD2, QD4 and QD8

The problem is that many sequential read operations in client workloads occur at 64KB to 128KB transfer sizes and at queue depths of 1 to 3. Looking at the ATTO data above you'll see that this is exactly the weak point of the Vertex 4. If the queue depth terminology is unfamiliar, the sketch below shows what it means in practice.
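
Queue depth is simply the number of requests kept in flight at once: a new read is issued as soon as one completes, so N requests are always outstanding. A rough Python sketch of how a benchmark sustains a given queue depth (threads stand in for native command queuing, and the file path is a placeholder):

    import os
    import time
    from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

    TARGET = "/path/to/testfile"   # placeholder
    BLOCK = 128 * 1024
    SPAN = 1 << 30                 # read over a 1GB span

    def run(queue_depth, seconds=10):
        fd = os.open(TARGET, os.O_RDONLY)
        total, offset, start = 0, 0, time.monotonic()
        with ThreadPoolExecutor(max_workers=queue_depth) as pool:
            pending = set()
            while time.monotonic() - start < seconds:
                while len(pending) < queue_depth:   # keep QD requests in flight
                    pending.add(pool.submit(os.pread, fd, BLOCK, offset))
                    offset = (offset + BLOCK) % SPAN
                done, pending = wait(pending, return_when=FIRST_COMPLETED)
                for f in done:
                    total += len(f.result())
        os.close(fd)
        return total / (time.monotonic() - start) / 1e6

    for qd in (1, 2, 4, 8):
        print("QD%d: %.1f MB/s" % (qd, run(qd)))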

I went back to Iometer and varied queue depth with our 128KB sequential read test, which gave a good characterization of the Vertex 4's large block sequential read performance:

The Vertex 4 performs better with heavier workloads. While other drives extract enough parallelism to deliver fairly high performance with only a single IO in the queue, the Vertex 4 needs 2 or more for large block sequential reads. Heavier read workloads do wonderfully on the drive; ironically, it's the lighter workloads that are a problem. That's the exact opposite of what we're used to seeing. As this seemed like a bit of an oversight, I presented OCZ with my data and got some clarification.

Everest 2 was optimized primarily for heavier workloads where deeper queues are to be expected. Extending performance gains to lower queue depths is indeed possible (the Everest 1 based Octane obviously does fine here), but it wasn't deemed a priority for the initial firmware release. OCZ instead felt it was far more important to have a high-end alternative to SandForce in its lineup. Given that we're still seeing some isolated issues on non-Intel SF-2281 drives, the urgency is understandable.

There are two causes of the lower-than-expected low queue depth sequential read performance. First, OCZ doesn't currently enable NCQ streaming at queue depths below 3; this one is a simple fix. Second, the Everest 2 doesn't currently allow pipelined read access from more than 8 concurrent NAND die. For larger transfers and higher queue depths this isn't an issue, but smaller transfers at lower queue depths end up delivering much lower than expected performance. The toy model below shows how that second limit plays out.
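
Every number in this model is our assumption for illustration, not an OCZ spec: we assume each NAND die streams ~50 MB/s and that a 128KB transfer is striped across 2 die. With reads pipelined from at most 8 die at once, throughput is gated by how many die the outstanding requests actually touch.

    PER_DIE_MBPS = 50   # assumed per-die streaming rate
    DIE_CAP = 8         # max concurrent die the firmware will pipeline reads from

    def read_mbps(queue_depth, dies_per_transfer):
        engaged = min(queue_depth * dies_per_transfer, DIE_CAP)
        return engaged * PER_DIE_MBPS

    print(read_mbps(1, 2))   # QD1, 128KB transfer: 2 die engaged -> 100 MB/s
    print(read_mbps(4, 2))   # QD4, 128KB transfers: die cap hit -> 400 MB/s
    print(read_mbps(1, 8))   # QD1, large transfer: die cap hit -> 400 MB/s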

To confirm that I wasn't crazy and that the Vertex 4 was capable of high real-world sequential read speeds, I created a simple test: I took a 3GB archive and copied it from the Vertex 4 to a RAM drive (to eliminate any write speed bottlenecks). The Vertex 4's performance was very good:

Sequential Read - 3GB Archive Copy to RAM Disk
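
The test is trivial to reproduce. A minimal Python sketch (both paths are placeholders, and you'd want to flush the OS file cache before timing for an honest number):

    import os
    import shutil
    import time

    # Time a large-file copy from the SSD to a RAM disk and report throughput.
    SRC = "/mnt/vertex4/archive.bin"   # ~3GB archive on the drive under test
    DST = "/mnt/ramdisk/archive.bin"   # RAM disk removes the write bottleneck

    start = time.monotonic()
    shutil.copyfile(SRC, DST)
    elapsed = time.monotonic() - start
    print("%.1f MB/s" % (os.path.getsize(SRC) / elapsed / 1e6))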

Clearly the Vertex 4 is capable of reading at very high rates, particularly when it matters; the current firmware just doesn't seem tuned for low queue depth operation.

Both of these issues were being worked on at the time of publication and should be rolled into the next firmware release for the drive (due out sometime in late April). Again, OCZ's aim was to deliver a high-end drive that could be offered as an alternative to the Vertex 3 as quickly as possible.

Update: Many readers have reported that the Vertex 4's performance depends on having an active partition on the drive due to its NCQ streaming support. While this is true, it's not the reason you'll see gains in synthetic tests like Iometer. If you don't fill the drive with valid data before conducting read tests, the Vertex 4 returns lower performance numbers. Running Iometer against a live partition fills the drive with a valid test file before the benchmark runs, similar to the preconditioning we do for our Iometer read tests anyway. The chart below shows the difference in performance between an Iometer sequential read test run on the physical disk (no partition), on an NTFS partition on the same drive, and finally on the physical disk after all LBAs have been written to:

Notice how the NTFS and RAW+precondition lines are identical: the performance gain here isn't from NCQ streaming but from the presence of valid data to read back. Most SSDs tend to give unrealistically high performance numbers if you read from them immediately following a secure erase, which is why we always precondition our drives before running Iometer. The Vertex 4 just happens to do the opposite, but this has no bearing on real-world performance since you'll always be reading actual files in actual use. A sketch of that preconditioning step follows.
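
A minimal Python sketch of preconditioning (the target is a placeholder; pointing this at a raw device is destructive):

    import os

    # Sequentially write valid data across every LBA before running read tests.
    TARGET = "/dev/sdX"               # placeholder: raw device or large test file
    CHUNK = os.urandom(1024 * 1024)   # 1MB of incompressible data, reused

    fd = os.open(TARGET, os.O_WRONLY)
    try:
        while os.write(fd, CHUNK) == len(CHUNK):
            pass                      # keep writing until the target is full
    except OSError:                   # writes past the last LBA fail
        pass
    finally:
        os.close(fd)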

Despite the shortcomings with low queue depth sequential read performance, the Vertex 4 dominated our sequential write tests, even at low queue depths. Only the Samsung SSD 830 is able to compete:

Desktop Iometer - 128KB Sequential Write (4K Aligned)

Technically the SF-2281 drives equal the Vertex 4's performance, but only with highly compressible data. Large sequential writes are very often composed of already-compressed data, which makes the Vertex 4's real-world performance advantage tangible. You can check this for yourself with the sketch below.
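
A quick Python check of how compressible your own data is, and therefore whether SandForce would have anything to work with:

    import sys
    import zlib

    # Sample a file and see how much it shrinks. Already-compressed data
    # (archives, media) stays near 100%, leaving SandForce nothing to exploit.
    data = open(sys.argv[1], "rb").read(16 * 1024 * 1024)   # first 16MB
    ratio = len(zlib.compress(data, 6)) / len(data)
    print("compresses to %.0f%% of original size" % (ratio * 100))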

Incompressible Sequential Write Performance - AS-SSD

AS-SSD gives us another look at incompressible write performance, which again is very good on the Vertex 4. As far as writes are concerned, there's really no beating the Vertex 4.

Comments

  • RussianSensation - Wednesday, April 4, 2012

    It seems after extensive use and degradation, the Corsair Performance Pro is one of the best, even besting the Crucial M4:

    http://www.xbitlabs.com/articles/storage/display/m...
  • meloz - Wednesday, April 4, 2012

    What's the deal with using such an enormous SoC built on a 65nm process?

    I can understand OCZ / Indilinx not being willing to shell out a premium for a cutting-edge 28nm process, but they could have at least used a 45nm process.

    With a 45nm process the SoC would be a lot smaller, thermal management would be easier (and cheaper), and power consumption would be lower (firmware update or not).

    The cost of a more modern process is easily balanced by the fact that they would get a lot more chips out of a 300mm wafer at 45nm than at 65nm.

    Vertex 4 is a good improvement from OCZ, but they need to get serious about their execution and 'little details' if they still want to exist in another 5 years. Marvell, SandForce and Intel are not standing still and as competition increases the price of such poor decisions will weigh heavily against OCZ.
  • Ryan Smith - Thursday, April 5, 2012

    While we obviously can't speak for OCZ, when it comes to processes do keep in mind that costs escalate with each newer process. Older processes are not only cheaper because they have effectively reached their maximum yields, but the cost of their development has long since been paid off, allowing the fabs to sell wafer runs at a lower cost and still book a profit.

    For a sufficiently simple device, the additional number of dice per wafer may not offset the higher per-wafer costs, lower yield, and demand-driven pricing. 4x nm processes are still booked solid, and will be for some time.
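
    To put rough numbers on it (every figure below is assumed purely for illustration, none come from OCZ or the fabs): per-die cost is what decides, and a cheap mature wafer can beat a denser new one even before yield and demand pricing enter the picture.

        import math

        # A 65nm -> 45nm shrink scales die area by ~(45/65)^2, but the
        # mature node's wafers are cheaper per run.
        WAFER_DIAMETER = 300.0   # mm
        DIE_AREA_65 = 170.0      # mm^2, hypothetical controller die size
        DIE_AREA_45 = DIE_AREA_65 * (45 / 65) ** 2
        WAFER_COST = {"65nm": 1500, "45nm": 3500}   # USD per wafer, assumed

        def dice_per_wafer(die_area):
            # standard approximation: gross wafer area minus edge loss
            radius = WAFER_DIAMETER / 2
            return int(math.pi * radius ** 2 / die_area
                       - math.pi * WAFER_DIAMETER / math.sqrt(2 * die_area))

        for node, area in (("65nm", DIE_AREA_65), ("45nm", DIE_AREA_45)):
            n = dice_per_wafer(area)
            print("%s: %d dice/wafer, $%.2f per die" % (node, n, WAFER_COST[node] / n))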
  • NCM - Wednesday, April 4, 2012

    Firmware "promises" aren't worth the guarantee they're not written on.
    We buy today's product, not the product that might exist at some indefinite point in the future.
  • rw1986 - Wednesday, April 4, 2012

    Seems Intel NAND would cost more, and OCZ buys most of their NAND from Micron, I believe.
  • Kristian Vättö - Wednesday, April 4, 2012

    Without knowing the prices, it's hard to say anything. Intel and Micron NAND come from the same fab so the silicon quality is the same. Intel does rate their NAND higher (5000 vs 3000 P/E cycles) and both companies have their own validation processes, so it's possible that Intel NAND is slightly higher quality.

    It's possible that OCZ sources NAND from several fabs for the Vertex 4. E.g. the Vertex 3 used NAND from Intel, Micron, SpecTek and Hynix. Micron NAND is available in higher quantity as they own more plants, so that's why it's more common. Price-wise, I'd guess they're all about the same, though.
  • bji - Wednesday, April 4, 2012

    Any time a drive has a significant amount of RAM in it, I get a bit worried about the possibility of data loss on power outage. If the drive has the name OCZ attached, this worry becomes a huge concern. I would not be at all surprised if OCZ increased performance in part by reducing durability in the face of power outage.

    If the RAM is used as a write buffer, then on power outage the data in it is lost. This is not a problem if the drive correctly reports this state to the operating system - i.e., it doesn't tell the operating system that the data is sync'd to permanent storage until it's actually been written out of the RAM cache into the flash cells. But if the drive cheats by telling the operating system that blocks have been written when they've only been stored in its RAM, then data can be lost despite the drive's guarantee that it can't be. In code terms the contract looks like the sketch below.
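
    A rough Python sketch of that contract (the filename is made up):

        import os

        # The write may sit in OS and drive caches until the sync returns;
        # a well-behaved drive only acknowledges the flush once the data is
        # actually in flash, not merely in its RAM cache.
        fd = os.open("critical.dat", os.O_WRONLY | os.O_CREAT, 0o644)
        os.write(fd, b"must survive power loss")
        os.fsync(fd)   # issues a cache flush; a "cheating" drive acks early
        os.close(fd)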

    Cheating in this way could make write performance look better, and given that the drive looks particularly good for write performance and has a lot of RAM on board, I am very, very suspicious.

    What about testing this drive's durability in the face of power loss?
  • geddarkstorm - Wednesday, April 4, 2012

    That's a good question.

    However, from the article it sounds like the RAM is mostly being used to prefetch reads, rather than buffer writes.
  • bji - Wednesday, April 4, 2012

    That's AnandTech's conjecture. Even if OCZ told them it was so, it's not proven, as OCZ could be lying to cover the deficiency.
  • FunBunny2 - Wednesday, April 4, 2012

    Exactly!! An "Enterprise SSD" with no superCap?? That needs some 'splainin.
