Random Read/Write Speed

The four corners of SSD performance are random read, random write, sequential read, and sequential write speed. Random accesses are generally small in size while sequential accesses tend to be larger, hence the four Iometer tests we use in all of our reviews.

Our first test writes 4KB blocks in a completely random pattern over an 8GB space of the drive to simulate the sort of random access you'd see on an OS drive (even this is more stressful than a typical desktop user would generate). I perform three concurrent IOs and run the test for 3 minutes; the results reported are in average MB/s over the entire run. We use both standard pseudo-randomly generated data for each write and fully random data, to show the maximum and minimum performance SandForce based drives offer in these tests. The average performance of SF drives will likely fall somewhere between the two values you see in the graphs for each drive. For an understanding of why this matters, read our original SandForce article.
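
As a rough illustration of what this test does (this is not Iometer: the file path is hypothetical, it issues one IO at a time rather than three, and it goes through the OS page cache on a POSIX system rather than hitting the drive raw), a Python sketch of the access pattern looks like this:

    import os
    import random
    import time

    PATH = "testfile.bin"   # hypothetical target; a real test hits the raw device directly
    SPAN = 8 * 1024**3      # 8GB LBA space
    BLOCK = 4 * 1024        # 4KB transfer size
    RUNTIME = 180           # 3 minutes

    def random_write_test(compressible):
        # SandForce compresses data in flight, so the payload matters:
        # a repeating pattern compresses well, os.urandom() does not.
        data = (b"\xAA" * BLOCK) if compressible else os.urandom(BLOCK)
        fd = os.open(PATH, os.O_RDWR | os.O_CREAT)
        os.ftruncate(fd, SPAN)          # sparse 8GB file
        written = 0
        start = time.monotonic()
        while time.monotonic() - start < RUNTIME:
            # random 4KB-aligned offset anywhere in the 8GB span
            offset = random.randrange(SPAN // BLOCK) * BLOCK
            os.pwrite(fd, data, offset)
            written += BLOCK
        os.close(fd)
        return written / (time.monotonic() - start) / 1024**2  # average MB/s

    print(f"compressible:   {random_write_test(True):.1f} MB/s")
    print(f"incompressible: {random_write_test(False):.1f} MB/s")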

Iometer - 4KB Random Write, 8GB LBA Space, QD=3

Random write performance looks extremely good on the Agility 3, even with incompressible data (at least at low queue depths). As with the original Agility, it's impossible to tell the Agility 3 apart from the Vertex 3 here.

Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0-5, higher depths are possible in heavy I/O (and multi-user) workloads:

Iometer - 4KB Random Write, 8GB LBA Space, QD=32

Even as we ramp up queue depth in Iometer, the Agility 3 stays glued to the Vertex 3. It's only with incompressible data that we see the first hint of a performance deficit, and even that isn't much.
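
For readers wondering what a higher queue depth actually means mechanically: it's simply the number of IOs kept outstanding at once. Here's a rough sketch in the same spirit as the earlier one (threads stand in for Iometer's outstanding IOs; a real benchmark would use asynchronous IO against the raw device, and the file path is again hypothetical):

    import os
    import random
    import time
    from concurrent.futures import ThreadPoolExecutor

    PATH = "testfile.bin"   # hypothetical target file
    SPAN = 8 * 1024**3      # 8GB LBA space
    BLOCK = 4 * 1024        # 4KB transfer size

    def worker(fd, data, deadline):
        written = 0
        while time.monotonic() < deadline:
            offset = random.randrange(SPAN // BLOCK) * BLOCK
            os.pwrite(fd, data, offset)   # thread-safe: pwrite takes an explicit offset
            written += BLOCK
        return written

    def random_write_qd(queue_depth, runtime=60):
        data = os.urandom(BLOCK)          # incompressible payload
        fd = os.open(PATH, os.O_RDWR | os.O_CREAT)
        os.ftruncate(fd, SPAN)
        start = time.monotonic()
        deadline = start + runtime
        # queue_depth threads each keep one IO in flight at all times
        with ThreadPoolExecutor(max_workers=queue_depth) as pool:
            written = sum(pool.map(worker, [fd] * queue_depth,
                                   [data] * queue_depth, [deadline] * queue_depth))
        os.close(fd)
        return written / (time.monotonic() - start) / 1024**2  # average MB/s

    print(f"QD=3:  {random_write_qd(3):.1f} MB/s")
    print(f"QD=32: {random_write_qd(32):.1f} MB/s")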

Iometer - 4KB Random Read, QD=3

Random read performance is unfortunately limited to 120GB Vertex 3 levels. It's unclear to me whether this is an asynchronous NAND issue or an artificial firmware cap.

Sequential Read/Write Speed

To measure sequential performance I ran a one-minute 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.
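
A simplified Python version of this sequential pass, for illustration only (it assumes the 8GB test file from the first sketch already exists, and it uses buffered IO rather than Iometer's raw device access):

    import os
    import time

    PATH = "testfile.bin"   # hypothetical target created by the earlier sketch
    BLOCK = 128 * 1024      # 128KB transfer size
    RUNTIME = 60            # 1 minute

    def sequential_read_test():
        fd = os.open(PATH, os.O_RDONLY)
        size = os.fstat(fd).st_size
        offset = 0
        total = 0
        start = time.monotonic()
        while time.monotonic() - start < RUNTIME:
            os.pread(fd, BLOCK, offset)        # queue depth of 1: one IO at a time
            total += BLOCK
            offset = (offset + BLOCK) % size   # walk the span front to back, then wrap
        os.close(fd)
        return total / (time.monotonic() - start) / 1024**2  # average MB/s

    print(f"sequential read: {sequential_read_test():.1f} MB/s")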

Iometer - 128KB Sequential Read

Sequential read performance is lower than the Vertex 3's; the 240GB Agility 3 performs more like a 120GB Vertex 3 than its 240GB sibling.

Iometer - 128KB Sequential Write

Sequential write speed is competitive but generally not better than the Vertex 3.

Comments (59)

  • theagentsmith - Tuesday, May 24, 2011 - link

    Corsair Force F115 154 Euros (1.34€/GB)
    OCZ Vertex 2E 120GB 175 Euros (1.46€/GB) don't know if it's a 25nm model
    OCZ Agility 3 120GB 228 Euros (1.9€/GB)
    OCZ Vertex 3 120GB 259 Euros (2.16€/GB)
    Prices including VAT

    Sure, this new generation is faster, but there's barely any difference in an everyday scenario, and definitely not a night-and-day difference like between a mechanical HD and a good SSD, so I prefer to pocket the savings and buy an F115 for another PC :)
  • OCedHrt - Tuesday, May 24, 2011 - link

    Are the numbers in the "OCZ Vertex 3 240GB - Resiliency - AS SSD Sequential Write Speed - 6Gbps" chart on page 9 wrong? They don't match the conclusion: "The 240GB Agility 3 behaves similarly to the Vertex 3, although it does lose more ground after our little torture session."

    A 2-3% drop on Vertex 3 versus nearly 15% on Agility 3 is hardly behaving similarly. And the Agility 3 barely recovers after TRIM.
  • Mr Alpha - Tuesday, May 24, 2011 - link

    For the TRIM test you fill the entire drive with incompressible randomly written data, and then TRIM it. It must take some time for the GC routine to actually clean up all those blocks. Does the time you wait before doing the after TRIM test affect the results you get?
  • JasonInofuentes - Tuesday, May 24, 2011 - link

    I think I understand what you're asking. You're wondering whether the idle time after the drive has been "deleted", during which it can engage in some amount of garbage collection, might be affecting the results. Certainly a possibility, which is why tests are run multiple times and averages reported.

    Great question, though. Thanks.
  • B0GiE - Tuesday, May 24, 2011 - link

    I would like to see a 120GB & 240GB shootout between the following:

    Corsair Force Series 3
    Corsair Force Series 3 GT
    OCZ Vertex 3 Max IOPS
    OCZ Vertex 3
    OCZ Agility 3

    Pretty Please!
  • icrf - Tuesday, May 24, 2011 - link

    Agreed. I'm particularly interested in a 120GB SSD, probably SF-2200 based. I bought a 60GB OCZ Vertex 2 for boot/apps last fall, thinking I could stay within that, and have failed, so that's moved to the laptop and I'm looking for a 120GB drive for the desktop.

    If the Corsair drives can really keep their pricing, they sound the most appealing. The specs sound very Vertex-like with pricing very Agility-like. I just want to see how some of these smaller drives fare with fewer NAND devices to deal with.
  • Oxford Guy - Tuesday, May 24, 2011 - link

    The 240 GB Vertex 2!
  • Shadowmaster625 - Tuesday, May 24, 2011 - link

    "The original X25-M had 10 channels of NAND, giving it the ability to push nearly 800MB/s of data. Of course we never saw such speeds, as it's only one thing to read a few KB of data from a NAND array and dump it into a register. It's another thing entirely to transfer that data over an interface to the host controller."

    That's why I've been saying they need to put a flash controller on the die. Imagine a dual-sided DIMM with 8 NAND chips per side, each running ONFi 3.0 at 400MB/s. That's 6.4 GB/s. zomg. It elicits a Pavlovian response. 50 billion bits per second?

    If Intel were really interested in capturing the portable devices market, they'd be doing this. The tablet and smartphone SoCs all have integrated LPDDR controllers, and look how fast they are despite being so low-bandwidth and low-power.
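
    Sanity-checking the bandwidth arithmetic above (a quick sketch; the chip count and per-chip rate are the comment's own assumptions):

        chips = 2 * 8              # dual-sided DIMM, 8 NAND chips per side
        per_chip_mb_s = 400        # assumed ONFi 3.0 per-chip transfer rate
        total_mb_s = chips * per_chip_mb_s          # 6400 MB/s = 6.4 GB/s
        print(total_mb_s * 8 / 1000, "Gbit/s")      # 51.2, i.e. ~50 billion bits per second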
  • bji - Tuesday, May 24, 2011 - link

    I wonder if it's practical to put the controller on the die. Flash dies are highly optimized for flash, not for general-purpose processing transistors. Flash is usually a generation or so ahead of CPUs in lithography because its layout is simpler than a CPU's. Putting a controller on a flash die would mean building processing transistors with the same lithography process used for flash, and I just don't think that's likely to be feasible. Of course, flash controller logic would likely be a lot simpler than a full x86 core, but I don't think that changes the fundamental impracticality of using flash process technology to create controller logic.
  • bji - Tuesday, May 24, 2011 - link

    Oh, sorry, I think I misunderstood you. You're talking about putting flash controllers on CPU dies, not on the flash dies. In that case, I think it's likely to be an inevitability. I predict that eventually permanent storage will look like DIMMs do now: like you said, sticks that you plug into slots on your motherboard just as you do for RAM, with the controller built into the CPU to interface with them at high speed, and operating systems will just see them mapped to some range in the CPU address space. "Hard drives" will be a thing of the past, replaced by "persistent memory".
