AnandTech Storage Bench 2011 - Light Workload

Our new light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio better mimics a typical light workload (although even lighter workloads would be far more read centric).

The I/O breakdown is similar to the heavy workload at small transfer sizes; however, you'll notice that there are far fewer large IO transfers:

AnandTech Storage Bench 2011 - Light Workload IO Breakdown

    IO Size    % of Total
    4KB        27%
    16KB       8%
    32KB       6%
    64KB       5%

Despite the reduction in large IOs, over 60% of all operations are perfectly sequential. Average queue depth is a lighter 2.2029 IOs.
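
As a rough illustration of how statistics like these are derived, here is a minimal Python sketch that summarizes an I/O trace. The CSV layout and field names are assumptions for illustration only, not the format our capture tools actually produce.

    # Sketch: summarize an I/O trace into read/write counts, the share of
    # perfectly sequential accesses, and average queue depth. Assumes a
    # hypothetical CSV of (op, offset, size, queue_depth) rows.
    import csv

    reads = writes = sequential = 0
    qd_total = 0.0
    rows = 0
    prev_end = None

    with open("light_workload_trace.csv", newline="") as f:
        for op, offset, size, qd in csv.reader(f):
            offset, size = int(offset), int(size)
            rows += 1
            qd_total += float(qd)
            if op == "read":
                reads += 1
            else:
                writes += 1
            # An access is "perfectly sequential" if it starts exactly
            # where the previous one ended.
            if prev_end is not None and offset == prev_end:
                sequential += 1
            prev_end = offset + size

    print(f"reads={reads} writes={writes}")
    print(f"sequential: {sequential / rows:.1%}")
    print(f"average queue depth: {qd_total / rows:.4f}")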

AnandTech Storage Bench 2011 - Light Workload

In our light workload the performance gap between the Vertex 3 and Agility 3 is a lot smaller. For typical desktop usage, the Agility 3 is likely indistinguishable from the Vertex 3.


Comments (59)

  • theagentsmith - Tuesday, May 24, 2011 - link

    Corsair Force F115 154 Euros (1.34€/GB)
    OCZ Vertex 2E 120GB 175 Euros (1.46€/GB) don't know if it's a 25nm model
    OCZ Agility 3 120GB 228 Euros (1.9€/GB)
    OCZ Vertex 3 120GB 259 Euros (2.16€/GB)
    Prices including VAT

    Sure, this new generation is faster, but there is barely any difference in an everyday scenario, and definitely not a night-and-day difference like between a mechanical HD and a good SSD, so I'd rather pocket the savings and put an F115 in another PC :)
  • OCedHrt - Tuesday, May 24, 2011 - link

    Are the numbers in the "OCZ Vertex 3 240GB - Resiliency - AS SSD Sequential Write Speed - 6Gbps" chart on page 9 wrong? They don't match the conclusion: "The 240GB Agility 3 behaves similarly to the Vertex 3, although it does lose more ground after our little torture session."

    A 2-3% drop on Vertex 3 versus nearly 15% on Agility 3 is hardly behaving similarly. And the Agility 3 barely recovers after TRIM.
  • Mr Alpha - Tuesday, May 24, 2011 - link

    For the TRIM test you fill the entire drive with incompressible randomly written data, and then TRIM it. It must take some time for the GC routine to actually clean up all those blocks. Does the time you wait before doing the after TRIM test affect the results you get?
  • JasonInofuentes - Tuesday, May 24, 2011 - link

    I think I understand what you're asking. You're wondering whether the idle time after the drive has been "deleted" (however long that is), during which it can engage in some amount of garbage collection, might be affecting the results. Certainly a possibility, which is why tests are run multiple times and the averages reported.

    Great question, though. Thanks.
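
    (For the curious, here is a minimal sketch of the kind of experiment being suggested, assuming a Linux box with fio and blkdiscard installed; the device node and parameters are placeholders, not the methodology used in the review:)

        # Sketch: TRIM the drive, idle for a variable time so garbage
        # collection can run, then measure sequential write speed.
        # Destructive - do not point at a drive holding data.
        import subprocess
        import time

        DEVICE = "/dev/sdX"  # hypothetical device under test

        def fill_incompressible():
            # fio's scrambled write buffers are effectively incompressible
            subprocess.run(["fio", "--name=fill", "--filename=" + DEVICE,
                            "--rw=write", "--bs=128k", "--direct=1",
                            "--refill_buffers"], check=True)

        def measure_seq_write():
            out = subprocess.run(["fio", "--name=seq", "--filename=" + DEVICE,
                                  "--rw=write", "--bs=128k", "--size=1g",
                                  "--direct=1"], capture_output=True,
                                 text=True, check=True)
            return out.stdout  # parse the bandwidth line as needed

        for idle_seconds in (0, 60, 300, 900):
            fill_incompressible()
            subprocess.run(["blkdiscard", DEVICE], check=True)  # TRIM all LBAs
            time.sleep(idle_seconds)  # window for garbage collection
            print(idle_seconds, measure_seq_write())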
  • B0GiE - Tuesday, May 24, 2011 - link

    I would like to see a 120GB & 240GB shootout between the following:

    Corsair Force Series 3
    Corsair Force Series 3 GT
    OCZ Vertex 3 Max IOPS
    OCZ Vertex 3
    OCZ Agility 3

    Pretty Please!
  • icrf - Tuesday, May 24, 2011 - link

    Agreed. I'm particularly interested in a 120GB SSD, probably SF-2200 based. I bought a 60GB OCZ Vertex 2 for boot/apps last fall, thinking I could stay within that capacity, and have failed, so that drive has moved to the laptop and I'm looking for a 120GB drive for the desktop.

    If the Corsair drives can really keep their pricing, they sound the most appealing. The specs sound very Vertex-like with pricing that's very Agility-like. I just want to see how some of these smaller drives fare with fewer NAND devices to deal with.
  • Oxford Guy - Tuesday, May 24, 2011 - link

    The 240 GB Vertex 2!
  • Shadowmaster625 - Tuesday, May 24, 2011 - link

    "The original X25-M had 10 channels of NAND, giving it the ability to push nearly 800MB/s of data. Of course we never saw such speeds, as it's only one thing to read a few KB of data from a NAND array and dump it into a register. It's another thing entirely to transfer that data over an interface to the host controller."

    That's why I've been saying they need to put a flash controller on the die. Imagine a dual-sided DIMM with 8 NAND chips per side, each running at ONFI 3.0's 400MB/s. That's 6.4GB/s. zomg. It elicits a Pavlovian response. 50 billion bits per second?

    If Intel was really interested in capturing the portable device market, they'd be doing this. Tablet and smartphone SoCs all have integrated LPDDR controllers, and look how fast those devices are for being such low-bandwidth, low-power parts.
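
    (A quick back-of-the-envelope check of the numbers above; the per-chip rate is ONFI 3.0's quoted maximum, not a measured figure:)

        # Aggregate bandwidth of the hypothetical dual-sided NAND DIMM
        chips = 8 * 2                 # 8 chips per side, 2 sides
        mb_per_s_per_chip = 400       # ONFI 3.0 per-chip maximum

        total_mb_s = chips * mb_per_s_per_chip
        print(total_mb_s / 1000, "GB/s")        # 6.4 GB/s
        print(total_mb_s * 8 / 1000, "Gbit/s")  # 51.2 Gbit/s, ~50 billion bits/s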
  • bji - Tuesday, May 24, 2011 - link

    I wonder if it's practical to put the controller on the die. Flash dies are highly optimized for flash, not for general-purpose processing transistors. Flash is usually a generation or so ahead of CPUs in lithography because its layout is simpler than a CPU's. Putting a controller on a flash die would mean using the flash lithography process for processing transistors, and I just don't think that's likely to be feasible. Of course, flash controller logic would likely be a lot simpler than a full x86 core, but I don't think that changes the fundamental impracticality of using a flash process to build controller logic.
  • bji - Tuesday, May 24, 2011 - link

    Oh, sorry, I think I misunderstood you. You're talking about putting flash controllers on CPU dies, not on the flash dies. In that case, I think it's likely to be an inevitability. I predict that eventually permanent storage will look like DIMMs do now: as you said, sticks that plug into slots on the motherboard just like RAM, with the controller built into the CPU to interface with them at high speed, and operating systems will just see them mapped to some range in the CPU's address space. "Hard drives" will be a thing of the past, replaced by "persistent memory".
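
    (Memory-mapped files are a rough software analogy for that model today; a minimal Python sketch, illustrating the idea rather than the CPU-integrated design described above:)

        # Map a file into the address space and mutate it with ordinary
        # loads/stores; true persistent memory would skip the filesystem
        # and driver stack entirely.
        import mmap
        import os

        path = "persistent.bin"
        if not os.path.exists(path):
            with open(path, "wb") as f:
                f.write(b"\x00" * 4096)  # one page of backing storage

        with open(path, "r+b") as f:
            mem = mmap.mmap(f.fileno(), 4096)
            mem[0:5] = b"hello"  # a plain memory write...
            mem.flush()          # ...made durable, like a persistence barrier
            mem.close()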
