A Note on Real World Performance

The majority of our SSD test suite is focused on I/O bound tests. These are benchmarks that intentionally shift the bottleneck to the SSD and away from the CPU/GPU/memory subsystem in order to give us the best idea of which drives are the fastest. Unfortunately, as many of you correctly point out, these numbers don't always give you a good idea of how tangible the performance improvement is in the real world.

Some of them do. Our 128KB sequential read/write tests as well as the ATTO and AS-SSD results give you a good indication of large file copy performance. Our small file random read/write tests tell a portion of the story for things like web browser cache accesses, but those are difficult to directly relate to experiences in the real world.

So why not exclusively use real world performance tests? It turns out that although the move from a hard drive to a decent SSD is tremendous, the differences between individual SSDs are much harder to quantify with a single real world metric. Take application launch time for example. I stopped including that data in our reviews because the graphs ended up looking like this:

All of the SSDs performed the same. It's not just application launch times though. Here is data from our Chrome Build test timing how long it takes to compile the Chromium project:

[Chart: Build Chrome (Chromium compile time)]

Even going back two generations of SSDs, at the same capacity nearly all of these drives perform within a couple of percent of one another. Note that the Vertex 3 is a 6Gbps drive and it still doesn't outperform its predecessor.

So do all SSDs perform the same then? The answer there is a little more complicated. As I mentioned at the start of this review, I do long term evaluation of all drives I recommend in my own personal system. If a drive is particularly well recommended I'll actually hand out samples for use in the systems of other AnandTech editors. For example, back when I wanted to measure actual write amplification on SandForce drives I sent three Vertex 2s to three different AnandTech editors. I had them use the drives normally for two to three months and then looked at the resulting wear on the NAND.
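
For reference, write amplification is simply the total data written to NAND divided by the data the host asked to write. The snippet below is a minimal illustration of that math, not the tool we actually used; it assumes the drive reports both counters via SMART, and the attribute IDs are placeholders since the real fields vary by controller and firmware.

    # Illustrative only: compute write amplification from two hypothetical
    # SMART counters. Real attribute IDs differ between controllers/vendors.
    HOST_WRITES_GIB = 241   # placeholder: "lifetime writes from host (GiB)"
    NAND_WRITES_GIB = 233   # placeholder: "lifetime writes to NAND (GiB)"

    def write_amplification(smart: dict[int, float]) -> float:
        """Write amplification = data written to NAND / data written by the host."""
        return smart[NAND_WRITES_GIB] / smart[HOST_WRITES_GIB]

    # Example: a compressing/deduping controller can push the factor below 1.0.
    sample = {HOST_WRITES_GIB: 1000.0, NAND_WRITES_GIB: 600.0}
    print(f"write amplification: {write_amplification(sample):.2f}")  # -> 0.60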

In doing these real world use tests I get a good feel for when a drive is actually faster or slower than another. My experiences typically track with the benchmark results, but it's always important to feel it first hand. What I've noticed is that although single tasks perform very similarly on all SSDs, it's during periods of heavy I/O activity that you can feel the difference between drives. Unfortunately these periods of heavy I/O activity aren't easily measured, at least not in a repeatable fashion. Getting file copies, compiles, web browsing, application launches, IM log updates and searches to all start at the same time while properly measuring overall performance is nearly impossible without some sort of automated tool. Most system-wide benchmarks, however, are geared more toward CPU or GPU performance and as a result try to minimize the impact of I/O.

The best we can offer is our Storage Bench suite. In those tests we are actually playing back I/O requests captured from my own PC usage over a long period of time. While all other bottlenecks are excluded from the performance measurement, the source of the workload is real world in nature.
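
Conceptually, that playback boils down to re-issuing a log of captured requests (operation, offset, size) against the target drive as fast as it can service them. The sketch below illustrates the idea only; it is not our actual Storage Bench tool, and the trace format is invented for the example.

    # Bare-bones I/O trace playback against a file or raw device (POSIX calls).
    # The workload comes from a captured trace, but requests are issued flat out
    # so the drive remains the bottleneck. Trace format here: (op, offset, length).
    import os
    import time

    def replay_trace(trace, target_path):
        fd = os.open(target_path, os.O_RDWR)
        start = time.perf_counter()
        try:
            for op, offset, length in trace:
                if op == "R":
                    os.pread(fd, length, offset)           # read `length` bytes at `offset`
                else:
                    os.pwrite(fd, b"\0" * length, offset)  # write a dummy buffer of the recorded size
        finally:
            os.close(fd)
        return time.perf_counter() - start

    # Tiny synthetic trace against a 1 MiB scratch file.
    with open("scratch.bin", "wb") as f:
        f.write(b"\0" * (1 << 20))
    trace = [("W", 0, 4096), ("R", 0, 4096), ("W", 65536, 131072)]
    print(f"replay took {replay_trace(trace, 'scratch.bin'):.4f} s")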

What you have to keep in mind is that a performance advantage in our Storage Bench suite isn't going to translate linearly into the same overall performance impact on your system. Remember these are I/O bound tests, so a 20% increase in your Heavy 2011 score is going to mean that the drive you're looking at will be 20% faster in that particular type of heavy I/O bound workload. Most desktop PCs aren't under that sort of load constantly, so that 20% advantage may only be seen 20% of the time. The rest of the time your drive may be no quicker than a model from last year.
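
Put differently, the system-wide benefit scales with how much of your time is actually spent waiting on the drive. A quick back-of-the-envelope calculation, with made-up percentages purely for illustration:

    # Amdahl's-law style estimate: only the I/O-bound fraction of your time
    # benefits from a faster drive. All numbers here are illustrative.
    def overall_speedup(io_fraction: float, drive_speedup: float) -> float:
        new_time = (1 - io_fraction) + io_fraction / drive_speedup
        return 1 / new_time

    # A drive that is 20% faster (1.2x) under heavy I/O:
    for io_fraction in (1.0, 0.2, 0.05):
        print(f"I/O bound {io_fraction:.0%} of the time -> "
              f"{overall_speedup(io_fraction, 1.2):.3f}x faster overall")
    # 100% -> 1.200x, 20% -> ~1.034x, 5% -> ~1.008x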

The point of our benchmarks isn't to tell you that only the newest SSDs are fast, but rather to show you the best performing drive at a given price point. The best values in SSDs are going to be last year's models without a doubt. I'd say that the 6Gbps drives are interesting mostly for the folks that do a lot of large file copies, but for most general use you're fine with an older drive. Almost any SSD is better than a hard drive (the key word being almost), and as long as you choose a good one you won't regret the jump.

I like the SF-2281 series because, despite things like the BSOD issues, SandForce has put a lot more development and validation time into this controller than its predecessor. Even Intel's SSD 320 is supposed to be more reliable than the X25-M G2 that came before it. Improvements do happen from one generation to the next but they're evolutionary - they just aren't going to be as dramatic as the jump from a hard drive to an SSD.

So use these numbers for what they tell you (which drive is the fastest) but keep in mind that a 20% advantage in an I/O bound scenario isn't going to mean that your system is 20% faster in all cases.

Comments

  • semo - Friday, June 24, 2011

    Thanks Anand. I appreciate your honesty and transparency. If it weren't for you, JMicron would have killed the momentum of SSD adoption. I'd hate to see the same thing happen again right under our noses.
  • irev210 - Thursday, June 23, 2011

    I have to agree.

    Anand came out blasting the Intel G2 SSD when it first came out for a very MINOR firmware snafu... yet people angry about Intel SSDs or Samsung 470s are very few and far between.

    Anand came out blasting Crucial for having firmware issues as well - with absolutely no follow-up. The C300 ended up being an absolutely fantastic drive (though we do see more complaints vs. the Intel 320/510 and Samsung 470).

    It's getting old that you admit that all SSDs share extremely similar performance but continue to recommend SSDs that are FAR more unreliable vs. other brands.

    If "real-world" performance among SSDs is essentially the same, you should really look at the things that distinguish one drive from another (reliability, warranty, long-term performance, TRIM/garbage collection features, RAID performance, cost per gigabyte, etc.).

    Frankly, I think consumers are at the point where a 1% chance of SSD failure isn't worth a 0.05% increase in performance. Those exact numbers aren't easy to come by, of course - that's why we want you, Anand, to get the dirt for us.
  • Anand Lal Shimpi - Thursday, June 23, 2011

    Intel was held to a higher standard simply because with the X25-M you had to give up performance, and the promise was that you'd get something more reliable than the competition in return.

    The C300 had several firmware issues to begin with and didn't do well over time as we showed in our TRIM torture tests; it's the former that kept me from recommending it early on and the latter that kept me from being all that interested in it in the long run.

    In the past two articles I've recommended the Intel SSD 510 and it was my personal choice of SSD for the past three months. I do have to allow for the fact that I have yet to have a single issue with any SF-2281 drive and some users may feel like they want to take a chance on something that's potentially faster (and has better write amplification characteristics).

    If it was my money I'd stick with the 510 but until I see a readily repeatable situation where the SF-2281 drives have issues I have to at least mention them as an option.

    Take care,
    Anand
  • jwilliams4200 - Thursday, June 23, 2011

    "The C300 had several firmware issues to begin with and didn't do well over time as we showed in our TRIM torture tests, it's the former that kept me from recommending it early on and the latter that kept me from being all that interested in it in the long run."

    So, now that the Vertex 3 has had firmware issues, and now that your test in this article shows that its speed degrades terribly after torture tests, and somewhat even with TRIM....

    Basically, now that the V3 is shown to have the same or worse problems as you complained about with the C300...

    The question is, why are you not giving the Vertex 3 the same derogatory treatment that you gave the C300?
  • Anand Lal Shimpi - Thursday, June 23, 2011

    I've had multiple C300s die in my lab, not even trying to torture them (it looks like I may have just had another one die as of last night). Thus far I haven't had any SF-2281 drives die on me and I haven't experienced the BSOD issue first hand.

    The C300's performance degraded pretty poorly under harsh but still reasonable conditions. If you run the same torture test on a Vertex 3, its performance doesn't degrade.

    It's only when you completely fill an SF-2281 drive with incompressible data, then randomly write small-block incompressible data all over the drive for an hour, that you end up in a situation with reduced performance. While random writes do happen on all drives, it's highly unlikely that you'll take your system drive, fill it with H.264 videos, delete those videos, install Windows on the drive and then run some sort of application that writes purely random data all over the drive. The torture test I created for the SF drives in particular is specifically designed to look at worst case performance if you're running a very unusual workload.

    I did an 8-month investigation of SandForce's architecture that showed even in my own personal system I never saw the sort of worst case performance I was concerned about. The four drives we deployed across AT editors came back with an average write amplification of 0.6, as in most of the data that was written to the drives was deduped/compressed and never hit NAND. Based on that I don't believe most users will see the worst case performance I put forth on the TRIM page, the exception being if you're using this drive purely for highly compressed media or fully random data.

    Take care,
    Anand
  • jwilliams4200 - Thursday, June 23, 2011

    "The C300's performance degraded pretty poorly under harsh but still reasonable conditions."

    You call running HD Tach on an SSD "reasonable conditions"? Seriously?
  • seapeople - Thursday, June 23, 2011

    Yes, I'm sure OCZ loves the fact that Anand mentions the Intel SSD 510 as being the better drive overall considering reliability like five times in this review.

    Not only that, but he explains in depth on page 3 that the extra performance from the Vertex 3 and other latest generation SSD's doesn't even matter in normal computing situations.

    So, Anand's options are this: 1) Say that SSD performance differences don't really matter and you should stop reading review sites like this and just go buy an Intel drive for reliability, or 2) Mention the irrelevance of SSD performance differences in passing and continue on to do a full performance review which concludes that the SandForce drives are, in fact, the fastest drives available today as long as you can get past the BSOD issues which may or may not affect you.

    Just because Anand chose option 2 does not mean he is in OCZ's pocket; it just means he likes reviewing SSD performance. This is very fortunate for us readers who enjoy reading such articles.
  • Anand Lal Shimpi - Thursday, June 23, 2011

    You are very correct - I've tested eight (more coming) SF-2281 drives and haven't had any issues. However by the same logic the sample size of complaints on the forums isn't statistically significant either.

    Despite my sample size being what it is, I continue to have the discussion about quality control and testing in every SF-2281 drive. If there was a repeatable way to bring about the BSOD issue on any (or some?) readily available platforms I'd have no problems completely removing the drive from the discussion altogether. Unfortunately that doesn't seem to be the case.

    Instead what I do is lay out the options for the end user. If you want the best overall reliability, Intel's SSD 320 is likely the drive for you. If you want the best balance of performance and reliability then there's the Intel SSD 510. And finally if you want to take a chance but want the drive with the lowest write amp for most users, there's anything SF-2281 based.

    For me personally the choice was Intel's SSD 510. I've moved it to a secondary role in my system to try and bug hunt the Vertex 3 on a regular basis.

    Take care,
    Anand
  • Tomy B. - Thursday, June 23, 2011

    Why isn't the Samsung 470 included in any of the results?
  • Spoogie - Thursday, June 23, 2011

    1) The first Vertex II I received was DOA.
    2) The second died completely after just eight months of light use.
    3) The BSODs occurred about once every six sleep cycles. The Kingston replacement never gives a BSOD.

    Buyer beware.
