A Note on Real World Performance

The majority of our SSD test suite is focused on I/O bound tests. These are benchmarks that intentionally shift the bottleneck to the SSD and away from the CPU/GPU/memory subsystem in order to show which drives are fastest. Unfortunately, as many of you have correctly pointed out, these numbers don't always give you a good idea of how tangible the performance improvement is in the real world.

Some of them do. Our 128KB sequential read/write tests as well as the ATTO and AS-SSD results give you a good indication of large file copy performance. Our small file random read/write tests tell a portion of the story for things like web browser cache accesses, but those are difficult to directly relate to experiences in the real world.

So why not exclusively use real world performance tests? It turns out that although the move from a hard drive to a decent SSD is tremendous, finding differences between individual SSDs is harder to quantify in a single real world metric. Take application launch time for example. I stopped including that data in our reviews because the graphs ended up looking like this:

All of the SSDs performed the same. It's not just application launch times though. Here is data from our Chrome Build test timing how long it takes to compile the Chromium project:

Build Chrome

Even going back two generations of SSDs, at the same capacity nearly all of these drives perform within a couple of percent of one another. Note that the Vertex 3 is a 6Gbps drive, yet it still doesn't outperform its predecessor.

So do all SSDs perform the same then? The answer is a little more complicated. As I mentioned at the start of this review, I do long term evaluation of all drives I recommend in my own personal system. If a drive comes particularly well recommended, I'll actually hand out drives for use in the systems of other AnandTech editors. For example, back when I wanted to measure actual write amplification on SandForce drives I sent three Vertex 2s to three different AnandTech editors. I had them use the drives normally for two to three months and then looked at the resulting wear on the NAND.

In doing these real world use tests I get a good feel for when a drive is actually faster or slower than another. My experiences typically track with the benchmark results, but it's always important to feel it first hand. What I've noticed is that although single tasks perform very similarly on all SSDs, it's during periods of heavy I/O activity that you can feel the difference between drives. Unfortunately, these periods of heavy I/O activity aren't easily measured, at least not in a repeatable fashion. Getting file copies, compiles, web browsing, application launches, IM log updates and searches to all start at the same time while properly measuring overall performance is nearly impossible without some sort of automated tool. Most system-wide benchmarks are geared towards CPU or GPU performance and as a result try to minimize the impact of I/O.

The best we can offer is our Storage Bench suite. In those tests we play back the I/O requests captured while I used a PC over a long period of time. While all other bottlenecks are excluded from the performance measurement, the source of the workload is real world in nature.

What you have to keep in mind is that a performance advantage in our Storage Bench suite isn't going to translate linearly into the same overall performance impact on your system. Remember these are I/O bound tests, so a 20% increase in your Heavy 2011 score is going to mean that the drive you're looking at will be 20% faster in that particular type of heavy I/O bound workload. Most desktop PCs aren't under that sort of load constantly, so that 20% advantage may only be seen 20% of the time. The rest of the time your drive may be no quicker than a model from last year.
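The scaling described above is, in effect, Amdahl's law applied to storage: a drive's benchmark advantage only helps during the fraction of time the system is actually I/O bound. A minimal sketch of that arithmetic (the function name and the specific numbers are illustrative, not from the review):

```python
def effective_speedup(io_fraction, io_speedup):
    """Overall system speedup when only the I/O-bound fraction of
    total time benefits from a faster drive (Amdahl's law).
    io_fraction: share of time spent I/O bound (0.0 to 1.0)
    io_speedup:  speedup of the drive in I/O-bound work (e.g. 1.2 = 20% faster)
    """
    return 1.0 / ((1.0 - io_fraction) + io_fraction / io_speedup)

# A drive 20% faster in I/O-bound work, on a system that is
# I/O bound only 20% of the time:
print(round(effective_speedup(0.20, 1.20), 3))  # -> 1.034
```

With those assumed numbers, the 20% benchmark advantage turns into only about a 3.4% overall gain, which is why a faster drive rarely feels as dramatic in day-to-day use as the benchmark deltas suggest.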

The point of our benchmarks isn't to tell you that only the newest SSDs are fast, but rather to show you the best performing drive at a given price point. The best values in SSDs are going to be last year's models without a doubt. I'd say that the 6Gbps drives are interesting mostly for the folks that do a lot of large file copies, but for most general use you're fine with an older drive. Almost any SSD is better than a hard drive, and as long as you choose a good one you won't regret the jump.

I like the SF-2281 series because, despite things like the BSOD issues, SandForce has put a lot more development and validation time into this controller than its predecessor. Even Intel's SSD 320 is supposed to be more reliable than the X25-M G2 that came before it. Improvements do happen from one generation to the next but they're evolutionary - they just aren't going to be as dramatic as the jump from a hard drive to an SSD.

So use these numbers for what they tell you (which drive is the fastest) but keep in mind that a 20% advantage in an I/O bound scenario isn't going to mean that your system is 20% faster in all cases.

Patriot's Wildfire Random & Sequential Read/Write Speed


Comments

  • L. - Thursday, June 23, 2011 - link

    Would you handle a website like this for free forever?
    Besides, that's not the only huge bias there is on AnandTech, but then it's like that mostly everywhere on the interwebz -- just get used to reading through ;)

    The 1% may have easily been a true figure; it's statistics, a matter of presentation:

    "In the first month, about 1% of the drives shipped (to resellers) have been returned for RMA (by buyers)"

    See, that's one huge % that, taking into account customer laziness, various firmware tests and the stock @ resellers, could mean about 30% of the drives fail (which of course cannot be the case or it would be a real riot and not just a few whiners on forums).

    It could also be (not here):

    "In the past 10 years, about 1% of the drives shipped (to resellers) have been returned for RMA (by buyers)"
  • Anand Lal Shimpi - Thursday, June 23, 2011 - link

    I outlined where the 0.66% figure came from in the article. Take all complaints received on the forums + tech support channels and divide that number by the total number of drives sold through (not just sold to retailers).

    I've gotten six more OCZ SF-2281 drives in the past week alone - partially to see if this is something that's caused at the drive level. Given that I haven't seen the issue yet, I'm beginning to think it really requires a combination of the right platform and one of these drives.

    Take care,
  • jwilliams4200 - Thursday, June 23, 2011 - link

    "I outlined where the 0.66% figure came from in the article. Take all complaints received on the forums + tech support channels and divide that number by the total number of drives sold through (not just sold to retailers)."

    But that is absurd. That is nothing but a lower limit on the percentage of people who have trouble.

    Doesn't OCZ have any clue about how many people use their computers? The people posting on the forums or contacting tech support are only a fraction of the people who use the products -- and usually the more savvy ones. A lot of people use their computer, and if it crashes, they have no idea what caused the crash. They just reboot it and keep going. Or if they contact tech support, it may be for a completely different product, since they do not know which one caused the crash.

    It would not be crazy to guess that only 10% of people who experienced the problem with OCZ SSDs actually identified the cause of the crash as the SSD, and followed through to contact OCZ about it by phone or forum.

    So the true scope of the problem could easily be 6%, or even higher. The 0.66% figure is just a useless lower limit spit out by OCZ's juggernaut propaganda machine.
  • Proph3T08 - Thursday, June 23, 2011 - link

    I think you might be exaggerating the number of computers being used with OCZ SSDs.

    The last time I went to Best Buy I didn't see any of their display model computers with an SSD.

    I think in general SSDs are still used mostly by tech savvy users. Arguing that Anand's 0.66% is useless when he actually gave a source of information is ridiculous when your argument is a random guess.

    Maybe find out how many OEMs actually ship with OCZ drives; then you could come up with a compelling argument.
  • jwilliams4200 - Thursday, June 23, 2011 - link

    You missed the point, Proph3T08. I did not claim to know the percentage, I just gave an example. The point is that OCZ's number is indisputably nothing but an extreme lower limit. To claim otherwise is absurd.
  • name99 - Friday, June 24, 2011 - link

    Of course MacBook Airs all ship with SSDs...
    I don't know the numbers for how many SSDs ship on non-Air macs, but I expect the number is pretty high.

    Note also that people are quick to complain that Apple never ships leading edge SSDs --- but it is worth noting that there have been no large scale outcries (even at the level of 1% complaints) against the SSDs that Apple does ship.
    I think Apple believes (and most customers agree) that reliability is vastly more important than the ability to win a benchmark that, after all is said and done, does not really reflect the real-world experience.

    As for OCZ's claims, I'd agree that they are certainly at the lower limit. I've bought three SSDs in my time, one from OCZ, and I've been bitterly disappointed by all three of them, ALL of which hang the machine when fed a long sequence of writes. I've not complained in public forums or tried to get a replacement (which is apparently where OCZ get their data) because, what's the point? They won't give me my money back --- all they'll give me (after I spend $10 on mailing the disk to them) is a replacement crappy drive that behaves in exactly the same way.

    Much easier just to conclude that
    (a) this is a business populated by charlatans and scam artists
    (b) Anand, unfortunately, has been way too tolerant of this sort of crap (all devices bought on his recommendation)
    (c) the ONLY way to buy an SSD today from a vendor that doesn't seem a complete a**hole is buying one built into an Apple product. Sad but true.
  • sam. - Saturday, June 25, 2011 - link


    I have to agree with you there. I never complained about my SSD when it failed, I just went straight to RMA and paid the $11 postage. Almost think they make you pay so much for an SSD because they know they will have to send you another one down the track.
  • jwilliams4200 - Friday, June 24, 2011 - link

    Also consider that there are probably a great many people who have the crashes, who then go to the forums, read the posts others made about the crashes, and think, okay, that is the problem I am having. No need to post, I'll just wait for a fix.
  • velis - Thursday, June 23, 2011 - link

    It's OCZ who's pumping out SSD novelties the fastest. As quickly as the OCZ reviews go up, so do the ones for the C300, m4 and Intel drives.
    Anand is just crazy for anything SSD.
    If you don't like it, don't read it...

    Damn, now I'm putting my hand into the fire for Anand :P }:-)))
  • TrackSmart - Thursday, June 23, 2011 - link

    Ha Ha. I agree on three points raised in this thread: 1) OCZ quickly gets their SSDs out to Anandtech for review, 2) OCZ gets products to market quickly, and 3) Anand is crazy for anything SSD.

    All of that said, this is one of the better tech sites on the web and I have a lot of respect for Anand and others. To Anand: Keep up the good work and don't take the crazies too much to heart. You have my props for explicitly addressing all of the major "concerns" that have popped up of late. That said, you'll never satisfy all of the people who have raised them. Rational people will continue coming back to the site to read about the latest tech.
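Stepping back from the thread: the dispute over OCZ's 0.66% figure boils down to simple arithmetic. A complaint-derived rate is only a lower bound, and the implied true failure rate depends entirely on what fraction of affected users ever trace the crash to the SSD and report it. A sketch of that scaling (the 10% reporting fraction is jwilliams4200's illustrative guess, not a measured value):

```python
def implied_failure_rate(reported_rate, reporting_fraction):
    """Scale a complaint-derived failure rate up by the assumed
    fraction of affected users who actually report the problem.
    The result is only as good as the reporting_fraction guess."""
    return reported_rate / reporting_fraction

# OCZ's complaint-based 0.66%, if only 10% of affected users
# ever identify the SSD as the culprit and report it:
print(round(implied_failure_rate(0.0066, 0.10), 4))  # -> 0.066, i.e. 6.6%
```

The point is not that 6.6% is the real number -- nobody in the thread knows the reporting fraction -- but that the reported figure sets a floor, not an estimate.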
