
45 Comments


  • Concillian - Thursday, May 05, 2011 - link

    "OCZ is still SandForce's favorite partner and thus it gets preferential treatment when it comes to firmware."


    This is why I won't buy a SandForce SSD. Sure, I can get a brand that doesn't have a cap, or I can go with a different SSD entirely that doesn't force me to jump through hoops to make sure I'm getting the same hardware from the right vendor.

    The same hardware from different vendors should not have vastly different performance. How many people would put up with a memory bandwidth limit on P67 chipset motherboards from Gigabyte but not ASUS (or whichever brands)? Granted, memory bandwidth doesn't have a huge impact on overall PC performance, but I think it would still be a big deal if something like that actually happened.

    The SF-2281 either needs all vendors capped or none. It's a really shady tactic to offer two versions of the same hardware IMO.
    Reply
  • semo - Thursday, May 05, 2011 - link

    Nobody in the know likes SandForce's games, but the whole thing is so complicated that most people won't understand it. That suits OCZ: the recent bad publicity doesn't seem to have affected them, and everyone thinks they are the best choice for SSDs.

    Where are the Corsair Force GT drives? Also, why are there no reviews of the Samsung 470?
    Reply
  • Mr Perfect - Thursday, May 05, 2011 - link

    I agree. If you want to offer vendors special products, fine, but give them a different model number. Call a controller capable of 27k a 2280 and the 52k version the 2281. You can still have incentive products, but the consumer doesn't get duped. Everyone's happy. Reply
  • Flunk - Thursday, May 05, 2011 - link

    Almost all IC vendors do this; Intel is probably the worst by far, selling essentially the same chip up to 50 different ways with parts of it lasered off. Almost all onboard sound chips, network chips, drive controllers, GPUs and anything else you can think of use the same strategy. Reply
  • Chloiber - Thursday, May 05, 2011 - link

    So the huge IOPS numbers are pretty much useless if a high queue depth is needed to reach them - which is the case with every SSD.

    Anand, how is the "burst" rate of the Mercury with regard to random write IOPS? I remember that with the SF-12xx, the burst rate was exactly the same (for a few seconds); only after 5-20s could you see a difference between the "unlocked" Vertex 2 and the rest. Considering how often one needs sustained random write performance for several seconds or even minutes (= never), I still think those huge IOPS numbers and the "unlocked firmware" stuff are just a huge marketing stunt. The benefit for the "normal" home user is zero.
    Reply
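
The queue-depth point in the comment above comes down to Little's law: achievable IOPS is bounded by outstanding requests divided by per-request latency, so spec-sheet IOPS figures require deep queues that light desktop workloads rarely generate. A rough sketch (the latency figure below is an assumed, illustrative value, not a measurement):

```python
# Little's law for storage: IOPS <= outstanding requests / per-request latency.
# The latency constant is an assumed, illustrative figure.

def max_iops(queue_depth: int, latency_s: float) -> float:
    """Upper bound on IOPS for a given queue depth and service latency."""
    return queue_depth / latency_s

LATENCY = 100e-6  # assume ~100 microseconds per 4KB random operation

for qd in (1, 4, 32):
    print(f"QD={qd:2d}: up to {max_iops(qd, LATENCY):,.0f} IOPS")
# At QD=1 this hypothetical drive tops out around 10,000 IOPS no matter
# how high its QD=32 spec-sheet number is.
```
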
  • semo - Thursday, May 05, 2011 - link

    Why is a high IOPS figure useless? Just because the average Facebook Joe doesn't do continuous IO-intensive operations doesn't mean we don't need fast SSDs. You could apply that "logic" to CPUs, GPUs and pretty much any other technological advancement. Reply
  • kmmatney - Thursday, May 05, 2011 - link

    For the normal home user, the Anand light workload test is really the best thing to look at - no need to look at any other metric. The drive does really well here. Reply
  • Robear - Thursday, May 05, 2011 - link

    I believe most people who are interested in the power consumption are most interested in how it performs in a notebook. 2W versus 7W in power is negligible on a desktop. Instead of using a Velociraptor, can you please compare the SSD to a notebook hard drive, like maybe a Seagate Momentus? Reply
  • krazyderek - Thursday, May 05, 2011 - link

    The Momentus XT is included. The XT was a little more power hungry than typical notebook drives; have a look at the past review for more info to compare:

    http://www.anandtech.com/show/3734/seagates-moment...

    Looks like some of the new round of SSDs forgo power savings to move up the performance ladder (e.g. the 240GB OCZ Vertex 3).
    Reply
  • mschira - Thursday, May 05, 2011 - link

    I was wondering if one could fit this drive into a 7mm slimline slot, such as in the Lenovo T420s.
    Lenovo only offers an Intel 160GB drive, but I would fancy the possibility of fitting a speedier 240GB SSD. Maybe by removing some of the casing?
    cheers
    M.
    Reply
  • altermaan - Thursday, May 05, 2011 - link

    Nice review, though I'll most likely buy either the Vertex 3 120GB Max IOPS or the Crucial m4 128GB. Speaking of which: are there any plans to review those two drives in the near future? I (and I suspect I'm not the only one) am desperately waiting for this, as I don't want to spend $300 on the wrong drive.
    greets
    A
    Reply
  • Nicolas Pillot - Thursday, May 05, 2011 - link

    I see from the graphs that:
    - sequential reads are faster than sequential writes, which seems OK
    - random writes are faster than random reads, which seems illogical
    That's the case for each and every SSD (well, as far as I have checked).
    Could somebody please explain this to me?
    Reply
  • Nihility - Thursday, May 05, 2011 - link

    I'm not promising that this is 100% the correct reason, but it's possible that the random writes are being made to the cache (the SSD's RAM), which is quicker, while the reads have to be made from the actual flash storage. Reply
  • andymcca - Thursday, May 05, 2011 - link

    Caching is another possible explanation, but if you run a test for any length of time (and I'm guessing the reviewers here do), logic dictates that your buffer will fill up if the input rate exceeds the output rate. Reply
  • 7Enigma - Thursday, May 05, 2011 - link

    Makes sense to me. Writing to the drive only requires knowing where to put the data (i.e. whether a block of space is free or not); it's basically limited by how fast the CPU can deliver write requests to the SSD (so only two variables, essentially).

    Random reads, on the other hand, have the added variable of first FINDING the data on the SSD after the read request is made by the CPU. The latency of finding that data (as compared to writing to a free block) is where the performance difference occurs. This is also why mechanical drives are so much slower than SSDs, though there is still overhead in the "finding" part.
    Reply
  • andymcca - Thursday, May 05, 2011 - link

    Writes on SSDs are to wherever the drive wants to put them (not to a pre-defined physical location). Reads have to come from a pre-defined location, since that is where the data was already put. Basically, SSDs have to hunt for your read data, but put your write data somewhere convenient. Reply
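
The point in the comment above can be illustrated with a toy flash translation layer (FTL): writes just append to whichever page is convenient and update a logical-to-physical map, while reads must go through that map to a fixed location. All names here are invented for illustration; real FTLs are far more involved.

```python
# Toy flash translation layer sketch: writes land wherever is convenient
# (append to the next free page), reads must look up the logical->physical
# map. Purely illustrative - not how any real controller is implemented.

class ToyFTL:
    def __init__(self):
        self.flash = []  # physical pages, append-only
        self.l2p = {}    # logical block address -> physical page index

    def write(self, lba: int, data: bytes) -> None:
        # No search needed: append to the next free page and remap.
        self.flash.append(data)
        self.l2p[lba] = len(self.flash) - 1

    def read(self, lba: int) -> bytes:
        # Must consult the map to find where the data actually lives.
        return self.flash[self.l2p[lba]]

ftl = ToyFTL()
ftl.write(7, b"old")
ftl.write(7, b"new")  # overwrite: just appends and remaps
print(ftl.read(7))    # b'new' - the old page is now stale garbage
```

The stale pages left behind by overwrites are what garbage collection and TRIM exist to clean up.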
  • JasonInofuentes - Thursday, May 05, 2011 - link

    Could the power differences be a result of binning? Could be part of the perk of being Sandforce's favorite client.

    Thanks.

    Jason
    Reply
  • andymcca - Thursday, May 05, 2011 - link

    My guess is that it has to do more with the memory and less with the controller, but IANAexpert Reply
  • araczynski - Thursday, May 05, 2011 - link

    I don't mean this as a stupid question (apologies if it is), but why not include a traditional platter drive in AnandTech Storage Bench 2011? Sometimes comparing apples to apples doesn't have the same impact as when you also throw an orange into the mix to help visualize what you're seeing.

    An average of 100-200 MB/s on a certain bench doesn't mean much to me personally when I don't know how it compares to a traditional drive.
    Reply
  • MilwaukeeMike - Thursday, May 05, 2011 - link

    I agree. I like seeing the VelociRaptor included on some graphs because I own one and know what the comparison means. These charts help us figure out which SSD might suit our purposes best, but the question many of us are really asking is "should I upgrade to one at all?"

    The easy answer is 'yes', but having MS Word open in 1 second instead of 2 doesn't matter to me. Having my games load in 5 seconds instead of 25 does. But without an old school drive on the benchmark table we can't quantify SSD to HDD.

    Review is great for SSD to SSD, don't get me wrong :)
    Reply
  • wombat2k - Thursday, May 05, 2011 - link

    "The two come with comparable warranties which brings the decision down to pricing, where OCZ currently has a $20 advantage. "

    I don't think that's the complete picture. OWC is more of a Mac shop and provides ways to easily update your firmware from a Mac, whereas OCZ relies on you burning an ISO file. This might have changed for the Vertex 3, but I doubt it. OWC sells directly through macsales.com, and I hear their service is pretty good, so if you're a Mac owner you would naturally gravitate toward the OWC solution, even with the price markup.

    Disclaimer:
    * Not an OWC user yet, but considering them *
    Reply
  • NCM - Thursday, May 05, 2011 - link

    The table on page 1 shows the same specs (other than price...) for all three versions. Reply
  • Shadowmaster625 - Thursday, May 05, 2011 - link

    I paid $95 for my 60GB Agility 2. I can't fathom paying 3x the money for 2x the capacity and a negligible increase in noticeable performance. Reply
  • sor - Thursday, May 05, 2011 - link

    I for one hope Anand will figure out what's up with the OCZ "Max IOPS" Vertex 3. It seems to me that he was promised his test sample's performance would match the shipping Vertex 3, and now we have a Vertex 3 with and without a firmware cap, with the capped one shipping first. Perhaps that's not the case and the "Max IOPS" model adds tweaks above and beyond, but it's been out a few weeks and so far I've yet to read anything about it anywhere. Reply
  • Mr Perfect - Thursday, May 05, 2011 - link

    Isn't anyone making 64GB drives this generation? With the Z68 chipset reportedly only supporting up to 64GB for the SSD/HDD hybrid drive configuration, they're going to be in demand. Reply
  • Stargrazer - Thursday, May 05, 2011 - link

    "Interestingly enough the 4KB random write cap isn't enough to impact any of our real world tests."


    I would argue that you don't *have* any real world tests.

    Sure, the AnandTech Storage Bench tests are *based on* real world workloads, but since you're playing them back at a faster than normal rate, they stop being real world tests. Just take the case of playing back a movie for example. You have a lot of reads there, but you don't get any increase in performance from being able to perform those reads faster than what is necessary to keep your buffers filled (non-empty really). In addition to this, many of the writes that are performed during the tests should be non-blocking, so increasing the write performance would in many cases not lead to any actual real world performance increases (you'd "free up" the drive faster, but that's mostly a benefit when there's other stuff that needs to be done).

    There is presumably a lot of stuff in there that *is* limited by your IO speed, but it's all mixed up with stuff that *isn't*, so you can't tell which speed increases give a real world improvement and which do not. You simply can't tell how much real world performance an increase in results represents; the relationship almost certainly won't be linear (e.g. a 10 point difference could mean different things depending on how high the values are), and you can't even conclusively tell that a drive that gets a higher score actually performs better in any real world sense.

    The tests do provide some information, but it's not something that tells you how much you would benefit from upgrading your drive.

    I personally think that it would be interesting if you would provide some *actual* real world tests too, so that people could tell if they would actually see some real world differences between different SSDs. Maybe some program/game loading/zoning tests?

    At least people would then be able to judge if the difference they see would be significant enough (for them) to warrant an upgrade. You know how to value a 10s difference, but how much is a 10 MB/s difference in the storage bench worth?
    Reply
  • Anand Lal Shimpi - Thursday, May 05, 2011 - link

    Note that both the 2011 and 2010 Storage Bench suites play back in real time. The 2011 benches in particular even preserve idle times properly, so all that's sped up are the actual I/O requests. You are correct that this will speed up things like decoding video; however, many video players already do a lot of read-ahead and pre-decode frames in order to avoid stuttering. Very few of the I/Os in these tests are for things like video decode, so I don't believe they bias the results too much.

    You are correct that we're focusing exclusively on the I/O aspect of performance, and that a 10% increase in I/O performance won't result in a 10% increase in system performance (except in I/O-bound tasks).

    I've toyed with doing timing based tests, the issue is that modern SSDs don't show any difference when measuring the launch time of a single application. It's really under heavy multitasking and behavior over time that they differ from one another. Both of these types of tests are very difficult to time in a repeatable fashion, which is why we turn to our trace based performance tools.

    I do agree that there's a need for some perspective on these performance improvements, which is what I tried to do by including disk busy time in our 2011 results. There you can see that over the course of a 3 hour use period (for example in the case of the heavy workload) that one drive may shave off x number of seconds vs another drive. How annoying those seconds are is really up to the end user.

    Personally I find that there are three categories that drives fall into these days. There are those that offer performance around that of an X25-M G2, the next group is around the Vertex 2 and the final group is the 6Gbps 240GB Vertex 3. I feel like there's a noticeable difference going from any one of those groups to the other, but going from an X25-M G2 to something that's slower than the Vertex 2 makes less sense.

    I hope this helps :)

    Take care,
    Anand
    Reply
  • seapeople - Thursday, May 05, 2011 - link

    Considering that it's now a given that someone will ask for a graph of loading an application on the comments of any new SSD article, I think you should just pick 5 or 10 current drives and show everyone a graph that spans from 5.2 seconds to 5.5 seconds (or whatever it is) with a tag line of "There, are you happy? Now stop asking about this!" Reply
  • AnnonymousCoward - Thursday, May 05, 2011 - link

    Real world tests shouldn't be a "There, are you happy?" afterthought. Real world is all that matters.

    Let me refer back to this classic example: http://tinyurl.com/yamfwmg . RAID0 was 20-38% faster in IOPS, and in the time-based comparison it was equal or slightly slower. Anand concluded "RAID-0 arrays will win you just about any benchmark, but they'll deliver virtually nothing more than that for real world desktop performance."

    Do you buy an SSD to use it, or to sit there and run benchmarks?
    Reply
  • Stargrazer - Thursday, May 05, 2011 - link

    The chipset drivers are listed as being Intel 9.1.1.1015. Aren't those drivers from December 2009? Do they work well for H67?

    I can understand if you want to keep the same drivers for consistency in the tests, but have you checked if there are any significant performance benefits from using more recent drivers? If there are any changes, it could be interesting to see how it affects some of the more recent drives.
    Reply
  • Anand Lal Shimpi - Thursday, May 05, 2011 - link

    Those drivers were only used on the X58 platform, I use Intel's RST10 on the SNB platform for all of the newer tests/results. :)

    Take care,
    Anand
    Reply
  • iwod - Thursday, May 05, 2011 - link

    I've lost count of how many times I've posted this in this series. Anyway, the people who continue to worship 4K random read/write have now seen the truth: sequential read/write is much more important than you think.

    Since the test drives are basically two identical pieces of hardware, one with a random write cap, and the results show no real world advantage, we need more sequential performance!

    Interestingly, we aren't limited by the controller or the NAND itself, but by the connection method, SATA 6Gbps. We need to start using PCI-Express 4x slots, as Intel has shown in the leaked roadmap. Going to PCI-E 3.0 would give us 4GB/s with a 4x slot. That should be plenty of room for improvement. ONFI 3.0 next year should let us reach 2GB+/s sequential read/write easily.
    Reply
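
The 4GB/s figure above checks out as a back-of-the-envelope calculation: PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, versus SATA 6Gbps at 6 Gb/s with 8b/10b encoding. A quick sketch of the raw interface math (real-world throughput is lower than these line-rate ceilings):

```python
# Raw interface bandwidth ceilings from line rate and encoding overhead.
# Real-world throughput is lower due to protocol overhead.

def pcie3_gb_per_s(lanes: int) -> float:
    # PCIe 3.0: 8 GT/s per lane, 128b/130b encoding
    return lanes * 8e9 * (128 / 130) / 8 / 1e9

def sata3_gb_per_s() -> float:
    # SATA 6Gbps: 6 Gb/s line rate, 8b/10b encoding
    return 6e9 * (8 / 10) / 8 / 1e9

print(f"SATA 6Gbps:  ~{sata3_gb_per_s():.2f} GB/s")   # ~0.60 GB/s
print(f"PCIe 3.0 x4: ~{pcie3_gb_per_s(4):.2f} GB/s")  # ~3.94 GB/s
```
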
  • krumme - Thursday, May 05, 2011 - link

    I think Anand listened too much to Intel's voice in this SSD story. The 4K random madness was Intel G2 business, and it all went in the wrong direction. Anand was - and is - the SSD review site.
    Reply
  • Anand Lal Shimpi - Thursday, May 05, 2011 - link

    The fact of the matter is that both random and sequential performance are important. It's Amdahl's Law at work: if you simply increase the sequential read/write speed of these drives without touching random performance, you'll eventually be limited by random performance. Today I don't believe we are limited by random performance, but it's still something that has to keep improving in order for us to continue to see overall gains across the board.

    Take care,
    Anand
    Reply
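
The Amdahl's Law argument in the reply above can be made concrete. If some fraction of a workload's I/O time is sequential and the rest is random, speeding up only the sequential part is capped by the untouched random part. The 60/40 split below is an assumed, illustrative workload mix, not a measured one:

```python
# Amdahl's Law applied to I/O: the gain from speeding up only the
# sequential portion is capped by the untouched random portion.
# The 60% sequential fraction is an assumed, illustrative value.

def overall_speedup(seq_fraction: float, seq_speedup: float) -> float:
    return 1 / ((1 - seq_fraction) + seq_fraction / seq_speedup)

SEQ_FRACTION = 0.6  # assume 60% of I/O time is sequential

for s in (2, 4, 1000):
    print(f"{s}x faster sequential -> "
          f"{overall_speedup(SEQ_FRACTION, s):.2f}x overall")
# Even infinitely fast sequential I/O tops out at 1/(1-0.6) = 2.5x here:
# random performance eventually becomes the limit, as the reply says.
```
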
  • Hrel - Thursday, May 05, 2011 - link

    Damn! 200 dollars too expensive for the 120GB. Stopped reading. Reply
  • snuuggles - Thursday, May 05, 2011 - link

    Good lord, every single article that discusses OWC seems to include some sort of odd-ball tangent or half-baked excuse for some crazy s**t they are pulling.

    Hey, I know they have the fastest stuff around, but there's just something so lame about these guys, I have to say on principle: "never, ever, will I buy from OWC"
    Reply
  • nish0323 - Thursday, August 11, 2011 - link

    What crazy s**t are they pulling? I've got 5 drives from them, all SSDs, and all perform great. The 6G ones have a 5-year warranty, two years longer than any other SSD manufacturer offers right now. Reply
  • neotiger - Thursday, May 05, 2011 - link

    A lot of people and hosting companies use consumer SSDs for server workloads such as MySQL and Solr.

    Can you benchmark these SSDs' performance on server workloads?
    Reply
  • Anand Lal Shimpi - Thursday, May 05, 2011 - link

    It's on our roadmap to do just that... :)

    Take care,
    Anand
    Reply
  • rasmussb - Saturday, May 07, 2011 - link

    Perhaps you have answered this elsewhere, or it will be answered in your future tests. If so, please forgive me.

    As you point out, the drive's performance is based in large part on the compressibility of the source data; relatively incompressible data results in lower speeds. What happens when you put a pair (or more) of these in a RAID 0 array? Since units of data alternate between drives, how does the SF compression work then? Does previously compressible data become less compressible because any given drive is only getting, at best (in a 2-drive array), half of the original data?

    Conversely, does incompressible data happen to get more compressible when you're splitting it amongst two or more drives in an array?

    Server workload on a single drive versus in say a RAID 5 array would be an interesting comparison. I'm sure your tech savvy minds are already over this in your roadmap. I'm just asking in the event it isn't.
    Reply
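
One way to reason about the RAID 0 question above: stripes are typically 64-128KB, so each drive still receives long contiguous runs of data, and compressibility should be largely preserved. The sketch below uses zlib purely as a stand-in (SandForce's compression scheme is proprietary and undisclosed) and deals 64KB stripes alternately to two simulated "drives":

```python
import zlib

# Stand-in experiment: does striping data across two drives change its
# compressibility? zlib is only a proxy here - the real SandForce
# compression is proprietary, so treat this as a rough illustration.

data = (b"some fairly repetitive log line, the kind that compresses well\n"
        * 4096)          # ~256KB of compressible data
STRIPE = 64 * 1024       # a typical RAID 0 stripe size

# Split into stripes and deal them alternately to two "drives"
stripes = [data[i:i + STRIPE] for i in range(0, len(data), STRIPE)]
drive0 = b"".join(stripes[0::2])
drive1 = b"".join(stripes[1::2])

whole = len(zlib.compress(data))
striped = len(zlib.compress(drive0)) + len(zlib.compress(drive1))
print(f"compressed as a whole:  {whole} bytes")
print(f"compressed per 'drive': {striped} bytes")
# With 64KB stripes each drive still sees long contiguous runs, so the
# totals come out close - striping barely hurts compressibility here.
```

Truly incompressible data stays incompressible no matter how it is split, so striping should not help that case either.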
  • taltamir - Thursday, May 05, 2011 - link

    Doesn't the Z version let you access the CPU's video decoding/encoding engine while also having an external GPU, whereas with the P and H versions you have to choose one or the other?
    Reply
  • jb510 - Friday, May 06, 2011 - link

    In considering an SSD for an OS X boot volume, should one be more concerned with compressible or incompressible data? I'm wondering because I know OS X compresses some of its OS files, and presumably many apps do the same thing. Further, I'm assuming the light workload test uses a Windows simulation; can anyone say if/how that would differ from OS X?

    I'm probably more worried about it than I need to be, but I'm trying to decide between OCZ/OWC, Intel and Crucial, and am still not clear which is best for a dual drive setup in a MacBook Pro.
    Reply
  • zilab - Saturday, June 04, 2011 - link

    "OCZ is still SandForce's favorite partner and thus it gets preferential treatment when it comes to firmware."

    I just confirmed with OWC: they're shipping the 6G with the 60K IOPS read/write firmware. Hope you update your article soon. This is kind of misleading; reading the comments here, people think that only OCZ drives have the 60K IOPS firmware.
    Reply
  • nish0323 - Wednesday, June 15, 2011 - link

    I had the Crucial C300, OCZ Vertex 3, and OWC Extreme 6G in my laptop on a SATA 6G connection... and honestly I didn't notice a difference in speed between the three of them. Against the Vertex 2 and the Intel X25-M, there was a slight difference. In the end, I decided to go with the OWC Extreme 6G for one reason... **** FIVE-YEAR WARRANTY ****!!! That's friggin' awesome... the ONLY SSD to offer a 5-year warranty. And cost-wise, they're around the same as or lower than the rest of the competition. Reply
