Last week I was in Orlando attending CTIA. While I was enjoying the Florida weather, two SSDs arrived at my office back in NC: Intel's SSD 320, which we reviewed just three days ago, and Crucial's m4. Many of you noticed that I snuck m4 results into our 320 review, but I saved any analysis and conclusions about the drive for its own review.

There are other drives I've been testing that are still missing their own full reviews. Corsair's Performance Series 3 has been in the lab for weeks now, as has Samsung's SSD 470. I'll be talking about both of those in greater detail in an upcoming article as well.

And for those of you asking about my thoughts on the recent OCZ-related news that has been making the rounds, expect to see all of that addressed in our review of the final Vertex 3. OCZ missed its original March release timeframe for the Vertex 3 in order to fix some last-minute bugs with a new firmware revision, so we should be seeing drives hit the market shortly.

There's a lot happening in the SSD space right now. All of the high-end manufacturers have put forward their next-generation controllers. With all of the cards on the table it's clear that SandForce is the performance winner once again this round. So far nothing has been able to beat the SF-2200, although some came close—particularly if you're still using a 3Gbps SATA controller.

All isn't lost for competing drives however. While SandForce may be the unequivocal performance leader, compatibility and reliability are both unknowns. SandForce is still a very small company with limited resources. Although validation has apparently improved tremendously since the SF-1200 last year, it takes a while to develop a proven track record. As a result, some users and corporations feel more comfortable buying from non-SF based competitors—although the SF-2200 may do a lot to change some minds once it starts shipping.

The balance of price, performance and reliability is what keeps this market interesting. Do you potentially sacrifice reliability for performance? Or give up some performance for reliability? Or give up one for price? It's even tougher to decide when you take into account that all of the players involved have had major firmware bugs. Even though Intel appears to have the lowest return rate of any of these drives, it's not excluded from the reliability/compatibility debate.

Crucial's m4, Micron's C400

Micron and Intel have a joint venture, IMFT, that produces NAND Flash for both companies as well as their customers. Micron gets 51% of IMFT production for its own use and resale, while Intel gets the remaining 49%.

Micron is mostly a chip and OEM brand; Crucial is its consumer memory/storage arm. Both divisions shipped an SSD called the C300 last year. It was the first 6Gbps SATA SSD we tested, and while it posted some great numbers, the drive got off to a very bumpy start.

Crucial's m4 Lineup
                              CT064M4SSD2     CT128M4SSD2     CT256M4SSD2     CT512M4SSD2
User Capacity                 59.6GiB         119.2GiB        238.4GiB        476.8GiB
Random Read Performance       40K IOPS        40K IOPS        40K IOPS        40K IOPS
Random Write Performance      20K IOPS        35K IOPS        50K IOPS        50K IOPS
Sequential Read Performance   Up to 415MB/s   Up to 415MB/s   Up to 415MB/s   Up to 415MB/s
Sequential Write Performance  Up to 95MB/s    Up to 175MB/s   Up to 260MB/s   Up to 260MB/s
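
For reference, the User Capacity row is simply the advertised decimal capacity restated in binary units, with the leftover raw NAND typically serving as spare area. Here's a quick sketch of the arithmetic (my own illustration, not anything Crucial publishes):

```python
# Advertised SSD capacities are decimal gigabytes (10^9 bytes); operating systems
# report binary gibibytes (2^30 bytes), which is where the smaller figures come from.
def advertised_gb_to_gib(gb: int) -> float:
    return gb * 10**9 / 2**30

for gb in (64, 128, 256, 512):
    print(f"{gb:>3}GB advertised -> {advertised_gb_to_gib(gb):6.1f}GiB reported")

# Output:
#  64GB advertised ->   59.6GiB reported
# 128GB advertised ->  119.2GiB reported
# 256GB advertised ->  238.4GiB reported
# 512GB advertised ->  476.8GiB reported
```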

A few firmware revisions later, the C300 was finally looking good from a reliability standpoint. Although I have recently heard reports of performance issues with the latest 006 firmware, the drive has been working well for me thus far. It just goes to show that company size alone isn't a guarantee of compatibility or reliability.


Crucial RealSSD C300 (back), Crucial m4 (front)

This time around Crucial wanted to differentiate its consumer product from what it sells to OEMs. Drives sold by Micron will be branded C400, while consumer drives are called the m4. The two are identical; only the name differs.


The Marvell 88SS9174-BLD2 in Crucial's m4

Under the hood, er, chassis we have virtually the same controller as the C300. The m4 uses an updated revision of the Marvell 9174 (BLD2 vs. BKK2). Crucial wouldn't go into detail about what changed, saying only that there were no major architectural differences; it's simply an evolution of the same controller used in the C300. When we get to the performance you'll see that Crucial's explanation carries weight. Performance isn't dramatically different from the C300; instead it looks like Crucial played around a bit with firmware. I do wonder if the new revision of the controller is any less problematic than what was used in the C300. Granted, fixing old problems isn't a guarantee that new ones won't crop up either.


The 88SS9174-BKK2 in Intel's SSD 510

The m4 is still an 8-channel design. Crucial believes it's important to hit capacities in multiples of eight (64, 128, 256 and 512GB). Crucial also told me that the m4's peak performance isn't limited by the number of channels branching off the controller, so the decision was easy. I am curious to understand why Intel seems to be the only manufacturer that has settled on a 10-channel configuration for its controller while everyone else has picked eight channels.

Crucial sent along a 256GB drive populated with sixteen 16GB 25nm Micron NAND devices. Micron rates its 25nm NAND at 3000 program/erase cycles. By comparison, Intel's NAND, coming out of the same fab, is apparently rated at 5000 program/erase cycles. I asked Micron why there's a discrepancy and was told that the silicon's quality and reliability are fundamentally the same; it sounds like the only difference is in testing and validation methodology. In either case, I've heard that most 25nm NAND can well exceed its rated program/erase cycle count, so it's a non-issue.
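
Before getting to endurance, here's a rough sketch of how those sixteen packages map onto the controller's eight channels (my own back-of-the-envelope math; the even per-channel interleaving is an assumption, not something Crucial has detailed):

```python
# Rough layout of the 256GB m4 reviewed here: an 8-channel Marvell controller
# fed by sixteen 16GB 25nm Micron NAND packages.
channels = 8                 # Marvell 88SS9174 is an 8-channel design
nand_packages = 16           # sixteen NAND devices on the PCB
package_capacity_gb = 16     # 16GB per 25nm Micron package

raw_capacity_gb = nand_packages * package_capacity_gb     # 256GB of raw NAND
packages_per_channel = nand_packages // channels          # assumed even interleaving

print(f"{raw_capacity_gb}GB raw NAND, {packages_per_channel} packages per channel")
# -> 256GB raw NAND, 2 packages per channel
```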

Furthermore, as we've demonstrated in the past, under a normal desktop usage model even NAND rated for only 3000 program/erase cycles will last a very long time, provided the controller does a good job of wear leveling.

Let's quickly do the math again. If you have a 100GB drive and you write 7GB per day, you'll program every MLC NAND cell in the drive in just over 14 days—that's one cycle out of three thousand. Outside of SandForce's controllers, most SSD controllers will have a write amplification factor greater than 1 in any workload. Even if we assume a constant write amplification of 20x (and perfect wear leveling), we're still talking about a useful NAND lifespan of almost 6 years. In practice, write amplification for desktop workloads is significantly lower than that.
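
If you want to plug in your own numbers, here's the same arithmetic as a short sketch (the workload figures are the assumptions from the paragraph above, not measured data):

```python
# NAND endurance estimate using the assumptions above: a 100GB drive,
# 7GB of host writes per day, 3000 rated p/e cycles, and a deliberately
# pessimistic constant write amplification of 20x with perfect wear leveling.
drive_capacity_gb = 100
host_writes_gb_per_day = 7
rated_pe_cycles = 3000
write_amplification = 20     # real desktop workloads are typically far lower

nand_writes_gb_per_day = host_writes_gb_per_day * write_amplification
days_per_pe_cycle = drive_capacity_gb / nand_writes_gb_per_day   # ~0.71 days per full cycle
lifespan_years = rated_pe_cycles * days_per_pe_cycle / 365

print(f"Estimated lifespan: ~{lifespan_years:.1f} years")   # -> ~5.9 years
```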

Remember that the JEDEC spec states that once you've used up all of your rated program/erase cycles, the NAND has to keep your data safe for a year. So even in the unlikely event that you burn through all 3000 p/e cycles, and even if we assume for a moment that you have some uncharacteristically bad NAND that doesn't last a single cycle beyond its rating, you should still have a full year's worth of data retention left on the drive. And by the time a drive actually wears out, a replacement should be cheap: by 2013 I'd conservatively estimate NAND to be priced at ~$0.92 per GB, and in another three years beyond that you can expect high-speed storage to be even cheaper. In short, combined with good ECC and an intelligent controller, I wouldn't expect NAND longevity to be a concern at 25nm.

The m4 is currently scheduled for public availability on April 26 (coincidentally fourteen years to the day after I founded AnandTech); pricing is still TBD. Back at CES Micron gave me a rough indication of pricing, but I'm not sure whether those prices are higher or lower than what the m4 will ship at. Owning part of a NAND fab obviously gives Micron pricing flexibility; however, it also needs to maintain very high profit margins in order to keep said fab up and running (and investors happy).

The Test

CPU: Intel Core i7 965 running at 3.2GHz (Turbo & EIST Disabled)
     Intel Core i7 2600K running at 3.4GHz (Turbo & EIST Disabled)—for AT SB 2011, AS SSD & ATTO
Motherboard: Intel DX58SO (Intel X58)
             Intel H67 Motherboard
Chipset: Intel X58 + Marvell SATA 6Gbps PCIe
         Intel H67
Chipset Drivers: Intel 9.1.1.1015 + Intel IMSM 8.9
                 Intel 9.1.1.1015 + Intel RST 10.2
Memory: Qimonda DDR3-1333 4 x 1GB (7-7-7-20)
Video Card: eVGA GeForce GTX 285
Video Drivers: NVIDIA ForceWare 190.38 64-bit
Desktop Resolution: 1920 x 1200
OS: Windows 7 x64

Comments (103)

  • dingo99 - Thursday, March 31, 2011 - link

    While it's great that you test overall drive performance before and after a manually-triggered TRIM, it's unfortunate that you do not test real-world TRIM performance amidst other drive operations. You've mentioned often that Crucial drives like C300 need TRIM, but you've missed the fact that C300 is a very poor performer *during* the TRIM process. If you try to use a C300-based system on Win7 while TRIM operations are being performed (Windows system image backup from SSD to HD, for example) you will note significant stuttering due to the drive locking up while processing its TRIMs. Disable Win7 TRIM, and all the stuttering goes away. Sadly, the limited TRIM tests you perform now do not tell the whole story about how the drives will perform in day-to-day usage.
  • Anand Lal Shimpi - Thursday, March 31, 2011 - link

    I have noticed that Crucial's drives tend to be slower at TRIMing than the competition. I suspect this is a side effect of very delayed garbage collection and an attempt to aggressively clean blocks once the controller receives a TRIM instruction.

    I haven't seen this as much in light usage of either the C300 or m4 but it has definitely cropped up in our testing, especially during those full drive TRIM passes.
  • jinino - Thursday, March 31, 2011 - link

    Is it possible to include MB with AMD's 890 series chipset to test SATA III performance?

    Thanks
  • whatwhatwhat2011 - Thursday, March 31, 2011 - link

    I continue to be frustrated by the lack of actual, task-oriented, real world benchmarking for SSDs. That is to say, tests that execute common tasks (like booting, loading applications, or doing disk-intensive desktop tasks like audio/video/photo editing) and report exactly how long those tasks took (in seconds) using different disks.

    This is really what we care about when we buy SSDs. Sequential read and write numbers are near irrelevant in real world use. The same could be said for IOPs measurements, which have so many variables involved. I understand that your storage bench is supposed to satisfy this need, but I don't think that it does. The numbers it returns are still abstract values that, in effect, don't really communicate to the reader what the actual performance difference is.

    Bringing it home, my point is that while we all understand that going from an HDD system drive to an SSD results in an enormous performance improvement, we really have no idea how much better an SF-2200-based Vertex 3 is than an Indilinx-based Vertex 1 in real world use. Sure, we understand that it's tons faster in the benches, but if that translates to application loading times that are only 1 second faster, who really cares to make that upgrade?

    In particular, I'm thinking of Sandforce drives. They really blow the doors off benchmarking suites, but how does that translate to real world performance? Most of the disk intensive desktop tasks out there involve editing photos and videos that are generally speaking already highly compressed (ie, incompressible).

    Anand, you are a true leader in SSD performance analysis. I hope that you'll take the lead once again and put an end to this practice of reporting benchmark numbers that - while exciting to compare - are virtually useless when it comes to making buying decisions.

    In the interest of being positive and helpful, here are a few tasks that I'd love to see benched and compared going forward (especially between high end HDDs, second gen SSDs and third gen SSDs).

    1. Boot times (obviously you'd have to standardize the mobo for useful results).
    2. Application load times for "heavy" apps like Photoshop, After Effects, Maya, AutoCAD, etc
    3. Load times for large Lightroom 3 catalogs using RAW files (which are generally incompressible) and large video editing project files (which include a variety of read types) using the various AVC-flavored acquisition codecs out there.
    4. BIG ONE HERE: the real world performance delta for using SSDs as cache drives for content creation apps that deal mostly with incompressible data (like Lightroom and Premiere).

    Thanks again for the great work. And I apologize for the typos. Day job's a-callin'.
  • MilwaukeeMike - Thursday, March 31, 2011 - link

    I think part of the reason you don't see much of this is the difficulty in standardizing it. You'd like to see AutoCAD, but I'd like to see Visual Studio or RSA. I have seen game loading screens in reviews, and I'd like to see that again, especially since you don't really know how much data is being loaded. Will a game level load in 5 seconds vs 25 on a standard HD, or is it more like 15 vs 25? I'd also prefer to see a VelociRaptor on the graph because I own one, but that's just getting picky. However, I'm sure not going to buy an SSD without knowing this stuff.
  • whatwhatwhat2011 - Thursday, March 31, 2011 - link

    That's a very valid point, but I don't see much in the way of even an effort in this regard. I certainly wouldn't complain if the application loading tests included a bunch of software that I never use, just so long as that software has similar loading characteristics and times as the software I do use. Or anything, really, that gives me some idea of the actual difference in user experience.

    I have an ugly hunch that there isn't really much (or any) difference between a first gen and third gen SSD in terms of the actual user experience. My personal experience has more or less confirmed this, but that's just anecdotal. These benchmark numbers, as it is, don't tell us much about what is going on with the user's experience.

    They do, however, get people excited about buying new SSDs every year. They're hundreds of megabytes per second faster! And I love megabytes.
  • Chloiber - Friday, April 1, 2011 - link

    I do agree. Most SSD tests still lack this kind of testing.
    On what should I base my decision when buying an SSD? As the AnandTech Storage Benches show, the results can be completely different (just compare the new suite, which isn't "better", just "different", with the old one: completely different results!). And it's still just a benchmark where we don't actually know what has been benched. Yes, Anand provides some numbers, but it's not transparent enough. It's still ONE scenario.

    I'd also like to see simpler benchmarks. Sit behind your computer and use a stopwatch. Yes, it's more work than using simple tools, but the result is worth WAY more than "YABT" (yet another benchmark tool).

    Well yes. Maybe the results are very close. But that's exactly what I want to know. I am very sorry, but right now, I only see synthetic benchmarks in these tests which can't tell me anything.

    - Unzipping
    - Copying
    - Installing
    - Loading times of applications (even multiple apps at once)

    That's the kind of things I care about. And a trace benchmark is nice, but there is still a layer of abstraction that I just do not want.
  • whatwhatwhat2011 - Friday, April 1, 2011 - link

    It's really gratifying to hear other users sharing my thoughts! I have a hunch we're onto something here.

    Anand, I hate to sound harsh - as you've clearly put a ton of work into this - but your storage bench is really a step in the wrong direction. Yes, it consists of real world tasks and produces highly replicable results.

    But the actual test pattern itself is simply not realistic. Only a small percentage of users ever find themselves doing tasks like that, and even those power users are only capable of producing workloads like this every once in a while (when they're highly, highly caffeinated, I would suppose).

    Even more damning, the values it returns are not helpful when making buying decisions. So the Vertex 3 completes the heavy bench some 4-5 minutes ahead of the Crucial m4. What does that actually mean in terms of my user experience?

    See, the core of the issue here is really why people buy SSDs. Contrary to the marketing justification, I don't think anyone buys SSDs for productivity gains (although that might be how they justify the purchase to themselves as well).

    So what are you really getting with an SSD? Confidence and responsiveness. The sort of confidence that comes with immediate responsiveness. Much like how a good sports car will respond immediately to every single touch of the pedals or wheel, we expect a badass computer to respond immediately to our inputs. Until SSDs came along, this simply wasn't a reality.

    So the question really is: is one SSD going to make my computer faster, smoother and more responsive than another?
  • seapeople - Friday, April 1, 2011 - link

    How many times must Anand answer this question? Here's your answer:

    Question: What's the difference between all these SSD's and how they boot/load application X?

    Answer: For every SSD from the X25-M G2 on, THERE IS VERY LITTLE DIFFERENCE.

    Anand could spend a lot of time benchmarking how long it takes to start up Photoshop or boot Windows 7 on these SSDs, but then we'd just get a lot of graphs that vary from 7 seconds to 9 seconds, or 25 seconds to 28 seconds. Or you could skew the graphs with a mechanical hard drive, which would take 2-5x as long.

    In short, the synthetic (or even real-life) torture tests that Anand shows here are the only tests which would show a large difference between these drives; for everything else you keep asking about there would be very little difference. This is why it sucks that SSD performance is still increasing faster than prices are dropping; SSDs are already fast enough to beat hard drives at anything, so the only important factors for most real world situations are the price and how much storage capacity you can live with.
  • whatwhatwhat2011 - Friday, April 1, 2011 - link

    I'm not sure that I understand your indignation. If useful, effective real world benchmarks would demonstrate little difference between SSDs, how is that a waste of anyone's time? If anything, that is exactly the information that both consumers and technologists need.

    Consumers would be able to make better buying decisions, gauging the real-world benefits (or not) of upgrading from one SSD to another, later generation SSD.

    Manufacturers and technologists would benefit from having to confront the fact that performance bottlenecks clearly exist elsewhere in the system - either in the hardware I/O subsystems, or in software that is still designed around HDD levels of latency. If consumers refused to upgrade from one SSD to another, based upon useful test data that revealed this diminishing real-world benefit, that would also help motivate manufacturers to move on price, instead of focusing on MORE MEGABYTES!

    This charade that is currently going on - in which artificial benchmarks and torture tests are being used to exaggerate the difference between drives - certainly makes for exciting reading, but it does little to inform anyone.

    Anand is a leader in this subject matter. I post here as opposed to other sites that are guilty of the same because I have a hunch that only he has the resources and enthusiasm necessary to tackle this issue.
