Original Link: http://www.anandtech.com/show/5735/micron-c400-msata-128gb-ssd-review
Micron C400 mSATA (128GB) SSD Review by Anand Lal Shimpi on April 10, 2012 8:00 AM EST
The arrival of affordable, high-performance client SSDs gave us two (closely related) things: 1) a high-speed primary storage option that could work in either a notebook or a desktop, and 2) independence from traditional hard drive form factors.
Unlike traditional hard drives, solid state storage didn't have the same correlation between performance and physical size. The 2.5" form factor was chosen initially because of the rising popularity of notebooks and the fact that desktops could use a 2.5" drive with the aid of a cheap adapter. Since then, many desktop cases have begun shipping with 2.5" drive bays.
It turns out that even the 2.5"-wide, 9.5mm-tall form factor was a bit overkill for many SSDs. We saw the first examples of this with the arrival of drives from Corsair and Kingston, where the majority of the 2.5" enclosure went unused. Intel and others also launched 1.8" versions of their SSDs with performance levels comparable to their 2.5" counterparts.
Moore's Law ensures that large SSDs can be delivered in small packages. Take the original Intel X25-M for example. The first 80GB and 160GB drives used a 50nm 4GB MLC NAND die (1 or 2 die per package), across twenty packages. Intel's SSD 320, on the other hand, uses 25nm NAND to deliver 300GB or 600GB of storage in the same package configuration. As with all things Moore's Law enables, you can scale in both directions - either increase capacity in a 2.5" form factor, or enable smaller form factors with the same capacity.
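The capacity math here is a simple product of die density, die per package, and package count; a quick sketch using the X25-M figures above (the function name is illustrative):

```python
def drive_capacity_gb(die_gb, dies_per_package, packages):
    """Raw NAND capacity = per-die capacity x die per package x package count."""
    return die_gb * dies_per_package * packages

# Original Intel X25-M: 50nm 4GB MLC die across twenty packages
x25m_80gb = drive_capacity_gb(4, dies_per_package=1, packages=20)   # 80
x25m_160gb = drive_capacity_gb(4, dies_per_package=2, packages=20)  # 160
```

Shrinking the process node raises the per-die capacity, which lets either the total capacity grow or the package count (and thus the footprint) shrink.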
The Ultrabook movement has encouraged development of the latter. While Apple and ASUS (among others) have picked custom form factors for their smallest form factor SSDs, there's always a need for standardization. One option is the mSATA form factor:
Take a mini PCIe card, use the same connector, but make it electrically compatible with SATA and you've got mSATA. It's even possible to build an mSATA/mini PCIe connector that can switch between the two interfaces.
We met our first mSATA SSD with Intel's SSD 310; today, however, Micron is announcing an mSATA version of its popular C400 drive:
The best part of the C400 mSATA? Identical performance to its 2.5" counterpart:
|Micron C400 Comparison|C400v|C400v mSATA|C400|C400 mSATA|
|---|---|---|---|---|
|Capacity|64GB|32GB, 64GB|128GB, 256GB, 512GB|128GB, 256GB|
|Interface|SATA 6Gbps|mSATA 6Gbps|SATA 6Gbps|mSATA 6Gbps|
|Sequential Read|Up to 500MB/s|Up to 440MB/s (32GB), Up to 500MB/s (64GB)|Up to 500MB/s|Up to 500MB/s|
|Sequential Write|Up to 95MB/s|Up to 50MB/s (32GB), Up to 95MB/s (64GB)|Up to 175MB/s (128GB), Up to 260MB/s (256GB/512GB)|Up to 175MB/s (128GB), Up to 260MB/s (256GB)|
Given the extremely small surface area the mSATA/mini PCIe form factor allows, there's only enough room for the Marvell controller, 256MB of 1.5V DDR3 DRAM and four NAND packages. On a 128GB drive that works out to be 32GB per package, or four 8GB 25nm MLC die per package. You can scale up and down accordingly depending on capacity. Each package has two channels routed to it, thus behaving like a full eight channel drive but with only four chips.
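The per-package arithmetic for the 128GB drive reviewed here can be sketched as follows (the function and variable names are mine, not Micron's):

```python
def nand_layout(total_gb, packages, die_gb):
    """Derive the per-package NAND configuration for a given drive capacity."""
    gb_per_package = total_gb // packages        # NAND capacity in each package
    dies_per_package = gb_per_package // die_gb  # 25nm MLC die per package
    return gb_per_package, dies_per_package

# 128GB C400 mSATA: four NAND packages built from 8GB 25nm MLC die,
# with two controller channels routed to each package
gb_per_pkg, dies_per_pkg = nand_layout(128, packages=4, die_gb=8)  # (32, 4)
active_channels = 4 * 2  # behaves like a full eight-channel drive
```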
Micron will offer two distinct flavors: the C400v mSATA and the C400 mSATA, similar to what we see in the 2.5" version. The major difference? Write endurance. The C400v is rated for 36TB of writes over the life of the drive, compared to 72TB for the C400. The same type of NAND is used in both; the rating is merely a function of available spare area vs. the workload Micron uses to rate the drives. Note that Micron uses a client (read: largely sequential, PCMark-like) workload to determine endurance ratings here, not a 4KB random write test.
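To put those endurance ratings in perspective, here is the daily write budget they imply under an assumed five-year service life (the lifespan is my illustrative assumption, not a Micron spec):

```python
def daily_write_budget_gb(endurance_tb, years=5):
    """Average GB/day of writes that would exhaust the rated endurance.
    The five-year service life is an illustrative assumption."""
    return endurance_tb * 1000 / (years * 365)

c400_budget = daily_write_budget_gb(72)   # ~39.5 GB/day
c400v_budget = daily_write_budget_gb(36)  # ~19.7 GB/day
```

Even the lower C400v rating comfortably exceeds what a typical client machine writes per day.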
Similar to the breakdown with the 2.5" drive, Micron will sell the C400 to OEMs while Crucial will offer a retail/etail version direct to consumers under the m4 brand. Micron isn't announcing pricing but you can expect it to be a little cheaper than the 2.5" version thanks to a slightly lower BOM (bill of materials).
Micron sent a 128GB C400 mSATA drive along for review, so we put it through its paces in our standard SSD test suite. I've included comparison results from Intel's SSD 310 mSATA, but keep in mind the C400 is a much faster, 6Gbps drive.
The results, as I mentioned above, are in-line with the 2.5" version:
Note that this is pretty incredible performance in an extremely small form factor. We're not too far away from being able to have a tablet capable of reading files at 500MB/s thanks to SSDs like this.
I've included our usual benchmarks in the subsequent pages but you can also use Bench to compare drives directly.
The C400/m4 have always been good drives, backed by a trustworthy name. Thanks to Micron's in-house firmware development team and extensive system testing courtesy of the memory side of the house, compatibility should be quite good.
As promised, performance of the C400 mSATA is nearly identical to the 2.5" version, making it a formidable competitor in the mSATA space. It's also insane to think that you can pull 500MB/s from something this small - oh what SSDs have done to the world. I have no issues recommending this drive should you be in the market for an mSATA-based SSD. Furthermore, I hope to see more small form factor variants of other major SSDs. While I don't know that this mini PCIe/mSATA form factor will be what replaces 2.5" for ultraportables, it's clear that 2.5" drives are going to be the new 3.5" as far as SSDs are concerned, and smaller form factors will emerge to take their place.
Random Read/Write Speed
The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger; thus we have the four Iometer tests we use in all of our reviews.
Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0 - 5, higher depths are possible in heavy I/O (and multi-user) workloads:
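Our tests use Iometer, but the basic shape of a random write test is easy to reproduce. Below is a simplified, single-threaded Python stand-in: it has no real queue depth, targets a file rather than the raw device, and uses the POSIX-only `os.pwrite`. It is a sketch of the methodology, not our actual test harness:

```python
import os
import random
import time

def random_write_mbps(path, span_bytes=8 * 2**30, io_size=4096, seconds=180):
    """Issue 4KB writes at random, aligned offsets within the test span
    and report the average throughput in MB/s."""
    buf = os.urandom(io_size)  # fully random (incompressible) data
    fd = os.open(path, os.O_WRONLY | os.O_CREAT)
    written = 0
    deadline = time.time() + seconds
    while time.time() < deadline:
        offset = random.randrange(span_bytes // io_size) * io_size
        os.pwrite(fd, buf, offset)  # 4KB-aligned random write
        written += io_size
    os.close(fd)
    return written / seconds / 1e6
```

Substituting a repeating buffer for `os.urandom` would approximate the pseudo-random (compressible) data case that flatters SandForce drives.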
Sequential Read/Write Speed
To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.
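The sequential counterpart is even simpler; a minimal, file-backed sketch of a 128KB, queue depth 1 sequential read pass (again an illustration, not our Iometer configuration):

```python
import time

def sequential_read_mbps(path, io_size=128 * 1024):
    """Read the file front to back in 128KB chunks at a queue depth of 1,
    reporting the average throughput in MB/s."""
    total = 0
    start = time.time()
    with open(path, "rb", buffering=0) as f:
        while chunk := f.read(io_size):
            total += len(chunk)
    elapsed = max(time.time() - start, 1e-9)  # guard against a zero interval
    return total / elapsed / 1e6
```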
AS-SSD Incompressible Sequential Performance
The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.
AnandTech Storage Bench 2011
Two years ago we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of what we have today in terms of SSD benchmarks.
Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.
Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.
Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.
1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.
2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.
The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:
|AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown|
|IO Size||% of Total|
Only 42% of all operations are sequential; the rest range from pseudo-random to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place at a queue depth of 1.
Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
There's also a new light workload for 2011. This is a far more reasonable, typical everyday-use benchmark. Lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback, as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB, but it's still multiple times more write intensive than what we were running in 2010.
As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.
The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests.
AnandTech Storage Bench 2011 - Heavy Workload
We'll start out by looking at average data rate throughout our heavy workload test:
The next three charts represent the same data in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy during the entire test. Note that disk busy time excludes any and all idle time; this is simply how long the SSD was busy doing something:
AnandTech Storage Bench 2011 - Light Workload
Our new light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio better mimics a typical light workload (although even lighter workloads would be far more read centric).
The I/O breakdown is similar to the heavy workload at small IOs, however you'll notice that there are far fewer large IO transfers:
|AnandTech Storage Bench 2011 - Light Workload IO Breakdown|
|IO Size||% of Total|
To ensure TRIM's functionality and understand the SSD's behavior in a highly fragmented state I wrote sequential data across all user addressable LBAs and then wrote random data (4KB, QD=32) for 20 minutes across all LBAs. Finally I used HDTach to give me a simple visualization of write performance across all available LBAs (aka the Malventano Method):
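The two preparation steps above can be sketched as follows. This is a simplified, file-backed stand-in: a real run targets the raw device's full LBA span with 20 minutes of QD=32 random writes, and the final read pass is done with HDTach rather than in Python:

```python
import os
import random

def fragment(path, span=64 * 2**20, io_size=4096, torture_ios=5000):
    """Step 1: fill the span sequentially. Step 2: overwrite it with 4KB
    random writes to fragment the drive's logical-to-physical mapping."""
    with open(path, "wb") as f:
        for _ in range(span // io_size):      # sequential fill of every LBA
            f.write(b"\xa5" * io_size)
    fd = os.open(path, os.O_WRONLY)
    for _ in range(torture_ios):              # random 4KB torture writes
        offset = random.randrange(span // io_size) * io_size
        os.pwrite(fd, os.urandom(io_size), offset)
    os.close(fd)
    return os.path.getsize(path)
```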
This pattern is what we've come to expect from the C400/m4. TRIMing the drive completely restores performance of course, but if you're running a highly random workload you're going to see a fairly sharp drop off in performance as the C400/m4 were really only designed for client workloads.