Delving Deeper

I had suspicions as to the nature of the problem based on my experience with it in my Mac Pro. The SuperTalent MLC drive in my machine would pause randomly, most noticeably when I'd go to send an IM. What happens when you send an IM? Your log file gets updated; a very small, random write to the disk. I turned to Iometer to simulate this behavior.

Iometer is a great tool for simulating disk accesses; you just need to know what sort of behavior you want to simulate. In my case I wanted to issue tons of small writes to the drive and look at latency, so I told Iometer to perform 4KB writes to the disk in a completely random pattern (100% random). I left the queue depth at 1 outstanding IO since I wanted to at least somewhat simulate a light desktop workload.
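
For readers who'd rather script it than use Iometer, here is a minimal Python sketch of the same idea: 4KB writes to random, aligned offsets with one outstanding IO, fsync'd after each write so the OS cache can't hide the drive's behavior. Unlike Iometer it goes through the filesystem rather than the raw device, and the file name, test-region size and write count are arbitrary assumptions:

```python
import os
import random
import time

PATH = "testfile.bin"      # hypothetical scratch file on the drive under test
FILE_SIZE = 1 << 30        # 1GB test region (arbitrary)
BLOCK = 4096               # 4KB writes, matching the Iometer run
WRITES = 1000

# Pre-allocate the test region if it isn't there yet.
if not os.path.exists(PATH) or os.path.getsize(PATH) < FILE_SIZE:
    with open(PATH, "wb") as f:
        f.truncate(FILE_SIZE)

buf = os.urandom(BLOCK)
latencies = []

with open(PATH, "r+b", buffering=0) as f:
    for _ in range(WRITES):
        offset = random.randrange(0, FILE_SIZE // BLOCK) * BLOCK  # block-aligned random offset
        start = time.perf_counter()
        f.seek(offset)
        f.write(buf)
        os.fsync(f.fileno())  # push the write to the device so the OS cache can't hide it
        latencies.append(time.perf_counter() - start)

print("avg write latency: %.3f ms" % (1000 * sum(latencies) / len(latencies)))
print("max write latency: %.3f ms" % (1000 * max(latencies)))
```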

Iometer reports four results of importance: the number of IOs per second, the average MB/s, the average write latency and the maximum write latency. I looked at the performance of four drives: the OCZ Core (JMicron controller, MLC), the OCZ SLC drive (Samsung controller), the Intel X25-M (Intel controller, MLC) and the Seagate Momentus 7200.2 (a 7200RPM 2.5" notebook drive).

Though the OCZ Core drive is our example here, please remember that this isn't an OCZ-specific issue: the performance problems we see with this drive are apparent on all current MLC drives on the market that use a JMicron controller with Samsung flash.

4KB, 100% random writes, IO queue depth 1:

| Drive | IOs per Second | MB/s | Average Write Latency | Max Write Latency |
|---|---|---|---|---|
| OCZ Core (JMicron, MLC) | 4.06 | 0.016MB/s | 244ms | 991ms |
| OCZ (Samsung, SLC) | 109 | 0.43MB/s | 9.17ms | 83.2ms |
| Intel X25-M (Intel, MLC) | 11171 | 43.6MB/s | 0.089ms | 94.2ms |
| Seagate Momentus 7200.2 | 106.9 | 0.42MB/s | 9.4ms | 76.5ms |


Curiouser and curiouser... see a problem? Ignore the absolutely ridiculous performance advantage of the Intel drive for a moment and look at the average latency column. The OCZ MLC drive has an average latency of 244ms; that's over 26x the latency of the OCZ SLC drive and 25.9x the latency of a quick notebook drive. This isn't an MLC problem, however, because the Intel MLC drive boasts an average latency of 0.089ms - the OCZ MLC drive's latency is over 2700x higher!

Now look at the max latency column: the worst-case latency for the OCZ Core is 991ms. That's nearly a full second! In other words, it takes an average of a quarter of a second to write a 4KB file to the drive and, worst case, nearly a full second. We complain about the ~100 nanosecond trip a CPU has to take to main memory, and here we have a drive that can take nearly a full second to complete a single small write - totally unacceptable.

To find out whether the latency is at all tied to the size of the write, I varied the write size from 4KB all the way up to 128KB while keeping the writes 100% random. I'm only reporting average latencies here:

100% random writes, IO queue depth 1 (average write latency):

| Drive | 4KB | 16KB | 32KB | 64KB | 128KB |
|---|---|---|---|---|---|
| OCZ Core (JMicron, MLC) | 244ms | 243ms | 241ms | 243ms | 247ms |
| OCZ (Samsung, SLC) | 9.17ms | 14.5ms | 21.2ms | 28ms | 28.5ms |
| Intel X25-M (Intel, MLC) | 0.089ms | 0.23ms | 0.44ms | 0.84ms | 1.73ms |
| Seagate Momentus 7200.2 | 9.4ms | 8.95ms | 9.14ms | 9.82ms | 12.1ms |


For the OCZ Core and other similar MLC drives, latency barely changes all the way up to 128KB: roughly a quarter of a second on average and nearly a second worst case. If it's not the write size, perhaps it's the random nature of the writes?
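
Before moving on to the sequential/random mix, here's how the earlier sketch might be refactored into a helper and looped over block sizes to reproduce a table like the one above (same assumptions as before; the test file from the first sketch is assumed to already exist):

```python
import os
import random
import time

def run_random_write_test(path, file_size, block, writes):
    """Time fsync'd random writes of the given size; return (avg_ms, max_ms)."""
    buf = os.urandom(block)
    lat = []
    with open(path, "r+b", buffering=0) as f:
        for _ in range(writes):
            f.seek(random.randrange(0, file_size // block) * block)
            t0 = time.perf_counter()
            f.write(buf)
            os.fsync(f.fileno())
            lat.append(time.perf_counter() - t0)
    return 1000 * sum(lat) / len(lat), 1000 * max(lat)

for block in (4096, 16384, 32768, 65536, 131072):
    avg_ms, max_ms = run_random_write_test("testfile.bin", 1 << 30, block, 500)
    print("%6d-byte writes: avg %7.2f ms, max %7.2f ms" % (block, avg_ms, max_ms))
```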

For this next test I varied the nature of the writes: I ran the 4KB write test with a 100% sequential workload, 90% sequential (10% random), 50% sequential (50% random) and, for reference, the original 100% random workload:

4KB writes, IO queue depth 1 (average write latency):

| Drive | 100% Sequential / 0% Random | 90% Sequential / 10% Random | 50% Sequential / 50% Random | 0% Sequential / 100% Random |
|---|---|---|---|---|
| OCZ Core (JMicron, MLC) | 0.36ms | 25.8ms | 130ms | 244ms |
| OCZ (Samsung, SLC) | 0.16ms | 1.97ms | 5.19ms | 9.17ms |
| Intel X25-M (Intel, MLC) | 0.09ms | 0.09ms | 0.09ms | 0.089ms |
| Seagate Momentus 7200.2 | 0.16ms | 0.94ms | 4.35ms | 9.4ms |


In the 100% sequential test the average latency on the OCZ Core (MLC) was higher than on the rest of the drives, but still manageable at 0.36ms. Look at what happened in the 90% sequential test, though. With just 10% random writes the average latency jumped to 25.8ms - 13x the latency of the OCZ SLC drive. Again, this isn't an MLC issue, as the Intel drive does just fine. Although I left it out of the table to keep things simple, the max latency in the 90/10 test was once again 983ms for the OCZ Core drive. The 90/10 test is particularly useful because it closely mimics a desktop write pattern: most writes are sequential in nature, but a small percentage (10% or less) are random. What this test shows us is that even 10% random writes is all it takes to bring the OCZ Core to its knees.
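
For anyone following along with the Python sketch, the sequential/random mix is easy to approximate: before each write, pick a random block-aligned offset with the chosen probability, otherwise just write to the next block in line. A minimal sketch of the offset generator (sizes are again arbitrary assumptions):

```python
import random

def next_offset(prev_offset, file_size, block, pct_random):
    """Pick the next write offset for a mixed sequential/random pattern.

    pct_random=10 approximates Iometer's 90% sequential / 10% random setting.
    """
    if random.random() < pct_random / 100.0:
        return random.randrange(0, file_size // block) * block  # random, block-aligned
    return (prev_offset + block) % file_size                    # next sequential block

# Example: the first few offsets of a 90/10 pattern, 4KB blocks in a 1GB region
offset = 0
for _ in range(8):
    offset = next_offset(offset, 1 << 30, 4096, 10)
    print(offset)
```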

The problem gets worse as you increase the load on the drive. Most desktop systems have less than one outstanding IO during normal operation, but under heavy multitasking you can see the IO queue depth hit 4 or 5 outstanding writes. Go much above that and you pretty much have to be in a multi-user environment, either running your machine as a file server or actually running a highly trafficked server. I ran the same 100% random, 4KB write test but varied the number of outstanding IOs from 1 all the way up to 64. Honestly, I just wanted to see how bad it would get:

This is just ridiculous. Average write latency climbs to fifteen seconds, while max latency peaked at over thirty seconds for the JMicron-based MLC drives. All this graph really tells you is that you shouldn't dare use one of these drives in a server, but even at a queue depth of four the max latency is over two seconds, which is completely attainable on a desktop under heavy usage. I've seen this sort of behavior first hand under OS X with the SuperTalent MLC drive: the system will just freeze for anywhere from a fraction of a second to over a full second while a write completes in the background. The write that sets it off will oftentimes be something as simple as writing to my web browser's cache or sending an IM - it's horribly frustrating.
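
Queue depth can be crudely approximated outside Iometer by keeping several writer threads in flight at once. Here is a sketch under the same assumptions as the earlier snippets (the thread count stands in for queue depth, and the test file from the first sketch is assumed to already exist); it won't match Iometer's numbers exactly, but it illustrates the idea:

```python
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

PATH = "testfile.bin"     # test file from the first sketch, assumed to already exist
FILE_SIZE = 1 << 30
BLOCK = 4096
QUEUE_DEPTH = 4           # writes kept in flight; threads stand in for Iometer's queue depth
WRITES_PER_WORKER = 250

def worker():
    buf = os.urandom(BLOCK)
    lat = []
    with open(PATH, "r+b", buffering=0) as f:  # each worker uses its own file handle
        for _ in range(WRITES_PER_WORKER):
            f.seek(random.randrange(0, FILE_SIZE // BLOCK) * BLOCK)
            t0 = time.perf_counter()
            f.write(buf)
            os.fsync(f.fileno())
            lat.append(time.perf_counter() - t0)
    return lat

with ThreadPoolExecutor(max_workers=QUEUE_DEPTH) as pool:
    futures = [pool.submit(worker) for _ in range(QUEUE_DEPTH)]
    latencies = [l for fut in futures for l in fut.result()]

print("QD%d: avg %.2f ms, max %.2f ms"
      % (QUEUE_DEPTH, 1000 * sum(latencies) / len(latencies), 1000 * max(latencies)))
```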

I did look at read performance, and while max latency was a problem (peaking at 250ms), it was a fairly rare case; average read latency was more than respectable and comparable to the SLC drives. This appears to be a write issue. Let's see if we can make it manifest itself in some real-world tests.

Comments

  • Mocib - Thursday, October 9, 2008 - link

    Good stuff, but why isn't anyone talking about ioXtreme, the PCI-E SSD drive from Fusion-IO? It baffles me just how little talk there is about ioXtreme, and the ioDrive solution in general.
  • Shadowmaster625 - Thursday, October 9, 2008 - link

    I think the Fusion-IO is great as a concept. But what we really need is for Intel and/or AMD to start thinking intelligently about SSDs.

    AMD and Intel need to agree on a standard for an integrated SSD controller. And then create a new open standard for a Flash SSD DIMM socket.

    Then I could buy a 32 or 64 GB SSD DIMM and plug it into a socket next to my RAM, and have a SUPER-FAST hard drive. Imagine an SSD DIMM that costs $50 and puts out even better numbers than the Fusion-IO! With economies of scale, it would only cost a few dollars per CPU and a few dollars more for the motherboard. But the performance would shatter the current paradigm.

    The cost of the DIMMs would be low because there would be no expensive controller on the module, like there is now with flash SSDs. And that is how it should be! There is NO need for a controller on a memory module! How we ended up taking this convoluted route baffles me. It is a fatally flawed design that is always going to be bottlenecked by the SATA interface, no matter how fast it is. The SSD MUST have a direct link to the CPU in order to unleash its true performance potential.

    This would increase performance so much that if VIA did this with their Nano CPU, they would have an end product that outperforms even Nehalem in real-world everyday PC usage. If you don't believe me, you need to check out the Fusion-IO. With SSD controller integration, you can have Fusion-IO-level performance for dirt cheap.

    If you understand what I am talking about here, and can see that this is truly the way to go with SSDs, then you need to help get the word to AMD and Intel. Whoever does it first is going to make a killing. I'd prefer it to be AMD at this point but it just needs to get done.
  • ProDigit - Tuesday, October 7, 2008 - link

    Concerning the Vista boot time, I think it'd make more sense to express that in seconds rather than MB/s.
    I'd rather have a "Windows boots in 38 seconds" article than a "Windows boots at 51MB/s speeds" one. That'd be totally useless to me.

    Also, I had hoped for entry-level SSDs, replacements for mini-notebook drives, more in the category of sub-$150 drives.
    On an XP machine, 32GB is more than enough for a mini-notebook (8GB has been done before). Mini-notebooks cost about $500, and cheap ones below $300. I, like many out there, am not willing to spend $500 on an SSD when the machine costs the same or less.

    I had hoped maybe a slightly lower-performance 40GB SSD could be sold for $149, which is the max price for SSDs for mini-notebooks.
    For laptops and normal notebooks, drives in the $200-250 range would be enough for 64-80GB. I don't agree with the '300-400' region being good for SSDs. Prices are still waaay too high!
    Of course we're paying for a lot of R&D right now; prices should drop a year from now. Notebooks with XP should do with drives starting from 64GB, mini-notebooks with drives from 32-40GB, and for desktops 160GB is more than enough. In fact, desktops usually have multiple hard drives, and an SSD is mainly good for netbooks for its faster speeds and lower power consumption.
    If you want to benefit from the speed on a desktop, a 60-80GB drive will more than do, since only Windows, office applications, anti-virus and personal programs like CorelDRAW, Photoshop, or games need to be on the SSD.
    Downloaded things, movies, MP3 files, all those things that take up space might as well be saved on an external/internal second HD.

    Besides, if you can handle the slightly higher game load times on conventional HDs, many older games already run fine (over 30fps) at full detail and 1900x??? resolution.
    Installing older games on an SSD doesn't really benefit anyone, apart from the slightly lower load times.

    Seeing that, I'd say for the server market the highest speed and largest disk space matter, and occasionally lowest power consumption too.
    => highest-priced SSDs: X > $1000, X > 164GB/SSD

    For the desktop, high to highest speed matters, with less focus on disk space and power consumption.
    => normally priced SSDs: $599 > X > $250, X > 80GB/SSD

    For notebooks, high speed and lowest power consumption matter, with smaller capacity as compensation for price.
    => normally priced SSDs: $399 > X > $175, X > 60GB/SSD

    For the mini-notebook, normal speed matters, with more focus on lower power consumption and the lowest pricing!
    => low-powered small SSDs: $199 > X > $75, X > 32GB/SSD
  • gemsurf - Sunday, October 5, 2008 - link

    Just in case anyone hasn't noticed, these are showing up for sale all over the net in the $625 to $750 range. Using Live Search, I bought one from teckwave on eBay yesterday for $481.80 after the Live Search cashback from Microsoft.

    BTW, does JMicron do anything right? It seems I had AHCI/RAID issues on the 965-series Intel chipsets a few years back with JMicron controllers too!
  • Shadowmaster625 - Wednesday, September 24, 2008 - link

    Obviously Intel has greater resources than you guys. No doubt they threw a large number of bodies into write optimizations.

    But it isn't too hard to figure out what they did. I'm assuming that when the controller is free from reads or writes, that's when it takes the time to actually go and erase a block. The controller probably adds up all the pages that are flagged for erasure, and when it has enough to fill an entire block, then it goes and erases and writes that block.

    Assuming 4KB pages and 512KB blocks (~150,000 blocks per 80GB device) what Intel must be doing is just writing each page wherever they could shove it. And erasing one block while writing to all those other blocks. (With that many blocks you could do a lot of writing without ever having to wait for one to erase.) And I would go ahead and have the controller acknowledge the data was written once it is all in the buffer. That would free up the drive as far as Windows is concerned.
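
    The scheme described above can be sketched as a toy log-structured allocator: new data is appended to whatever erased page is next, the old copy is merely flagged stale, and whole blocks are erased later in the background. Everything below is illustrative guesswork (the block geometry is just the assumption above), not Intel's actual firmware, and it ignores garbage collection of partially stale blocks and running out of clean blocks:

```python
PAGES_PER_BLOCK = 128  # e.g. 4KB pages in a 512KB block, per the assumption above

class ToyFTL:
    """Toy log-structured flash translation layer (pure guesswork, not Intel's firmware)."""

    def __init__(self, num_blocks):
        self.num_blocks = num_blocks
        self.mapping = {}                # logical page -> (block, page) where it currently lives
        self.stale = [0] * num_blocks    # count of invalidated pages per block
        self.block, self.page = 0, 0     # current append point

    def write(self, logical_page):
        # The old copy is merely flagged stale; nothing gets erased on the write path.
        if logical_page in self.mapping:
            self.stale[self.mapping[logical_page][0]] += 1
        # New data goes to wherever the append point happens to be.
        self.mapping[logical_page] = (self.block, self.page)
        self.page += 1
        if self.page == PAGES_PER_BLOCK:  # block full, move on to the next one
            self.block, self.page = (self.block + 1) % self.num_blocks, 0

    def idle_erase(self):
        # Runs in the background when the drive is idle, not while the host waits on a write.
        erased = [b for b, n in enumerate(self.stale) if n == PAGES_PER_BLOCK]
        for b in erased:
            self.stale[b] = 0            # block is clean and reusable again
        return erased
```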

    If I were designing one of these devices, I would definitely demand as much SRAM as possible. I don't buy that line of bull about Intel not using the SRAM for temporary data storage. That makes no sense. You can take steps to ensure the data is protected, but making use of SRAM is key to greater performance in the future. That is what allows you to put off erasing and writing blocks until the drive is idle. Even an SRAM storage time limit of just one second would add a lot of performance, and the risk of data loss would be negligible.

  • Shadowmaster625 - Wednesday, September 24, 2008 - link

    OCZ OCZSSD2-1S32G

    32GB SLC, currently $395

    The 64GB version is more expensive than the Intel right now, but with the money they've already raked in, who really thinks they won't be able to match Intel performance- or price-wise? Of course they will. So how can this possibly be that great of a thing? So it's a few extra GB. Gimme a break, I would rather take the 32GB and simply juggle stuff around onto my media drive every now and then. Did you know you can simply copy an entire folder from the Program Files directory over to your other drive and then put it back when you want to use it? I do that with games all the time. It takes all of 2 minutes... Why pay hundreds of dollars extra to avoid having to do that? It's just a background task anyway. That's how 32GB has been enough space for my system drive for a long time now. (Well, that and not using Vista.) At any rate, this is hardly a game changer. The other MLC vendors will address the latency issue.
  • cfp - Tuesday, September 16, 2008 - link

    Have you seen any UK/Euro shops with these available (for preorder even?) yet? There are many results on the US Froogle (though none of them seem to have stock or availability dates) but still none on the UK one.
  • Per Hansson - Friday, September 12, 2008 - link

    What about the Mtron SSDs?
    You said at the beginning of the article that they use a different controller than Samsung's, but you never benchmarked them?
  • 7Enigma - Friday, September 19, 2008 - link

    I would like to know the answer to this as well...
  • NeoZGeo - Thursday, September 11, 2008 - link

    The whole review is based on Intel vs. the OCZ Core. We all know the OCZ Core had the issues you mentioned. However, what I would like to see is other drives benchmarked against the OCZ Core drive, or even the Core II drive. Supposedly the controller has different firmware on the Core 2, according to some guys from OCZ, and I find it a bit biased to use a different-spec item to represent all the other drives on the market.
