The Return of the JMicron-based SSD

With the SSD slowdown addressed, it’s time to talk about new products, starting with the infamous JMicron JMF602-based SSDs.

For starters, a second revision of the JMF602 controller came out last year - the JMF602B. This controller had twice the cache of the original JMF602A and thus didn’t pause/stutter as often.

The JMicron JMF602B is the controller found in G.Skill’s line of SSDs, OCZ’s Core V2, the OCZ Solid and every other SSD in the table below:

JMicron JMF602B Based SSDs
G.Skill FM-25S2
G.Skill Titan
OCZ Apex
OCZ Core V2
OCZ Solid
Patriot Warp
SuperTalent MasterDrive


All I need to do is point to our trusty Iometer test to show that the issues I complained about with the original JMicron drives apply here as well:

Iometer 4KB Random Writes, IOqueue=3, 8GB sector space

Drive | IOs per Second | MB/s | Average Latency | Maximum Latency
JMF602B MLC Drive | 5.61 | 0.02 | 532.2 ms | 2042 ms


On average it takes nearly half a second to complete a random 4KB write request to one of these drives. No thanks.
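
As a quick sanity check, the IOPS, throughput and average latency figures in that table are mutually consistent: with three IOs outstanding at all times, average latency works out to roughly the queue depth divided by IOPS, and throughput is just IOPS multiplied by the 4KB transfer size. Here is the back-of-the-envelope arithmetic as a tiny Python sketch (my own calculation, not output from the test itself):

QUEUE_DEPTH = 3      # the IOqueue=3 setting used in the test
IO_SIZE_KB = 4       # 4KB random writes

iops = 5.61          # measured IOs per second for the JMF602B MLC drive

avg_latency_ms = QUEUE_DEPTH / iops * 1000     # ~535 ms vs. the 532.2 ms measured
throughput_mbs = iops * IO_SIZE_KB / 1024      # ~0.02 MB/s, exactly as reported

print(f"~{avg_latency_ms:.0f} ms average latency, ~{throughput_mbs:.2f} MB/s")

In other words, 532 ms isn’t a fluke in the data; it’s simply what 5.61 writes per second looks like when three requests are always waiting on the drive.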

The single-chip JMF602B-based drives are now being sold as value solutions. You can argue that the pausing and stuttering are acceptable for a very light workload in a single-tasking environment, but try doing anything while installing an application or while anti-virus software runs in the background and you won’t be pleased with these drives. Save your money; get a better drive.

The next step up from the JMF602B-based drives is a design that uses two JMF602B controllers. Confused? Allow me to explain. JMicron’s next SSD controller design won’t be ready anytime soon, and for some vendors shipping a mediocre product beats shipping no product at all, so they took two JMF602B controllers and put them in RAID behind yet another JMicron chip.


Two JMF602B controllers and a JMicron RAID controller

The problem, my dear friends, is that the worst-case latency penalty, at best, gets cut in half with this approach. You’ll remember that the JMF602-based drives could, under normal load, exhibit a write latency of anywhere from 0.5 to 2 seconds. Put two controllers together and you get a worst-case write latency of about one second under load, or half a second with only a single app running. To test the theory I ran the now infamous 4KB random write Iometer script on these “new” drives:

Iometer 4KB Random Writes, IOqueue=3, 8GB sector space

Drive | IOs per Second | MB/s | Average Latency | Maximum Latency
JMF602B MLC Drive | 5.61 | 0.02 | 532.2 ms | 2042 ms
Dual JMF602B MLC Controller Drive | 8.18 | 0.03 | 366.1 ms | 1168.2 ms


To some irate SSD vendors these may just be numbers, but let’s put a little thought into what we’re seeing here, shall we? These Iometer results are saying that occasionally, when you go to write a 4KB file (for example, loading a website, sending an IM and having the conversation logged, or just saving changes to a Word doc), the drive will take over a second to respond.

I don’t care what sort of drive you’re using - 2.5”, 3.5”, 5400RPM or 7200RPM - if you hit a one-second pause you notice it, and that sort of performance degradation is not acceptable. These tests are admittedly multitasking oriented, but even if you run a single IO against the drive you’ll find that the maximum latency is still half a second.

The average column tells an even more bothersome story. Not only is the worst case a 1168 ms write; on average you’re looking at over a third of a second just to write 4KB.
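
To make it concrete why doubling up on controllers can’t make the stalls disappear, here is a toy closed-queue model in Python. It is entirely my own construction with made-up service times (the 800 ms stall, the 20% stall rate and the 2 ms fast path are illustrative assumptions, not measurements); the point is the ratio between the one- and two-controller runs, not the absolute numbers. The model keeps three writes outstanding, like the Iometer test, and stripes each 4KB write to one of the controllers. A write that lands on a controller in the middle of a stall still waits out the whole stall, so the average improves by noticeably less than 2x and the worst case stays on the order of a full stall or more:

import heapq, random

def simulate(n_controllers, n_requests=20000, queue_depth=3, seed=1):
    rng = random.Random(seed)

    def service_ms():
        # Assumed per-write cost on one controller: usually quick, but a
        # fraction of writes trigger a long erase/read-modify-write stall.
        return 800.0 if rng.random() < 0.2 else 2.0

    free_at = [0.0] * n_controllers   # when each controller clears its backlog
    completions = []                  # min-heap of (finish_time, issue_time)

    def issue(t):
        c = rng.randrange(n_controllers)   # random LBA -> one of the stripes
        start = max(t, free_at[c])         # wait behind that controller only
        finish = start + service_ms()
        free_at[c] = finish
        heapq.heappush(completions, (finish, t))

    for _ in range(queue_depth):           # keep three IOs outstanding, like Iometer
        issue(0.0)

    latencies = []
    for _ in range(n_requests):
        finish, issued = heapq.heappop(completions)
        latencies.append(finish - issued)
        issue(finish)                      # immediately replace the completed IO

    return sum(latencies) / len(latencies), max(latencies)

for n in (1, 2):
    avg, worst = simulate(n)
    print(f"{n} controller(s): average ~{avg:.0f} ms, worst ~{worst:.0f} ms")

That is roughly the pattern in the measured numbers above: going from one controller to two raised IOPS by about 1.46x, cut the average latency by about the same factor, and still left a worst case well over a second.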

The G.Skill Titan has recently garnered positive reviews for being a fast, affordable SSD. Many argued that it was even on the level of the Intel X25-M. I’m sorry to say it, folks: that’s just plain wrong.


One of the most popular dual JMF602B drives

If you focus exclusively on peak transfer rates then these drives work just fine. You’ll find that, unless you’re running a Blu-ray rip server, you don’t spend most of your time copying multi-GB files to and from the drive. Instead, on a normal desktop, the majority of your disk accesses will be small file reads and writes and these drives can’t cut it there.

Some vendors have put out optimization guides designed to minimize stuttering with these JMF602B-based drives. The guides generally do whatever they can to limit the number and frequency of small file writes to your drive (e.g. disabling search indexing, storing your temporary internet files on a RAM drive). While such tweaks do reduce stuttering on these drives, they don’t solve the problem - they merely shift it elsewhere. The moment an application other than Vista or your web browser goes to write to your SSD, you’ll pay the small file write penalty once more. Don’t settle.

But what option is there? Is Intel’s X25-M the only drive on the market worth recommending? What if you can’t afford to spend $390 on 80GB? Is there no cheaper option?

Comments

  • punjabiplaya - Wednesday, March 18, 2009 - link

    Great info. I'm looking to get an SSD but was put off by all these setbacks. Why should I put away my HDDs and get something a million times more expensive that stutters?
    This article is why I visit AT first.
  • Hellfire26 - Wednesday, March 18, 2009 - link

    Anand, when you filled up the drives to simulate a full drive, did you also write to the extended area that is reserved? If you didn't, wouldn't the Intel SLC drive (as an example) not show as much of a performance drop, versus the MLC drive? As you stated, Intel has reserved more flash memory on the SLC drive, above the stated SSD capacity.

    I also agree with GourdFreeMan, that the physical block size needs to be reduced. Due to the constant erasing of blocks, the Trim command is going to reduce the life of the drive. Of course, drive makers could increase the size of the cache and delay using the Trim command until the number of blocks to be erased equals the cache available. This would more efficiently rearrange the valid data still present in the blocks that are being erased (less writes). Microsoft would have to design the Trim command so it would know how much cache was available on the drive, and drive makers would have to specifically reserve a portion of their cache for use by the Trim command.

    I also like Basilisk's comment about increasing the cluster size, although if you increase it too much, you are likely to be wasting space and increasing overhead. Surely, even if MS only doubles the cluster size for NTFS partitions to 8KB, write cycles to SSDs would be reduced. Also, there is the difference between 32-bit and 64-bit operating systems to consider. However, I don't have the knowledge to say whether Microsoft can make these changes without running into serious problems with other aspects of the operating system.
  • Anand Lal Shimpi - Wednesday, March 18, 2009 - link

    I only wrote to the LBAs reported to the OS. So on the 80GB Intel drive that's from 0 - 74.5GB.

    I didn't test the X25-E as extensively as the rest of the drives so I didn't look at performance degradation as closely just because I was running out of time and the X25-E is sooo much more expensive. I may do a standalone look at it in the near future.

    Take care,
    Anand
  • gss4w - Wednesday, March 18, 2009 - link

    Has anyone at Anandtech talked to Microsoft about when the "Trim" command will be supported in Windows 7? Also, it would be great if you could include some numbers from the Windows 7 beta when you do a follow-up.

    One reason I ask is that I searched for "Windows 7 ssd trim" and I saw a presentation from WinHEC that made it sound like support for the trim command would be a requirement for SSD drives to meet the Windows 7 logo requirements. I would think if this were the case then Windows 7 would have support for trim. However, this article made it sound like support for Trim might not be included when Windows 7 is initially released, but would be added later.

  • ryedizzel - Thursday, March 19, 2009 - link

    I think it is obvious that Windows 7 will support TRIM. The bigger question this article points out is whether or not the current SSDs will be upgradeable via firmware - which is more important for consumers wanting to buy one now.
  • Martimus - Wednesday, March 18, 2009 - link

    It took me an hour to read the whole thing, but I really enjoyed it. It reminded me of the time I spent testing circuitry and doing root cause analysis.
  • alpha754293 - Wednesday, March 18, 2009 - link

    I think it would be interesting if you were able to test the drives on the "desktop/laptop/consumer" front by writing an 8 GiB file using 4 kiB block sizes, etc. for the desktop pattern, and also to test the drives with larger file sizes and larger block sizes for the server/workstation pattern as well.

    You present some very very good arguments and points, and I found your article to be thoroughly researched and well put.

    So I do have to commend you on that. You did an excellent job. It is thoroughly enjoyable to read.

    I'm currently looking at a 4x 256 GB Samsung MLC on Solaris 10/ZFS for apps/OS (for PXE boot), and this does a lot of the testing; but I would be interested to see how it would handle more server-type workloads.
  • korbendallas - Wednesday, March 18, 2009 - link

    If the implementation of the Trim command is as you described here, it would actually kind of suck.

    "The third step was deleting the original 4KB text file. Since our drive now supports TRIM, when this deletion request comes down the drive will actually read the entire block, remove the first LBA and write the new block back to the flash:"

    First of all, it would create a new phenomenon called Erase Amplification. This would negatively impact the lifetime of a drive.

    Secondly, you now have worse delete performance.


    Basically, an SSD 4kB block can be in 3 different states: erased, data, garbage. A block enters the garbage state when a block is "overwritten" or the Trim command marks the contents as invalid.

    The way I would imagine it working, marking block content as invalid is all the Trim command does.

    Instead the drive will spend idle time finding the 512kB pages with the most garbage blocks. Once such a page is found, all the data blocks from that page would be copied to another page, and the page would be erased. Doing it in this way maximizes the number of garbage blocks being converted to erased.
  • alpha754293 - Wednesday, March 18, 2009 - link

    BTW...you might be able to simulate the drive as well using Cygwin where you go to the drive and run the following:

    $ dd if=/dev/random of=testfile bs=1024k count=76288

    I'm sure that you can come up with fancier shell scripts and stuff that uses the random number generator for the offsets (and if you really want it to work well, partition it so that when it does it, it takes up the entire initial 74.5 GB partition, and when you're done "dirtying" the data using dd and offset in a random pattern, grow the partition to take up the entire disk again.)

    Just as a suggestion for future reference.

    I use parts of that to some (varying) degree for when I do my file/disk I/O subsystem tests.
  • nubie - Wednesday, March 18, 2009 - link

    I should think that most "performance" laptops will come with a Vertex drive in the near future.

    Finally a performance SSD that comes near mainstream pricing.

    Things are looking up, if more manufacturers get their heads out of the sand we should see prices drop as competition finally starts breeding excellence.
