Original Link: http://www.anandtech.com/show/8013/jmicron-jmf667h-reference-design-review



Back in 2008 and 2009, JMicron was a relatively big name in the SSD industry. The industry as a whole was a small niche compared to what it is today and you could nearly count the players on the fingers of one hand. Neither SandForce nor Marvell were in the game yet, so SSD OEMs like OCZ and Patriot that didn't have their own controller technology mainly relied on JMicron for controllers. There were a couple of other options as well, such as Indilinx and Samsung, but the reason many OEMs found JMicron so alluring was the competitive pricing it offered. Since SSDs were very expensive in general, having a cheaper product than your competition meant a lot. For example, a 64GB JMicron-based OCZ Core was around $240 while Intel asked $390 for their 80GB X25-M, so the advantage JMicron provided was much more than just a few bucks.

But the pricing had its dark side: to put it bluntly, the performance was awful. The JMF602 was so bad that the drives would pause for several seconds before becoming responsive again under normal desktop use, which was completely unacceptable given that users were paying several dollars per gigabyte. Obviously, that left a bad taste in everyone's mouth. The OEMs were not happy because reviewers were giving them a hard time, and the buyers were not exactly satisfied either. During the worst times we had major difficulties getting any JMicron-based SSDs for review because Anand had been frank and said that the drives should never have hit retail in the first place. It was hard for the OEMs to blame JMicron because ultimately it was their decision to utilize JMicron's controllers; they had validated the drives and found them good enough for retail.

Two JMF602B controllers in RAID 0 -- the effort to make it slightly less bad

When SandForce introduced its first generation controllers in late 2009, the game totally changed. Similar to JMicron, SandForce did not sell any SSDs; it sold the controller, firmware, and software as a single stack to the SSD manufacturers, who would then do the assembly. The difference was that SandForce's controller could challenge Intel and provide a user experience that was worth the money. Unsurprisingly, many SSD OEMs decided to ditch JMicron and go with SandForce because SandForce simply had a better product. As a result, JMicron started to fade away from the market. Sure, there were still OEMs that used its controllers in some of their products, but the days of JMicron being the go-to company for controllers were over.

JMicron tried to get back into the game, and it did release new and better SATA 3Gbps controllers (such as the JMF618 found in some Kingston and Toshiba SSDs), but SandForce had quickly taken the lion's share of the market and the industry as a whole was already preparing for SATA 6Gbps. JMicron's roadmaps showed that a SATA 6Gbps JMF66x series was planned for the second half of 2010, which made sense given that Intel was integrating SATA 6Gbps into its 6-series chipsets in early 2011. But for some reason, the JMF66x never made it to the market on time.

Now, over three years later, JMicron is looking to make a comeback with its new flagship SATA 6Gbps controller, the JMF667H.

 



The JMF667H: JMicron's Long-Awaited SATA 6Gbps Controller

The JMF667H is JMicron's first controller to be used in mainstream client SSDs since the JMF618 in 2010. As one would expect, JMicron is betting a lot on the controller because it is its best, and possibly only, chance to get back into the market.

Typical of lower-end controllers, the JMF667H is a 4-channel design. Limiting the channel count allows for a smaller die, which in turn reduces manufacturing cost. Marvell has been doing the same for a while now by offering a 4-channel "Lite" version at a lower cost, since caching and mSATA SSDs in particular cannot take full advantage of eight channels anyway.

Furthermore, the JMF667H supports 8-way interleaving per channel, which means that up to eight requests can be interleaved in a single channel. Interleaving is the reason why higher-capacity SSDs perform better: instead of waiting for each NAND die/plane to respond before issuing a new request, the controller can issue requests to up to eight dies/planes. There is still some overhead compared to having more individual channels because each request occupies the channel bus for a moment, but interleaving allows for the most efficient utilization of the available channels and NAND.
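To see why interleaving helps, here is a toy Python model of one channel. The timing numbers are made up for illustration (they are not JMF667H or NAND datasheet figures): each request briefly occupies the shared channel bus for the transfer, then the target die is busy programming internally while the bus is free to serve other dies.

```python
# Toy model of NAND channel interleaving (illustrative timings, not real specs).

def time_to_program(pages, dies, transfer_us=10, program_us=1300):
    """Total time (us) to program `pages` pages on one channel with `dies` dies."""
    t = 0.0                      # time at which the channel bus becomes free
    die_free = [0.0] * dies      # time at which each die finishes programming
    for i in range(pages):
        die = i % dies           # round-robin across the dies on this channel
        start = max(t, die_free[die])   # wait for both the bus and the target die
        t = start + transfer_us         # bus occupied during the data transfer
        die_free[die] = t + program_us  # die stays busy programming afterwards
    return max(t, max(die_free))

one_way = time_to_program(64, 1)
eight_way = time_to_program(64, 8)
print(f"1-way: {one_way:.0f} us, 8-way: {eight_way:.0f} us, "
      f"speedup: {one_way / eight_way:.1f}x")
```

With these example timings, 8-way interleaving approaches an 8x speedup because the long internal program time of one die is hidden behind transfers to the other seven; the small residual gap versus eight independent channels is the shared bus time the article mentions.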

The JMF667H supports all the latest SLC and MLC NAND available, including IMFT's 20nm 128Gbit MLC NAND and Toshiba's A19nm NAND. However, the maximum capacity is limited to 256GB, which is a slight drawback. 128GB and 256GB are still definitely the most popular capacities, so in that sense the JMF667H should cover the majority of the market, but bigger SSDs are constantly gaining popularity as prices come down. JMicron told me that this is something they'll definitely take into account in future product planning, and I would expect the next generation controller from JMicron to support at least 512GB of NAND.

Our first experience with the JMF667H was with the WD Black2 dual-drive earlier this year. Back then I wrote:

"Furthermore, the SSD in the Black2 is only mediocre, although I must say I wasn't expecting much in the first place. There must be a reason why none of the big OEMs have adopted JMicron's controllers and I think performance is one of the top reasons."

JMicron understood our concern. The SSD market is more competitive than ever and mediocre performance is no longer acceptable. JMicron went back to the drawing board and started working on a new firmware focused on performance consistency, as one of my biggest criticisms was that the IOPS would frequently drop to zero in the WD Black2.

After months of development and validation, the firmware is finally ready for the public. It is so new that even the OEMs do not have it yet, and thus JMicron sent us reference design drives. In addition, OEMs tend to make their own customizations and tweaks, whereas the reference designs we have are based on the original JMicron firmware.

JMicron sent us four drives in total: two 128GB and two 256GB. The difference between the drives is their NAND configuration: half of the drives use Micron's 20nm 128Gbit MLC NAND while the other half use Toshiba's A19nm 64Gbit MLC NAND. These are the NAND configurations you would expect to see from the OEMs -- IMFT's 20nm 128Gbit NAND is currently the cheapest option on the market, while Toshiba's A19nm 64Gbit NAND provides higher performance. The 256GB model with Toshiba NAND we received is actually over-provisioned down to 240GB for better performance, but obviously this is up to the OEM to decide.

JMicron JMF667H Reference Design Specifications
NAND                 Toshiba A19nm 64Gbit MLC    IMFT 20nm 128Gbit MLC
Capacity             128GB        256GB          128GB        256GB
Cache (DDR3-1600)    128MB        256MB          128MB        256MB
Sequential Read      540MB/s      522MB/s        539MB/s      520MB/s
Sequential Write     328MB/s      456MB/s        152MB/s      309MB/s
4KB Random Read      81K IOPS     83K IOPS       65K IOPS     77K IOPS
4KB Random Write     76K IOPS     78K IOPS       37K IOPS     74K IOPS
Power Consumption    2mW / 30mW / 3.4W (DevSLP / Slumber / Maximum)
Encryption           No

The table above summarizes the performance differences between IMFT's 128Gbit NAND and Toshiba's 64Gbit NAND pretty well. In the best-case scenarios, the drives with Toshiba NAND provide more than twice the throughput at the same capacity. This is mostly due to the difference in capacity per die: as we have covered before, the capacity per die has a tremendous impact on performance since it is directly related to parallelism.

NAND Configurations
NAND                        Toshiba A19nm 64Gbit MLC    IMFT 20nm 128Gbit MLC
Capacity                    128GB       256GB           128GB       256GB
# of NAND Packages          16          16              8           8
NAND Package Configuration  1x8GB       2x8GB           1x16GB      2x16GB
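The configurations above translate directly into die counts, which is where the parallelism argument comes from. A quick sketch of the arithmetic (the four-channel split is simply the die count divided by the JMF667H's four channels):

```python
# Die counts for the configurations above: capacity divided by per-die capacity.
# More dies per channel means more interleaving targets for the controller.

def die_count(capacity_gb, die_gbit):
    return capacity_gb * 8 // die_gbit  # GB -> Gbit, then divide by the die size

configs = {
    "128GB Toshiba 64Gbit": die_count(128, 64),
    "256GB Toshiba 64Gbit": die_count(256, 64),
    "128GB IMFT 128Gbit":   die_count(128, 128),
    "256GB IMFT 128Gbit":   die_count(256, 128),
}

for name, dies in configs.items():
    print(f"{name}: {dies} dies ({dies // 4} per channel)")
```

At 128GB, the Toshiba drive has 16 dies to spread IOs across versus only 8 for the IMFT drive, which matches the sequential write gap in the spec table.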

The JMF667H supports DevSleep and other power saving states (HIPM+DIPM) but it does not provide any form of encryption support. Lack of support for TCG Opal 2.0 and IEEE-1667 is definitely a shortcoming and I would really like to see more manufacturers pay attention to this. Hopefully JMicron's future designs will address this.

One of the biggest issues JMicron had with its first controllers was the lack of cache: when the internal cache filled up, the drive would start to stutter. Fortunately the later designs brought support for external DRAM, which is the case with the JMF667H as well. The controller supports up to 512MB (4Gb) of DDR3, which is plenty of cache for a 256GB drive to store the NAND mapping table and ensure smooth operation. Our 128GB samples use DRAM from Elite Semiconductor Memory Technology Inc. while the 256GB samples shipped with Nanya's DRAM, although this is again up to the OEM to decide.

JMicron's biggest advantage is obviously pricing. From what I have heard, the JMF667H is around $4 to $8 cheaper per unit compared to SandForce's offerings. Marvell is a bit cheaper than SandForce, and there the price delta is about $4 in favor of JMicron, although I must note that Marvell doesn't provide any firmware whereas JMicron and SandForce do. That is actually quite a lot given that most 120/128GB SSDs retail for $75-$90 nowadays, and that $4 can be the key to making the drive profitable. Of course, ultimately the pricing depends on the customer and quantity, so the above figures should be taken with a grain of salt, but they still provide some guidance as to where the JMF667H stands compared to the competition.



Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we do not have consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure erased SSD with sequential data to ensure that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.
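The measurement loop itself is simple. Below is a minimal Python sketch of the idea, with loud caveats: it writes to an ordinary file at queue depth 1, whereas the real test hits the raw device across all LBAs at QD=32 for roughly 2000 seconds, so the numbers it produces are not comparable to our results.

```python
# Illustrative sketch of the consistency test's per-second IOPS recording.
# Assumes `path` is an existing file of at least `size_bytes` bytes.
import os
import random
import time

def measure_iops(path, size_bytes, duration_s, io_size=4096):
    """4KB random writes across `path`; returns a list of per-second IOPS."""
    buf = os.urandom(io_size)              # incompressible data
    per_second = []
    ops = 0
    t0 = time.monotonic()
    with open(path, "r+b", buffering=0) as f:
        while time.monotonic() - t0 < duration_s:
            offset = random.randrange(size_bytes // io_size) * io_size
            f.seek(offset)
            f.write(buf)
            os.fsync(f.fileno())           # push each write down to the device
            ops += 1
            sec = int(time.monotonic() - t0)
            if sec > len(per_second):      # crossed a one-second boundary
                per_second.append(ops)
                ops = 0
    return per_second
```

Plotting the returned list per second is what produces the consistency graphs below: a healthy drive shows a thick, flat band, while garbage collection shows up as deep downward spikes.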

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.

Each of the three graphs has its own purpose. The first one covers the whole duration of the test in log scale. The second and third ones zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale for better visualization of the differences between drives.

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Graph: IO consistency, full test duration, log scale -- JMicron JMF667H (Toshiba NAND), JMicron JMF667H (IMFT NAND), WD Black2, Samsung SSD 840 EVO mSATA, Crucial M550; Default and 25% OP]

Compared to the WD Black2, there has certainly been improvement in the IO consistency department. Looking at the model equipped with IMFT's NAND (similar to the Black2), the line at ~5,000 IOPS is now much thicker, meaning that more IOs are happening in the 5K range instead of below it. Another big plus is that the IOPS no longer drops to zero, which is something the Black2 did constantly. The minimum performance is still on the order of a few hundred IOPS, which compared to many competitors is not exactly great, but it is a start.

The model with Toshiba NAND performs substantially better. Even the 128GB model with the same 7% over-provisioning can keep the IOPS at around 3,000 at the lowest. The NAND can certainly play a part in steady-state performance and IO consistency, as IMFT's 20nm 128Gbit NAND has twice the pages per block (512 vs 256) compared to Toshiba's A19nm 64Gbit NAND. More pages means that the erase time for each block is longer, which means the controller may have to wait longer for an empty block. Remember that the drives are effectively doing read-modify-write during steady state, and the longer it takes to erase a block, the lower the performance will be.

It is likely that the drops in performance are caused by this, but I am wondering whether it is just a matter of poor optimization or something else. Technically it should be possible to achieve a steady line regardless of the NAND as long as the firmware is optimized for the specific NAND. In the end, block erase times can be predicted pretty well, so with the right algorithms it should be possible to drop the maximum IOPS a bit to achieve more consistent performance. For a low-end client drive this is not that significant, as the drives are very unlikely to be put under such a heavy workload, but I think there is still a chance to improve the IO consistency, especially in the model with IMFT NAND.

Fortunately there is one easy way to increase IO consistency: over-provisioning. The IO consistency scales pretty well with over-provisioning, although there are still dips in performance that I would not like to see. The 240GB Toshiba model in particular provides excellent consistency with 25% over-provisioning and beats, for instance, Intel's SSD 530 and SanDisk's Extreme II, which is something I certainly did not expect.

[Graph: IO consistency, steady-state zoom (t=1400s), log scale -- JMicron JMF667H (Toshiba NAND), JMicron JMF667H (IMFT NAND), WD Black2, Samsung SSD 840 EVO mSATA, Crucial M550; Default and 25% OP]

 

[Graph: IO consistency, steady-state zoom (t=1400s), linear scale -- JMicron JMF667H (Toshiba NAND), JMicron JMF667H (IMFT NAND), WD Black2, Samsung SSD 840 EVO mSATA, Crucial M550; Default and 25% OP]

TRIM Validation

To test TRIM, I first filled all user-accessible LBAs with sequential data and then tortured the drive with 4KB random writes (100% LBA, QD=32) for 30 minutes. After the torture I TRIM'ed the drive (quick format in Windows 7/8) and ran HD Tach to make sure TRIM is functional.

And it is. 



AnandTech Storage Bench 2013

Our Storage Bench 2013 focuses on worst-case multitasking and IO consistency. Similar to our earlier Storage Benches, the test is still application trace based -- we record all IO requests made to a test system, play them back on the drive we are testing, and run statistical analysis on the drive's responses. There are 49.8 million IO operations in total, with 1583.0GB of reads and 875.6GB of writes. I'm not including the full description of the test here for better readability, so make sure to read our Storage Bench 2013 introduction for the full details.
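Those totals give a quick sense of the workload's character. A couple of lines of arithmetic show the average IO size implied by the numbers above:

```python
# Average IO size implied by the Destroyer's totals (1583.0GB reads +
# 875.6GB writes over 49.8 million IOs). Uses decimal GB, as in the article.
total_bytes = (1583.0 + 875.6) * 1e9
io_count = 49.8e6
avg_io_kb = total_bytes / io_count / 1024
print(f"average IO size: {avg_io_kb:.1f} KB")
```

An average around 48KB means the trace is far from a pure small-block workload: it mixes large sequential transfers with the small random IOs that dominate the operation count.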

AnandTech Storage Bench 2013 - The Destroyer
Workload | Description | Applications Used
Photo Sync/Editing | Import images, edit, export | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox
Gaming | Download/install games, play games | Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite
Virtualization | Run/manage VM, use general apps inside VM | VirtualBox
General Productivity | Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware
Video Playback | Copy and watch movies | Windows 8
Application Development | Compile projects, check out code, download code samples | Visual Studio 2012

We are reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the drive's throughput while it was running the test workload, which can be a very good indication of overall performance. What average data rate doesn't do a good job of is taking into account the response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weight the latency of queued IOs. You'll note that this is a metric we have been reporting in our enterprise benchmarks for a while now; with the client tests maturing, the time was right for a little convergence.
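To make the distinction concrete, here is a hedged sketch of how the two metrics could be derived from a trace. The record format (bytes transferred, service time in microseconds) and the exact definitions are my assumptions for illustration, not AnandTech's actual tooling:

```python
# Illustrative computation of average data rate and average service time
# from a list of (bytes_transferred, service_time_us) records.

def destroyer_metrics(trace):
    total_bytes = sum(b for b, _ in trace)
    total_time_us = sum(t for _, t in trace)
    avg_data_rate = total_bytes / (total_time_us / 1e6) / 1e6   # MB/s
    avg_service_time = total_time_us / len(trace)               # us
    return avg_data_rate, avg_service_time

# Three 128KB IOs, one of them stuck behind a deep queue: the slow IO barely
# moves the byte total but dominates average service time, which is exactly
# why both metrics are worth reporting.
trace = [(131072, 500), (131072, 500), (131072, 9000)]
rate, svc = destroyer_metrics(trace)
print(f"{rate:.1f} MB/s, {svc:.0f} us")
```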

Storage Bench 2013 - The Destroyer (Data Rate)

While the JMF667H appears to be one of the slowest controllers we have tested, the situation isn't all that bad. The 256GB Crucial M550 and Plextor M6S are only a few MB/s faster, and bear in mind that those use more expensive 8-channel controllers from Marvell. When you look at it that way, the JMF667H is actually relatively competitive.

Unfortunately I did not have the time to process the trace of the 256GB drive with Toshiba NAND, but I do have its service time, and it looks a lot better. Bear in mind that this drive is over-provisioned down to 240GB and thus can't be compared directly with the 256GB drive with IMFT NAND, but compared to the other 240GB drives on the market, the JMF667H with Toshiba NAND certainly seems competitive.

Update 6/12: I was able to process the trace for the 240GB Toshiba drive, and its performance is twice that of the IMFT version. It's also a bit faster than the Phison-based Corsair Force LS, which is great news in terms of competition.

Storage Bench 2013 - The Destroyer (Service Time)



AnandTech Storage Bench 2011

Back in 2011 (which seems like so long ago now!), we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on peak IO performance and basic garbage collection routines. There is a lot of downloading and application installing that happens during the course of this test. Our thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives. The full description of the Heavy test can be found here, while the Light workload details are here.

Heavy Workload 2011 - Average Data Rate

Wow, this is something I didn't expect. With Toshiba NAND, the JMF667H can in fact challenge Samsung's 840 Pro and beats several other SSDs. With IMFT NAND the performance is a bit worse, but it's still average and similar to what Samsung's 840 EVO offers.

Light Workload 2011 - Average Data Rate



Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). We perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time.
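Since we report these 4KB results in MB/s while spec sheets (including the table earlier in this review) quote IOPS, the conversion is worth spelling out: at a fixed transfer size, it is a single multiplication.

```python
# Converting 4KB random IOPS to MB/s (decimal megabytes, 4096-byte IOs).

def iops_to_mbps(iops, io_size=4096):
    return iops * io_size / 1e6

# e.g. the 81K random-read IOPS the spec table lists for the 128GB Toshiba model:
print(f"{iops_to_mbps(81000):.1f} MB/s")
```

The same conversion works in reverse: divide an MB/s figure by 0.004096 to recover the IOPS at 4KB.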

Desktop Iometer - 4KB Random Read

Once again there is quite a big difference depending on which NAND is used. With Toshiba NAND, the JMF667H is on the same level as the high-end drives, but IMFT NAND drops it to the lower end. From what I have heard, Toshiba's NAND is faster in general, and the higher die count further boosts performance.

Desktop Iometer - 4KB Random Write

Desktop Iometer - 4KB Random Write (QD=32)

Random write performance is fairly average. I'm guessing the controller/firmware introduces some bottlenecks, because the performance isn't really affected by the NAND other than in the 128GB configuration.

Sequential Read/Write Speed

To measure sequential performance we run a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.

Desktop Iometer - 128KB Sequential Read

Sequential performance is also average. There is some variation especially in write performance depending on the NAND but even the IMFT version performs relatively well.

Desktop Iometer - 128KB Sequential Write

AS-SSD Incompressible Sequential Read/Write Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.

Incompressible Sequential Read Performance

Incompressible Sequential Write Performance



Performance vs. Transfer Size

ATTO is a useful tool for quickly benchmarking performance across various transfer sizes. You can get the complete data set in Bench. Read performance is very similar to the ADATA SP920, which isn't a terrible thing, but as you can see there are better SSDs. I'm surprised that there isn't really any boost from Toshiba NAND, although I'm guessing this is more of a controller bottleneck than a NAND one, since reading from NAND is much faster than writing in the first place.

As for write performance, it's not top notch at the bigger IO sizes, but the performance scales pretty well before reaching its limit. Here the NAND plays a major role, as the drives equipped with Toshiba's 64Gbit NAND perform much better than their IMFT counterparts.




Power Consumption

The JMF667H supports both DevSLP and slumber power states. Unfortunately we don't have the equipment to test DevSLP power (yet), but we do have the necessary equipment for slumber power testing. For some reason, the 128GB versions drew a lot more power than the 256GB Toshiba model; I tested the 128GB drives twice to rule out measurement error, but both times the results were similar. In any case, the slumber power of the 256GB Toshiba drive in particular is very impressive. Having a 4-channel design can certainly help cut down power consumption, and the JMF667H turns out to be more efficient than the latest Marvell silicon.

Power consumption under load is also among the lowest we have tested and the JMF667H stays below three watts at all times. 

The 256GB IMFT model wouldn't power on during power tests for some weird reason, so unfortunately I don't have power figures for it. It still functioned normally when plugged straight into the power supply, so I don't think there is anything wrong with the drive itself. I've had similar issues with some drives and it seems to be related to the extra resistance caused by the multimeter and additional wiring. 

SSD Slumber Power (HIPM+DIPM) - 5V Rail

Drive Power Consumption - Sequential Write

Drive Power Consumption - Random Write



Final Words

I have to say I am positively surprised. Given how bad some of JMicron's early designs were, I did not expect much from the JMF667H. We had a first taste of the JMF667H in the WD Black2, but that only gave us a picture of how the controller performs with one NAND configuration. The Black2 itself had some other limitations as well (like the lack of a caching option) that made the product as a whole not worth the money and probably made the JMF667H look worse than it really is. However, as our benchmarks show, the JMF667H can be very competitive when paired with the right NAND, and pretty decent even with cheaper NAND.

Sure, the JMF667H is still not the fastest controller on the market, but the good thing is that it is not trying to be. JMicron's strategy has always been to provide a budget alternative for the mainstream market instead of competing for the performance crown.

The JMF667H is not perfect and there are a couple of things I would like to see. The first one is support for the TCG Opal 2.0 and IEEE-1667 encryption standards. The data we carry around is constantly becoming more valuable and as a result more vulnerable to theft, so support for these two standards is crucial. In addition, I bet it would help JMicron get its controller into more OEMs, especially in the high-profit business/IT space where customers are willing to pay extra for encryption support. Hopefully this is something JMicron will include in its next generation controller.

The second thing is IO consistency. While the new firmware improved IO consistency and the performance no longer drops to zero IOPS, I think there is still room for improvement. I would like to see more consistent performance even if it means a lower maximum IOPS, because consistent performance means the end-user won't notice sudden slowdowns. However, I'm willing to overlook this since we are dealing with a low-cost controller and the IO consistency is already okay, but it could always be better.

All in all, when paired with the right NAND (i.e. Toshiba), the JMF667H can certainly be a noteworthy controller. It provides performance that is similar or very close to Marvell-based SSDs, but at a lower cost and with bundled firmware. However, I think the big question is whether the JMF667H offers enough cost savings when paired with the more expensive Toshiba NAND. As NAND makes up the biggest part of the bill of materials, it can be hard to overcome the savings from cheaper NAND, but this is ultimately up to the OEM and its relations with Toshiba, Micron, JMicron, and so on. With IMFT NAND the JMF667H is still decent, and I can see it being used in the lowest-end drives for OEMs, where it sits well thanks to the cheaper controller and NAND. For a light user the difference in performance is likely negligible anyway, and that is ultimately the market for low-end SSDs.

I am eagerly waiting to hear about JMicron's plans for PCIe. The JMF667H was admittedly late and at this point it can be rather difficult to gain interest from OEMs as everyone is preparing for PCIe. Hopefully JMicron's PCIe solution will be more timely and hopefully I'll have some details after meeting with them at Computex next week.
