Bringing You Up to Speed: The History Lesson

Everyone remembers their first bike, right? Mine was red. It had training wheels. I never really learned how to ride it; it's not that I didn’t go outdoors, I was just too afraid to take those training wheels off, I guess. That was a long time ago, but I remember my first bike.

I also remember my first SSD. It was a 1.8” PATA drive made by Samsung for the MacBook Air. It was lent to me by a vendor so I could compare its performance to the stock 1.8” mechanical HDD in the Air.

The benchmarks for that drive didn’t really impress. Most application tests got a little slower and transfer speeds weren’t really any better. Application launch times and battery life both improved, the former by a significant amount. But the drive was expensive: $1000 from Apple, and that’s if you bought it with the MacBook Air; buying it from a vendor would set you back even more. It benchmarked faster than the hard drive, but the numbers didn’t justify the cost. I pulled the drive out and sent it back after I was done with the review.

The next time I turned on my MacBook Air I thought it was broken. It took an eternity to boot and everything took forever to launch. Even though the benchmarks showed the SSD shaving only a few seconds off application launch times here and there, in the real world the difference was very noticeable. The rule of thumb is that it takes about a 10% difference in performance for a user to notice. The application tests didn’t show a 10% difference in performance, but the application launch tests were showing 50% gains. It still wasn’t worth $1000, but it was worth a lot more than I originally thought.

It was the MacBook Air experience that taught me one important point about SSDs: you don’t think they’re fast until one is taken away from you.

My second SSD was a 60GB SuperTalent drive that I used to build an HTPC. It was my boot drive, and I chose it because it drew less power and was silent; it helped keep my HTPC cool, and I wouldn’t have to worry about drive crunching while watching a movie. My movies were stored elsewhere, so the space didn’t really matter. The experience was good, not great, because I wasn’t really hitting the drive for data, but it was problem-free.

SuperTalent was the first manufacturer to sell an SSD in a 3.5” enclosure, so when they announced their 120GB drive I told them I’d like to review their SSD in a desktop. They shipped it to me, and I wrongly assumed that it was the same as the 60GB drive in my HTPC, just with twice the flash.

This drive did have twice the flash, but it was MLC (Multi-Level Cell) flash. While the 60GB drive I had was an SLC drive that used Samsung’s controller, the MLC drive used a little-known controller from a company called JMicron. Samsung had an MLC controller at the time, but it was more expensive than what SuperTalent was shooting for. This drive was supposed to be affordable, and JMicron delivered an affordable controller.

After running a few tests, the drive went in my Mac Pro as my boot/application drive. I remembered the lesson I learned from my first SSD. I wasn’t going to be able to fairly evaluate this drive until I really used it, then took it away. Little did I know what I was getting myself into.

The first thing I noticed about the drive was how fast everything launched. This experience was actually the source of my SSD proof-of-value test: take a freshly booted machine and, without waiting for drive accesses to stop, launch every single application you want to have up and running at the same time. Do this on any system with an HDD and you’ll be impatiently waiting. I did it on the SuperTalent SSD and, wow, everything just popped up. It was like my system wasn’t even doing anything. Not even breaking a sweat.

I got so excited that I remember hopping on AIM to tell someone about how fast the SSD was. I had other apps running in the background, and when I went to send that first IM, my machine paused. It was just for a fraction of a second, before the message I'd typed appeared in my conversation window. My system just paused.

Maybe it was a fluke.

I kept using the drive, and it kept happening. The pause wasn’t just in my IM client; it would happen in other applications or even when switching between apps. Maybe there was a strange OS X incompatibility with this SSD? That’d be unfortunate, but also rather unbelievable. So I did some digging.

Others had complained about this problem. SuperTalent wasn’t the only one to ship an affordable drive based on this controller; other manufacturers did as well. G.Skill, OCZ, Patriot and SiliconPower all had drives shipping with the same controller, and every other drive I tested exhibited the same problem.

I was in the midst of figuring out what was happening with these drives when Intel contacted me about reviewing the X25-M, its first SSD. Up to this point Intel had casually mentioned that their SSD was going to be different from the competition, and prior to my JMicron experience I didn’t really believe them. After all, how hard could it be? Drive controller logic is nowhere near as complicated as building a Nehalem; surely someone other than Intel could do a good-enough job.

After my SuperTalent/JMicron experience, I realized that there was room for improvement.

Drive vendors were mum on the issue of pausing or stuttering with their drives. Lots of finger pointing resulted: it was surely Microsoft’s fault, or maybe Intel’s. But none of the Samsung-based drives had these problems.

Then the issue was supposedly cache. The JMicron controller used in these drives didn’t support any external DRAM; Intel’s and Samsung’s controllers did. It was the lack of cache that caused the problems, they said. But Intel’s drive didn’t use its external DRAM for user data.

Fingers were pointed everywhere, but no one took responsibility for the fault. To their credit, OCZ really stepped up and took care of the customers who were unhappy with their drives. Despite how completely irate they were about my article, they seemed to do the right thing after it was published. I can’t say the same for some of the other vendors.

The issue ended up being random write performance. These “affordable” MLC drives based on the JMicron controller were all tuned for maximum throughput. The sequential write speed of these drives could easily match and surpass that of the fastest hard drives.

If a company that had never made a hard drive before could come out with a product that on its first revision could outperform WD’s VelociRaptor and be more reliable thanks to zero moving parts...well, you get the picture. Optimize for sequential reads and writes!

The problem is that modern-day OSes tend to read and write data very randomly, albeit in specific areas of the disk. And the data being accessed is rarely large; it’s usually very small, on the order of a few KB in size. It’s these sorts of accesses that no one seemed to think about; after all, these vendors and controller manufacturers were used to making USB sticks and CF cards, not hard drives.

Sequential Read Performance
JMicron JMF602B MLC                    134.7 MB/s
Western Digital VelociRaptor 300GB     118.0 MB/s

The table above shows how much faster these affordable MLC SSDs were than the fastest 3.5” hard drive in sequential reads. But now look at random write performance:

Random Write Performance (4KB)         Latency       Bandwidth
JMicron JMF602B MLC                    532.2 ms      0.02 MB/s
Western Digital VelociRaptor 300GB     7.2 ms        1.63 MB/s

While WD’s VelociRaptor averaged less than 8 ms to write 4KB, these JMicron drives took around 70x as long! Let me ask you this: what do you notice more, things moving very fast or things moving very slow?
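For the curious, here’s roughly what a test that exposes this behavior looks like. This is a minimal Python sketch under my own assumptions, not the tool used for the numbers above: the file name, region size, and write counts are arbitrary, and a real benchmark would use unbuffered (direct) I/O rather than fsync after every write.

```python
import os
import random
import time

PATH = "scratch.bin"          # hypothetical scratch file on the drive under test
BLOCK = 4 * 1024              # 4KB, the small access size discussed above
COUNT = 1024                  # writes per test (4MB total)
SPAN = 256 * 1024 * 1024      # 256MB region to scatter random writes across

def time_writes(offsets):
    """Write one 4KB block at each offset and report MB/s and ms per write."""
    buf = os.urandom(BLOCK)
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT)
    start = time.perf_counter()
    for off in offsets:
        os.lseek(fd, off, os.SEEK_SET)
        os.write(fd, buf)
        os.fsync(fd)          # force the write to the drive, not the OS cache
    elapsed = time.perf_counter() - start
    os.close(fd)
    return BLOCK * len(offsets) / elapsed / 1e6, elapsed / len(offsets) * 1000

# Sequential: back-to-back 4KB blocks. Random: 4KB blocks scattered over 256MB.
sequential = [i * BLOCK for i in range(COUNT)]
scattered = [random.randrange(SPAN // BLOCK) * BLOCK for _ in range(COUNT)]

print("sequential: %.2f MB/s, %.2f ms/write" % time_writes(sequential))
print("random:     %.2f MB/s, %.2f ms/write" % time_writes(scattered))
os.remove(PATH)
```

On a drive tuned purely for throughput, the sequential case looks great while the random case is where the multi-hundred-millisecond pauses show up.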

The traditional hard drive benchmarks showed that these SSDs were incredible. Real-world usage and real-world tests disagreed. Storage Review was one of the first sites to popularize real-world testing of hard drives nearly a decade ago. It seems that we’d all forgotten the lessons they taught us.

Random write performance is quite possibly the most important performance metric for SSDs these days. It’s what separates the drives that are worth buying from those that aren’t. All SSDs at this point are luxury items; their cost per GB is much higher than that of conventional hard drives. And when you’re buying a luxury anything, you don’t want to buy a lame one.

Cost per GB (Newegg.com)
Intel X25-E 32GB                       $12.88/GB
Intel X25-M 80GB                       $4.29/GB
OCZ Solid 60GB                         $2.33/GB
OCZ Apex 60GB                          $2.98/GB
OCZ Vertex 120GB                       $3.49/GB
Samsung SLC 32GB                       $8.71/GB
Western Digital Caviar SE16 640GB      $0.12/GB
Western Digital VelociRaptor 300GB     $0.77/GB
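
To put those numbers in perspective: at $3.49/GB, the 120GB OCZ Vertex works out to roughly $420, while the 640GB Caviar SE16 at $0.12/GB comes to under $80. That's more than five times the capacity for under a fifth of the price.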