Bringing You Up to Speed: The History Lesson

Everyone remembers their first bike, right? Mine was red. It had training wheels. I never really learned how to ride it; it's not that I didn't go outdoors, I was just too afraid to take those training wheels off, I guess. That was a long time ago, but I still remember my first bike.

I also remember my first SSD. It was a 1.8” PATA drive made by Samsung for the MacBook Air. It was lent to me by a vendor so I could compare its performance to the stock 1.8” mechanical HDD in the Air.

The benchmarks for that drive didn't really impress. Most application tests got a little slower and transfer speeds weren't really any better. Application launch times and battery life both improved, the former by a significant amount. But the drive was expensive: $1000 from Apple, and that's if you bought it with the MacBook Air. Buying it from a vendor would set you back even more. It benchmarked faster than the hard drive, but the numbers didn't justify the cost. I pulled the drive out and sent it back after I was done with the review.

The next time I turned on my MacBook Air I thought it was broken. It took an eternity to boot and everything took forever to launch. Even though the benchmarks only showed the SSD shaving a few seconds off application launch times here and there, in the real world the difference was noticeable. The rule of thumb is that it takes about a 10% difference in performance for a user to notice. The application tests didn't show a 10% difference in performance, but the application launch tests, those were showing 50% gains. It still wasn't worth $1000, but it was worth a lot more than I originally thought.

It was the MacBook Air experience that made me understand one important point about SSDs: you don't realize how fast they are until someone takes one away from you.

My second SSD was a 60GB SuperTalent drive. I built an HTPC around it. It was my boot drive, and I chose it because it drew less power and was silent; it helped keep my HTPC cool, and I wouldn't have to worry about drive crunching while watching a movie. My movies were stored elsewhere, so the capacity didn't really matter. The experience was good, not great, since I wasn't really hitting the drive for data, but it was problem-free.

SuperTalent was the first manufacturer to sell an SSD in a 3.5” enclosure, so when they announced their 120GB drive I told them I'd like to review their SSD in a desktop. They shipped it to me and I wrongly assumed that it was the same as the 60GB drive in my HTPC, just with twice the flash.

This drive did have twice the flash, but it was MLC (Multi-Level Cell) flash. While the 60GB drive I had was an SLC drive that used Samsung's controller, the MLC drive used a little-known controller from a company called JMicron. Samsung had an MLC controller at the time, but it was more expensive than what SuperTalent was shooting for. This drive was supposed to be affordable, and JMicron delivered an affordable controller.

After running a few tests, the drive went in my Mac Pro as my boot/application drive. I remembered the lesson I learned from my first SSD. I wasn’t going to be able to fairly evaluate this drive until I really used it, then took it away. Little did I know what I was getting myself into.

The first thing I noticed about the drive was how fast everything launched. This experience was actually the source of my SSD proof-of-value test: take a freshly booted machine and, without waiting for drive accesses to stop, launch every single application you want to have up and running at the same time. Do this on any system with an HDD and you'll be impatiently waiting. I did it on the SuperTalent SSD and, wow, everything just popped up. It was like my system wasn't even doing anything. Not even breaking a sweat.

I got so excited that I remember hopping on AIM to tell someone about how fast the SSD was. I had other apps running in the background, and when I went to send that first IM my machine paused. It was just for a fraction of a second before the message I'd typed appeared in my conversation window. My system just paused.

Maybe it was a fluke.

I kept using the drive, and it kept happening. The pause wasn’t just in my IM client, it would happen in other applications or even when switching between apps. Maybe there was a strange OS X incompatibility with this SSD? That’d be unfortunate, but also rather unbelievable. So I did some digging.

Others had complained about this problem. SuperTalent wasn’t the only one to ship an affordable drive based on this controller; other manufacturers did as well. G.Skill, OCZ, Patriot and SiliconPower all had drives shipping with the same controller, and every other drive I tested exhibited the same problem.

I was in the midst of figuring out what was happening with these drives when Intel contacted me about reviewing the X25-M, its first SSD. Up to this point Intel had casually mentioned that its SSD was going to be different from the competition, and prior to my JMicron experience I didn't really believe them. After all, how hard could it be? Drive controller logic is nowhere near as complicated as building a Nehalem; surely someone other than Intel could do a good-enough job.

After my SuperTalent/JMicron experience, I realized that there was room for improvement.

Drive vendors were mum on the issue of pausing or stuttering with their drives. Lots of finger pointing resulted. It was surely Microsoft’s fault, or maybe Intel’s. But none of the Samsung based drives had these problems.

Then the blame shifted to cache. The JMicron controller used in these drives didn't support any external DRAM; Intel's and Samsung's controllers did. It was the lack of cache that caused the problems, they said. But Intel's drive doesn't even use its external DRAM for user data.

Fingers were pointed everywhere, but no one took responsibility for the fault. To their credit, OCZ really stepped up and took care of customers who were unhappy with their drives. Despite how completely irate they were over my article, they did seem to do the right thing after it was published. I can't say the same for some of the other vendors.

The issue ended up being random write performance. These “affordable” MLC drives based on the JMicron controller were all tuned for maximum throughput. The sequential write speed of these drives could easily match and surpass that of the fastest hard drives.

If a company that had never made a hard drive before could come out with a product that on its first revision could outperform WD’s VelociRaptor and be more reliable thanks to zero moving parts...well, you get the picture. Optimize for sequential reads and writes!

The problem is that modern-day OSes tend to read and write data very randomly, albeit within specific areas of the disk. And the data being accessed is rarely large; it's usually very small, on the order of a few KB. It's these sorts of accesses that no one seemed to think about; after all, these vendors and controller manufacturers were used to making USB sticks and CF cards, not hard drives.
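To make the two access patterns concrete, here's a rough sketch of the difference between them. This is not the benchmark behind the numbers below, and the file name, sizes, and operation counts are arbitrary choices of mine; a real benchmark tool would do this far more rigorously.

```python
# A rough illustration (not the article's benchmark) of the two write patterns
# discussed above: large sequential writes, which these early MLC controllers
# were tuned for, versus small random 4KB writes, which is what an OS mostly
# issues. PATH, file size, and operation counts are arbitrary.
import os
import random
import time

PATH = "testfile.bin"            # hypothetical scratch file on the drive under test
FILE_SIZE = 256 * 1024 * 1024    # 256MB working area
BLOCK_4K = 4 * 1024
BLOCK_SEQ = 2 * 1024 * 1024      # 2MB chunks for the sequential pass

def timed_writes(offsets, block_size):
    """Write block_size bytes at each offset, fsync-ing so the drive does the work."""
    data = os.urandom(block_size)
    fd = os.open(PATH, os.O_WRONLY | getattr(os, "O_BINARY", 0))
    start = time.perf_counter()
    for off in offsets:
        os.lseek(fd, off, os.SEEK_SET)
        os.write(fd, data)
        os.fsync(fd)
    elapsed = time.perf_counter() - start
    os.close(fd)
    return elapsed

# Pre-allocate the working file.
with open(PATH, "wb") as f:
    f.truncate(FILE_SIZE)

# Sequential pass: 64 back-to-back 2MB chunks from the start of the file.
seq_offsets = [i * BLOCK_SEQ for i in range(64)]
# Random pass: 512 4KB-aligned offsets scattered across the whole file.
rand_offsets = [random.randrange(0, FILE_SIZE // BLOCK_4K) * BLOCK_4K
                for _ in range(512)]

seq_time = timed_writes(seq_offsets, BLOCK_SEQ)
rand_time = timed_writes(rand_offsets, BLOCK_4K)

mb = 1024 * 1024
print(f"sequential 2MB writes: {len(seq_offsets) * BLOCK_SEQ / mb / seq_time:.1f} MB/s")
print(f"random 4KB writes:     {len(rand_offsets) * BLOCK_4K / mb / rand_time:.2f} MB/s "
      f"({rand_time / len(rand_offsets) * 1000:.1f} ms per write)")

os.remove(PATH)  # clean up the scratch file
```

On one of these JMicron-based drives the sequential number would look great while the random 4KB number would collapse, which is exactly the gap the tables below capture.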

Sequential Read Performance
  JMicron JMF602B MLC                  134.7 MB/s
  Western Digital VelociRaptor 300GB   118 MB/s

The table above shows how much faster these affordable MLC SSDs were than the fastest 3.5” hard drive in sequential reads, but now look at random write performance:

                                       Random Write Latency   Random Write Bandwidth
  JMicron JMF602B MLC                  532.2 ms               0.02 MB/s
  Western Digital VelociRaptor 300GB   7.2 ms                 1.63 MB/s

While WD's VelociRaptor averaged less than 8ms to write 4KB, these JMicron drives took around 70x that! Let me ask you this: what do you notice more, things moving very fast or things moving very slow?
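As a quick sanity check on those figures (my own back-of-envelope arithmetic, not a calculation from the original test data), the table values work out to roughly a 74x latency gap and single-digit random write IOPS for the JMicron drives:

```python
# Back-of-envelope conversion of the random write table above into 4KB IOPS
# and a latency ratio. This is my arithmetic on the published numbers, nothing more.
KB = 1024
MB = 1024 * KB
IO_SIZE = 4 * KB                                      # 4KB writes, as in the test above

jmicron_latency_ms, jmicron_bw = 532.2, 0.02 * MB     # JMicron JMF602B MLC
raptor_latency_ms, raptor_bw = 7.2, 1.63 * MB         # WD VelociRaptor 300GB

print(f"latency ratio: {jmicron_latency_ms / raptor_latency_ms:.0f}x")        # ~74x
print(f"JMicron random write IOPS:      {jmicron_bw / IO_SIZE:.1f}")          # ~5
print(f"VelociRaptor random write IOPS: {raptor_bw / IO_SIZE:.0f}")           # ~417
```

At roughly five 4KB writes per second, even a small burst of background writes is enough to stall whatever you're doing in the foreground, which is exactly the pause described earlier.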

The traditional hard drive benchmarks showed that these SSDs were incredible. The real world usage and real world tests disagreed. Storage Review was one of the first sites to popularize real world testing of hard drives nearly a decade ago. It seems that we’d all forgotten the lessons they taught us.

Random write performance is quite possibly the most important performance metric for SSDs these days. It's what separates the drives that are worth buying from those that aren't. All SSDs at this point are luxury items; their cost per GB is much higher than that of conventional hard drives. And when you're buying a luxury anything, you don't want to buy a lame one.

Cost per GB (from Newegg.com)
  Intel X25-E 32GB                     $12.88
  Intel X25-M 80GB                     $4.29
  OCZ Solid 60GB                       $2.33
  OCZ Apex 60GB                        $2.98
  OCZ Vertex 120GB                     $3.49
  Samsung SLC 32GB                     $8.71
  Western Digital Caviar SE16 640GB    $0.12
  Western Digital VelociRaptor 300GB   $0.77
Comments

  • strikeback03 - Thursday, March 19, 2009

    I understand your point, but I am not sure you understand the point I (and others) are trying to make. The SSD makers (should) know their market. As they seem to be marketing these SSDs to consumers, they should know that means the vast majority are on Vista or OSX, so the OS won't be optimized for SSDs. It also means the majority will be using integrated disk controllers. Therefore, in choosing an SSD controller which does not operate properly given those restrictions, they chose poorly. The testing here at Anandtech shows that regardless of how the drives might perform in ideal circumstances, they have noticeable issues when used the way most users would use them, which is really all those users care about.
  • tshen83 - Thursday, March 19, 2009

    In the history of computing, it has always been the case that software compensated for new hardware, not the other way around. When new hardware comes out that obsoletes the current generation of software, new software gets written to take advantage of the new hardware.
    Think of it this way: you always write newer versions of drivers to drive the newest GPUs. When was the last time newer GPUs worked with older drivers?

    Nobody should be designing hardware now that makes DOS run fast, right? All file systems (except ZFS and soon BTRFS) are obsolete now for SSDs, so we write new file systems. I am not sure Intel X25-M's approach of virtualizing flash to the liking of NTFS and EXT3 is the correct one. It is simply a bridge to get to the next solution.

    SSD makers right now are making a serious mistake pushing SSDs down consumers' throats during an economic crisis. They should have focused on the enterprise market, targeting DB servers. But in that space, Intel X25-E sits alone without competition. (SuperTalent UltraDrive LEs should be within 25% of the X25-E by my estimation)
  • pmonti80 - Thursday, March 19, 2009

    Now I understand what you meant in the beginning. But I still don't agree with you: the system reviewed is the one 99% of SSD buyers will use (integrated mobo controller + NTFS). So why optimize the benchmark to show the bad drives in a good light?

    About the Vertex, I don't understand what you are complaining about. After reading this article most people got the idea that the Vertex is a good drive at half of Intel's price (I know, I searched on Google for comments about this article).
  • tshen83 - Thursday, March 19, 2009

    Professional people only look at two SSD benchmarks: random read IOPS at 4K and random write IOPS at 4K (maybe 8K too for native DB pages).

    The Vertex random write IOPS at 4K is abysmal. 2.4MB/sec at 4K means it only does 600ish random write IOPS. Something was wrong, and Vista/ICH10R didn't help. The 8GB/sec write boundary Anand imposed on the random write IOPS test is fishy. So is the artificial IO queue depth of 3.

    The Vertex random write IOPS should be better. The random read IOPS should also be slightly better. I have seen OCZ's own benchmarks placing the Vertex very close to the Intel X25-M in random read/write IOPS tests.

    I personally think that if you use NTFS, you should just ignore SSDs for now until Windows 7 RTM. It can't hurt to wait for SSD prices to drop some more over the next 6 months. Same thing for Linux, although I would argue that Linux is in an even worse position for SSDs right now than Windows 7. EXT3/EXT4/JFS/XFS/REISERFS all suck on SSDs.
  • gss4w - Thursday, March 19, 2009

    Anandtech should adopt the same comment system as Dailytech so that comments that don't make any sense can be rated down. Who would want to read a review of something using a beta OS, or worse an OS that is only used on servers? I think it would be interesting to see if Windows 7 beta offered any improvements, but that should not be the focus of the review.
  • 7Enigma - Thursday, March 19, 2009

    Here's another vote for the Dailytech comments section. The ability to rate up/down, but more importantly to HIDE the comments below a threshold, would make for much more enjoyable reading.
  • curtisfong - Wednesday, March 18, 2009

    Why should Anand test with Windows 7b or *nix? What is the majority OS?

    Kudos to Anand for testing real world performance on an OS that most people use, and to Intel for tuning their drives for it. I'm happy the other manufacturers are losing business... maybe they will also tune their drives for real world performance and not synthetic benchmarks.

    To the poster above: do you work for OCZ or Samsung?
  • Glenn - Wednesday, March 18, 2009

    tshen83 "A very thorough review by tshen83, an hour ago
    BUT, still based on Windows Vista.
    "

    As long as these drives are marketed toward said OS, why would you not use it? Most of us wouldn't recognize Solaris if we saw it! And I believe you seriously overestimate yourself if you're gonna drill anything into Anand's head! You might need your own site, huh?

    Great job, Anand! Don't forget to remind these CEOs that they also need to provide any software needed to configure and optimize these drives to work properly, i.e. go to the OCZ Forums and try to figure out how to align, optimize and keep your drive running like it's supposed to in less than 4 hours of reading! It would be nice if these companies would do their own beta testing and not rely on early adopters to do it for them!
  • Roland00 - Wednesday, March 18, 2009

    It was a joy to read all 31 pages
  • MagicPants - Wednesday, March 18, 2009

    Anand it would be really helpful to have a list of SSD companies blacklisting you so I know which ones to avoid. In general it would be nice to know who doesn't provide review samples to reputable sites.
