One thing AMD has taught me is that you can never beat Intel at its own game. Simply trying to do what Intel does will leave you confined to whatever low-margin market Intel deems too unattractive to pursue. It's exactly why AMD's most successful CPU architectures are those that implement features Intel doesn't have today, but perhaps will in a few years. Competing isn't enough; you must innovate. Trying to approach the same problem in the same way, only somehow better, doesn't work well when your competition makes $9B a quarter.

We saw this in the SSD space as well. In the year since Intel's X25-M arrived, the best we've seen is a controller that can sort of do what Intel's can, just at a cheaper price. Even then, the cost savings aren't that great because Intel gets great NAND pricing. We need companies like Indilinx to put cost pressure on Intel, but we also need the equivalent of an AMD: a company that can put technological pressure on Intel.

That company, at least today, is SandForce. And its disciple? OCZ. Yep, they’re back.

Why I Hate New SSDs

I’ll admit, I haven’t really been looking forward to this day. Around the time when OCZ and Indilinx finally got their controller and firmware to acceptable levels, OCZ CMO Alex Mei dropped a bombshell on me - OCZ’s Vertex 2 would use a new controller by a company I’d never heard of. Great.

You may remember my back and forth with OCZ CEO Ryan Petersen about the first incarnation of the Vertex drive before it was released. Needless to say, what I wrote in the SSD Anthology was an abridged (and nicer) version of the back and forth that went on in the months prior to that product launch. After the whole JMicron fiasco, I don’t trust these SSD makers or controller manufacturers to deliver products that are actually good.


Aw, sweet. You'd never hurt me, would you?

Which means I've got to approach every new drive and every new controller with the assumption that it's either going to somehow suck or lose your data. And I need to figure out how. Synonyms for daunting should be popping into your heads now.

Ultimately, the task of putting these drives to the test falls on the heads of you all: the early adopters. It's only after we collectively put these drives through hundreds and thousands of hours of real-world usage that we can determine whether or not they're sponge-worthy. Even Intel managed to screw up two firmware releases, and Intel does more in-house validation than any company I've ever worked with. The bugs of course never appeared in my testing, only in the field in the hands of paying customers. I hate that it has to be this way, but we live in the wild west of solid state storage. It'll be a while before you can embrace any new product with confidence.

And it only gets more complicated from here on out. The old JMicron drives were easy to cast aside; they behaved like jerks when you tried to use them. Now the true difference between SSDs only rears its head after months or years of use.

I say that because unlike my first experience with OCZ’s Vertex, the Vertex 2 did not disappoint. Or to put it more directly: it’s the first drive I’ve seen that’s actually better than Intel’s X25-M G2.

If you haven't read any of our previous SSD articles, I'd suggest brushing up on The Relapse before moving on. The background will help.

Comments

  • semo - Saturday, January 2, 2010

    Anand,

    After reading your very informative SSD articles, I still found something new from GullLars. I think it would be useful to include the queue length when stating IOPS figures, as it would give us more technical insight into the inner workings of the different SSD models and hint at performance for future uses.

    When dial-up was the most common way of connecting to the internet, most sites were small with static content. As connection and CPU speeds grew, so did the websites. Try going to a big, ugly site like CNET with a 7-8 year old PC and even the fastest internet connection. I'm sure all this supposedly untapped performance in SSDs will be quickly utilized in the future (probably because of inefficient software in most cases rather than for legit reasons). With virtualization slowly entering the consumer space (XP Mode, VMware Unity and so on) as giant sandboxes and legacy platforms, surely disk queue lengths can only grow...
  • shawkie - Saturday, January 2, 2010

    Anand,

    I agree that it's also helpful to know what the hardware can really do. It seems to me that longer queue depths are becoming important for high performance on all storage devices (even hard disks have NCQ and can be put in RAID arrays). At some point software manufacturers are going to wake up to that fact. This is just like the situation with multi-core CPUs. I'm fortunate because in my work I not only select the hardware platform but also develop the software that runs on it.
  • DominionSeraph - Monday, January 4, 2010

    A jumble of numbers that don't apply to the scenario at hand is nothing but misleading.

    Savvio 15K.1 SAS: 416 IOPS
    1TB Caviar Black: 181 IOPS

    Ooooh... the 15K SAS is waaaay faster!! Sure, in a file server access pattern at a queue depth of 64. Try benchmarking desktop use and you'll find the 7200RPM SATA drive is generally faster.
  • BrightCandle - Friday, January 1, 2010

    With which software and parameters did you achieve the results you're talking about? Everything I've thrown at my X25-M has shown results in the same ballpark as Anand's figures, so I'm interested to see how you got to those numbers.
  • GullLars - Friday, January 1, 2010

    These numbers have been generated by several testing methods.
    *AS SSD Benchmark shows 4KB random read and random write at Queue Depth (QD) 64, where the X25-M gets in the area of 120-160MB/s on read and 65-85MB/s on write.
    *CrystalDiskMark 3.0 (beta) tests 4KB random at both QD1 and QD32. At QD32 4KB random read, the X25-M gets 120-160MB/s, and at random write it gets 65-85MB/s here too.
    Here's a screenshot of CDM 2.2 and 3.0 for an X25-M 80GB on a 750SB with AHCI in a fresh state: http://www.diskusjon.no/index.php?act=attach&t...
    *Testing with IOMeter, parameters: 2GB length, 30 sec runtime, 1 worker, 32 outstanding IOs (QD), 100% read, 100% random, 4KB blocks, burst length 1. On a forum I frequent, most users with an X25-M get between 30,000 and 40,000 IOPS with these parameters. With the same parameters but 100% write, the norm is around 15K IOPS on a fresh drive, and a bit closer to 10K in a used state with the OS running from the drive. The X25-E has been benched at 43K 4KB random write IOPS.

    Regarding the practical difference 4KB IOPS makes, the biggest difference can be seen in the PCMark Vantage Application Launching test. Such workloads involve reading a massive number of small files and database listings, plus logging all the file accesses this creates. Prefetch and Superfetch may help storage devices with less than a few thousand IOPS, but the X25-M in many cases actually gets worse launch times with these activated. Using a RAM disk for known targets of small random writes makes sense, and I've put my browser cache and temp files on a RAM disk even though I have an SSD.
    With the X25-M's insane IOPS performance, the random part of most workloads is done within a second, and what you're left waiting for is the loading of larger files and the CPU. Attempting to lower the load time of small random reads during an application launch from, say, 0.5 sec by running a Superfetch script or read-caching with a RAM disk makes little sense.
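For readers who want to replicate this kind of test, below is a minimal sketch of a QD32 4KB random read run in the spirit of the IOMeter parameters GullLars lists above. It is not IOMeter: it assumes a Unix-like OS (os.pread), the test file name is a placeholder for a 2GB file you'd create beforehand, outstanding IOs are approximated with threads rather than true asynchronous IO, and buffered reads through the OS page cache will flatter any drive, so treat the output as illustrative only.

    import os
    import random
    import threading
    import time

    TEST_FILE = "testfile.bin"  # placeholder: pre-created 2GB file on the drive under test
    BLOCK = 4096                # 4KB transfers
    SPAN = 2 * 1024**3          # 2GB test length
    RUNTIME = 30                # seconds
    QUEUE_DEPTH = 32            # outstanding IOs, approximated with threads

    completed = 0
    lock = threading.Lock()

    def worker(fd, deadline):
        global completed
        done = 0
        while time.time() < deadline:
            # random 4KB-aligned offset within the 2GB span (100% random read)
            offset = random.randrange(SPAN // BLOCK) * BLOCK
            os.pread(fd, BLOCK, offset)
            done += 1
        with lock:
            completed += done

    fd = os.open(TEST_FILE, os.O_RDONLY)
    deadline = time.time() + RUNTIME
    threads = [threading.Thread(target=worker, args=(fd, deadline))
               for _ in range(QUEUE_DEPTH)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    os.close(fd)

    print(f"QD{QUEUE_DEPTH} 4KB random read: {completed / RUNTIME:.0f} IOPS "
          f"({completed * BLOCK / RUNTIME / 1e6:.1f} MB/s)")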
  • Zool - Friday, January 1, 2010

    For an average user, 4KB random performance is among the most useless results out there. If a user encounters that many random 4KB reads/writes, then he needs to change his operating system ASAP.
    And if something really needs to randomly read/write 4KB files, then your best bet is to cache it in RAM or make a RAM disk, I think.
  • LTG - Thursday, December 31, 2009

    This statement seems really dubious. Isn't it in fact the opposite?

    The majority of storage space is taken up by things that don't compress well: music, videos, photos, ZIP-style archives...

    Everything else is smaller.


    Anand Says:
    ==========================
    That means compressed images, videos or file archives will most likely exhibit higher write amplification than SandForce’s claimed 0.5x. Presumably that’s not the majority of writes your SSD will see on a day to day basis, but it’s going to be some portion of it.
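For a rough sense of the scale involved, here's a tiny sketch using zlib purely as a stand-in compressor. DuraWrite is proprietary, so this won't reproduce SandForce's claimed 0.5x figure; it only illustrates how content type changes how much data actually has to be written:

    import os
    import zlib

    samples = {
        "zeros (pathological best case)": b"\x00" * 65536,
        "repetitive text": b"the quick brown fox jumps over the lazy dog. " * 1456,
        "random bytes (stand-in for JPEG/MP3/ZIP)": os.urandom(65536),
    }

    for name, data in samples.items():
        ratio = len(zlib.compress(data)) / len(data)
        print(f"{name}: {ratio:.2f}x of the original actually written")

Already-compressed media looks like the last case, so those writes see little to no reduction; repetitive OS and application data is where the claimed savings would have to come from.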
  • DominionSeraph - Friday, January 1, 2010

    That stuff just gets written once.
    Day-to-day operation sees a whole lot of transient data.
  • Shining Arcanine - Thursday, December 31, 2009

    As someone else suggested, I imagine that the SATA driver could take all of the data written to and read from the drive and transparently implement the algorithms on the much more powerful host CPU.

    Is there anything to stop people from reverse engineering the firmware to figure out exactly what the drive is doing in terms of compression and then externalizing it to the SATA driver, so other SSDs can benefit from it as well? I.e., are there any legal issues with this?
  • Anand Lal Shimpi - Friday, January 1, 2010

    Patents :) SandForce holds a few of them with regards to this technology.

    Obviously it's up to the courts to determine whether or not they're enforceable; SandForce believes they are. Other companies could license the technology, though...

    Take care,
    Anand
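Patent questions aside, the concept itself is easy to sketch. Below is a hypothetical toy version of what "externalizing" the compression to the host might look like: a shim that compresses each logical 4KB block before it reaches the disk and tracks how many physical bytes actually get written. Everything here is illustrative (CompressingBlockDev is a made-up name, not any real driver API); a real implementation would live in the OS block layer and would have to handle variable-size physical allocation and power-loss consistency.

    import os
    import zlib

    BLOCK = 4096

    class CompressingBlockDev:
        """Toy host-side compression shim, purely for illustration."""

        def __init__(self):
            self.store = {}          # logical block number -> stored payload
            self.physical_bytes = 0  # bytes actually sent to the drive
            self.logical_bytes = 0   # bytes the OS asked to write

        def write_block(self, lbn, data):
            assert len(data) == BLOCK
            payload = zlib.compress(data)
            if len(payload) >= BLOCK:  # incompressible: store the raw block
                payload = data
            self.store[lbn] = payload
            self.logical_bytes += BLOCK
            self.physical_bytes += len(payload)

        def read_block(self, lbn):
            payload = self.store[lbn]
            # a payload of exactly BLOCK bytes was stored uncompressed
            return payload if len(payload) == BLOCK else zlib.decompress(payload)

    dev = CompressingBlockDev()
    dev.write_block(0, b"A" * BLOCK)       # highly compressible
    dev.write_block(1, os.urandom(BLOCK))  # looks like already-compressed media
    assert dev.read_block(0) == b"A" * BLOCK
    print(f"effective write amplification: {dev.physical_bytes / dev.logical_bytes:.2f}x")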
