While it happens a lot less now than a couple of years ago, I still see the question of why SSDs are worth it every now and then. Rather than give my usual answer, I put together a little graph to illustrate why SSDs are both necessary and incredibly important.

Along the x-axis we have the different types of storage in a modern computer, ranging from the smallest, fastest storage elements (caches) through main memory to, at the other end of the spectrum, mechanical storage (your hard drive). The blue portion of the graph indicates the typical capacity of each structure (e.g. 1024KB L2, 1TB HDD, etc...). The further to the right you go, the larger the structure.

The red portion of the graph lists performance as a function of access latency. The further right you go, the slower the storage medium becomes.

This is a logarithmic scale so we can actually see what’s going on. While capacity transitions relatively smoothly as you move left to right, look at what happens to performance: the move from main memory to mechanical storage comes with a steep performance falloff.
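To get a feel for just how steep that falloff is, here’s a quick Python sketch. The latency figures are rough, order-of-magnitude approximations for illustration only, not measurements of any specific part:

```python
import math

# Rough, illustrative access latencies in nanoseconds -- order-of-magnitude
# ballparks only, not measurements of any specific part.
latency_ns = {
    "L1 cache": 1,
    "L2 cache": 5,
    "L3 cache": 20,
    "DRAM": 100,
    "HDD": 10_000_000,  # ~10 ms average seek + rotational delay
}

# Walk adjacent tiers and print how many orders of magnitude slower
# each step down the hierarchy is.
tiers = list(latency_ns)
for a, b in zip(tiers, tiers[1:]):
    gap = math.log10(latency_ns[b] / latency_ns[a])
    print(f"{a:>8} -> {b:<8} {gap:.1f} orders of magnitude slower")
```

Every step down the hierarchy costs less than one order of magnitude until the last: DRAM to disk is a five-order-of-magnitude jump.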

We could address this issue by increasing the amount of DRAM in a system. However, DRAM prices are still too high to justify sticking 32 - 64GB of memory in a desktop or notebook. And when we can finally afford that, the applications we'll want to run will just be that much bigger.

Another option would be to improve the performance of mechanical drives, but we’re bound by physics there. Spinning platters faster than 10,000 RPM is prohibitive in terms of power, noise and reliability, which is why the majority of hard drives still spin at 7200 RPM or less.

Instead, the obvious solution is to insert another level into the memory hierarchy. Just as AMD and Intel have almost fully embraced the idea of a Level 3 cache in their desktop/notebook processors, the storage industry has been working towards using NAND as an intermediary between DRAM and mechanical storage. Let’s look at the same graph with a Solid State Drive (SSD) added:
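Extending the earlier sketch, dropping an SSD with roughly 100µs reads (again, an illustrative ballpark figure, not a measured spec) between DRAM and the hard drive splits that five-order-of-magnitude cliff into two much smaller steps:

```python
import math

# Same illustrative latencies as before, with an SSD tier added.
latency_ns = {
    "DRAM": 100,
    "SSD": 100_000,     # ~100 us NAND read, a rough ballpark
    "HDD": 10_000_000,  # ~10 ms seek + rotation
}

tiers = list(latency_ns)
for a, b in zip(tiers, tiers[1:]):
    gap = math.log10(latency_ns[b] / latency_ns[a])
    print(f"{a:>5} -> {b:<4} {gap:.1f} orders of magnitude")
```

Instead of one five-order jump, the hierarchy now steps down three orders from DRAM to SSD and two more from SSD to disk.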

Not only have we smoothed out the capacity curve, but we’ve also addressed that sharp falloff in performance. Those of you who read our most recent VelociRaptor VR200M review will remember that we recommend a fast SSD for your OS/applications, and a large HDD for games, media and other large data storage. The role of the SSD in the memory hierarchy today is unfortunately user-managed. You have to manually decide what goes on your NAND vs. mechanical storage, but we’re going to see some solutions later this year that hope to make some of that decision for you.

Why does this matter? If left unchecked, sharp dropoffs in performance in the memory/storage hierarchy can result in poor performance scaling. If your CPU doubles in peak performance, but it has to wait for data the majority of the time, you’ll rarely realize that performance increase. In essence, the transistors that gave your CPU its performance boost will have been wasted die area and power.
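That argument is just Amdahl’s law applied to storage. A minimal sketch (the 60% I/O-wait figure below is hypothetical, chosen only for illustration):

```python
def realized_speedup(cpu_speedup: float, io_fraction: float) -> float:
    """Amdahl's law: only the compute fraction benefits from a faster CPU;
    time spent waiting on storage is unaffected."""
    return 1.0 / ((1.0 - io_fraction) / cpu_speedup + io_fraction)

# Hypothetical workload that waits on storage 60% of the time:
# doubling CPU performance yields only a 1.25x overall speedup.
print(realized_speedup(2.0, 0.60))  # → 1.25
```

The faster the CPU gets relative to storage, the larger the I/O-bound fraction becomes, so the returns on those extra transistors keep shrinking.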

Thankfully we tend to see new levels in the memory/storage hierarchy injected preemptively. We’re not yet at the point where all performance is bound by mass storage, but as applications like virtualization become even more prevalent the I/O bottleneck is only going to get worse.

Motivation for the Addiction

It’s this sharp falloff in performance between main memory and mass storage that makes SSDs so enticing. I’ve gone much deeper into how these things work already, so if you’re curious I’d suggest reading our SSD Relapse.

SSD performance is basically determined by three factors: 1) NAND, 2) firmware and 3) controller. The first point is obvious: SLC is faster (and more expensive) than MLC, but is mostly limited to servers. Firmware is extremely important: it handles mapping data to flash, manages the data that’s written to the drive and ensures that the SSD is always operating as fast as possible. The controller on its own is less important than you’d think; it’s really the combination of firmware and controller that determines whether or not an SSD is good.

For those of you who haven’t been paying attention, we basically have six major controller manufacturers competing today: Indilinx, Intel, Micron, Samsung, SandForce and Toshiba. Micron uses a Marvell controller, and Toshiba has partnered up with JMicron on some of its latest designs.

Of that list, the highest performing SSDs come from Indilinx, Intel, Micron and SandForce. Micron makes the only 6Gbps controller, while the rest are strictly 3Gbps. Intel is the only manufacturer on our shortlist that we’ve been covering for a while; the rest are relative newcomers to the high end SSD market. Micron just recently shipped its first competitive SSD, the RealSSD C300, as did SandForce.

We first met Indilinx a little over a year ago when OCZ introduced a brand new drive called the Vertex. While it didn’t wow us with its performance, OCZ’s Vertex seemed to have the beginnings of a decent alternative to Intel’s X25-M. Over time the Vertex and other Indilinx drives got better, eventually earning the title of Intel alternative. You wouldn’t get the same random IO performance, but you’d get better sequential performance and better pricing.

Several months later OCZ introduced another Indilinx based drive called the Agility. It used the same Indilinx Barefoot controller as the Vertex; the only difference was that the Agility used 50nm Intel or 40nm Toshiba NAND. In some cases this resulted in lower performance than the Vertex, while in others we actually saw it pull ahead.

OCZ released many other derivatives based on Indilinx’s controller. We saw the Vertex EX which used SLC NAND for enterprise customers, as well as the Agility EX. Eventually as more manufacturers started releasing Indilinx based drives, OCZ attempted to differentiate by releasing the Vertex Turbo. The Vertex Turbo used an OCZ exclusive version of the Indilinx firmware that ran the controller and external DRAM at a higher frequency.

Despite a close partnership with Indilinx, earlier this month OCZ announced that its next generation Vertex 2 and Agility 2 drives would not use Indilinx controllers. They’d instead be SandForce based.

OCZ's Agility 2 and the SF-1200
Comments

  • speden - Wednesday, April 21, 2010 - link

I still don't understand if the SandForce compression increases the available storage space. Is that discussed in the article somewhere? Is the user storage capacity 93.1 GB if you write incompressible data, but much larger if you are writing normal data? If so that would effectively lower the cost per gigabyte quite a bit.
  • Ryan Smith - Wednesday, April 21, 2010 - link

    It does not increase the storage capacity of the drive. The OS still sees xxGB worth of data as being on the drive even if it's been compressed by the controller, which means something takes up the same amount of reported space regardless of compressibility.

    The intention of SandForce's compression abilities was not to get more data on to the drive, it was to improve performance by reading/writing less data, and to reduce wear & tear on the NAND as a result of the former.

    If you want to squeeze more storage space out of your SSD, you would need to use transparent file system compression. This means the OS compresses things ahead of time and does smaller writes, but the cost is that the SF controller won't be able to compress much if anything, negating the benefits of having the controller do compression if this results in you putting more data on the drive.
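A quick sketch of the behavior described above, using Python’s zlib purely as a stand-in for SandForce’s proprietary (and undisclosed) compression:

```python
import zlib

# A 4KB page of highly compressible data. The host/OS accounts for the
# full logical size; a compressing controller would only write the
# compressed bytes to NAND.
logical_page = b"A" * 4096
physical_write = zlib.compress(logical_page)

print(len(logical_page))    # logical space consumed: 4096 bytes
print(len(physical_write))  # bytes actually hitting NAND: far fewer
# Reported capacity usage is unchanged; only write volume (and wear) drops.
```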
  • arehaas - Thursday, April 22, 2010 - link

The Sandforce drives report the same space available to the user, even if there is less data written to the drive. Does it mean that Sandforce drives should last longer because there are fewer actual writes to the NAND? One would reach the 10 million (or whatever) writes with Sandforce later than with other drives.
  • Ryan Smith - Thursday, April 22, 2010 - link

Exactly.
  • MadMan007 - Wednesday, April 21, 2010 - link

On a note somewhat related to another post here, I have a request. Could you guys please post final 'available to OS' capacity in *gibibytes*? (or if you must post gigabytes to go along with the marketers at the drive companies make it clear you are using GIGA and not GIBI) After I realized how much 'real available to OS' capacity can vary among drives which supposedly have the same capacity this would be very useful information...people need to know how much actual data they can fit on the drives and 'gibibytes available to the OS' is the best standard way to do that.
  • vol7ron - Wednesday, April 21, 2010 - link

    Attach all corrections to this post.

    1st Paragraph: incredible -> incredibly
  • pattycake0147 - Thursday, April 22, 2010 - link

On the second page, there is no link where the difference between the 1500 and the 1200 is referenced.
  • Roland00 - Thursday, April 22, 2010 - link

The problem with logarithmic scales is your brain interprets the data linearly instead of exponentially unless you force yourself not to.
  • Per Hansson - Thursday, April 22, 2010 - link

I agree
Just as an idea, you could have the option to click the graph and get a bigger version; I guess it would be something like 600x3000 in size but would give another angle on the data
Because for 90% of your users I think a logarithmic scale is very hard to comprehend :)
  • Impulses - Thursday, April 22, 2010 - link

    Not sure if this is the best place to post this but as I just remembered to do so, here goes... I have no issues whatsoever with your site re-design on my desktop, it's clean, it's pretty, looks fine to me.

    HOWEVER, it's pretty irritating on my netbook and on my phone... On the netbook the top edge of the page simply takes up too much space and leads to more scrolling than necessary on every single page. I'm talking about the banner ad, followed by the HUGE Anandtech logo (it's bigger than before isn't it), flanked by site navigation links, and followed by several more large bars for all the product categories. Even the font's big on those things... I don't get it, seems to take more space than necessary.

    Those tabs or w/e on the previous design weren't as clean looking, but they were certainly more compact. At 1024x600 I can barely see the title of the article I'm on when I'm scrolled all the way up (or not at all if I've enlarged text size a notch or two). It's not really that big a deal, but it just seems like there's a ton of wasted space around the site navigation links and the logo. /shrug

    Now on to the second issue, on my phone while using Opera Mini I'm experiencing some EXTREME slowdowns when navigating your page... This is a much bigger deal, it's basically useless... Can't even scroll properly. I've no idea what's wrong, since Opera Mini doesn't even load ads or anything like that, but it wasn't happening a week or two ago either so it's not because of the site re-design itself...

It's something that has NEVER happened to me on any other site tho, they may load slow initially, but after it's open I've never had a site scroll slowly or behave sluggishly within Opera Mini like Anandtech is doing right now... Could it be a rogue ad or something?

    I load the full-version of all pages on Opera Mini all the time w/o issue, but is there a mobile version of Anandtech that might be better suited for my phone/browser combination in the meantime?
