Latency vs. Bandwidth: What to Look for in an SSD

It took me months to wrap my head around it, but I think I finally get it. We often talk about the concepts of bandwidth and latency, but rarely are they as tangible as they are here today.

When I speak of latency I’m talking about how long it takes to complete a request, or fetch a block of data. When I mention bandwidth, I’m talking about how much you can read/write at once. Think of latency as the speed limit and bandwidth as the number of lanes on a highway.

If you’re the only car on the highway, you’re going to notice the impact of latency more than bandwidth. A speed limit of 70 mph instead of 35 is going to impact you much more than if you added more lanes to the road.

If you’re a city planner, however, and your only concern is getting as many people to work and back as possible, you’re going to notice the impact of bandwidth more than latency. It doesn’t matter how fast a single car can move; what matters is how many cars you can move during rush hour traffic.

I’d argue that if you’re a desktop user and you’re using an SSD as a boot/application drive, what will matter most is latency. After you’ve got your machine set up the way you want it, the majority of accesses are going to be sequential reads and random reads/writes of very small sizes: updating file tables, scanning individual files for viruses, writing your web browser’s cache. What limits these tasks is latency, not bandwidth.

If you were constantly moving large multi-gigabyte files to and from your disk then total bandwidth would be more important. SSDs are still fairly limited in size and I don’t think you’ll be backing up many Blu-ray discs to them given their high cost per GB. It’s latency that matters here.
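
The highway analogy maps directly onto a little arithmetic. The sketch below (Python, with made-up latency figures chosen purely for illustration, not measured from any real drive) shows how per-request latency, not peak bandwidth, caps throughput for small accesses:

```python
# Back-of-the-envelope sketch: why latency caps effective bandwidth for
# small accesses. All latency numbers here are illustrative assumptions,
# not measurements from any real drive.

KB = 1024
MB = 1024 * KB

def throughput(block_size_bytes, latency_s, queue_depth=1):
    """Effective throughput when each request takes latency_s seconds
    and queue_depth requests are kept in flight (Little's law)."""
    iops = queue_depth / latency_s
    return block_size_bytes * iops  # bytes per second

# 4KB random reads at a (hypothetical) 0.1 ms per request, one at a time:
random_bw = throughput(4 * KB, 0.0001)   # ~39 MB/s
# 128KB sequential reads at a (hypothetical) 0.5 ms per request:
seq_bw = throughput(128 * KB, 0.0005)    # ~250 MB/s
```

Halving the latency doubles the small-access throughput even if the drive’s peak transfer rate never changes — that’s the speed-limit effect for the lone car on the highway.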

Obviously I’ll be testing both latency and bandwidth, but I wanted to spend a moment talking about the synthetic latency tests.

Iometer is a tool that can simulate almost any combination of disk accesses you can think of. If you know how an application or OS hits the disk, Iometer can simulate it. While random disk accesses are the reason that desktop/notebook hard drives feel so slow, those accesses are generally confined to particular areas of the disk. For example, when you’re writing a file, the OS needs to update a table mapping the file to the LBAs it allocated for it. The table that contains all of the LBA mappings is most likely located far away from the file you’re writing, so the process of writing files to the same area can look like random writes to two different groups of LBAs. But the accesses aren’t spread out across the entire drive.
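
To illustrate (this is a rough sketch, not Iometer itself, and the region sizes and locations are hypothetical), accesses like these are random within a region yet nowhere near full-span random:

```python
import random

BLOCK = 4096  # 4KB request size

def localized_random_offsets(region_start, region_size, count, rng):
    """Random, 4KB-aligned byte offsets confined to one region of the
    drive: writes that are 'random' yet stay within a small LBA range."""
    blocks = region_size // BLOCK
    return [region_start + rng.randrange(blocks) * BLOCK for _ in range(count)]

rng = random.Random(42)
# Hypothetical layout: the file-mapping table lives in the first 64MB of
# the drive, while the file being written lives 20GB away. Both streams
# look random, but each stays inside its own region.
table_writes = localized_random_offsets(0, 64 * 2**20, 8, rng)
file_writes = localized_random_offsets(20 * 2**30, 8 * 2**30, 8, rng)
```

Interleave the two streams and the drive sees random writes to two distinct LBA groups, which is the pattern described above.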

In my original X25-M article I ran a 4KB random write test over the entire span of the drive. That’s a harsher workload than even the toughest user will ever generate on a desktop. For this article I’m limiting the random write test to an 8GB portion of the drive; it makes the benchmark a little more realistic for a desktop/notebook workload.

The other thing I’ve done is increased the number of outstanding IOs from 1 to 3. I’ve found that in a multitasking user environment Vista will generally have a maximum of 3 or 4 outstanding IOs (read/write requests).

The combination of the two results in a test of 100% random 4KB writes, with 3 outstanding IOs, to an 8GB portion of the drive for 3 minutes. That should be enough time to get a general idea of how well these drives will perform on random write latency in a worst-case, but realistic, usage scenario.
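
For reference, the test parameters, plus the Little’s-law identity that turns a reported IOPS figure back into an average per-request latency, can be summarized in a few lines of Python. The 6,000 IOPS drive in the comment is a made-up example, not a result from this article:

```python
# Parameters of the random-write test described above.
IO_SIZE = 4 * 1024   # 4KB writes
SPAN = 8 * 2**30     # confined to an 8GB region of the drive
QUEUE_DEPTH = 3      # outstanding I/Os
RUNTIME_S = 180      # 3 minutes

def avg_latency_ms(iops, queue_depth=QUEUE_DEPTH):
    """Little's law: with queue_depth requests always in flight,
    average per-request latency is queue_depth / IOPS."""
    return queue_depth / iops * 1000.0

# A hypothetical drive sustaining 6,000 write IOPS at queue depth 3
# would be averaging 0.5 ms per 4KB write.
latency = avg_latency_ms(6000)
```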

250 Comments


  • Luddite - Friday, March 20, 2009 - link

    So even with the TRIM command, when working with large files, say, in Photoshop and saving multiple layers, will the performance still drop off?
  • proviewIT - Thursday, March 19, 2009 - link

    I bought a Vertex 120GB and it is NOT working on my NVIDIA chipset motherboard. Has anyone run into the same problem? I tried an Intel chipset motherboard and it seems OK.
    I used HDTach to test the read/write performance 4 days ago, and wow, it was amazing: 160MB/s writes. But today it felt slower, so I ran HDTach again, and it's down to single-digit MB per second. Can I recover it, or do I need to return it?
  • kmmatney - Thursday, March 19, 2009 - link

    Based on the results and price, I would say that the OCZ Vertex deserves an Editor's Choice award of some sort (Gold, Silver)...
  • Tattered87 - Thursday, March 19, 2009 - link

    While I must admit I skipped over some of the more technical bits where SSDs were explained in detail, I read the summaries, and this article was extremely helpful. I've been wanting to get one of these for a long time now, but the technology seemed too immature to justify such a hefty investment, until now.

    After reading about OCZ's response to you and how they've stepped up and are willing to sacrifice unimportant benchmark numbers in favor of lower latencies, I actually decided to purchase one myself. Figured I might as well show my appreciation to OCZ by grabbing a 60GB SSD, not to mention it looks like it's by far the best purchase I can make SSD-wise for $200.

    Thanks for the awesome article, was a fun read, that's for sure.
  • bsoft16384 - Thursday, March 19, 2009 - link

    Anand, I don't want to sound too negative in my comments. While I wouldn't call them unusable, there's no doubt that the random write performance of the JMicron SSDs sucks. I'm glad that you're actually running random I/O tests when so many other websites just run HDTune and call it a day.

    That X25-M for $340 is looking mighty tempting, though.
  • MrSpadge - Thursday, March 19, 2009 - link

    Hi,

    first: great article, thanks to Anand and OCZ!

    Something crossed my mind when I saw the firmware-based trade-off between random writes and sequential transfer rates: couldn't that be adjusted dynamically to get the best of both worlds? Default to the current behaviour, but switch to something resembling the old one when extensive sequential transfers are detected?

    Of course this necessitates that the processor be able to handle the additional load and that the firmware changes don't involve permanent changes to the organization of the data.

    Maybe the OCZ team has already thought about this, and maybe nobody's going to read this post, buried deep within the comments...

    MrS
  • Per Hansson - Thursday, March 19, 2009 - link

    Great work on the review Anand
    I really enjoyed reading it and learning from it
    Will there be any tests of the old-timers like Mtron, etc.?
  • tomoyo - Thursday, March 19, 2009 - link

    That was kind of strange to me too. But I assume Anand really means the desktop market, not the server storage/business market, since it's highly doubtful that the general consumer will spend many times as much money on 15K SAS drives.
  • Gary Key - Thursday, March 19, 2009 - link

    The intent was that it's the fastest consumer desktop drive; the text has been updated to reflect that.
  • tomoyo - Thursday, March 19, 2009 - link

    I've always been someone who wants real clarity and truth in the information on the internet. That's a problem, because probably 90% of it isn't. But Anand is one man I trust a lot because of great and complete articles such as this. This is truly the first time I feel like I really understand what goes into SSD performance and why it can be good or bad. Thank you so much for being the most insightful voice in the hardware community. And keep fighting those manufacturers who are scared of the facts getting in the way of their 200MB/s marketing BS.
