Latency vs. Bandwidth: What to Look for in an SSD

It took me months to get my head wrapped around it, but I think I finally get it. We often talk about the concepts of bandwidth and latency but rarely are they as tangible as they are here today.

When I speak of latency, I’m talking about how long it takes to complete a request or fetch a block of data. When I mention bandwidth, I’m talking about how much you can read/write at once. Think of latency as the speed limit and bandwidth as the number of lanes on a highway.

If you’re the only car on the highway, you’re going to notice the impact of latency more than bandwidth. A speed limit of 70 mph instead of 35 is going to impact you much more than if you added more lanes to the road.

If you’re a city planner, however, and your only concern is getting as many people to work and back as possible, you’re going to notice the impact of bandwidth more than latency. It doesn’t matter how fast a single car can move; what matters is how many cars you can move during rush hour traffic.

I’d argue that if you’re a desktop user and you’re using an SSD as a boot/application drive, what will matter most is latency. After you’ve got your machine set up the way you want it, the majority of accesses are going to be sequential reads and random reads/writes of very small sizes: updating file tables, scanning individual files for viruses, writing your web browser cache. What influences these tasks is latency, not bandwidth.

If you were constantly moving large, multi-gigabyte files to and from your disk, then total bandwidth would be more important. But SSDs are still fairly limited in size, and I don’t think you’ll be backing up many Blu-ray discs to them given their high cost per GB. It’s latency that matters here.
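
To put rough numbers behind that argument, here’s a back-of-the-envelope sketch in Python. The 0.1 ms latency and 250 MB/s bandwidth figures are purely illustrative assumptions, not measurements of any particular drive; the point is that small 4KB requests are limited almost entirely by per-request latency, while only large transfers ever run into the bandwidth ceiling.

```python
# Back-of-the-envelope comparison of latency-bound vs. bandwidth-bound transfers.
# All figures below are illustrative assumptions, not measurements of a real drive.

KB = 1024
MB = 1024 * KB

request_size = 4 * KB          # a small random access, e.g. a file-table update
access_latency_s = 0.1e-3      # assume 0.1 ms per request
queue_depth = 1                # one outstanding IO, like a lone car on the highway

# With one request in flight, throughput is just request size over latency.
random_throughput = request_size * queue_depth / access_latency_s
print(f"4KB random, QD1: {random_throughput / MB:.1f} MB/s")    # ~39 MB/s

sequential_bandwidth = 250 * MB    # assume a 250 MB/s sequential ceiling
large_transfer = 4 * 1024 * MB     # a 4GB file copy
print(f"4GB sequential copy: {large_transfer / sequential_bandwidth:.0f} s")

# Halving the latency doubles small-block throughput; adding "lanes"
# (bandwidth) does nothing for it until the queue depth goes up.
```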

Obviously I’ll be testing both latency and bandwidth, but I wanted to spend a moment talking about the synthetic latency tests.

Iometer is a tool that can simulate any combination of disk accesses you can think of. If you know how an application or OS hits the disk, Iometer can simulate it. While random disk accesses are the reason that desktop/notebook hard drives feel so slow, the accesses are generally confined to particular areas of the disk. For example, when you’re writing a file the OS needs to update a table mapping the file to the LBAs it allocated for it. The table that contains all of the LBA mappings is most likely located far away from the file you’re writing, so the process of writing a file can look like random writes to two different groups of LBAs. But the accesses aren’t spread out across the entire drive.
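
A toy model of that pattern looks something like the Python sketch below. The LBA ranges for the file table and the file data are made up for illustration (real placement is up to the file system), but it shows why the workload is "random" in the sense that writes ping-pong between two clusters, while still touching only a sliver of the drive.

```python
# Toy model: writing a file generates writes to two separate clusters of LBAs
# (the file data and the file-table update), not writes scattered across the
# whole drive. All LBA ranges here are made-up illustrative values.
import random

SECTOR = 512
drive_lbas = (80 * 10**9) // SECTOR           # an ~80GB drive, in sectors
table_region = range(1_000_000, 1_010_000)    # hypothetical MFT/FAT location
data_region = range(20_000_000, 20_100_000)   # hypothetical file-data location

writes = []
for _ in range(1000):
    writes.append(random.choice(table_region))  # update the LBA mapping table
    writes.append(random.choice(data_region))   # write the file contents

covered = len(table_region) + len(data_region)
print(f"Writes ping-pong between two clusters covering {covered} LBAs "
      f"({covered / drive_lbas:.3%} of the drive)")
```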

In my original X25-M article I ran a 4KB random write test over the entire span of the drive. That’s a bit more ridiculous than even the toughest user will be on his/her desktop. For this article I’m limiting the random write test to an 8GB space of the drive; it makes the benchmark a little more realistic for a desktop/notebook workload.

The other thing I’ve done is increase the number of outstanding IOs from 1 to 3. I’ve found that in a multitasking user environment, Vista will generally have a maximum of 3 or 4 outstanding IOs (read/write requests).

The combination of the two results in a 100% random write of 4KB blocks, with 3 outstanding IOs, to an 8GB portion of the drive for 3 minutes. That should be enough time to get a general idea of how well these drives will perform when it comes to random write latency in a worst-case but realistic usage scenario.
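
For anyone who wants to approximate this workload without Iometer, here’s a rough Python sketch. It won’t reproduce Iometer’s numbers (interpreter overhead and OS caching get in the way, and it relies on the POSIX-only os.pwrite), but it has the same shape: 4KB random writes confined to an 8GB region, three writer threads standing in for three outstanding IOs, running for three minutes. The file path and thread-based queue depth are my own simplifications.

```python
# Rough approximation of the benchmark above: 100% random 4KB writes to an
# 8GB region for 3 minutes with ~3 outstanding IOs. Three threads stand in
# for queue depth 3; expect lower numbers than Iometer reports.
import os, random, threading, time

PATH = "testfile.bin"     # hypothetical target file on the drive under test
REGION = 8 * 1024**3      # 8GB test region (ideally pre-allocated, not sparse)
BLOCK = 4096              # 4KB transfer size
DURATION = 180            # 3 minutes
QUEUE_DEPTH = 3           # outstanding IOs

completed = [0] * QUEUE_DEPTH

def writer(idx):
    fd = os.open(PATH, os.O_RDWR | os.O_CREAT)
    buf = os.urandom(BLOCK)
    deadline = time.time() + DURATION
    while time.time() < deadline:
        offset = random.randrange(REGION // BLOCK) * BLOCK   # 4KB-aligned
        os.pwrite(fd, buf, offset)
        os.fsync(fd)      # push the write to the drive, not the OS cache
        completed[idx] += 1
    os.close(fd)

threads = [threading.Thread(target=writer, args=(i,)) for i in range(QUEUE_DEPTH)]
for t in threads:
    t.start()
for t in threads:
    t.join()

iops = sum(completed) / DURATION
print(f"{iops:.0f} IOPS, {iops * BLOCK / 1024**2:.1f} MB/s, "
      f"~{QUEUE_DEPTH / iops * 1000:.2f} ms average latency")
```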

Comments

  • Jamor - Wednesday, March 18, 2009 - link

    The best tech article I've ever read, and I've read a few.
  • haze4peace - Wednesday, March 18, 2009 - link

    Wow, excellent article and so much useful information presented in an easy-to-understand way. I have just recently been paying attention to SSDs, and thanks to this article I am armed with the information to make the correct choice for my needs. Thanks, AnandTech; it's the deep and honest articles like these that keep me coming back for more.
  • Alseki - Wednesday, March 18, 2009 - link

    I just registered simply to say: great article. Really informative and enjoyable to read.
  • alexsch8 - Wednesday, March 18, 2009 - link

    Anand,

    Thank you for this article, very informative.

    Looking at the example you gave with your self-manufactured SSD drive: if I save the DOC, I use up a page. Based on what you are saying, if I make a change to that DOC, it would then be saved in the next page instead of overwriting the existing page? If that is true, then the file allocation system (FAT or MFT) itself would contribute quite a bit to the 'filling up of pages' phenomenon. Could you elaborate on whether the proposed file system for SSDs addresses this?
  • Ytterbium - Wednesday, March 18, 2009 - link

    Fantastic article, shame that the vendors blacklisted you for telling the truth and OCZ rock for working so hard to address issues.

    I'll be ordering my Intel SSD soon, and I'll definitely consider the Summit when it comes out for my encoding rig, since sequential writes matter to me there.
  • mindless1 - Wednesday, March 18, 2009 - link

    Great article, but I have to disagree with the significance of the passage that suggested the Indilinx controller makes data loss as bad on those SSDs as on a conventional hard drive.

    The primary cause of data loss is mechanical or component failure, not power loss. If we want to consider power loss, it's not just the drive which is prone to lose data, the entire system memory suffers far more data loss than that.

    Further, a sufficiently sized supercapacitor should keep the drive operating for a period of time beyond when the rest of the system would be operational; that could be enough for the controller to finish writing all received data to flash (or just use a UPS; that's what they're for).

    Second, I can't believe that OCZ only tests designs with HDTach and ATTO; I think it more likely they knew of the problem but didn't expect anyone to find it so quickly, and felt the higher sequential speeds made it more marketable. This makes me feel that manufacturers, and then online sellers, should differentiate their drives with a standardized random read/write score.

    What would be really nice is if the Indilinx-based SSDs had an application available, similar to an HDD acoustic-management bit-changing app, that lets the owner set their own preference for IO versus sequential read performance.
  • gomakeit - Wednesday, March 18, 2009 - link

    This is by far the BEST article on SSDs I've ever read! Great job, Anand, and yes, I read every single word of it!
  • MagicPants - Wednesday, March 18, 2009 - link

    Don't they ever try using their own devices? One second of latency should slap any user in the face. It should be very easy for a manufacturer to build a system with their new technology, put it in front of people, and see what happens, but apparently they're not doing this.

    They wait for reviewers to do the work for them and then get upset when they find a problem.

    What the manufacturers should be taking away from this article is:

    1) Try your competitor's products
    2) Try your own products
    3) Try them in real life as opposed to synthetic tests
    4) Compare everything you've tried and market the performance that matters
  • 7Enigma - Thursday, March 19, 2009 - link

    But that would make sense....and we know marketing rarely does.
  • paulinus - Wednesday, March 18, 2009 - link

    This article is great. Finally someone has done SSD tests right and said out loud what we, the customers, actually get for those hefty price tags.
    I had assumed the only real choices were Intel and the new OCZs; now I know, and big kudos for that.
    I just need a bit more $$ for an X25-M; it'll be ideal for heavy workstation use, and the biggest Vertex will replace the WD Black in my aging 6910p :)
