Latency vs. Bandwidth: What to Look for in an SSD

It took me months to wrap my head around it, but I think I finally get it. We often talk about the concepts of bandwidth and latency, but rarely are they as tangible as they are here today.

When I speak of latency I’m talking about how long it takes to complete a request, or fetch a block of data. When I mention bandwidth, I’m talking about how much you can read/write at once. Think of latency as the speed limit and bandwidth as the number of lanes on a highway.
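The distinction is easy to put in numbers. Here's a minimal sketch modeling a request's total time as latency plus size divided by bandwidth; the drive figures are illustrative assumptions, not measurements of any particular product:

```python
# Simple model: time to service one request = latency + size / bandwidth.
# The drive numbers below are illustrative assumptions, not measurements.
def transfer_time_ms(size_kb, latency_ms, bandwidth_mb_s):
    """Total time (in ms) to service a single request of size_kb kilobytes."""
    return latency_ms + (size_kb / 1024.0) / bandwidth_mb_s * 1000.0

HDD = {"latency_ms": 10.0, "bandwidth_mb_s": 80}   # ~10 ms seek, 80 MB/s
SSD = {"latency_ms": 0.1, "bandwidth_mb_s": 200}   # ~0.1 ms access, 200 MB/s

# A lone 4KB request is almost pure latency: the HDD's 10 ms seek dwarfs
# the fraction of a millisecond spent actually moving the data.
small_hdd = transfer_time_ms(4, **HDD)
small_ssd = transfer_time_ms(4, **SSD)

# A 1GB sequential transfer is almost pure bandwidth: the seek is noise
# next to the seconds spent streaming data off the platters or NAND.
big_hdd = transfer_time_ms(1024 * 1024, **HDD)
big_ssd = transfer_time_ms(1024 * 1024, **SSD)
```

With these made-up numbers, cutting latency a hundredfold makes the 4KB request nearly a hundred times faster, while the 1GB transfer barely notices; for the big file, only the bandwidth (lane count) matters.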

If you’re the only car on the highway, you’re going to notice the impact of latency more than bandwidth. A speed limit of 70 mph instead of 35 is going to impact you much more than if you added more lanes to the road.

If you’re a city planner, however, and your only concern is getting as many people as possible to work and back, you’re going to notice the impact of bandwidth more than latency. It doesn’t matter how fast a single car can move; what matters is how many cars you can move during rush hour.

I’d argue that if you’re a desktop user and you’re using an SSD as a boot/application drive, what will matter most is latency. After you’ve got your machine set up the way you want it, the majority of accesses are going to be sequential reads and random reads/writes of very small files: updating file tables, scanning individual files for viruses, writing your web browser’s cache. What governs these tasks is latency, not bandwidth.

If you were constantly moving large multi-gigabyte files to and from your disk, then total bandwidth would be more important. But SSDs are still fairly limited in size, and I don’t think you’ll be backing up many Blu-ray discs to them given their high cost per GB. It’s latency that matters here.

Obviously I’ll be testing both latency and bandwidth, but I wanted to spend a moment talking about the synthetic latency tests.

Iometer is a tool that can simulate virtually any combination of disk accesses you can think of. If you know how an application or OS hits the disk, Iometer can simulate it. While random disk accesses are the reason that desktop/notebook hard drives feel so slow, those accesses are generally confined to particular areas of the disk. For example, when you’re writing a file the OS needs to update a table mapping the file to the LBAs allocated for it. That table is most likely located far away from the file you’re writing, so the process of writing files to the same area can look like random writes to two distant groups of LBAs. But the accesses aren’t spread across the entire drive.
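To make the "two distant groups of LBAs" idea concrete, here's a toy sketch of the access stream; the drive geometry and region sizes are invented for illustration:

```python
import random

# Hypothetical layout: file-table metadata lives near the start of the
# drive, while the file's data blocks live tens of GB away.
BLOCK = 4096
METADATA_REGION = (0, 64 * 1024**2)                        # first 64 MB
DATA_REGION = (32 * 1024**3, 32 * 1024**3 + 8 * 1024**3)   # an 8 GB span

def file_write_accesses(n_blocks):
    """Yield (offset, kind) pairs for writing a file: each data write is
    paired with a metadata update, so the stream ping-pongs between two
    distant LBA groups rather than wandering over the whole drive."""
    for _ in range(n_blocks):
        data_off = random.randrange(*DATA_REGION) // BLOCK * BLOCK
        meta_off = random.randrange(*METADATA_REGION) // BLOCK * BLOCK
        yield (data_off, "data")
        yield (meta_off, "metadata")

accesses = list(file_write_accesses(1000))
```

Seen one request at a time the pattern looks random, yet every access lands in one of two small regions, which is exactly why a whole-drive random test overstates the punishment a desktop workload delivers.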

In my original X25-M article I ran a 4KB random write test over the entire span of the drive. That’s a bit more ridiculous than even the toughest user will be on his/her desktop. For this article I’m limiting the random write test to an 8GB space of the drive; it makes the benchmark a little more realistic for a desktop/notebook workload.

The other thing I’ve done is increase the number of outstanding IOs from 1 to 3. I’ve found that in a multitasking user environment Vista will generally have a maximum of 3 or 4 outstanding IOs (read/write requests).

The combination of the two results in 100% random 4KB writes, with 3 outstanding IOs, to an 8GB portion of the drive for 3 minutes. That should be enough time to get a general idea of how well these drives will perform when it comes to random write latency in a worst-case but realistic usage scenario.
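Iometer itself drives the disk directly, but the access pattern is easy to approximate. Here's a rough Python sketch of the workload just described; the file name and the scaled-down usage numbers are placeholders, and Python's synchronous I/O can't genuinely hold 3 IOs in flight, so queue_depth only batches requests per loop pass:

```python
import os
import random
import time

def random_write_test(path, span_bytes, block=4096, queue_depth=3, seconds=180):
    """Issue 100% random block-sized writes confined to the first
    span_bytes of path, roughly mimicking the Iometer workload above.
    queue_depth batches requests; it does not create true parallel IOs."""
    buf = os.urandom(block)                  # one 4KB payload, reused
    ops = 0
    deadline = time.monotonic() + seconds
    with open(path, "r+b") as f:
        while time.monotonic() < deadline:
            for _ in range(queue_depth):
                # Random block-aligned offset inside the restricted span.
                f.seek(random.randrange(span_bytes // block) * block)
                f.write(buf)
            ops += queue_depth
    return ops

# Usage sketch: pre-allocate the test region, run, and report IOPS.
# with open("scratch.bin", "wb") as f:
#     f.truncate(8 * 1024**3)               # the 8GB span from the article
# iops = random_write_test("scratch.bin", 8 * 1024**3, seconds=180) / 180
```

A real harness would use unbuffered/direct I/O and true asynchronous requests to match Iometer; this sketch is only meant to show the shape of the test, not to produce comparable numbers.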

270 Comments

  • Hellfire26 - Thursday, March 26, 2009 - link

    In reference to SSDs, I have read a lot of articles and comments about improved firmware and operating system support. I hope manufacturers don't forget about the on-board RAID controller.

    From the articles and comments by users around the web who have tested SSDs in RAID 0, I believe that two Intel X25-M SSDs in a RAID 0 configuration would more than saturate current on-board RAID controllers.

    Intel is doing a die shrink of the NAND memory going into their SSDs this fall. I would expect these new Intel SSDs to show faster read and write times. Other manufacturers will also find ways to increase the speed of their SSDs.

    SSDs scale well in a RAID configuration. It would be a shame if the on-board RAID controller limited our throughput; the alternative would be very expensive add-in RAID cards.
  • FlaTEr1C - Wednesday, March 25, 2009 - link

    Anand, once again you've written an article that no one else could have. This is why I've been reading this site since 2004 and always will. Your articles and reviews are without exception unique and a must-read. Thank you for this thorough background, analysis and review of SSDs.

    I'd been looking for a long time for a way to make my desktop experience faster, and I think I'll order a 60GB Vertex. €200 (in Germany) is still a lot of money, but it will be worth it.

    Once again, great work Anand!
  • blackburried - Wednesday, March 25, 2009 - link

    It's referred to as "discard" in the kernel functions.

    It works very well with SSDs that support TRIM, like Fusion-io's drives.
  • Iger - Wednesday, March 25, 2009 - link

    This is the best review I've read in a very long time.
    Thank you very much!
  • BailoutBenny - Tuesday, March 24, 2009 - link

    Great in-depth article on flash-based SSDs. I'm waiting for PRAM though.
  • orclordrh - Tuesday, March 24, 2009 - link

    Very illuminating article, very well written and researched. It made me glad that I didn't pull the trigger on an SSD for my i7 machine, and regret not buying OCZ memory! I'm interested in adding an SSD as the scratch disk for Photoshop CS4. I don't really launch applications very often (maybe once a week, on the weekly reboot) and keep 6-8 apps open at all times; I have 12GB of memory for that. The benchmarks were very interesting, but what sort of activity does Photoshop scratch usage create? Large files or random writes? What type of SSD would be most cost-effective here?
    An SSD does sound better than an HDD!
  • semo - Wednesday, March 25, 2009 - link

    Wait for DDR3 to enter the mainstream and buy loads of memory.

    Use a ramdisk for your Adobe scratch area. It's much faster than an SSD, with no wear to worry about (not that you'd worry much with modern SSDs anyway).

    http://www.ghacks.net/2007/12/14/use-a-ramdisk-to-...

    There is also a paid-for, more feature-rich ramdisk out there; I can't remember the name.
  • strikeback03 - Wednesday, March 25, 2009 - link

    I'll have to check when I get home, but I believe the recommended size for the scratch disk is upwards of 10GB, so you'd need a motherboard that supports a lot of RAM to cover main memory plus a scratch disk.
  • strikeback03 - Wednesday, March 25, 2009 - link

    I was wondering the same thing. I'd guess it would involve a lot of writing/erasing, so an SSD might not be the best from a longevity standpoint, but if your system hits the scratch disk often, the speed might make it worthwhile.
