Latency vs. Bandwidth: What to Look for in an SSD

It took me months to wrap my head around it, but I think I finally get it. We often talk about the concepts of bandwidth and latency, but rarely are they as tangible as they are here today.

When I speak of latency, I'm talking about how long it takes to complete a request, or fetch a block of data. When I mention bandwidth, I'm talking about how much you can read/write at once. Think of latency as the speed limit and bandwidth as the number of lanes on a highway.

If you’re the only car on the highway, you’re going to notice the impact of latency more than bandwidth. A speed limit of 70 mph instead of 35 is going to impact you much more than if you added more lanes to the road.

If you're a city planner, however, and your only concern is getting as many people to work and back as possible, you're going to notice the impact of bandwidth more than latency. It doesn't matter how fast a single car can move; what matters is how many cars you can move during rush hour traffic.
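To put rough numbers on the difference (illustrative figures only, not measurements from any drive in this review): when small requests are handled one at a time, per-request latency alone caps effective throughput, no matter how many "lanes" the drive has.

```python
# Back-of-the-envelope only: illustrative numbers, not measured results.
# With small accesses issued one at a time, per-request latency alone
# determines effective bandwidth.
latency_s = 0.0001                        # assume 0.1 ms per 4KB request
block_bytes = 4 * 1024                    # 4KB per request

iops = 1 / latency_s                      # requests completed per second
mb_per_s = iops * block_bytes / 1024**2   # effective bandwidth

print(f"{iops:.0f} IOPS -> {mb_per_s:.1f} MB/s")
# 10000 IOPS -> 39.1 MB/s: a tiny fraction of any drive's sequential
# rating, which is why latency dominates small random IO.
```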

I'd argue that if you're a desktop user and you're using an SSD as a boot/application drive, what will matter most is latency. After you've got your machine set up the way you want it, the majority of accesses are going to be sequential reads and random reads/writes of very small files: things like updating file tables, scanning individual files for viruses, and writing your web browser's cache. What influences these tasks is latency, not bandwidth.

If you were constantly moving large multi-gigabyte files to and from your disk then total bandwidth would be more important. SSDs are still fairly limited in size and I don’t think you’ll be backing up many Blu-ray discs to them given their high cost per GB. It’s latency that matters here.

Obviously I’ll be testing both latency and bandwidth, but I wanted to spend a moment talking about the synthetic latency tests.

Iometer is a tool that can simulate any combination of disk accesses you can think of. If you know how an application or OS hits the disk, Iometer can simulate it. While random disk accesses are the reason that desktop/notebook hard drives feel so slow, the accesses are generally confined to particular areas of the disk. For example, when you're writing a file, the OS needs to update a table mapping the file to the LBAs allocated for it. The table that contains all of the LBA mappings is most likely located far away from the file you're writing, so the process of writing files to the same area can look like random writes to two different groups of LBAs. But the accesses aren't spread out across the entire drive.
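To make that concrete, here's a toy model (the LBA numbers and layout are entirely made up, not how any particular filesystem lays things out) of how a single file write can generate accesses to two widely separated groups of LBAs:

```python
# Toy model with assumed, made-up LBA regions: writing one file touches
# both the file's data blocks and the allocation table that maps the
# file to those blocks, which lives far away on the disk.
DATA_START = 50_000_000       # hypothetical LBA where the file's data lives
TABLE_START = 1_000           # hypothetical LBA region of the file table

def write_file(num_blocks):
    """Yield the sequence of LBAs a single file write might touch."""
    for i in range(num_blocks):
        yield DATA_START + i          # sequential data write...
        yield TABLE_START + (i % 8)   # ...plus a metadata update far away

print(list(write_file(4)))
# [50000000, 1000, 50000001, 1001, ...] — two tight clusters of LBAs.
# The head jumps between them like a random workload, but the accesses
# are not spread across the whole drive.
```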

In my original X25-M article I ran a 4KB random write test over the entire span of the drive. That’s a bit more ridiculous than even the toughest user will be on his/her desktop. For this article I’m limiting the random write test to an 8GB space of the drive; it makes the benchmark a little more realistic for a desktop/notebook workload.

The other thing I’ve done is increased the number of outstanding IOs from 1 to 3. I’ve found that in a multitasking user environment Vista will generally have a maximum of 3 or 4 outstanding IOs (read/write requests).

The combination of the two results in a 100% random write of 4KB blocks, with 3 outstanding IOs, to an 8GB portion of the drive for 3 minutes. That should be enough time to get a general idea of how well these drives will perform when it comes to random write latency in a worst-case but realistic usage scenario.
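For reference, here's a minimal sketch of that workload in Python. This is not the actual Iometer configuration; the file name, the use of O_DSYNC, and the three synchronous writer threads standing in for 3 outstanding IOs are all my assumptions.

```python
# Rough sketch (not Iometer): approximate the workload with three writer
# threads, each issuing 4KB writes at random offsets inside an 8GB region
# of a test file. POSIX-only; a real benchmark would use O_DIRECT with
# aligned buffers (or async IO) to bypass the page cache entirely.
import os
import random
import threading
import time

SPAN = 8 * 1024**3          # 8GB region the random writes are confined to
BLOCK = 4 * 1024            # 4KB per write
OUTSTANDING = 3             # three concurrent writers ~ 3 outstanding IOs
DURATION = 180              # 3 minutes, as in the test above

counts = [0] * OUTSTANDING  # per-thread completion counts (no lock needed)

def writer(worker_id, fd, deadline):
    buf = os.urandom(BLOCK)
    while time.time() < deadline:
        # Pick a random 4KB-aligned offset within the 8GB span.
        offset = random.randrange(SPAN // BLOCK) * BLOCK
        os.pwrite(fd, buf, offset)
        counts[worker_id] += 1

# O_DSYNC so each write reaches the device instead of the page cache.
fd = os.open("testfile.bin", os.O_RDWR | os.O_CREAT | os.O_DSYNC)
os.ftruncate(fd, SPAN)      # preallocate so every offset is valid
deadline = time.time() + DURATION

threads = [threading.Thread(target=writer, args=(i, fd, deadline))
           for i in range(OUTSTANDING)]
for t in threads: t.start()
for t in threads: t.join()
os.close(fd)

total = sum(counts)
print(f"{total} writes in {DURATION}s = {total / DURATION:.0f} IOPS "
      f"({total * BLOCK / DURATION / 1024**2:.1f} MB/s)")
```

Iometer (or fio) maintains the queue depth with asynchronous IO; three synchronous threads are just the simplest stand-in for the same effect.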

Comments

  • Erickffd - Friday, March 20, 2009 - link

    Also created an account just to post this comment.

    Really impressive and well-done article! I'll stay tuned for further developments and reviews. Thank you so much :)

    Also... very impressed by OCZ's response and commitment to end users' needs and product quality assurance (unfortunately not so common these days among other companies). I'll certainly buy my next SSDs from them to reward and support their healthy policy.

    Be well! ;)
  • Gasaraki88 - Friday, March 20, 2009 - link

    This truly was a GREAT article. I enjoyed reading it, and it was very informative. Thank you so much. That's why Anandtech is the best site out there.
  • davidlants - Friday, March 20, 2009 - link

    This is one of the best tech articles I have ever read; I created an account just to post this comment. I've been a fan of Anandtech for years, and articles like this (and the RV770 article from a while back) show the truly unique perspective and access that Anand has that simply no other tech site can match. GREAT WORK!!!
  • Zak - Friday, March 20, 2009 - link

    I just got the Apex. I'd probably cough up more dough for the Vertex after reading this. However, I've run it for two days as my system disk in a Mac Pro and haven't noticed any issues; it's really fast. But I guess I'll get a Vertex for my Windows 7 build.

    Z.
  • Nemokrad - Friday, March 20, 2009 - link

    What I find intriguing about this article is that these smaller manufacturers do not do real-world internal testing for these things. They should not need third parties like you to figure this shit out for them. Maybe now OCZ will learn what they need to do for the future.
  • JonasR - Friday, March 20, 2009 - link

    Thanks for an excellent article. I have one question: does anyone know which controller is being used in the new Patriot 256GB V.3 SSD?
  • tgwgordon - Friday, March 20, 2009 - link

    Anyone know if the Vertex Anand used had 32M or 64M cache?
  • Dennis Travis - Friday, March 20, 2009 - link

    Excellent and informative article as always Anand. Thanks so much for posting the truth!!
  • IsLNdbOi - Friday, March 20, 2009 - link

    Can't remember what page it was, but you showed some charts on the performance of SSDs at their lowest possible performance levels.

    At their lowest possible performance levels, are they still faster than the 300GB Raptor?
  • Edgemeal - Friday, March 20, 2009 - link

    It's too bad Windows and applications don't let you select where frequently updated data gets stored. If that were an option, an SSD could be used only for loading data (EXE files and support files) and an HDD could store files that are updated frequently. A web browser, for example, is constantly caching files; from the sound of this article, that would kill the performance of an SSD in no time.

    Great article, I'll stick to HDDs for now.
