Latency vs. Bandwidth: What to Look for in an SSD

It took me months to wrap my head around it, but I think I finally get it. We often talk about the concepts of bandwidth and latency, but rarely are they as tangible as they are here today.

When I speak of latency, I’m talking about how long it takes to complete a request, or fetch a block of data. When I mention bandwidth, I’m talking about how much you can read/write at once. Think of latency as the speed limit and bandwidth as the number of lanes on a highway.
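To make the distinction concrete, here’s a minimal back-of-the-envelope sketch in Python. The latency and bandwidth figures are purely illustrative assumptions, not measurements from any drive in this article; the point is only that small scattered accesses are dominated by latency while one big transfer is dominated by bandwidth.

```python
# Illustrative numbers only; assumed values, not measurements from this review.

def random_read_time(num_requests, latency_ms):
    """Many small requests issued one at a time: total time is dominated by latency."""
    return num_requests * latency_ms / 1000.0   # seconds

def sequential_read_time(size_mb, bandwidth_mb_s):
    """One large streaming transfer: total time is dominated by bandwidth."""
    return size_mb / bandwidth_mb_s             # seconds

# 10,000 scattered 4KB reads (40MB total): a 0.1ms-latency drive vs. a 10ms-latency drive.
print(random_read_time(10_000, latency_ms=0.1))      # ~1 second
print(random_read_time(10_000, latency_ms=10.0))     # ~100 seconds
# The same 40MB read sequentially barely registers on either drive at 200MB/s.
print(sequential_read_time(40, bandwidth_mb_s=200))  # 0.2 seconds
```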

If you’re the only car on the highway, you’re going to notice the impact of latency more than bandwidth. A speed limit of 70 mph instead of 35 is going to impact you much more than if you added more lanes to the road.

If you’re a city planner, however, and your only concern is getting as many people to work and back as possible, you’re going to notice the impact of bandwidth more than latency. It doesn’t matter how fast a single car can move; what matters is how many cars you can move during rush hour traffic.

I’d argue that if you’re a desktop user and you’re using an SSD as a boot/application drive, what will matter most is latency. After you’ve got your machine set up the way you want it, the majority of accesses are going to be sequential reads and random reads/writes of very small file sizes: things like updating file tables, scanning individual files for viruses, and writing your web browser cache. What influences these tasks is latency, not bandwidth.

If you were constantly moving large multi-gigabyte files to and from your disk, then total bandwidth would be more important. But SSDs are still fairly limited in size, and I don’t think you’ll be backing up many Blu-ray discs to them given their high cost per GB. It’s latency that matters here.

Obviously I’ll be testing both latency and bandwidth, but I wanted to spend a moment talking about the synthetic latency tests.

Iometer is a tool that can simulate any combination of disk accesses you can think of. If you know how an application or OS hits the disk, Iometer can simulate it. While random disk accesses are the reason that desktop/notebook hard drives feel so slow, those accesses are generally confined to particular areas of the disk. For example, when you’re writing a file, the OS needs to update a table mapping the file to the LBAs it allocated for it. The table that contains all of the LBA mappings is most likely located far away from the file you’re writing, so the process of writing files to the same area can look like random writes to two different groups of LBAs. But the accesses aren’t spread out across the entire drive.
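As a rough illustration of that pattern, the hypothetical sketch below generates the kind of access trace described above: every file write touches the file’s own block of LBAs plus a distant file-table region, so the trace looks like random writes to two separate LBA groups rather than to the whole drive. The region boundaries are made-up values, not real filesystem layouts.

```python
import random

# Hypothetical LBA ranges, chosen only to illustrate the pattern described above.
FILE_DATA_REGION = (50_000_000, 50_020_000)   # where the file's data lives
FILE_TABLE_REGION = (1_000, 2_000)            # where the filesystem's mapping table lives

def write_trace(num_blocks):
    """Each data write is paired with a metadata update far away on the disk."""
    trace = []
    for _ in range(num_blocks):
        trace.append(("data", random.randrange(*FILE_DATA_REGION)))
        trace.append(("metadata", random.randrange(*FILE_TABLE_REGION)))
    return trace

for kind, lba in write_trace(4):
    print(f"{kind:>8} write -> LBA {lba}")
```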

In my original X25-M article I ran a 4KB random write test over the entire span of the drive. That’s more punishing than even the toughest user will ever be on his/her desktop. For this article I’m limiting the random write test to an 8GB portion of the drive; it makes the benchmark a little more realistic for a desktop/notebook workload.

The other thing I’ve done is increase the number of outstanding IOs from 1 to 3. I’ve found that in a multitasking user environment, Vista will generally have a maximum of 3 or 4 outstanding IOs (read/write requests).

The combination of the two results in a test of 100% random 4KB writes, with 3 outstanding IOs, to an 8GB portion of the drive for 3 minutes. That should be enough time to get a general idea of how well these drives will perform when it comes to random write latency in a worst-case, but still realistic, usage scenario.
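For readers who want to approximate the shape of this workload outside of Iometer, here is a minimal Python sketch of the same parameters: 100% random, 4KB-aligned writes confined to an 8GB file, with three worker threads standing in for the three outstanding IOs. The file path and shortened duration are placeholder assumptions, it goes through the OS page cache (which Iometer’s raw-disk access does not), and it is meant only to show the structure of the test, not to reproduce its numbers.

```python
import os, random, threading, time

TEST_FILE = "testfile.bin"     # assumed: an 8GB file pre-created on the drive under test
SPAN_BYTES = 8 * 1024**3       # 8GB test region
BLOCK = 4096                   # 4KB writes
OUTSTANDING_IOS = 3            # approximated here with 3 worker threads
DURATION_S = 30                # shortened from the article's 3 minutes

completed = 0
lock = threading.Lock()

def worker(stop_at):
    global completed
    buf = os.urandom(BLOCK)
    fd = os.open(TEST_FILE, os.O_WRONLY)   # O_DIRECT / unbuffered IO omitted for brevity
    try:
        while time.time() < stop_at:
            offset = random.randrange(SPAN_BYTES // BLOCK) * BLOCK   # random, 4KB-aligned
            os.pwrite(fd, buf, offset)                               # POSIX-only call
            with lock:
                completed += 1
    finally:
        os.close(fd)

stop_at = time.time() + DURATION_S
threads = [threading.Thread(target=worker, args=(stop_at,)) for _ in range(OUTSTANDING_IOS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"~{completed / DURATION_S:.0f} random 4KB write IOPS (page-cached, indicative only)")
```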

Comments

  • strikeback03 - Thursday, March 19, 2009 - link

    I understand your point, but I am not sure you understand the point I (and others) are trying to make. The SSD makers (should) know their market. As they seem to be marketing these SSDs to consumers, they should know that means the vast majority are on Vista or OSX, so the OS won't be optimized for SSDs. It also means the majority will be using integrated disk controllers. Therefore, in choosing an SSD controller which does not operate properly given those restrictions, they chose poorly. The testing here at AnandTech shows that regardless of how the drives might perform in ideal circumstances, they have noticeable issues when used the way most users would use them, which is really all those users care about.
  • tshen83 - Thursday, March 19, 2009 - link

    In the history of computing, it has always been the case that software compensated for new hardware, not the other way around. When new hardware comes out that obsoletes the current generation of software, new software will be written to take advantage of the new hardware.
    Think of it this way: you always write newer versions of drivers to drive the newest GPUs. When was the last time newer GPUs worked with older drivers?

    Nobody should be designing hardware now that makes DOS run fast, right? All file systems (except ZFS and soon BTRFS) are obsolete now for SSDs, so we write new file systems. I am not sure Intel X25-M's approach of virtualizing flash to suit the likes of NTFS and EXT3 is the correct one. It is simply a bridge to get to the next solution.

    SSD makers right now are making a serious mistake pushing SSDs down consumers' throats during an economic crisis. They should have focused on the enterprise market, targeting DB servers. But in that space, the Intel X25-E sits alone without competition. (Super Talent UltraDrive LEs should be within 25% of the X25-E by my estimation.)
  • pmonti80 - Thursday, March 19, 2009 - link

    Now I understand what you meant in the beginning. But I still don't agree with you: the system reviewed is the one 99% of SSD buyers will use (integrated mobo controller + NTFS). So, why optimize the benchmark to show the bad drives in a good light?

    About the Vertex, I don't understand what you are complaining about. After reading this article, most people got the idea that the Vertex is a good drive at half Intel's price (I know, I searched on Google for comments about this article).
  • tshen83 - Thursday, March 19, 2009 - link

    Professional people only look at two SSD benchmarks: random read IOPS at 4K and random write IOPS at 4K (maybe 8K too, for native DB pages).

    The Vertex random write IOPS at 4K is abysmal. 2.4MB/sec at 4K means it only does 600ish random write IOPS. Something was wrong, and Vista/ICH10R didn't help. The 8GB boundary Anand imposed on the random write IOPS test is fishy. So is the artificial IO queue depth of 3.

    The Vertex random write IOPS should be better. The random read IOPS should also be slightly better. I have seen OCZ's own benchmarks placing the Vertex very close to the Intel X25-M in random read/write IOPS tests.

    I personally think that if you use NTFS, you should just ignore SSDs for now until Windows 7 RTM. It can't hurt to wait for SSD prices to drop some more in the next 6 months. Same thing for Linux, although I would argue that Linux is in an even worse position for SSDs right now than Windows 7. EXT3/EXT4/JFS/XFS/ReiserFS all suck on SSDs.
  • gss4w - Thursday, March 19, 2009 - link

    AnandTech should adopt the same comment system as DailyTech so that comments that don't make any sense can be rated down. Who would want to read a review of something using a beta OS, or worse, an OS that is only used on servers? I think it would be interesting to see if the Windows 7 beta offered any improvements, but that should not be the focus of the review.
  • 7Enigma - Thursday, March 19, 2009 - link

    Here's another vote for the DailyTech comments system. The ability to rate comments up or down, but more importantly to HIDE those below a threshold, would make for much more enjoyable reading.
  • curtisfong - Wednesday, March 18, 2009 - link

    Why should Anand test with Windows 7b or *nix? What is the majority OS?

    Kudos to Anand for testing real world performance on an OS that most people use, and to Intel for tuning their drives for it. I'm happy the other manufacturers are losing business... maybe they will also tune their drives for real world performance and not synthetic benchmarks.

    To the poster above: do you work for OCZ or Samsung?
  • Glenn - Wednesday, March 18, 2009 - link

    tshen83 "A very thorough review by tshen83, an hour ago
    BUT, still based on Windows Vista.
    "

    As long as these drives are marketed toward said OS, why would you not use it? Most of us wouldn't recognize Solaris if we saw it! And I believe you seriously overestimate yourself if you're gonna drill anything into Anand's head! You might need your own site, huh?

    Great job, Anand! Don't forget to remind these CEOs that they also need to provide any software needed to configure and optimize these drives to work properly, i.e. go to the OCZ forums and try to figure out how to align, optimize, and keep your drive running like it's supposed to in less than 4 hours of reading! It would be nice if these companies would do their own beta testing and not rely on early adopters to do it for them!
  • Roland00 - Wednesday, March 18, 2009 - link

    It was a joy to read all 31 pages.
  • MagicPants - Wednesday, March 18, 2009 - link

    Anand, it would be really helpful to have a list of SSD companies blacklisting you so I know which ones to avoid. In general, it would be nice to know who doesn't provide review samples to reputable sites.
