Latency vs. Bandwidth: What to Look for in an SSD

It took me months to wrap my head around it, but I think I finally get it. We often talk about the concepts of bandwidth and latency, but rarely are they as tangible as they are here today.

When I speak of latency I'm talking about how long it takes to complete a request, or fetch a block of data. When I mention bandwidth, I'm talking about how much data you can read/write at once. Think of latency as the speed limit and bandwidth as the number of lanes on a highway.
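To make the distinction concrete, here's a minimal sketch of the arithmetic: the time to service a request is roughly the access latency plus the transfer size divided by bandwidth. The figures below are illustrative assumptions, not measurements of any drive in this review.

# Rough model: service_time = latency + size / bandwidth.
# Latency and bandwidth figures are illustrative assumptions.
LATENCY_S = 0.0001       # 0.1 ms access latency
BANDWIDTH_BPS = 200e6    # 200 MB/s sequential transfer rate

def service_time(size_bytes):
    return LATENCY_S + size_bytes / BANDWIDTH_BPS

for size in (4 * 1024, 128 * 1024 * 1024):  # a 4KB write vs. a 128MB file
    t = service_time(size)
    print(f"{size:>12} bytes: {t * 1000:10.3f} ms "
          f"({LATENCY_S / t:.1%} of it is latency)")

For the 4KB request, over 80% of the service time is latency; for the 128MB transfer, latency is a rounding error. That asymmetry is the whole argument below.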

If you’re the only car on the highway, you’re going to notice the impact of latency more than bandwidth. A speed limit of 70 mph instead of 35 is going to impact you much more than if you added more lanes to the road.

If you’re a city planner, however, and your only concern is getting as many people as possible to work and back, you’re going to notice the impact of bandwidth more than latency. It doesn’t matter how fast a single car can move; what matters is how many cars you can move during rush hour.

I’d argue that if you’re a desktop user and you’re using an SSD as a boot/application drive, what will matter most is latency. After you’ve got your machine set up the way you want it, the majority of accesses are going to be sequential reads and random reads/writes of very small transfer sizes: things like updating file tables, scanning individual files for viruses, and writing your web browser’s cache. What influences these tasks is latency, not bandwidth.

If you were constantly moving large, multi-gigabyte files to and from your disk, then total bandwidth would be more important. But SSDs are still fairly limited in size, and I don’t think you’ll be backing up many Blu-ray discs to them given their high cost per GB. It’s latency that matters here.

Obviously I’ll be testing both latency and bandwidth, but I wanted to spend a moment talking about the synthetic latency tests.

Iometer is a tool that can simulate just about any combination of disk accesses you can think of. If you know how an application or OS hits the disk, Iometer can simulate it. While random disk accesses are the reason desktop/notebook hard drives feel so slow, those accesses are generally confined to particular areas of the disk. For example, when you’re writing a file the OS needs to update a table mapping the file to the LBAs allocated for it. The table that contains all of the LBA mappings is most likely located far away from the file you’re writing, so the process of writing files to the same area can look like random writes to two different groups of LBAs. But the accesses aren’t spread out across the entire drive.
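As a toy illustration of that last point, the sketch below alternates between two distant groups of LBAs, one for a file's data and one for the mapping table. The addresses are made-up assumptions, not a real NTFS layout; the point is only that "random" desktop writes cluster in a few hot zones rather than spanning the whole drive.

import random

# Two hot zones: the file's data in one LBA range, the filesystem's
# mapping table far away in another. All addresses are made up.
FILE_TABLE_LBA = 1_000         # metadata region
FILE_DATA_LBA = 40_000_000     # where the file itself lives

def next_write_lba():
    if random.random() < 0.5:  # a metadata update
        return FILE_TABLE_LBA + random.randrange(256)
    return FILE_DATA_LBA + random.randrange(16_384)  # file data

print([next_write_lba() for _ in range(8)])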

In my original X25-M article I ran a 4KB random write test over the entire span of the drive. That’s a bit more ridiculous than even the toughest user will be on his/her desktop. For this article I’m limiting the random write test to an 8GB space of the drive; it makes the benchmark a little more realistic for a desktop/notebook workload.

The other thing I’ve done is increased the number of outstanding IOs from 1 to 3. I’ve found that in a multitasking user environment Vista will generally have a maximum of 3 or 4 outstanding IOs (read/write requests).

The combination of the two results in a test of 100% random 4KB writes with 3 outstanding IOs, confined to an 8GB portion of the drive, for 3 minutes. That should be enough time to get a general idea of how well these drives perform when it comes to random write latency in a worst case, but realistic, usage scenario.
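Iometer is the tool actually used here, but for the curious, this is roughly what that workload looks like as a Python sketch. The file path is an assumption, three writer threads stand in for three outstanding IOs, os.pwrite is POSIX-only, and O_SYNC support varies by platform, so treat it as an approximation of the access pattern rather than a substitute for the real benchmark.

import os, random, threading, time

PATH = "testfile.bin"     # hypothetical file on the drive under test
REGION = 8 * 1024**3      # 8GB test span
BLOCK = 4096              # 4KB transfers
DURATION = 180            # 3 minutes
DEPTH = 3                 # three writer threads ~ three outstanding IOs

def writer(fd, stop_at, counts, i):
    buf = os.urandom(BLOCK)
    while time.time() < stop_at:
        # Random 4KB-aligned offset within the 8GB region.
        offset = random.randrange(REGION // BLOCK) * BLOCK
        os.pwrite(fd, buf, offset)
        counts[i] += 1

# O_SYNC (where available) pushes writes to the device, not the OS cache.
fd = os.open(PATH, os.O_RDWR | os.O_CREAT | getattr(os, "O_SYNC", 0))
os.truncate(PATH, REGION)

counts = [0] * DEPTH
stop_at = time.time() + DURATION
threads = [threading.Thread(target=writer, args=(fd, stop_at, counts, i))
           for i in range(DEPTH)]
for t in threads: t.start()
for t in threads: t.join()
os.close(fd)

print(f"{sum(counts) / DURATION:.0f} random 4KB write IOPS")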

Comments

  • punjabiplaya - Wednesday, March 18, 2009 - link

    Great info. I'm looking to get an SSD but was put off by all these setbacks. Why should I put away my HDDs and get something a million times more expensive that stutters?
    This article is why I visit AT first.
  • Hellfire26 - Wednesday, March 18, 2009 - link

    Anand, when you filled up the drives to simulate a full drive, did you also write to the extended area that is reserved? If you didn't, wouldn't the Intel SLC drive (as an example) not show as much of a performance drop, versus the MLC drive? As you stated, Intel has reserved more flash memory on the SLC drive, above the stated SSD capacity.

    I also agree with GourdFreeMan, that the physical block size needs to be reduced. Due to the constant erasing of blocks, the Trim command is going to reduce the life of the drive. Of course, drive makers could increase the size of the cache and delay using the Trim command until the number of blocks to be erased equals the cache available. This would more efficiently rearrange the valid data still present in the blocks that are being erased (less writes). Microsoft would have to design the Trim command so it would know how much cache was available on the drive, and drive makers would have to specifically reserve a portion of their cache for use by the Trim command.

    I also like Basilisk's comment about increasing the cluster size, although if you increase it too much, you are likely to waste space and increase overhead. Surely, even if MS only doubles the cluster size for NTFS partitions to 8KB, write cycles to SSDs would be reduced. There is also the difference between 32-bit and 64-bit operating systems to consider. However, I don't have the knowledge to say whether Microsoft can make these changes without running into serious problems with other aspects of the operating system.
  • Anand Lal Shimpi - Wednesday, March 18, 2009 - link

    I only wrote to the LBAs reported to the OS. So on the 80GB Intel drive that's from 0 - 74.5GB.

    I didn't test the X25-E as extensively as the rest of the drives so I didn't look at performance degradation as closely just because I was running out of time and the X25-E is sooo much more expensive. I may do a standalone look at it in the near future.

    Take care,
    Anand
  • gss4w - Wednesday, March 18, 2009 - link

    Has anyone at AnandTech talked to Microsoft about when the "Trim" command will be supported in Windows 7? Also, it would be great if you could include some numbers from the Windows 7 beta when you do a follow-up.

    One reason I ask is that I searched for "Windows 7 ssd trim" and I saw a presentation from WinHEC that made it sound like support for the trim command would be a requirement for SSD drives to meet the Windows 7 logo requirements. I would think if this were the case then Windows 7 would have support for trim. However, this article made it sound like support for Trim might not be included when Windows 7 is initially released, but would be added later.

  • ryedizzel - Thursday, March 19, 2009 - link

    I think it is obvious that Windows 7 will support TRIM. The bigger question this article points out is whether or not the current SSDs will be upgradeable via firmware, which is more important for consumers wanting to buy one now.
  • Martimus - Wednesday, March 18, 2009 - link

    It took me an hour to read the whole thing, but I really enjoyed it. It reminded me of the time I spent testing circuitry and doing root cause analysis.
  • alpha754293 - Wednesday, March 18, 2009 - link

    I think it would be interesting to test the drives on the "desktop/laptop/consumer" front by writing an 8 GiB file using 4 KiB block sizes, etc. for the desktop pattern, and then to test the drives with larger files and larger block sizes for the server/workstation pattern as well.

    You present some very very good arguments and points, and I found your article to be thoroughly researched and well put.

    So I do have to commend you on that. You did an excellent job. It is thoroughly enjoyable to read.

    I'm currently looking at 4x 256 GB Samsung MLC drives on Solaris 10/ZFS for apps/OS (for PXE boot), and this article does a lot of the testing I'd want to see; but I would be interested to see how these drives handle more server-type workloads.
  • korbendallas - Wednesday, March 18, 2009 - link

    If the implementation of the Trim command is as you described here, it would actually kind of suck.

    "The third step was deleting the original 4KB text file. Since our drive now supports TRIM, when this deletion request comes down the drive will actually read the entire block, remove the first LBA and write the new block back to the flash:"

    First of all, it would create a new phenomenon called Erase Amplification. This would negatively impact the lifetime of a drive.

    Secondly, you now have worse delete performance.


    Basically, an SSD 4kB page can be in 3 different states: erased, data, garbage. A page enters the garbage state when it is "overwritten" or the Trim command marks its contents as invalid.

    The way I would imagine it working, marking page contents as invalid is all the Trim command does.

    Instead, the drive will spend idle time finding the 512kB blocks with the most garbage pages. Once such a block is found, all the valid data pages from that block are copied to another block, and the block is erased. Doing it this way maximizes the number of garbage pages converted back to erased ones.
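    A minimal sketch of that policy, with made-up page/block sizes and data structures rather than any vendor's actual firmware:

    PAGES_PER_BLOCK = 128    # e.g. a 512kB erase block of 4kB pages

    ERASED, DATA, GARBAGE = "erased", "data", "garbage"

    class EraseBlock:
        def __init__(self):
            self.pages = [ERASED] * PAGES_PER_BLOCK

    def collect_one(blocks):
        # Greedily pick the block whose erase reclaims the most garbage.
        victim = max(blocks, key=lambda b: b.pages.count(GARBAGE))
        if victim.pages.count(GARBAGE) == 0:
            return  # nothing worth reclaiming
        valid = [p for p in victim.pages if p == DATA]
        # Relocate still-valid pages into erased pages elsewhere
        # (simplified: assumes the other blocks have room for them).
        for b in blocks:
            if b is victim:
                continue
            while valid and ERASED in b.pages:
                b.pages[b.pages.index(ERASED)] = valid.pop()
        victim.pages = [ERASED] * PAGES_PER_BLOCK  # one erase cycle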
  • alpha754293 - Wednesday, March 18, 2009 - link

    BTW...you might be able to simulate the drive as well using Cygwin where you go to the drive and run the following:

    $ # 76288 x 1MiB = 74.5 GiB; on Linux proper, /dev/urandom avoids blocking
    $ dd if=/dev/random of=testfile bs=1024k count=76288

    I'm sure you can come up with fancier shell scripts that use the random number generator for the offsets (and if you really want it to work well, partition the drive so the test file takes up the entire initial 74.5 GB partition, and when you're done "dirtying" the data using dd with random offsets, grow the partition to take up the entire disk again).
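    A minimal Python version of that random-offset "dirtying" pass; the path, span, and pass count are illustrative assumptions:

    import os, random

    # Overwrite random 1MB chunks of the filled test file, in the
    # spirit of the dd suggestion above. All sizes are assumptions.
    PATH = "testfile"
    SPAN = 76288 * 1024 * 1024   # the 74.5 GiB file written by dd
    BLOCK = 1024 * 1024          # 1MB writes, matching dd's bs=1024k
    PASSES = 4096                # number of random overwrites

    fd = os.open(PATH, os.O_RDWR)
    buf = os.urandom(BLOCK)
    for _ in range(PASSES):
        offset = random.randrange(SPAN // BLOCK) * BLOCK
        os.pwrite(fd, buf, offset)
    os.close(fd)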

    Just as a suggestion for future reference.

    I use parts of that to some (varying) degree for when I do my file/disk I/O subsystem tests.
  • nubie - Wednesday, March 18, 2009 - link

    I should think that most "performance" laptops will come with a Vertex drive in the near future.

    Finally a performance SSD that comes near mainstream pricing.

    Things are looking up, if more manufacturers get their heads out of the sand we should see prices drop as competition finally starts breeding excellence.
