Random Read/Write Performance

Arguably much more important to any PC user than sequential read/write performance is random access performance. It's not often that you're writing large files sequentially to your disk, but you do encounter tons of small file reads/writes as you use your PC.

To measure random read/write performance I created an Iometer script that peppered the drive with random 4KB requests at an I/O queue depth of 3 (to add some multitasking spice to the test). The write test was performed over an 8GB range on the drive, while the read test was performed across the whole drive. I ran each test for 3 minutes.
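
For readers who want to approximate this workload themselves, here is a minimal Python sketch of the random-write portion: 4KB requests, three outstanding I/Os, an 8GB test region and a 3 minute run. It is an illustrative stand-in for the actual Iometer script, not a reproduction of it; the target file name is a placeholder, it relies on os.pwrite (Unix-like systems only), and a real benchmark would hit a raw device with the OS cache bypassed, which this simple version does not do.

    # Approximation of the random-write test: 4KB requests, queue depth 3,
    # 8GB range, 3 minutes. Placeholder target file; OS caching is NOT bypassed.
    import os, random, threading, time

    TARGET = "testfile.bin"      # placeholder -- a real run would target a raw device
    REGION = 8 * 1024**3         # 8GB write range
    BLOCK = 4096                 # 4KB request size
    QUEUE_DEPTH = 3              # outstanding I/Os
    DURATION = 180               # seconds

    completed = 0
    lock = threading.Lock()

    def worker(fd, deadline):
        global completed
        buf = os.urandom(BLOCK)
        while time.time() < deadline:
            # pick a random 4KB-aligned offset within the test region
            offset = random.randrange(REGION // BLOCK) * BLOCK
            os.pwrite(fd, buf, offset)
            with lock:
                completed += 1

    fd = os.open(TARGET, os.O_RDWR | os.O_CREAT)
    os.ftruncate(fd, REGION)
    deadline = time.time() + DURATION
    threads = [threading.Thread(target=worker, args=(fd, deadline))
               for _ in range(QUEUE_DEPTH)]
    for t in threads: t.start()
    for t in threads: t.join()
    os.close(fd)

    print("%.2f MB/s average" % (completed * BLOCK / DURATION / 1e6))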

The three hard drives all posted scores below 1MB/s and thus aren't visible on our graph above. This is where SSDs shine and no hard drive, regardless of how many you RAID together, can come close.

The two Intel drives top the charts and maintain a huge lead. The OCZ Vertex actually beats out the more expensive (and unreleased) Summit drive with a respectable 32MB/s transfer rate here. Note that the Vertex is also faster than last year's Samsung SLC drive that everyone was selling for $1000. Even the JMicron drives do just fine here.

If we look at latency instead of transfer rate, it helps put things in perspective:

Hard drive read latencies have always been measured in milliseconds, but every single SSD here manages to complete random reads in less than 1ms, even under load.
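
As a rough sanity check, the sub-millisecond figures follow directly from the throughput numbers. This is just Little's Law with an assumed fixed 4KB request size, using the Vertex's 32MB/s random read result from above; it is a back-of-the-envelope sketch, not data from the review itself:

    # throughput -> IOPS -> average latency at queue depth 3 (approximate)
    throughput_bytes_s = 32e6            # ~32MB/s random read (OCZ Vertex, above)
    iops = throughput_bytes_s / 4096     # ~7,800 requests per second
    avg_latency_ms = 3 / iops * 1000     # queue depth 3 -> ~0.38ms per request
    print(round(iops), round(avg_latency_ms, 2))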

Random write speed is where we can thin the SSD flock:

Only the Intel drives and, to an extent, the OCZ Vertex post numbers visible on this scale. Let's go to a table to see everything in greater detail:

4KB Random Write Speed
Intel X25-E                   31.7 MB/s
Intel X25-M                   23.1 MB/s
JMicron JMF602B MLC           0.02 MB/s
JMicron JMF602Bx2 MLC         0.03 MB/s
OCZ Summit                    0.77 MB/s
OCZ Vertex                    2.41 MB/s
Samsung SLC                   0.53 MB/s
Seagate Momentus 5400.6       0.81 MB/s
Western Digital Caviar SE16   1.26 MB/s
Western Digital VelociRaptor  1.63 MB/s

Every SSD other than the Intel X25-E, X25-M and OCZ's Vertex is slower than the 2.5" Seagate Momentus 5400.6 hard drive in this test. The Vertex, thanks to OCZ's tweaks, is now 48% faster than the VelociRaptor.
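
A quick sketch verifying those relative standings against the table values (the numbers below are simply copied from the table above):

    # Sanity check of the standings using the 4KB random write results (MB/s)
    speeds = {
        "Intel X25-E": 31.7, "Intel X25-M": 23.1, "OCZ Vertex": 2.41,
        "OCZ Summit": 0.77, "Samsung SLC": 0.53, "JMicron JMF602B": 0.02,
        "Seagate Momentus 5400.6": 0.81, "WD VelociRaptor": 1.63,
    }
    gain = (speeds["OCZ Vertex"] / speeds["WD VelociRaptor"] - 1) * 100
    print("Vertex vs. VelociRaptor: %.0f%% faster" % gain)   # ~48%
    slower = [d for d, s in speeds.items() if s < speeds["Seagate Momentus 5400.6"]]
    print("SSDs slower than the Momentus:", slower)          # Summit, Samsung SLC, JMicron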

The Intel drives are of course architected for the type of performance needed on a desktop/notebook and thus they deliver very high random write performance.

Random write performance is merely one corner of the performance world. A drive needs good sequential read, sequential write, random read and random write performance. The fatal mistake most vendors make is ignoring random write performance and simply trying to post the best sequential read/write speeds; doing so produces a drive that's ultimately undesirable.

While the Vertex is slower than Intel's X25-M, it's also about half the price per GB. And note that the Vertex is still 48% faster than the VelociRaptor here, and multiple times faster in the other tests.

Comments

  • GourdFreeMan - Wednesday, March 18, 2009

    Substitute "allocated" for "dirty" in my above post. I muddled the terminology, and there is no edit function to repair my mistake.

    Also, I suppose the SSD could store some per block data about page allocation appended to the blocks themselves at a small latency penalty to get around the RAM issue, but I am not sure if existing SSDs do such a thing.

    My concerns about added wear in my original post still stand, and periodic internal defragmentation will also cause unpredictable, sporadic periods of poor responsiveness from the drive if this feature is offered by the drive rather than the OS.
  • Basilisk - Wednesday, March 18, 2009

    I think your concerns parallel mine, albeit with different conclusions.

    Parag.1: I think you misunderstand the ERASE concept: as I read it, after an ERASE parts of the block are re-written and parts are left erased -- those latter parts NEED NOT be re-erased before they are written later. If the TRIM function can be accomplished at an idle moment, access time will be "saved"; if TRIM can erase (release) multiple clusters in one block [unlikely?], that will reduce both wear & time.

    Parag.2: This argument reverses the concept that OSes should largely be ignorant of device internals. As devices with different internal structures have proliferated over the years -- and will continue to do so with SSDs -- such OS differentiation is costly to support.

    Parag 3 and onwards: Herein lies the problem: we want to save wear by not re-writing files to make them contiguous, but we now have a situation where wear and erase times could be considerably reduced by having those files be contiguous. A 2MB file fragmented randomly in 4KB clusters will result in around 500 erase cycles when it's deleted; if stored contiguously, that would only require 4-5 erase cycles (of 512KB SSD-blocks)... a 100:1 reduction in erases/wear.

    It would be nice to get SSD blocks down to 4KB in size, but I have to infer there are counterarguments or it would've been done already.

    With current SSDs, I'd explore using larger cluster sizes -- and here we have a clash with MS [big surprise]. IIRC, NTFS clusters cannot exceed 4KB [for something to do with file compression!]. That makes it possible that FAT32 with 32KB clusters [IIRC clusters must be less than 64KB for all system tools to properly function] might be the best choice for systems actively rewriting large files. I'm unfamiliar with FAT32 issues that argue against this, but if the SSDs allocate clusters contiguously, wouldn't this reduce erases by a factor of 8 for large file deletions? 32KB clusters might hamstring caching efficiency and result in more disk accesses, but they might speed up linear reads and s/w loads.

    The impact of very small file/directory usage and of small incremental file changes [like appending to logs] wouldn't be reduced -- it might be increased as data-transfer sizes would increase -- so the overall gain from having fewer clusters-per-SSD-block is hard to intuit, and it would vary in different environments.
  • GourdFreeMan - Wednesday, March 18, 2009

    RE Parag. 1: As I understand it, the entire 512 KiB block must always be erased if there is even a single page of valid data written to it... hence my concerns. You may save time reading and writing data if the device could know a block were partially full, but you still suffer the 2ms erase penalty. Please correct me if I am mistaken in my assumption.

    RE Parag. 2: The problem is the SSD itself only knows the physical map of empty and used space. It doesn't have any knowledge of the logical file system. NTFS, FAT32, ext3 -- it doesn't matter to the drive; that is the OS's responsibility.

    RE Parag. 3: I would hope that reducing the physical block size would also reduce the block erase time from 2ms, but I am not a flash engineer and so cannot comment. One thing I can state for certain, however, is that moving to smaller physical block sizes would not increase wear across the surface of the drive, except possibly for the necessity to keep track of a map of used blocks. Rewriting 128 blocks on a hypothetical SSD with 4 KiB blocks versus one 512 KiB block still erases 512 KiB of disk space (excepting the overhead in tracking which blocks are filled).

    Regarding using large filesystem clusters: 4 KiB clusters offer a nice tradeoff between filesystem size, performance and slack (lost space due to cluster size). If you wanted to make an SSD look artificially good versus a hard drive, a 512 KiB cluster size would do so admirably, but no one would use such a large cluster size except for a data drive used to store extremely large files (e.g. video) exclusively. BTW, in case you are unaware, you can format a non-OS partition with NTFS to cluster sizes other than 4 KiB. You can also force the OS to use a different cluster size by first formatting the drive for the OS as a data drive with a different cluster size under Windows and then installing Windows on that partition. I have a 2 KiB cluster size on a drive that has many hundreds of thousands of small files. However, I should note that since virtual memory pages are by default 4 KiB (another compelling reason for the 4 KiB default cluster size), most people don't have a use for other cluster sizes if they intend to have a page file on the drive.
  • ssj4Gogeta - Wednesday, March 18, 2009

    Thanks for the wonderful article. And yes, I read every single word. LOL
  • rudolphna - Wednesday, March 18, 2009

    Hey Anand, on page 3 the random read latency graph is mixed up: it lists the WD VelociRaptor as having a .11ms latency. I think you might want to fix that. :)
  • SkullOne - Wednesday, March 18, 2009

    Fantastic article. Definitely one of the best I've read in a long time. Incredibly informative. Everyone who reads this article is a little bit smarter afterwards.

    All the great information about SSDs aside, I think the best part is how OCZ was willing to take the blame for the earlier failures and fix the problems. Companies like that are the ones who will get my money in the future, especially when it's time for me to move from HDD to SSD.
  • Apache2009 - Wednesday, March 18, 2009

    I got a Vertex SSD. Why does suspend cause a system halt? My laptop has an nVidia chipset and it works fine with an HDD. Does anybody know?
  • MarcHFR - Wednesday, March 18, 2009

    Hi,

    You wrote that there is a spare area on the X25-M:

    "Intel ships its X25-M with 80GB of MLC flash on it, but only 74.5GB is available to the user"

    It's a mistake. 80GB of flash looks like 74.5GB to the user because 80,000,000,000 bytes of flash is 74.5GB from the user's point of view (with 1KB = 1024 bytes).

    You didn't point out the other problem with the X25-M: LBA "optimisation". After doing a lot of random write I/O, sequential write speed can drop to only 10 MB/s :/
  • Kary - Thursday, March 19, 2009

    The extra space would be invisible to the end user (it is used internally).

    Also, addressing is normally done in binary; as a result, actual capacities of memory devices (flash, RAM...) are typically powers of two:
    64GB
    128GB

    80 GB... not compatible with binary addressing

    (though 48GB of a 128GB drive being used for this seems pretty high)
  • ssj4Gogeta - Wednesday, March 18, 2009

    Did you bother reading the article? He pointed out that you can get any SSD (NOT just Intel's) stuck in a situation where only a secure erase will help you out. The problem is not specific to Intel's SSD, and it doesn't occur during normal usage.
