Why You Should Want an SSD

For the past several months I’ve been calling SSDs the single most noticeable upgrade you can make to your computer. Whether it’s a desktop or a laptop, stick a good SSD in there and you’ll notice the difference.

I’m always angered by the demos in any Steve Jobs keynote. Not because the demos themselves are somehow bad, but because Jobs always has a perfectly clean machine to run the demos on - and multiple machines at that. Anyone who has built a computer before knows the glory of a freshly installed system; everything just pops up on your screen. Applications, windows, everything - the system is just snappy.

Of course once you start installing more applications and have more things running in the background, your system stops being so snappy and you tend to just be thankful when it doesn’t crash.

A big part of the problem is that once you have more installed on your system, there are more applications sending read/write requests to your IO subsystem. While our CPUs and GPUs thrive on being fed massive amounts of data in parallel, our hard drives aren’t so appreciative of our multitasking demands. And this is where SSDs truly shine.

Before we go too far down the rabbit hole I want to share a few numbers with you.

This is Western Digital’s VelociRaptor. It’s a 300GB drive that spins its platters at 10,000 RPM and is widely considered the world’s fastest consumer desktop hard drive.

The 300GB VelociRaptor costs about $0.77 per GB.

This is the Intel X25-M. The Conroe of the SSD world, the drive I reviewed last year. It costs about $4.29 per GB; that’s over 5x the VelociRaptor’s cost per GB.

The VelociRaptor is the dominant force in the consumer HDD industry and the X25-M is the svelte bullfighter of the SSD world.

Whenever anyone mentions a more affordable SSD, you always get several detractors saying that you could easily buy two VelociRaptors for the same price. Allow me to show you one table that should change your opinion.

This is the Average Read Access test from Lavalys’ Everest Disk benchmark. The test simply writes a bunch of files at random places on the disk and measures how long it takes to access the files.

Measuring random access is very important because that’s what generally happens when you go to run an application while doing other things on your computer. It’s random access that feels the slowest on your machine.

Drive                            Random Read Latency
Intel X25-M                      0.11 ms
Western Digital VelociRaptor     6.83 ms

The world’s fastest consumer desktop hard drive, Western Digital’s 300GB VelociRaptor, can access a random file somewhere on its platters in about 6.83ms; that’s pretty quick. Most hard drives will take closer to 8 or 9ms in this test. The Intel X25-M, however? 0.11ms. The fastest SSDs can find the data you’re looking for in around 0.1ms. That’s an order of magnitude faster than the fastest hard drive on the market today.

The table is even more impressive when you realize that wherever the data is on your SSD, the read (and write) latency is the same. While HDDs are fastest when the data you want is in the vicinity of the read/write heads, all parts of an SSD are accessed the same way. If you want 4KB of data, regardless of where it is, you’ll get to it at the same speed from an SSD.
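
If you want a rough feel for this on your own machine, the sketch below (not Everest itself, just the same idea) times 4KB reads at randomly chosen offsets in a large file. The file path and iteration count are placeholders, and without flushing the OS cache the numbers are only illustrative - but on a hard drive the random offsets force head seeks, while on an SSD they cost essentially nothing extra.

```python
import os
import random
import time

# Any large file already on the drive under test (placeholder path).
TEST_FILE = "testfile.bin"
READ_SIZE = 4 * 1024      # 4KB reads, like the random access tests above
ITERATIONS = 1000

size = os.path.getsize(TEST_FILE)
latencies = []

# Note: without O_DIRECT or a cache flush the OS page cache will absorb
# repeated reads, so treat the result as a rough illustration only.
with open(TEST_FILE, "rb", buffering=0) as f:
    for _ in range(ITERATIONS):
        offset = random.randrange(0, max(1, size - READ_SIZE))
        start = time.perf_counter()
        f.seek(offset)
        f.read(READ_SIZE)
        latencies.append(time.perf_counter() - start)

print("average random read latency: %.3f ms" % (1000 * sum(latencies) / len(latencies)))
```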

The table below looks at sequential read, sequential write and random write performance of these two kings of their respective castles. The speeds are in MB/s.

Drive                            Sequential Read (2MB Block)    Sequential Write (2MB Block)    Random Write (4KB Block)
Intel X25-M                      230 MB/s                       71 MB/s                         23 MB/s
Western Digital VelociRaptor     118 MB/s                       119 MB/s                        1.6 MB/s

If you’re curious, these numbers are a best case scenario for the VelociRaptor and a worst case scenario for the X25-M (I’ll explain what that means later in the article). While the VelociRaptor is faster in large block sequential writes, look at the sequential read and random write performance. The X25-M destroys the VelociRaptor in sequential reads and is an order of magnitude faster in random writes. It’s the random write performance that you’re most likely to notice, and that’s where a good SSD can really shine; you write 4KB files far more often than you do 2MB files while using your machine.
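
To put the random write column in more familiar units, here is a quick back-of-the-envelope conversion of the table’s MB/s figures into 4KB operations per second (assuming 1MB = 1024KB; the exact convention barely changes the ratio):

```python
# Convert the random write throughput above into 4KB I/O operations per second.
random_write_mb_s = {"Intel X25-M": 23.0, "Western Digital VelociRaptor": 1.6}

for drive, mb_s in random_write_mb_s.items():
    iops = mb_s * 1024 * 1024 / 4096   # bytes per second / 4KB per operation
    print(f"{drive}: ~{iops:,.0f} random 4KB writes per second")

# Intel X25-M: ~5,888 vs. Western Digital VelociRaptor: ~410 - about a 14x gap.
```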

If the table above doesn’t convince you, let me share one more data point with you. Ever play World of Warcraft? What we’re looking at here is the amount of time it takes to get from the character selection screen into a realm with everything loaded. This is on a fully configured system with around 70GB of applications and data, as well as real-time anti-virus scanning going on in the background on every accessed file.

Drive                            WoW Enter Realm Time
Intel X25-M                      4.85 seconds
Western Digital VelociRaptor     12.5 seconds

The world’s fastest hard drive gets us into the game in 12.5 seconds. The Intel X25-M does it in under 5.

SSDs make Vista usable. No matter how much background crunching the OS is doing, every application and game launches as if it were the only thing running on the machine. Everything launches quickly - much faster than on a conventional hard drive. If you have the ability, try using your system with an SSD for a day, then go back to your old hard drive; if that test doesn’t convince you, nothing will.

That’s just a small taste of why you’d want an SSD; now let’s get back to finding a good one.

Comments

  • Kary - Thursday, March 19, 2009 - link

    Why use TRIM at all?!?!?

    If you have extra Blocks on the drive (NOT PAGES, FULL BLOCKS) then there is no need for the TRIM command.

    1) Currently in-use BLOCK is half full
    2) More than half a block needs to be written
    3) Extra BLOCK is mapped into the system
    4) Original/half-full block is mapped out of the system... can be erased during idle time.

    You could even bind multiple contiguous blocks this way (I assume that it is possible to simultaneously erase any of the internal groupings of pages, from blocks on up... they probably share address lines... e.g. erase 0000200 -> just erase block #200; erase 00002*0 -> erase blocks 200 to 290... BTW, I did the addressing in base ten instead of binary just to simplify for some :)
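
    Here's a toy sketch of one way to read steps 1-4 above (with a made-up geometry of 8 pages per block; no claim that any real controller works this way). The point is that the write path never waits on an erase - the stale block just gets queued and erased at idle:

    ```python
    PAGES_PER_BLOCK = 8                                           # hypothetical size

    spare_blocks = [[None] * PAGES_PER_BLOCK for _ in range(4)]   # pre-erased spares
    erase_queue = []                                              # erased when idle

    def overwrite_block(old_block, new_pages):
        """Steps 2-4: put the new data into a spare block, retire the old one."""
        assert len(new_pages) <= PAGES_PER_BLOCK
        fresh = spare_blocks.pop()                 # 3) extra block mapped in
        fresh[:len(new_pages)] = new_pages
        erase_queue.append(old_block)              # 4) old block mapped out...
        return fresh

    def idle_time():
        """...and erased later, off the write path."""
        while erase_queue:
            erase_queue.pop()
            spare_blocks.append([None] * PAGES_PER_BLOCK)

    half_full = ["A", "B", "C", "D", None, None, None, None]   # 1) in-use block, half full
    current = overwrite_block(half_full, ["A2", "B2", "C2", "D2", "E2"])  # 2) >half a block
    idle_time()
    print(current, "-", len(spare_blocks), "spare blocks ready")
    ```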
  • korbendallas - Wednesday, March 18, 2009 - link

    Actually I think that the Trim command is merely used for marking blocks as free. The OS doesn't know how the data is placed on the SSD, so it can't make an informed decision on when to forcefully erase pages. In the same way, the SSD doesn't know anything about what files are in which blocks, so you can't defrag files internally in the drive.

    So while you can't defrag files, you CAN now defrag free space, and you can improve the wear leveling because deleted data can be ignored.

    So let's say you have 10 pages where 50% of the blocks were marked deleted using the Trim command. That means you can move the data into 5 other pages and erase the 10 pages. The more deleted blocks there are in a page, the better a candidate it is for this procedure. And there isn't really a problem with doing this while the drive is idle - you're just doing something now that you would have to do anyway when a write command comes.
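
    In rough Python, that idle-time consolidation could look something like this (made-up sizes, and no claim that any particular controller does it exactly this way):

    ```python
    # Idle-time free-space consolidation: pack live entries out of the
    # most-trimmed erase units, then erase those units and return them
    # to the free pool. Sizes here are made up for illustration.
    ENTRIES_PER_UNIT = 8

    def consolidate(units, free_units):
        # Prefer units with the most trimmed (None) entries, as argued above.
        units.sort(key=lambda u: u.count(None), reverse=True)
        survivors, live, erased = [], [], 0
        for unit in units:
            if unit.count(None) >= ENTRIES_PER_UNIT // 2:      # worth reclaiming
                live.extend(e for e in unit if e is not None)  # copy live data out
                erased += 1                                    # erase the whole unit
            else:
                survivors.append(unit)
        # Repack the surviving live entries into as few fresh units as possible.
        for i in range(0, len(live), ENTRIES_PER_UNIT):
            chunk = live[i:i + ENTRIES_PER_UNIT]
            survivors.append(chunk + [None] * (ENTRIES_PER_UNIT - len(chunk)))
            erased -= 1                                        # one fresh unit consumed
        return survivors, free_units + erased

    units = [
        ["a1", None, "a3", None, None, "a6", None, None],     # mostly trimmed
        ["b1", "b2", "b3", "b4", "b5", None, "b7", "b8"],     # mostly live
        [None, None, "c3", None, None, None, None, "c8"],     # mostly trimmed
    ]
    units, free = consolidate(units, free_units=2)
    print(len(units), "units in use,", free, "erased units free")
    ```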
  • GourdFreeMan - Wednesday, March 18, 2009 - link

    This is basically what I am arguing both for and against in the fourth paragraph of my original post, though I assumed it would be the OS'es responsibility, not the drive's.

    Do SSDs track dirty pages, or only dirty blocks? I don't think there is enough RAM on the controller to do the former...
  • korbendallas - Wednesday, March 18, 2009 - link

    Well, let's take a look at how much storage we actually need. A block can be erased, contain data, or be marked as trimmed or deallocated.

    That's three different states, or two bits of information. Since each block is 4kB, a 64GB drive would have 16777216 blocks. So that's 4MB of information.

    So yeah, saving the block information is totally feasible.
  • GourdFreeMan - Thursday, March 19, 2009 - link

    Actually the drive only needs to know if the page is in use or not, so you can cut that number in half. It can determine a partially full block that is a candidate for defragmentation by looking at whether neighboring pages are in use. By your calculation that would then be 2 MiB.

    That assumes the controller only needs to support drives of up to 64 GiB capacity, that pages are 4 KiB in size, and that the controller doesn't need to use RAM for any other purpose.

    Most consumer SSD lines go up to 256 GiB in capacity, which would bring the total RAM needed up to 8 MiB using your assumption of a 4 KiB page size.

    However, both hard drives and SSDs use 512 byte sectors. This does not necessarily mean that internal pages are therefore 512 bytes in size, but lacking any other data about internal page sizes, let's run the numbers on that assumption. To support a 256 GiB drive with 512 byte pages, you would need 64 MiB of RAM -- more than any SSD line other than Intel's has on board -- dedicated solely to this purpose.
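
    For what it's worth, all of the figures in this exchange fall out of the same little calculation (a sketch; the page sizes and bits-per-entry are the assumptions under discussion, not anything a real controller is known to use):

    ```python
    def tracking_ram_mib(capacity_gib, unit_bytes, bits_per_unit):
        """RAM needed to keep `bits_per_unit` bits of state per page/block."""
        units = capacity_gib * 1024**3 // unit_bytes
        return units * bits_per_unit / 8 / 1024**2

    print(tracking_ram_mib(64, 4096, 2))    # 4.0  MiB (2 bits per 4 KiB block)
    print(tracking_ram_mib(64, 4096, 1))    # 2.0  MiB (1 bit per 4 KiB page)
    print(tracking_ram_mib(256, 4096, 1))   # 8.0  MiB (256 GiB drive, 4 KiB pages)
    print(tracking_ram_mib(256, 512, 1))    # 64.0 MiB (256 GiB drive, 512 byte pages)
    ```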

    As I said before there are ways of getting around this RAM limitation (e.g. storing page allocation data per block, keeping only part of the page allocation table in RAM, etc.), so I don't think the technical challenge here is insurmountable. There still remains the issue of wear, however...
  • GourdFreeMan - Wednesday, March 18, 2009 - link

    Substitute "allocated" for "dirty" in my above post. I muddled the terminology, and there is no edit function to repair my mistake.

    Also, I suppose the SSD could store some per block data about page allocation appended to the blocks themselves at a small latency penalty to get around the RAM issue, but I am not sure if existing SSDs do such a thing.

    My concerns about added wear in my original post still stand, and doing periodic internal defragmentation is going to necessitate some unpredictable sporadic periods of poor response by the drive as well if this feature is to be offered by the drive and not the OS.
  • Basilisk - Wednesday, March 18, 2009 - link

    I think your concerns parallel mine, albeit we have different conclusions.

    Parag.1: I think you misunderstand the ERASE concept: as I read it, after an ERASE parts of the block are re-written and parts are left erased -- those latter parts NEED NOT be re-erased before they are written, later. If the TRIM function can be accomplished at an idle moment, access time will be "saved"; if the TRIM can erase (release) multiple clusters in one block [unlikely?], that will reduce both wear & time.

    Parag.2: This argument reverses the concept that OS's should largely be ignorant about device internals. As devices with different internal structures have proliferated over the years -- and will continue so with SSD's -- such OS differentiation is costly to support.

    Parag 3 and onwards: Herein lies the problem: we want to save wear by not re-writing files to make them contiguous, but we now have a situation where wear and erase times could be considerably reduced by having those files be contiguous. A 2MB file fragmented randomly in 4KB clusters will result in around 500 erase cycles when it's deleted; if stored contiguously, that would only require 4-5 erase cycles (of 512KB SSD-blocks)... a 100:1 reduction in erases/wear.
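
    (A quick sanity check on those numbers, counting one erase per 512KB SSD block touched - a simplification, but it shows where the roughly 100:1 figure comes from:)

    ```python
    import math

    file_bytes, cluster, block = 2 * 1024**2, 4 * 1024, 512 * 1024

    # Worst case: every 4KB cluster lands in a different 512KB block.
    fragmented_erases = file_bytes // cluster            # 512 blocks touched
    contiguous_erases = math.ceil(file_bytes / block)    # 4 blocks (5 if misaligned)

    print(fragmented_erases, contiguous_erases, fragmented_erases // contiguous_erases)
    # 512 4 128 - on the order of the ~100:1 reduction cited above
    ```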

    It would be nice to get the SSD blocks down to 4KB in size, but I have to infer there are counterarguments or it would've been done already.

    With current SSDs, I'd explore using larger cluster sizes -- and here we have a clash with MS [big surprise]. IIRC, NTFS clusters cannot exceed 4KB [for something to do with file compression!]. That makes it possible that FAT32 with 32KB clusters [IIRC clusters must be less than 64KB for all system tools to properly function] might be the best choice for systems actively rewriting large files. I'm unfamiliar with FAT32 issues that argue against this, but if the SSDs allocate clusters contiguously, wouldn't this reduce erases by a factor of 8 for large file deletions? 32KB clusters might hamstring caching efficiency and result in more disk accesses, but it might speed up linear reads and s/w loads.

    The impact of very small file/directory usage and for small incremental file changes [like appending to logs] wouldn't be reduced -- it might be increased as data-transfer sizes would increase -- so the overall gain for having fewer clusters-per-SSD-block is hard to intuit, and it would vary in different environments.
  • GourdFreeMan - Wednesday, March 18, 2009 - link

    RE Parag. 1: As I understand it, the entire 512 KiB block must always be erased if there is even a single page of valid data written to it... hence my concerns. You may save time reading and writing data if the device could know a block were partially full, but you still suffer the 2ms erase penalty. Please correct me if I am mistaken in my assumption.

    RE Parag. 2: The problem is the SSD itself only knows the physical map of empty and used space. It doesn't have any knowledge of the logical file system. NTFS, FAT32, ext3 -- it doesn't matter to the drive, that is the OS'es responsibility.

    RE Parag. 3: I would hope that reducing the physical block size would also reduce the block erase time from 2ms, but I am not a flash engineer and so cannot comment. One thing I can state for certain, however, is that moving to smaller physical block sizes would not increase wear across the surface of the drive, except possibly for the necessity to keep track of a map of used blocks. Rewriting 128 blocks on a hypothetical SSD with 4 KiB blocks versus one 512 KiB block still erases 512 KiB of disk space (excepting the overhead in tracking which blocks are filled).

    Regarding using large filesystem clusters: 4 KiB clusters offer a nice tradeoff between filesystem size, performance and slack (lost space due to cluster size). If you wanted to make an SSD look artificially good versus a hard drive, a 512 KiB cluster size would do so admirably, but no one would use such a large cluster size except for a data drive used to store extremely large files (e.g. video) exclusively. BTW, in case you are unaware, you can format a non-OS partition with NTFS to cluster sizes other than 4 KiB. You can also force the OS to use a different cluster size by first formatting the drive for the OS as a data drive with a different cluster size under Windows and then installing Windows on that partition. I have a 2 KiB cluster size on a drive that has many hundreds of thousands of small files. However, I should note that since virtual memory pages are by default 4 KiB (another compelling reason for the 4 KiB default cluster size), most people don't have a use for other cluster sizes if they intend to have a page file on the drive.
  • ssj4Gogeta - Wednesday, March 18, 2009 - link

    Thanks for the wonderful article. And yes, I read every single word. LOL
  • rudolphna - Wednesday, March 18, 2009 - link

    Hey Anand, page 3, the random read latency graph - the results are mixed up. It's listed as the WD VelociRaptor having a .11ms latency; I think you might want to fix that. :)
