The SSD Relapse: Understanding and Choosing the Best SSD
by Anand Lal Shimpi on August 30, 2009 12:00 AM EST - Posted in Storage
Impact of Idle Garbage Collection
The other option Indilinx gives its users to improve used-state performance is something called idle or background garbage collection. The idea is that, without any effort on your part or the OS', the drive defragments itself while it sits idle.
The feature was actually first introduced by Samsung for its RBB-based drives, but I'll get to the issues with Samsung's drives momentarily.
Idle garbage collection works in one of two ways: the drive can look at its own data and reorganize it into a less fragmented state, or it can parse the file system on the drive and attempt to TRIM based on what it finds. Both Indilinx and Samsung have attempted this sort of idle garbage collection, and they appear to do it in different ways. While the end result is the same, how they get there determines the usefulness of the feature.
In the first scenario, the drive isn't simply TRIMing its contents; without help from the file system it doesn't know what to TRIM, so it must still keep track of all of its data. Instead, the drive re-organizes that data to maximize performance.
The second scenario requires a compatible file system (allegedly NTFS for the Samsung drives); the data is then actually TRIMed, just as it would be with the TRIM instruction.
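To make the difference concrete, here is a minimal sketch (in Python, with invented structures and block sizes, not any vendor's actual firmware) of what the first approach boils down to: the controller picks a fragmented block, relocates its still-valid pages, and erases it, all without any knowledge of the file system. With TRIM, by contrast, the drive could skip relocating pages the OS has already deleted.

```python
# Toy model of drive-side idle garbage collection. All names and
# constants here are illustrative assumptions, not real firmware behavior.

PAGES_PER_BLOCK = 128

class Block:
    def __init__(self):
        # None = never written, False = stale (superseded) data,
        # anything else = valid user data the drive must preserve.
        self.pages = [None] * PAGES_PER_BLOCK

    def valid_pages(self):
        return [p for p in self.pages if p not in (None, False)]

def idle_garbage_collect(blocks, spare_block):
    """Consolidate the most fragmented block while the drive is idle."""
    # Pick the block with the least valid data: cheapest to clean.
    victim = min(blocks, key=lambda b: len(b.valid_pages()))
    # Relocate its valid pages. This read/write traffic is the hidden
    # cost: it happens even though the host is doing nothing at all.
    for i, page in enumerate(victim.valid_pages()):
        spare_block.pages[i] = page
    # Erase the victim so future writes to it run at full speed.
    victim.pages = [None] * PAGES_PER_BLOCK
    return victim
```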
Details are slim, but the idle garbage collection does work in improving performance:
| PCMark Vantage HDD Score | New | "Used" | After TRIM/Idle GC | % of New Perf |
|---|---|---|---|---|
| Corsair P256 (Samsung MLC) | 26607 | 18786 | 24317 | 91% |
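The last column is just the post-cleanup score divided by the brand-new score: 24317 / 26607 ≈ 0.91, so the Samsung drive gets back to roughly 91% of its new performance after being left idle.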
Presumably this isn't without some impact on battery life in a notebook. Furthermore, it's impossible to tell what impact it has on the lifespan of the drive. If a drive is simply reorganizing data on the fly into a better (higher performing) state, that's a lot of reads and writes happening while you're doing nothing at all. And unfortunately, there's no way to switch it off.
While Indilinx is following in Samsung's footsteps by enabling idle garbage collection, I believe that's a mistake. Personally, I think real TRIM support (or at least the wiper tool) is the way to go, and it sounds like we'll be getting it on most if not all of these SSDs in the next couple of months. Idle garbage collection worries me.
295 Comments
zodiacfml - Wednesday, September 2, 2009 - link
Very informative, answered more than anything in my mind. Hope to see this again in the future with these drive capacities around $100.
mgrmgr - Wednesday, September 2, 2009 - link
Any idea if the (mid-Sept release?) OCZ Colossus's internal RAID setup will handle the problem of RAID controllers not being able to pass Windows 7's TRIM command to the SSD array? I'm intent on getting a new Photoshop machine with two SSDs in RAID-0 as soon as Win7 releases, but the word here and elsewhere so far is that RAID will block the TRIM function.
kunedog - Wednesday, September 2, 2009 - link
All the Gen2 X-25M 80GB drives are apparently gone from Newegg . . . so they've marked up the Gen1 drives to $360 (from $230): http://www.newegg.com/Product/Product.aspx?Item=N8...
Unbelievable.
gfody - Wednesday, September 2, 2009 - link
What happened to the Gen2 160GB on Newegg? For a month the ETA was 9/2 (today) and now it's as if they never had it in the first place. The product page has been removed. It's like Newegg is holding the Gen2 drives hostage until we buy out their remaining stock of Gen1 drives.
iwodo - Tuesday, September 1, 2009 - link
I think it acts as a good summary. However, someone wrote last time about the Intel drive handling Random Read / Write extremely poorly during Sequential Read / Write. Has Anand investigated this yet?
I am hoping next Gen Intel SSD coming in Q2 10 will bring some substantial improvement.
statik213 - Tuesday, September 1, 2009 - link
Does the RAID controller propagate TRIM commands to the SSD? Or will having RAID negate TRIM?
justaviking - Tuesday, September 1, 2009 - link
Another great article, Anand! Thanks, and keep them coming. If this has already been discussed, I apologize. I'm still exhausted from reading the wonderful article, and have not read all 17 pages of comments.
On PAGE 3, it talks about the trade-off of larger vs. smaller pages.
I wonder if it would be feasible to make a hybrid drive, with a portion of the drive using small pages for faster performance when writing small files, and the majority of it being larger pages to keep the management of the drive reasonable.
Any file could be written anywhere, but the controller would bias small writes to the small pages, and large writes to the large pages.
Externally it would appear as a single drive, of course, but deep down in the internals, it would essentially be two drives. Each of the two portions would be tuned for maximum performance in different areas, but able to serve as backup or overflow if the other portion became full or ever got written to too many times.
Interesting concept? Or a hare-brained idea by an ignorant amateur? (Something like the sketch below is roughly what I have in mind.)
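A toy sketch of that routing policy (in Python, with invented pool names, page sizes, and a made-up size threshold; purely an illustration of the comment above, not how any real controller is known to work):

```python
# Hybrid-drive idea: bias small writes to a small-page region and
# large writes to a large-page region. Every name and number here is
# a hypothetical placeholder.

SMALL_PAGE = 2 * 1024    # e.g. 2KB pages for small, random writes
LARGE_PAGE = 16 * 1024   # e.g. 16KB pages for bulk, sequential data
THRESHOLD = 8 * 1024     # writes smaller than this go to the small pool

small_pool, large_pool = [], []  # stand-ins for two regions of flash

def write(data: bytes):
    """Single external entry point; internally picks a region by size."""
    if len(data) < THRESHOLD:
        # Small writes waste less space and management overhead here.
        small_pool.append(data)
    else:
        # Large writes are cheap to track with big pages.
        large_pool.append(data)

write(b"x" * 4096)    # lands in the small-page region
write(b"y" * 65536)   # lands in the large-page region
```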
CList - Tuesday, September 1, 2009 - link
Great article, wonderful to see insightful, in-depth analysis. I'd be curious to hear anyone's thoughts on the implications of running virtual hard disk files on SSDs. I do a lot of work these days on virtual machines, and I'd love to get them feeling more snappy - especially on my laptop, which is limited to 4GB of RAM.
For example:
What would the constant updates of those vmdk (or "vhd") files do to the disk's lifespan?
If the OS hosting the VM is Windows 7, but the virtual machine is WinServer2003, will the TRIM command be used properly?
Cheers,
CList
pcfxer - Tuesday, September 1, 2009 - link
Great article! "It seems that building Pidgin is more CPU than IO bound..."
Obviously, Mr. Anand doesn't understand how compilers work ;). Compilers will always be CPU and memory bound; reduce the memory in your computer to, say, 256MB (or lower) and you'll see what I mean. The levels of recursion necessary to follow the productions (the grammar rules that define the language) use up memory but would rarely touch the drive unless the OS had terrible resource management.
CMGuy - Wednesday, September 2, 2009 - link
While I can't comment on the specifics of software compilers, I know that faster disk IO makes a big difference when you're performing a full build (compilation and packaging) of software. IDEs these days spend a lot of their time reading/writing small files (that's a lot of small, random disk IO), and a good SSD can make a huge difference here.