Used vs. New Performance: Revisited

Nearly all good SSDs perform their best when brand new. None of the blocks hold any data, every write happens at full speed, all is bueno. Over time your drive gets written to, all blocks get occupied with data (both valid and invalid), and eventually every time you write to the SSD its controller has to do that painful read-modify-write and cleaning.
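
To put a rough number on that penalty, here's a minimal back-of-the-envelope sketch in Python. The 512KB block and 4KB page geometry is an assumption for illustration, not any particular drive's spec:

    # Worst-case write amplification of a read-modify-write cycle.
    # Geometry is illustrative, not from any specific drive.
    BLOCK_KB = 512   # erase block size (assumed)
    PAGE_KB = 4      # program page size (assumed)

    # Brand new drive: a 4KB write programs one empty page.
    new_flash_written = PAGE_KB

    # Fully used drive: the controller reads the whole block, erases it,
    # and writes everything back just to update one 4KB page.
    used_flash_written = BLOCK_KB

    print(f"worst-case amplification: {used_flash_written / new_flash_written:.0f}x")  # 128x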

In the Anthology I simulated this worst-case used state by first filling the drive with data, deleting the partition, then installing the OS and running my benchmarks. This worked very well because it filled every single flash block with data. The OS installation and actual testing added a few sprinkles of randomness that made the scenario even more strenuous, which I liked.

The problem here is that if a drive properly supports TRIM, the act of formatting the drive will erase all of the wonderful used data I purposefully filled it with. My “used” case on a drive supporting TRIM is now no different from testing the drive brand new.

To prove this point I provide you with an example of what happens when you take a drive supporting TRIM, fill it with data and then format the drive:

SuperTalent UltraDrive GX 1711, 4KB Random Write Speed
Clean Drive: 13.1 MB/s
Used Drive: 6.93 MB/s
Used Drive After TRIM: 12.9 MB/s

Oh look, performance doesn’t really change. The cleaning process takes longer now but other than that, the performance is the same.

So, I need a new way to test. It’s a shame because I’m particularly attached to the old way I tested, mostly because it provides a very stressful situation for the drives to deal with. After all, I don’t want to fool anyone into thinking a drive is faster than it is.

Once TRIM is enabled on all drives, the way I will test is by filling a drive after it’s been graced with an OS. I will fill it with both valid and invalid data, delete the invalid data and measure performance. This will measure how well the drive performs closer to capacity as well as how well it can TRIM data.

Unfortunately, no drives properly support TRIM yet. The beta Indilinx firmware with TRIM support works well, unless you put your system to sleep; then there’s a chance you might lose your data. Whoops. There’s also the problem of Intel’s Matrix Storage Manager not passing TRIM to your drives. All of this will get fixed before the end of the year, but it’s just a bit too early to get TRIM happy.
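
As an aside, if you want to check whether your own Windows 7 install is at least trying to send TRIM, the stock fsutil tool will tell you. Note this only reflects the OS side; it can't see whether a driver like Intel's drops the command on the way to the drive. A small sketch:

    # Query Windows 7's TRIM (delete notification) setting via fsutil.
    # DisableDeleteNotify = 0 means the OS issues TRIM; 1 means it doesn't.
    # This cannot tell you whether the storage driver passes TRIM to the drive.
    import subprocess

    result = subprocess.run(
        ["fsutil", "behavior", "query", "DisableDeleteNotify"],
        capture_output=True, text=True
    )
    print(result.stdout.strip())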

What we get today is the first stage of migrating the way we test. In order to simulate a real user environment I take a freshly secure erased drive, install Windows 7 x64 on it (no cloning, full install this time), then install drivers/apps, then fill the remaining space on the drive and delete it. This fills the drive with invalid data that the drive must keep track of and juggle, much like what you'd see by simply using your system.
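
If you want to approximate that fill-and-delete step on your own machine, here's a minimal sketch. The scratch path and chunk size are arbitrary choices, and on a setup where TRIM actually reaches the drive the deletes would simply clean it right back up:

    # Fill the drive's remaining space with junk files, then delete them.
    # Without TRIM reaching the drive, the deleted LBAs remain "invalid"
    # data the controller must track and juggle - the used state we want.
    import os
    import shutil

    SCRATCH = r"C:\filler"          # assumed scratch location
    CHUNK = 64 * 1024 * 1024        # 64MB per file (arbitrary)

    os.makedirs(SCRATCH, exist_ok=True)
    i = 0
    try:
        while True:
            with open(os.path.join(SCRATCH, f"junk_{i:05d}.bin"), "wb") as f:
                f.write(os.urandom(CHUNK))   # incompressible data
            i += 1
    except OSError:
        pass                         # disk is full; every free block now holds data

    shutil.rmtree(SCRATCH)           # delete it all, leaving the invalid data behind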

I’m using the latest IMSM driver so TRIM doesn’t get passed to the drives; I’m such a jerk to these poor SSDs.

I’ll look at both new and used performance on the coming pages. Once TRIM gets here in full force I’ll just start using it and we won't have to worry about looking at new vs. used performance.

The Test

CPU: Intel Core i7 965 running at 3.2GHz (Turbo & EIST Disabled)
Motherboard: Intel DX58SO (Intel X58)
Chipset: Intel X58
Chipset Drivers: Intel 9.1.1.1015 + Intel IMSM 8.9
Memory: Qimonda DDR3-1066 4 x 1GB (7-7-7-20)
Video Card: eVGA GeForce GTX 285
Video Drivers: NVIDIA ForceWare 190.38 64-bit
Desktop Resolution: 1920 x 1200
OS: Windows 7 x64
Comments

  • Wwhat - Sunday, September 6, 2009 - link

    If you read the first part of the article alone you would see how important a good controller is in an SSD, and you probably wouldn't ask his question. Plus, SSDs use the flash in parallel where a bunch of USB drives would not; the parallel thing is also mentioned in the article.
    And USB actually has a lot of overhead on the system, both in CPU cycles and in I/O interrupts.

    There are plug-in PCI(e) cards to stick SD cards in though, to get a similar setup, but it's a bit of a hack; with the overhead, the management and controllers used, and the price of buying many SD cards it's not competitive in the end, and you're better off with a real SSD, I'm told.
  • Transisto - Sunday, September 6, 2009 - link

    You are right, the controller is very important.

    I think caching about 4-8GB of the most often accessed program files has the best price/performance ratio for improving application load time. It is also very easily scalable.

    One of the problems I see is integrating this SSD cache into the OS, or before booting, so it acts where it matters most.

    I think there could be a near-X25-M speedup from optimized caching and a good controller, no matter what flash form factor it relies on: SD, CF, USB, PCI or onboard.

    Why does it seem nobody talks about eBoostr-type caching? And, in other news, Intel's Braidwood flash memory module could kill the SSD market.

    I am quite a performance seeker.

    But I don't think I need 80GB of SSD in my desktop, just some 8GB of good caching. Maybe a 60GB SSD on a laptop.

    Well... I'm gonna pay for that controller once, not twice (160gb?)
  • Wwhat - Saturday, September 5, 2009 - link

    Not that it's not a good article, although it does seem like two articles in one, but what I miss is getting down to brass tacks regarding the filesystem used: why isn't there an SSD-specific filesystem, and what choices can be made during formatting in regards to block size? Obviously if you select large blocks at the filesystem level it would impact the performance of the garbage collection, right? From reading this it actually seems the author never delved very deeply into filesystems.
    The thing is that even with large blocks at the filesystem level the system might still use small segments for the actual bookkeeping, and if it needs to write small bits to keep track of large blocks you'd still have issues. That's why I say a specific SSD filesystem might be good, but only if there isn't a new form of SSD in the near future that makes the effort pointless; and if a filesystem for SSDs were made, then the firmware should not try to compensate for existing filesystem issues with SSDs.
    I read that the SD people selected exFAT as the filesystem for their next generation, and that also makes me wonder: is that just to do with licensing costs, or is NTFS bad for flash-based devices?
    Point being that the filesystem needs to be highlighted more, I think.
  • Bolas - Friday, September 4, 2009 - link

    Would someone please hit Dell with the clue-board and convince them to offer Intel SSDs in their Alienware systems? The Samsung SSDs are all that's stopping me from buying an Alienware laptop at the moment.
  • EatTheMeat - Friday, September 4, 2009 - link

    Congratulations on another fab masterclass. This is easily the best educational material on the internet regarding SSDs, and contrary to some comments, I think you've pitched your recommendations just right. I can also appreciate why you approached this article with some trepidation. Bravo.

    I have a RAID question for Anand (or anyone else who feels qualified :-))

    I'm thinking of setting up two 160GB X25-M G2 drives in RAID-0 for Win 7. I'd simply use the ICH10R controller for it. It's not so much to increase performance but rather to increase capacity and make sure each drive wears equally. After considering it further I'm wondering if SSD RAID is wise. First there's the eternal question of stripe size and write amplification. It makes sense to me to set the stripe size to be the same as, or a fraction of, the block size of the SSD. If you choose the wrong stripe size does it influence write amplification?

    I'm aware that performance should increase with larger stripes, but I'm more concerned about what's healthy for the SSD (see the sketch at the end of this comment).

    Do you think I should just hold off on SSD RAID until RAID drivers are optimised for SSDs?

    I know you're planning a RAID article for SSDs - I for one look forward to it greatly. I've read all your other SSD articles like four times!
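
    A quick toy model of the stripe/block interaction in question; all sizes are assumptions for illustration, and real controllers remap writes anyway:

        # How many (assumed 512KB) flash erase blocks does one write touch?
        # Fewer partially-touched blocks means less read-modify-write later.
        FLASH_BLOCK_KB = 512

        def blocks_touched(write_kb, offset_kb=0):
            """Count erase blocks overlapped by a write at a given offset."""
            first = offset_kb // FLASH_BLOCK_KB
            last = (offset_kb + write_kb - 1) // FLASH_BLOCK_KB
            return last - first + 1

        for stripe_kb in (64, 128, 512, 1024):
            print(f"{stripe_kb:>5}KB stripe: "
                  f"{blocks_touched(stripe_kb)} block(s) aligned, "
                  f"{blocks_touched(stripe_kb, offset_kb=4)} misaligned by 4KB")

    The takeaway from this model: a stripe that is a fraction of the (assumed) block size stays within one block either way, while stripes at or above the block size start touching extra blocks when misaligned.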
  • Bolas - Friday, September 4, 2009 - link

    If SSDs in RAID lose the benefit of the TRIM command, then you're shooting yourself in the foot by setting them up in RAID. If you need more capacity, wait for the Intel 320GB SSD drives next year. Or better yet, use a 160GB drive for your boot drive, then set up some traditional hard disk drives in RAID for your storage requirements.
  • EatTheMeat - Friday, September 4, 2009 - link

    Thanks for the reply. I definitely hear you about the TRIM functionality as I doubt RAID drivers will pass this through before 2010. Still though, from Anand's graphs it doesn't look like the G2s drop much in performance with use anyway. With regard to waiting for 320GB drives - I can't. These things are just too enticing, and you could always say that technology will be better / faster / cheaper next year. I've decided to take the plunge now as I'm fed up with an i7 965 booting and loading apps / games like a snail even from a RAID drive.

    I just don't want to bugger the SSDs up with loads of write amplification / fragmentation due to RAID-0. I.e., is RAID-0 bad for the health of SSDs like defragmentation / prefetch is? I wonder if anyone knows the answer to this question yet.
  • jagreenm - Saturday, September 5, 2009 - link

    What about just using Windows drive spanning for 2 160's?
  • EatTheMeat - Saturday, September 5, 2009 - link

    As far as I know drive spanning doesn't even out the wear between the disks. It just fills up first one and then the other. That's important with SSDs because RAID can really help reduce drive wear by spreading all reads and writes across two drives. In fact, it should more than halve drive wear, as both drives will have large scratch portions. Not so with spanning, as far as I know; there's a toy model after this comment.

    Does anyone know if I'm talking sh1t here? :-)
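
    For what it's worth, a toy model of the difference being debated here; purely illustrative, and real filesystems and controllers complicate it:

        # Where do sequential writes land under RAID-0 striping vs spanning?
        N_WRITES = 1000
        DRIVE_FILLS_AT = 800      # arbitrary per-drive capacity, in writes

        striped = [0, 0]
        spanned = [0, 0]
        for i in range(N_WRITES):
            striped[i % 2] += 1                            # stripes alternate drives
            spanned[0 if i < DRIVE_FILLS_AT else 1] += 1   # spanning fills drive 0 first

        print("striped writes per drive:", striped)    # [500, 500]
        print("spanned writes per drive:", spanned)    # [800, 200]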
  • pepito - Monday, November 16, 2009 - link

    If you are not sure, then why do you assert such things?

    I don't know about Windows, but at least in Linux, when using LVM2 or RAID-0, the writes spread evenly across all block devices.
    That means you get twice the speed and better drive wear.

    I would like to think that Microsoft's implementation works more or less the same way, as this is completely logical (but then again, it's Microsoft, so who can really know?).
