Used vs. New Performance: Revisited

Nearly all good SSDs perform their best when brand new. None of the blocks have any data in them, each write is performed at full speed, and all is well. Over time your drive gets written to, all blocks get occupied with data (both valid and invalid), and now every time you write to the SSD its controller has to do that painful read-modify-write and cleaning.
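If that read-modify-write penalty sounds abstract, here's a toy model in Python of what a single 4KB write costs on an empty block versus a full one. It's purely illustrative; the block geometry and controller behavior are simplifying assumptions, not any vendor's actual firmware:

    # Toy model: NAND can't overwrite a page in place; the whole block
    # must be erased first, so updating one page in a full block drags
    # every other valid page along for the ride.
    PAGES_PER_BLOCK = 128  # assumed; real geometry varies by drive

    def write_to_empty_page(block, page_index, data):
        """A write into a never-used page costs one program operation."""
        block[page_index] = data
        return 1

    def rewrite_page(block, page_index, data):
        """Overwriting a page in a full block: read-modify-write."""
        # 1. Read every other valid page out of the block.
        saved = {i: p for i, p in enumerate(block)
                 if p is not None and i != page_index}
        reads = len(saved)
        # 2. Erase the entire block.
        for i in range(len(block)):
            block[i] = None
        erases = 1
        # 3. Program the saved pages back, plus the new data.
        for i, p in saved.items():
            block[i] = p
        block[page_index] = data
        writes = len(saved) + 1
        return reads + erases + writes

    new_block = [None] * PAGES_PER_BLOCK
    print(write_to_empty_page(new_block, 0, "4KB of data"))  # 1 operation

    full_block = ["old data"] * PAGES_PER_BLOCK
    print(rewrite_page(full_block, 0, "4KB of data"))        # 256 operations

Same logical write, 256x the flash operations; that's the gap the controller's cleaning algorithms try to hide.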

In the Anthology I simulated this worst-case used state by first filling the drive with data, deleting the partition, then installing the OS and running my benchmarks. This worked very well because it filled every single flash block with data. The OS installation and actual testing added a few sprinkles of randomness that helped make the scenario even more strenuous, which I liked.

The problem here is that if a drive properly supports TRIM, the act of formatting the drive will erase all of the wonderful used data I purposefully filled the drive with. My “used” case on a drive supporting TRIM will now just be like testing a drive in a brand new state.

To prove this point I provide you with an example of what happens when you take a drive supporting TRIM, fill it with data and then format the drive:

SuperTalent UltraDrive GX (1711 firmware) - 4KB Random Write (Iometer)

Clean Drive: 13.1 MB/s
Used Drive: 6.93 MB/s
Used Drive After TRIM: 12.9 MB/s
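A quick note on units: Iometer is reporting throughput here, and at a 4KB transfer size that converts to IOPS by dividing by the transfer size. A rough conversion, assuming decimal megabytes:

    # Convert 4KB random-write throughput (MB/s) into approximate IOPS.
    for label, mbps in [("Clean", 13.1), ("Used", 6.93), ("Used after TRIM", 12.9)]:
        iops = mbps * 1_000_000 / 4096   # assumes decimal MB, 4KB transfers
        print(f"{label}: {iops:,.0f} IOPS")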


Oh look, performance doesn’t really change. The cleaning process takes longer now but other than that, the performance is the same.

So, I need a new way to test. It’s a shame because I’m particularly attached to the old way I tested, mostly because it provides a very stressful situation for the drives to deal with. After all, I don’t want to fool anyone into thinking a drive is faster than it is.

Once TRIM is enabled on all drives, the way I will test is by filling a drive after it’s been graced with an OS. I will fill it with both valid and invalid data, delete the invalid data and measure performance. This will measure how well the drive performs closer to capacity as well as how well it can TRIM data.

Unfortunately, no drives properly support TRIM yet. The beta Indilinx firmware with TRIM support works well, unless you put your system to sleep. Then there's a chance you might lose your data. Whoops. There's also the problem of Intel's Matrix Storage Manager not passing TRIM to your drives. All of this will get fixed before the end of the year, but it's just a bit too early to get TRIM-happy.
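If you're curious whether your own Windows 7 install is even set to issue TRIM, the standard check is fsutil's delete notification setting. Here's a minimal sketch wrapping it in Python; run it from an elevated prompt, and keep in mind a value of 0 only means the OS sends TRIM, not that the driver or drive honors it:

    import subprocess

    # Windows exposes its TRIM ("delete notification") setting via fsutil.
    # DisableDeleteNotify = 0 -> the OS issues TRIM on deletes;
    # DisableDeleteNotify = 1 -> it does not.
    out = subprocess.check_output(
        ["fsutil", "behavior", "query", "DisableDeleteNotify"],
        text=True,
    )
    print(out.strip())
    # Even with this set to 0, a RAID/AHCI driver that doesn't pass the
    # command through (like IMSM today) still keeps TRIM from the drive.
    print("OS-level TRIM enabled" if "= 0" in out else "OS-level TRIM disabled")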

What we get today is the first stage of migrating the way we test. In order to simulate a real user environment I take a freshly secure erased drive, install Windows 7 x64 on it (no cloning, full install this time), then install drivers/apps, then fill the remaining space on the drive and delete it. This fills the drive with invalid data that the drive must keep track of and juggle, much like what you'd see by simply using your system.
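If you want to approximate that fill-and-delete step on your own drive, a sketch along these lines does the job (a hypothetical script, not the exact tool used for these tests): write junk until the volume runs out of space, then delete it, leaving the formerly free flash pages full of invalid data.

    import os

    def dirty_drive(volume_root, chunk_mb=64):
        """Fill a volume's free space with junk, then delete it.

        With no TRIM in the picture, the deleted data stays in flash as
        invalid pages the controller must clean before reusing.
        """
        junk_file = os.path.join(volume_root, "junk.bin")
        chunk = os.urandom(chunk_mb * 1024 * 1024)
        try:
            with open(junk_file, "wb") as f:
                while True:           # write until the volume fills up
                    f.write(chunk)
                    f.flush()
                    os.fsync(f.fileno())
        except OSError:
            pass                      # disk full: mission accomplished
        finally:
            if os.path.exists(junk_file):
                os.remove(junk_file)  # delete; the pages stay dirty

    dirty_drive("D:\\")               # hypothetical target volume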

I’m using the latest IMSM driver so TRIM doesn’t get passed to the drives; I’m such a jerk to these poor SSDs.

I'll look at both new and used performance on the coming pages. Once TRIM gets here in full force I'll just start using it and we won't have to worry about looking at new vs. used performance.

The Test

CPU: Intel Core i7 965 running at 3.2GHz (Turbo & EIST Disabled)
Motherboard: Intel DX58SO (Intel X58)
Chipset: Intel X58
Chipset Drivers: Intel 9.1.1.1015 + Intel IMSM 8.9
Memory: Qimonda DDR3-1066 4 x 1GB (7-7-7-20)
Video Card: eVGA GeForce GTX 285
Video Drivers: NVIDIA ForceWare 190.38 64-bit
Desktop Resolution: 1920 x 1200
OS: Windows 7 x64
Comments

  • GourdFreeMan - Tuesday, September 1, 2009 - link

You would, in fact, be incorrect. I refer you to ANSI/IEEE Std 1084-1986, which defines kilo, mega, etc. as powers of two when used to refer to sizes of computer storage. It was common practice to use such definitions in Computer Science from the 1970s until standards were changed in 1991. As many people reading AnandTech received their formal education during this time period, it is understandable that the usage is still commonplace.
  • Undersea - Monday, August 31, 2009 - link

Where was this article two weeks ago before I bought my OCZ Summit? I hope this little article will jump-start Samsung.

    Thanks for all the hard work :)
  • FrancoisD - Monday, August 31, 2009 - link

    Hi Anand,

    Great article, as always. I've been following your site since the beginning and it's still the best one out there today!

I mainly use Macs these days and was wondering if you knew anything about Apple's plans for TRIM?

    Thanks for all the fantastic work, very technical yet easy to understand.

    François
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    Thanks for your support over the years :)

    No word on Apple's plans for TRIM yet, I am digging though...

    Take care,
    Anand
  • Dynotaku - Monday, August 31, 2009 - link

    Amazing article as always, now I just need one that shows me how to install just Win 7 and my Steam folder to the SSD and move Program Files and "My Documents" or whatever it's called in Win7 to a mechanical disk.
  • GullLars - Monday, August 31, 2009 - link

    A really great article with loads of data.
I only have one complaint. The 4KB random read/write tests in Iometer were done with QD=3, which simulates a really light workload and doesn't allow the controllers to make use of the potential of all their flash channels. I've seen Intel's X25-M scale up to 130-140 MB/s of 4KB random reads @ QD=64 (medium load) with AHCI activated. I have not yet tested my Vertex SSDs or Mtron Pros, but I suspect they also scale well beyond QD=3.

It would also be useful to compare the different tests in the HDD suite in PCMark Vantage instead of only the total score.
  • Anand Lal Shimpi - Monday, August 31, 2009 - link

    The reason I chose a queue depth of 3 is because that's, on average, what I found when I tried heavily (but realistically) loading some Windows desktop machines. I rarely found a queue depth over 5. The super high QDs are great for enterprise workloads but I don't believe they do a good job at showcasing single user desktop/notebook performance.

    I agree about the individual HDD suite tests, I was just trying to cut down on the number of graphs everyone had to mow through :)

    Take care,
    Anand
  • heulenwolf - Monday, August 31, 2009 - link

    Anand,

    I'd like to add my thanks to the many in the comments. Your articles really do stand out in their completeness and clarity. Well done.

I'm hoping you or someone else in the forums can shed some light on a problem I'm having. I got talked into getting a Dell "Ultraperformance" SSD for my new work system last year. It's a Samsung-branded SLC SSD, 64 GB capacity. As your results predict, it's really snappy when first loaded, and performance degrades after a few months with the drive ~3/4 full. One thing I haven't seen predicted, though, is that the drives have only lasted 6 months. The first system I received was so unstable without explanation that we convinced Dell to replace the entire machine. Since then, I'm now on my second SSD refurb replacement under warranty. In both SSD failures, the drive worked normally for ~6 months, then performance dropped to 5-10 MB/sec, Vista boot times went up to ~15 minutes, and I paid dearly in time for every single click and keypress. Once everything finally loaded, the system behaved almost normally. Dell's own diagnostics pointed to bad drives, yet in each case the bad SSD continued to work, just at super slow speeds. I was careful to disable Vista's automatic defrag with every install.

    My IT staff has blamestormed first Vista (we're still mostly an XP shop) and now SSDs in general as the culprit. They want me to turn in the SSD and replace it with a magnetic hard drive. So, my question is how to explain this:
    A) Am I that 1 in a bazillion case of having gotten a bad system followed by a bad drive followed by another bad drive
    B) Is there something about Vista - beyond auto defrag - that accelerates the wear and tear on these drives
    C) Is there something about Samsung's early SSD controllers that drops them to a lower speed under certain conditions (e.g. poorly implemented SMART diagnostics)
    D) Is my IT department right and all SSDs are evil ;)?
  • Ardax - Monday, August 31, 2009 - link

Well, first you could point them to this article to show how bad the Samsung SSDs are. Replace it with an Intel or Indilinx-based drive and you should be fine. Anecdotes so far indicate that people have been beating on them for months.

As far as configuring Vista for SSD usage, MS posted in the Engineering Windows 7 Blog about what they're doing for SSDs: http://blogs.msdn.com/e7/archive/2009/05/05/suppor...

    The short version of it is this: Disable Defrag, SuperFetch, ReadyBoost, and Application and Boot Prefetching. All these technologies were created to work around the low random read/write performance of traditional HDs and are unnecessary (or unhealthy, in the case of defrag) with SSDs.
  • heulenwolf - Monday, August 31, 2009 - link

Thanks for the reply, Ardax. Unfortunately, the choice of SSD brand was Dell's. As Anand points out, OEM sales are where Samsung seems to have a corner on the market. The choices are: Samsung "Ultraperformance" SSD, Samsung not-so-ultraperformance SSD, magnetic HDD, or void the warranty by installing a non-Dell part. I could ask that we buy a non-Dell SSD, but since installing it would preclude further warranty support from Dell and all SSDs have become the scapegoat, I doubt my request would be accepted. Additionally, the article doesn't say much about drive reliability, which is the fundamental problem in my case.

I'll look into the linked recommendations on Win 7 and SSDs. I had already done some research on these features and found the general consensus to be that leaving any of them enabled (with the exception of defrag) should do no harm.
