Simulating a Used Drive

Since SSD performance degrades over time, it’s important to look not only at how well these drives perform when new, but also at the worst they’ll perform over their lifetime. To do that we need a repeatable way of “seasoning” a drive, driving its performance down to the worst it could possibly get. The most realistic worst-case scenario is one where every single block on the drive is full of data. Since a secure erase wipes all LBAs, it gives us a known, repeatable starting point: to simulate a well-seasoned drive I first secure erased it.

After the secure erase, I used Iometer to write one contiguous file across the disk, filling the entire drive with 128KB blocks. In the case of the 80GB Intel X25-M, that’s 74.5GB of data on the drive before I run a single benchmark. The spare area is left untouched.

Next, I take my test image and restore it onto the partition with a sector-by-sector copy. The sequential file write ensured that data is stored in every page of the SSD; the test image restore adds a twist of randomness (and realism) to the data.
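
For anyone who wants to reproduce the fill step, here is a minimal sketch in Python. It is an illustration under stated assumptions, not the exact procedure used here: the article fills the drive with Iometer, while this sketch writes 128KB blocks directly to a raw Linux block device (the /dev/sdX path is a placeholder) until every user-addressable LBA holds data. It assumes a disposable test drive that has already been secure erased.

```python
# Sketch of the seasoning fill step: write 128KB blocks sequentially across
# the raw device so every user-addressable LBA holds data. /dev/sdX is a
# placeholder; this destroys everything on the target drive.
import os

DEVICE = "/dev/sdX"        # hypothetical target SSD
BLOCK_SIZE = 128 * 1024    # 128KB writes, matching the Iometer fill

def fill_drive(path: str, block_size: int = BLOCK_SIZE) -> int:
    """Write block_size chunks sequentially until the device is full.
    Returns the total number of bytes written."""
    pattern = os.urandom(block_size)   # incompressible data
    written = 0
    fd = os.open(path, os.O_WRONLY)
    try:
        while True:
            try:
                n = os.write(fd, pattern)
            except OSError:            # ENOSPC once we hit the end of the device
                break
            if n == 0:                 # some kernels signal the end with a zero-length write
                break
            written += n
        os.fsync(fd)                   # make sure everything actually lands in flash
    finally:
        os.close(fd)
    return written

if __name__ == "__main__":
    total = fill_drive(DEVICE)
    print(f"Wrote {total / 2**30:.1f} GiB to {DEVICE}")
```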

There are other ways to produce a drive in its well-used state, but this one ends up being the most consistent and repeatable. To confirm that my little simulation does indeed produce a realistically worn drive, I ran PCMark on three different drives: 1) a freshly secure-erased Intel X25-M, 2) an Intel X25-M set up using the method I just described, and 3) the Intel X25-M used in my CPU testbed, which has been through hundreds of SYSMark runs.

The benchmark of choice is PCMark Vantage; it simulates the real world better than most drive benchmarks. The results are below:

Intel X25-M State            PCMark Vantage Overall Test    PCMark Vantage HDD Test
Fresh Secure Erase           11902                          29879
Simulated Used Drive         11536                          23252
Actual Testbed Used Drive    11140                          23438

Compared to the freshly secure-erased drive, my testbed drive loses about 6% of its overall PCMark Vantage score and 22% of its hard drive specific score. The seasoning method I described above produces a drive with a nearly identical drop in HDD performance (about 22%) and a similar, if slightly smaller, drop overall (about 3%).
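
For reference, a quick sanity check of those percentages, computed directly from the PCMark Vantage scores in the table above:

```python
# Percentage drops relative to the freshly secure-erased drive,
# using the PCMark Vantage scores from the table above.
scores = {
    "fresh":     (11902, 29879),   # (overall, HDD suite)
    "simulated": (11536, 23252),
    "testbed":   (11140, 23438),
}

def drop(new: int, used: int) -> float:
    return 100.0 * (new - used) / new

for state in ("simulated", "testbed"):
    overall = drop(scores["fresh"][0], scores[state][0])
    hdd = drop(scores["fresh"][1], scores[state][1])
    print(f"{state}: {overall:.1f}% overall, {hdd:.1f}% HDD")
# simulated: 3.1% overall, 22.2% HDD
# testbed:   6.4% overall, 21.6% HDD
```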

The method appears to be sound.

Now that we have a way of simulating a used drive, let’s see how the contestants fared.

Comments

  • strikeback03 - Thursday, March 19, 2009

    I understand your point, but I am not sure you understand the point I (and others) are trying to make. The SSD makers (should) know their market. As they seem to be marketing these SSDs to consumers, they should know that means the vast majority are on Vista or OSX, so the OS won't be optimized for SSDs. It also means the majority will be using integrated disk controllers. Therefore, in choosing an SSD controller which does not operate properly given those restrictions, they chose poorly. The testing here at Anandtech shows that regardless of how the drives might perform in ideal circumstances, they have noticeable issues when used the way most users would use them, which is really all those users care about.
  • tshen83 - Thursday, March 19, 2009

    In the history of computing, it has always been the case that software compensated for new hardware, not the other way around. When new hardware comes out that obsoletes the current generation of software, new software gets written to take advantage of the new hardware.
    Think of it this way: you always write newer versions of drivers to drive the newest GPUs. When was the last time a newer GPU worked with older drivers?

    Nobody should be designing hardware now that makes DOS run fast, right? All file systems (except ZFS and soon BTRFS) are obsolete now for SSDs, so we write new file systems. I am not sure Intel X25-M's approach of virtualizing flash to the liking of NTFS and EXT3 is the correct one. It is simply a bridge to get to the next solution.

    SSD makers right now are making a serious mistake pushing SSDs down consumers' throats during an economic crisis. They should have focused on the enterprise market, targeting DB servers. But in that space, the Intel X25-E sits alone without competition. (Supertalent UltraDrive LEs should be within 25% of the X25-E by my estimation.)
  • pmonti80 - Thursday, March 19, 2009

    Now I understand what you meant in the beginning. But I still don't agree with you: the system reviewed is the one 99% of SSD buyers will use (integrated mobo controller + NTFS). So why optimize the benchmark to show the bad drives in a good light?

    About the Vertex, I don't understand what you are complaining about. After reading this article most people got the idea that the Vertex is a good drive at half Intel's price (I know, I searched on Google for comments about this article).
  • tshen83 - Thursday, March 19, 2009

    Professional people only look at two SSD benchmarks: random read IOPS at 4K and random write IOPS at 4K (maybe 8K too for native DB pages).

    The Vertex random write IOPS at 4K size is abysmal. 2.4MB/sec at 4K means it only does 600ish random write IOPS. Something was wrong, and Vista/ICH10R didn't help. The 8GB write boundary Anand imposed on the random write IOPS test is fishy. So is the artificial IO queue depth of 3.

    The Vertex random write IOPS should be better. The random read IOPS also should be slightly better. I have seen OCZ's own benchmark placing the Vertex very close to the Intel X25-M in random read/write IOPS tests.

    I personally think that if you use NTFS, just ignore SSDs for now until Windows 7 RTM. It can't hurt to wait for SSD prices to drop some more in the next 6 months. Same thing for Linux, although I would argue that Linux is in an even worse position for SSDs right now than Windows 7. EXT3/EXT4/JFS/XFS/REISERFS all suck on SSDs.
  • gss4w - Thursday, March 19, 2009

    Anandtech should adopt the same comment system as Dailytech so that comments that don't make any sense can be rated down. Who would want to read a review of something using a beta OS, or worse, an OS that is only used on servers? I think it would be interesting to see if the Windows 7 beta offered any improvements, but that should not be the focus of the review.
  • 7Enigma - Thursday, March 19, 2009

    Here's another vote for the Dailytech comments section. The ability to rate up or down, but more importantly to HIDE comments below a threshold, would make for much more enjoyable reading.
  • curtisfong - Wednesday, March 18, 2009

    Why should Anand test with the Windows 7 beta or *nix? What is the majority OS?

    Kudos to Anand for testing real world performance on an OS that most people use, and to Intel for tuning their drives for it. I'm happy the other manufacturers are losing business... maybe they will also tune their drives for real world performance and not synthetic benchmarks.

    To the poster above: do you work for OCZ or Samsung?
  • Glenn - Wednesday, March 18, 2009

    tshen83 wrote: "A very thorough review by tshen83, an hour ago. BUT, still based on Windows Vista."

    As long as these drives are marketed toward said OS, why would you not use it? Most of us wouldn't recognize Solaris if we saw it! And I believe you seriously overestimate yourself if you're gonna drill anything into Anand's head! You might need your own site, huh?

    Great job, Anand! Don't forget to remind these CEOs that they also need to provide any software needed to configure and optimize these drives to work properly, i.e. go to the OCZ forums and try to figure out how to align, optimize and keep your drive running like it's supposed to, in less than 4 hours of reading! It would be nice if these companies would do their own beta testing and not rely on early adopters to do it for them!
  • Roland00 - Wednesday, March 18, 2009

    It was a joy to read all 31 pages.
  • MagicPants - Wednesday, March 18, 2009

    Anand, it would be really helpful to have a list of the SSD companies blacklisting you so I know which ones to avoid. In general it would be nice to know who doesn't provide review samples to reputable sites.
