Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we don't have consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure erased SSD with sequential data to ensure that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.
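The numbers in this article come from Iometer on Windows, but for readers who want to approximate the workload themselves, here is a minimal sketch using fio on Linux. The device path, runtime, and log prefix are illustrative assumptions, and the commands overwrite the entire target drive.

```python
# Rough Linux approximation of the consistency workload (the article itself
# uses Iometer). WARNING: destructive to the drive at DEVICE.
import subprocess

DEVICE = "/dev/sdX"     # assumed placeholder for the drive under test
RUNTIME_S = 2000        # "just over half an hour"

def precondition():
    """Fill every user-accessible LBA with sequential data first."""
    subprocess.run([
        "fio", "--name=seqfill", f"--filename={DEVICE}",
        "--rw=write", "--bs=128k", "--iodepth=32",
        "--ioengine=libaio", "--direct=1",
    ], check=True)

def consistency_run():
    """4KB random writes across all LBAs at QD32, logging average IOPS every second."""
    subprocess.run([
        "fio", "--name=consistency", f"--filename={DEVICE}",
        "--rw=randwrite", "--bs=4k", "--iodepth=32",
        "--ioengine=libaio", "--direct=1",
        "--time_based", f"--runtime={RUNTIME_S}",
        "--norandommap", "--randrepeat=0",
        "--write_iops_log=consistency", "--log_avg_msec=1000",
    ], check=True)

if __name__ == "__main__":
    precondition()
    consistency_run()   # writes a per-second IOPS log (consistency_iops.1.log)
```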

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
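As a worked example of what limiting the LBA range means, here is a small sketch; the LBA count and the exact definition of the extra spare area are assumptions on my part, since the article does not spell out its formula.

```python
# Hypothetical helper: how many LBAs to include in the test run so that the
# untouched remainder of the user area behaves like extra over-provisioning.
# The definition (spare expressed as a fraction of the tested span) is an
# assumption; the drive's factory spare area is ignored here.

def lba_limit_for_op(total_user_lbas: int, extra_op: float) -> int:
    """Return the number of LBAs to test so the rest acts as extra spare area."""
    return int(total_user_lbas / (1.0 + extra_op))

if __name__ == "__main__":
    total = 234_441_648   # 512-byte LBAs on a nominal "120GB" drive (illustrative)
    for op in (0.00, 0.12, 0.25):
        lbas = lba_limit_for_op(total, op)
        print(f"{op:>4.0%} extra OP -> test first {lbas:,} LBAs "
              f"({lbas * 512 / 1e9:.1f}GB of {total * 512 / 1e9:.1f}GB)")
```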

Each of the three graphs has its own purpose. The first covers the whole duration of the test on a log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the buttons below each graph to switch the source data.
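For reference, the three views could be reproduced from a per-second IOPS log along these lines. This is a minimal sketch assuming a CSV-style log whose first two columns are elapsed milliseconds and IOPS (the shape of fio's *_iops.*.log files); the file name carries over from the sketch above.

```python
# Minimal sketch of the three graph views: full run (log), steady-state zoom
# (log) and steady-state zoom (linear). Assumes "elapsed_ms, iops, ..." rows.
import csv
import matplotlib.pyplot as plt

def load_iops(path: str):
    t, iops = [], []
    with open(path, newline="") as fh:
        for row in csv.reader(fh):
            t.append(int(row[0]) / 1000.0)   # ms -> s
            iops.append(int(row[1]))
    return t, iops

t, iops = load_iops("consistency_iops.1.log")

views = [
    ("Full run (log scale)", 0, "log"),
    ("Steady state, t >= 1400s (log scale)", 1400, "log"),
    ("Steady state, t >= 1400s (linear scale)", 1400, "linear"),
]
fig, axes = plt.subplots(3, 1, figsize=(8, 10))
for ax, (title, t_start, scale) in zip(axes, views):
    xs = [x for x in t if x >= t_start]
    ys = [y for x, y in zip(t, iops) if x >= t_start]
    ax.plot(xs, ys, ".", markersize=2)
    ax.set_yscale(scale)
    ax.set_title(title)
    ax.set_xlabel("Time (s)")
    ax.set_ylabel("IOPS")
fig.tight_layout()
plt.show()
```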

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[IO consistency graph, full test duration (log scale): WD Black2 120GB vs. Samsung SSD 840 EVO mSATA 1TB, Mushkin Atlas 240GB, Intel SSD 525, and Plextor M5M; selectable data sets: Default, 25% OP]

The area where low-cost designs usually fall behind is performance consistency, and the JMF667H in the Black2 is no exception. I was actually expecting far worse results, although the JMF667H is certainly one of the worst SATA 6Gbps controllers we've tested lately. The biggest issue is the inability to sustain performance: while the thickest line sits at ~5,000 IOPS, performance constantly drops below 1,000 IOPS and occasionally even to zero. Increasing the over-provisioning helps a bit, although no amount of over-provisioning can fix a design issue this deep.

[IO consistency graph, steady-state zoom (log scale): WD Black2 120GB vs. Samsung SSD 840 EVO mSATA 1TB, Mushkin Atlas 240GB, Intel SSD 525, and Plextor M5M; selectable data sets: Default, 25% OP]

[IO consistency graph, steady-state zoom (linear scale): WD Black2 120GB vs. Samsung SSD 840 EVO mSATA 1TB, Mushkin Atlas 480GB, Intel SSD 525, and Plextor M5M; selectable data sets: Default, 25% OP]

TRIM Validation

To test TRIM, I first filled all user-accessible LBAs with sequential data and then tortured the drive with 4KB random writes (100% LBA space, QD=32) for 30 minutes. After the torture I TRIM'ed the drive (quick format in Windows 7/8) and ran HD Tach to verify that TRIM is functional.
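HD Tach is a Windows tool; as a rough Linux stand-in (an assumption on my part, not the procedure used for this article), one can time raw sequential writes to the drive before and after discarding every LBA:

```python
# Rough TRIM sanity check on Linux (not the article's Windows quick-format +
# HD Tach procedure). WARNING: destructive to the drive at DEVICE.
import os
import subprocess
import time

DEVICE = "/dev/sdX"   # assumed placeholder for the drive under test
CHUNK = 1 << 20       # 1 MiB writes
TOTAL = 8 << 30       # sample the first 8 GiB of the LBA space

def seq_write_mbps(device: str) -> float:
    """Time buffered sequential writes plus a final fsync; a rough HD Tach-style pass."""
    buf = os.urandom(CHUNK)
    fd = os.open(device, os.O_WRONLY)
    try:
        start = time.time()
        written = 0
        while written < TOTAL:
            written += os.write(fd, buf)
        os.fsync(fd)                      # make sure the data actually hit the drive
        return written / (time.time() - start) / 1e6
    finally:
        os.close(fd)

if __name__ == "__main__":
    print(f"before TRIM: {seq_write_mbps(DEVICE):.0f} MB/s")
    subprocess.run(["blkdiscard", DEVICE], check=True)   # TRIM/discard every LBA
    print(f"after TRIM:  {seq_write_mbps(DEVICE):.0f} MB/s")
```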

Based on our sequential Iometer write test, write performance should recover to around 150MB/s after a secure erase. It seems that TRIM doesn't work perfectly here, although performance would likely recover further after some idle time.

Comments

  • apoe - Friday, February 7, 2014 - link

    People think US internet speeds are slow? When I lived in China, 50 KB/s was considered really fast. In the US, with a standard ISP, I can download Steam games at 6 MB/s, 120 times faster. According to Forbes, the US is in the top 10 for fastest internet speeds, but nothing tops South Korea. It helps that plenty of larger datacenters are located here too.
  • xKrNMBoYx - Tuesday, February 4, 2014 - link

    Try downloading a game or program that is 20-50+ GB constantly. That eats bandwidth and data cap. Without optical drives you're left having to download everything or do a multi-step transfer from disc to computer to another computer. There are external optical drives, but that is another story. Optical drives won't become obsolete until ISPs invest more in speed/reliability with lower prices. Then there's the group of people that wouldn't use the internet either.
  • JMcGrath - Sunday, February 9, 2014 - link

    @Morawka -

    Forget Blu-ray, it will be left far behind in the (fairly) near future. I won't go as far as saying that BD is obsolete or will be for quite a while; BD has far too much influence in the current movie industry, now and in the near future - even with 4K already hitting the market a lot faster than people ever thought possible.

    However, as other people have stated, BD is simply not a feasible solution going forward. It has served its purpose for many years now, but just like CD and DVD, it will be replaced by better technologies with larger and faster storage media.

    I think it's too hard to say what will become the dominant technology in the near future, and hopefully we won't have to go through another BD vs HD-DVD type war again(!) but there are a number of different technologies in the works, many of which have already shown working prototypes to replace the aging BD tech.

    Most of these technologies have gone with either smaller track widths and laser technologies, additional layers, or a combination of the two. However, there is one new technology that sounds very promising, and one I believe (and hope) will become the adopted standard - Holographic Discs!

    As the focus shifts to ultra-high resolutions and "retina"-type displays, deeper color depths and shading, and higher true refresh rates (4K/60 or 4K/120, for example), new technologies will be needed. Most internet connections - even the fastest available in most areas - won't support these extreme bitrates, and BD simply can't keep up either.

    I have seen demos of everything from true 24-bit color panels, to 60Hz and 120Hz 4K via HDMI 2.0 or DP 1.2+, to multi-panel / multi-head displays de-multiplexed (demuxed) showing true 23:9 content at 11280x4320p @ 120Hz using multiple DP/HDMI connections.

    When talking about just the current standard 4K/30 on an RGB 4:4:4, 12-bit panel, you're talking about:

    3840 * 2160 * 36 * 30 = 8,957,952,000 bps / 8 = 1,119,744,000 bytes/s...
    =1.04GB/s, that's 3.67TB/hour (uncompressed, true 2160P)!!

    That's 62.6GB / minute or just over 7.33TB for a 2 hour long movie, and this is excluding audio!!

    Now, add in new technologies (coming very soon) like 24-bit color, 60FPS, and the *real* widescreen aspect and you're looking at closer to 367.6GB/minute and 43TB for a 2 hour movie!

    I haven't kept a really close eye on holographic disc technology lately; I know it was originally created by GE (which actually had a working, though smaller, 4TB/layer prototype ~3 years ago!). The discs themselves look identical to a CD/DVD/BD, but rather than using a single laser on one linear track, the drive uses multiple lasers at different angles. The possibilities are really endless considering the technology itself is no different from current media: just add 2 more lasers @ 45 degrees and you increase density by 300%, add 2 more @ 30 degrees and you've increased it 500%, 2 more lasers @ 15 degrees... you get the idea.

    The last I remember reading about the technology was that they had the working 4TB/Layer model that I mentioned, but were also working on using additional lasers and a finer track which would allow them as much as 40-80TB in the future!!

    BD won the last round because Sony had such a large influence on the market, especially with the PS3 hitting the market at the same time as BD/HD-DVD players and HDTVs were becoming mainstream. It remains to be seen what the driving factor will be this time around, but with a company as large as GE behind the wheel and the demand in large data centers for backup, I think holographic discs stand a good chance of winning the next round.

    For everyone out there who works in a large DC using automated tape backups or cloud-based backups, imagine being able not only to store 80/160/320TB on a single disc the size of a CD but to do it in less than 2 hours!! Considering you could write 80TB in 2 hours, and assuming they release PC writers @ 2X, 4X, 8X, etc., you could back up an entire enterprise data center in less than an hour, throw it in a small fire/waterproof safe, and you're done!
  • patrickjchase - Thursday, January 30, 2014 - link

    This is tangential, but...

    I have similar backup needs, and faced similar issues with OD unreliability (and also with HDD failures for that matter). I ended up developing my own archiver that stripes backup files across multiple drives/disks (optical or HDD). It calculates and embeds strong block-level checksums, and provides RAID6-style Reed-Solomon-code based redundancy within each block-sized stripe. In particular it can tolerate up to 2 block-checksum failures in each stripe (for example, if I stripe across 7 Blu-Ray disks I can tolerate read errors from any 2 within any given block-sized stripe), which means that it can tolerate a *lot* of optical disk read errors. I intentionally degraded (read: scratched up) a backup set such that every disk yielded a very large number of read errors, but the backup payload as a whole was recoverable.

    With that in mind, I find that optical (Blu-Ray) media remain very useful for backups due to their superior shock/vibration/environmental tolerance as compared to hard drives. If I were using them without my archiver I'd be pretty worried, though :-).
  • Navvie - Friday, January 31, 2014 - link

    I'd be very, very interested in seeing this software!
  • Solandri - Friday, January 31, 2014 - link

    We did that on Usenet in the 1990s. When posting a big binary (e.g. a TV show episode) you had to break it up into multiple parts to fit within the Usenet post length limit. So you might break the TV show into 50 compressed archive files (usually RAR). The problem was Usenet would frequently fail to propagate a file. So even though you posted 50, many sites might only get 49 or 47. The solution was to add parity files. So you'd post the original 50 archive files (RAR) and 5 parity files (PAR).

    Any 50 of those 55 files would allow you to recreate the original video file. You could vary the number of parity files, but about 10% was typical.

    When I was backing up stuff to DVD, I found and downloaded newer versions of the old parity programs. I broke up my backups into enough archive files and parity files that I could lose large portions of several disks, or even an entire disk, and still recover my backup. Your block-level parity checksum scheme sounds like it would be more robust and transparent, but I only had to use freely downloadable tools.

    http://en.wikipedia.org/wiki/Parity_file
  • Navvie - Monday, February 3, 2014 - link

    When I read patrickjchase's comment my first thought was "that's exactly like usenet."
  • peter64 - Friday, January 31, 2014 - link

    Yes, thank you Dell for making devices that are easily user upgradeable. I hate all these other notebooks being completely sealed. You can't even replace the battery.
  • peter64 - Friday, January 31, 2014 - link

    I bet if Dell didn't put that removable optical drive in there, your notebook wouldn't have a 2nd hard drive at all.

    Thanks, Dell, for giving people options and post-purchase upgradeability in these times of sealed, non-user-upgradeable devices.
  • Johnmcl7 - Thursday, January 30, 2014 - link

    I think it would have been the ideal solution for a single drive perhaps last year, but I think it's too late and too expensive now that the Crucial M500 960GB is under £330. While that's still a bit more expensive than this drive, it's much neater (one drive instead of two), and I assume power consumption and heat would be better as well. That's the option I'd go for on a machine now; if they'd managed a 2TB drive for this price it would be a lot more attractive, as that would put it beyond what's affordable with SSDs at the moment. I realise there are technical difficulties with 2TB 2.5in drives (I don't know if there are any standard drives available in this capacity), but they have to move forward at some point.
