Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. SSDs cannot deliver consistent IO latency because every controller must eventually perform some amount of defragmentation or garbage collection in order to keep operating at high speed. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance shows up as application slowdowns.

To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that all user-accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test runs for just over half an hour, and we record instantaneous IOPS every second.

We also test drives with added over-provisioning by limiting the LBA range. This gives us a look at the drive's behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
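To make the procedure concrete, here is a minimal Python sketch of the measurement loop, assuming a disposable Linux block device at a placeholder path. This is not our actual methodology: we use Iometer under Windows, and plain synchronous Python I/O runs at an effective queue depth of 1 rather than 32, so treat its numbers as illustrative only. The device path, capacity, and over-provisioning fraction are assumptions you would adjust, and pointing it at a real drive destroys everything on it.

```python
import mmap
import os
import random
import time

DEV = "/dev/sdX"         # hypothetical test device; every byte on it will be destroyed
CAPACITY = 480 * 10**9   # user-visible capacity in bytes
OP_FRACTION = 0.0        # set to 0.25 to emulate 25% over-provisioning
BLOCK = 4096             # 4KB writes
RUNTIME = 2000           # seconds; the article's run lasts just over half an hour

# Restricting the LBA range leaves the tail of the drive untouched,
# which the controller can treat as extra spare area.
blocks = int(CAPACITY * (1 - OP_FRACTION)) // BLOCK

buf = mmap.mmap(-1, BLOCK)     # page-aligned buffer, required by O_DIRECT
buf.write(os.urandom(BLOCK))   # incompressible payload

fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT)  # bypass the page cache (Linux-only flag)
ios, start = 0, time.monotonic()
last = start
while time.monotonic() - start < RUNTIME:
    os.pwrite(fd, buf, random.randrange(blocks) * BLOCK)  # random aligned offset
    ios += 1
    now = time.monotonic()
    if now - last >= 1.0:      # log instantaneous IOPS once per second
        print(f"{now - start:6.0f}s  {ios / (now - last):8.0f} IOPS")
        ios, last = 0, now
os.close(fd)
```

A real run would use fio or Iometer to sustain QD32; the resulting per-second log is the raw data behind the graphs below.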

Each of the three graphs has its own purpose. The first covers the whole duration of the test on a log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the dropdown selections below each graph to switch the source data.
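As a rough illustration of how the three views relate, the hypothetical snippet below renders a synthetic per-second IOPS log (a stand-in for real test output) the same three ways; only the scale and window change, not the data.

```python
import random

import matplotlib.pyplot as plt

# Synthetic stand-in for a per-second IOPS log: high fresh-out-of-the-box
# performance decaying into a noisy steady state well before t=1400s.
iops = [max(5000, int(150000 * 0.998 ** t + random.gauss(0, 4000)))
        for t in range(2000)]

fig, (full, zoom_log, zoom_lin) = plt.subplots(3, 1, figsize=(8, 10))

full.plot(iops)
full.set_yscale("log")                 # whole run, log scale
full.set_title("Full test (log scale)")

t0 = 1400                              # steady-state window starts here
steady = range(t0, len(iops))
zoom_log.plot(steady, iops[t0:])
zoom_log.set_yscale("log")             # zoomed, log scale for easy comparison
zoom_log.set_title("Steady state (log scale)")

zoom_lin.plot(steady, iops[t0:])       # zoomed, linear scale shows small deltas
zoom_lin.set_title("Steady state (linear scale)")

plt.tight_layout()
plt.show()
```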

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Graph: G.Skill Phoenix Blade 480GB, 4KB random write IOPS over the full test, log scale; selections: Default / 25% Over-Provisioning]

Even though the Phoenix Blade and RevoDrive 350 share the same core controller technology, their steady-state behaviors are quite different. The Phoenix Blade delivers substantially higher peak IOPS (~150K) and is also more consistent in steady-state: the RevoDrive frequently drops below 20K IOPS, whereas the Phoenix Blade doesn't.

[Graph: G.Skill Phoenix Blade 480GB, steady-state zoom from t=1400s, log scale; selections: Default / 25% Over-Provisioning]

[Graph: G.Skill Phoenix Blade 480GB, steady-state zoom from t=1400s, linear scale; selections: Default / 25% Over-Provisioning]

TRIM Validation

To test TRIM, I turned to our regular TRIM test suite for SandForce drives. First I filled the drive with incompressible sequential data, followed by 60 minutes of incompressible 4KB random writes (QD32). To measure performance before the torture, after it, and after TRIM, I ran a one-minute incompressible 128KB sequential write pass at each stage.
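For those replicating this on Linux, the sketch below mirrors the procedure using a placeholder device path. It is an approximation, not our actual methodology: the real test uses Iometer under Windows, and here blkdiscard stands in for the file-deletion-driven TRIM of the Windows test, while the one-hour torture phase is the random-write loop from the consistency test earlier.

```python
import mmap
import os
import subprocess
import time

DEV = "/dev/sdX"      # hypothetical disposable test device
BLOCK = 128 * 1024    # 128KB sequential writes

def seq_write_mbps(seconds=60):
    """One-minute incompressible 128KB sequential write pass; returns MB/s."""
    buf = mmap.mmap(-1, BLOCK)
    buf.write(os.urandom(BLOCK))   # incompressible, so the controller can't compress it
    fd = os.open(DEV, os.O_WRONLY | os.O_DIRECT)
    off = written = 0
    start = time.monotonic()
    while time.monotonic() - start < seconds:
        os.pwrite(fd, buf, off)    # sequential, 128KB-aligned offsets
        off += BLOCK
        written += BLOCK
    os.close(fd)
    return written / (time.monotonic() - start) / 1e6

clean = seq_write_mbps()   # measured after a secure erase and sequential fill
# ...60 minutes of incompressible 4KB random writes at QD32 go here
# (see the consistency-test sketch earlier in the article)...
dirty = seq_write_mbps()
subprocess.run(["blkdiscard", DEV], check=True)   # TRIM every LBA on the device
after = seq_write_mbps()
print(f"clean {clean:.1f} / dirty {dirty:.1f} / after TRIM {after:.1f} MB/s")
```

Because blkdiscard trims the entire device at once, absolute numbers won't line up with the Iometer results in the table below, but the clean/dirty/after-TRIM pattern should.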

Iometer Incompressible 128KB Sequential Write
                              Clean        Dirty        After TRIM
G.Skill Phoenix Blade 480GB   704.8MB/s    124.9MB/s    231.5MB/s

The good news here is that the drive receives the TRIM command, but unfortunately TRIM doesn't fully restore performance, although that is a known problem with SandForce drives. What's notable is that the first LBAs written after the TRIM command were fast (over 600MB/s), so in due time performance across all LBAs should recover, at least to a certain point.


Comments

  • MTEK - Friday, December 12, 2014

    Don't really care about a RAID-0 hack. Where are the SF-3700 based SSDs? Anand/Kingston have been teasing us with one since last CES... Where is it??
  • counterclockwork - Friday, December 12, 2014

    In case anyone is wondering, Kristian is wrong about the controller. SBC Designs has nothing to do with this thing, as if their website's lack of specifics and all-around amateurs-from-the-'90s look wasn't a big tip-off. Pure googling actually turns up Comay, and Comay is a brand used by CoreRise. In fact, the Phoenix Blade is nothing more than a rebadge of CoreRise's BladeDrive G24 (see http://www.corerise.com/en/product_show.php?id=95 ). Looking at the text strings in the driver for this confirms as much. As for the chip itself, CoreRise claims the SBC208 is their own proprietary device. Personally, I don't believe this, as their product portfolio doesn't otherwise suggest they have that level of expertise. I'd guess it's an LSI or Marvell controller.
  • Kristian Vättö - Saturday, December 13, 2014

    Thanks for the heads up and detective work. I couldn't find anything in Google, but looks like I wasn't trying hard enough... Anyway, I've updated the article.
  • StrongDC - Saturday, December 13, 2014

    The text says driven by four SandForce SF-2281 controllers while the table says 4x SandForce SF-2282. :)
  • Kristian Vättö - Saturday, December 13, 2014

    Fixed :)
  • SanX - Saturday, December 13, 2014

    Stop bending common sense with all that slow self-destructive flash junk and start making battery-backed RAM PCIe drives. Speeds will be 20x immediately and forever. RAM prices will drop with adoption.
  • SanX - Saturday, December 13, 2014

    Battery and hard drive/flash backup
  • FunBunny2 - Saturday, December 13, 2014

    Texas Memory (now a unit of IBM) made those 20 years ago; they were among the pioneers of SSDs before NAND. They didn't sell all that well. Kind of expensive.
  • Antronman - Saturday, December 13, 2014

    To be fair, that was 20 years ago.

    If anybody were interested, it might be $500 for an 80GB DDR3 storage drive. But honestly, nobody could utilize that sort of performance except for the largest and busiest data centers. And even they don't need it.

    If you really want "teh supr speedi as f*ck spid" then you might as well just grab X79 or X99, put in 64GB of RAM, and ramdisk most of it.
  • incx - Sunday, December 14, 2014

    First of all you'd want proper ECC RAM on that thing, which will cost at least around $500 for the RAM alone. In addition to that you'd want additional logic to drive the whole thing: map and avoid bad chips, store everything in flash (another cost) when power dies and restore it after, check and manage the flash, check and manage the battery, and make it look like a "drive" in general. Then you add in the R&D, manufacturing, support, and warranty costs, etc., and you're not even in the neighborhood any more.

    Creating a persistent RAM "disk" is not quite the same thing as software-mapping a bunch of consumer-grade RAM into a ramdisk. Sure, that works and is quite awesome, but everyone who uses it for anything decent, acknowledges and works around the risks that the data there may go poof or worse, bad, at any random time.
