Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger, hence the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a typical desktop user would generate). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo-randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely fall somewhere between the two values you see for each drive in the graphs. For an understanding of why this matters, read our original SandForce article.
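As a rough illustration of the access pattern described above (the review itself used Iometer, not a script; the helper names here are my own):

```python
import random

# Illustrative sketch only: 4KB writes at random 4K-aligned offsets
# within an 8GB LBA region, mimicking the Iometer access specification.
BLOCK = 4 * 1024           # 4KB transfer size
SPAN = 8 * 1024 ** 3       # 8GB test region
SLOTS = SPAN // BLOCK      # number of 4K-aligned positions in the span

def random_write_offsets(n, rng=random):
    """Yield n random 4K-aligned byte offsets inside the 8GB span."""
    for _ in range(n):
        yield rng.randrange(SLOTS) * BLOCK

# Every generated offset is 4K-aligned and falls within the test region.
sample = list(random_write_offsets(1000))
assert all(off % BLOCK == 0 and 0 <= off < SPAN for off in sample)
```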

Desktop Iometer - 4KB Random Read (4K Aligned)

Random read performance is consistent across all capacity points. Performance here isn't as high as what Samsung is capable of achieving but it is very good.

Desktop Iometer - 4KB Random Write (4K Aligned) - 8GB LBA Space

Low queue depth random write performance has gotten insanely high on client drives over the past couple of years. Seagate doesn't lead the pack with the 600, but it does well enough. Note the lack of any real performance difference between the capacities.

Desktop Iometer - 4KB Random Write (8GB LBA Space QD=32)

Ramp up queue depth and we see a small gap between the 120GB capacity and the rest. The 600/600 Pro climb the charts a bit at higher queue depths. Note the lack of any performance difference between the 600 and 600 Pro at similar capacities.

Sequential Read/Write Speed

To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.
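The sequential test simply walks the LBA space in back-to-back 128KB steps at queue depth 1. A minimal sketch of that pattern (helper names are my own, and I'm assuming the MB/s figure uses decimal megabytes, i.e. 10^6 bytes):

```python
BLOCK = 128 * 1024  # 128KB transfer size

def sequential_offsets(span_bytes, start=0):
    """Yield consecutive 128KB offsets covering the span, one at a time
    (queue depth 1: each transfer completes before the next is issued)."""
    off = start
    while off + BLOCK <= span_bytes:
        yield off
        off += BLOCK

def average_mbps(total_bytes, elapsed_seconds):
    # Assumption: MB/s here means 10^6 bytes per second.
    return total_bytes / elapsed_seconds / 1_000_000

# Eight 128KB transfers fit in a 1MB window.
assert len(list(sequential_offsets(1024 * 1024))) == 8
```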

Desktop Iometer - 128KB Sequential Read (4K Aligned)

Here's how you tell that Seagate has client drive experience: incredible low queue depth sequential read performance. I'm not sure why the 240GB 600 does so well here, but for the most part all of the drives are clustered around the same values.

Desktop Iometer - 128KB Sequential Write (4K Aligned)

Low queue depth sequential writes are also good. The 240GB capacity does better than the rest for some reason. Only the 120GB capacity shows any sign of weakness compared to other class leaders.

AS-SSD Incompressible Sequential Read/Write Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.

Incompressible Sequential Read Performance - AS-SSD

On the read side, at high queue depths we're pretty much saturating 6Gbps SATA at this point. The fastest drive here only holds a 3% advantage over the 600s.

Incompressible Sequential Write Performance - AS-SSD

Once again we see solid performance from the 600s. There's no performance advantage to the Pro, and the 120GB capacity is measurably slower.

Comments

  • Kristian Vättö - Tuesday, May 7, 2013 - link

    The units we have are all based on the older 24nm NAND. A while back I asked Corsair for review samples of the 128/256GB Neutrons (the original ones are 120/240) but they said they are not sampling them (yet). I can ask if they have changed their mind, although there shouldn't be much difference since 19nm Toshiba NAND has the same page/block/die size as 24nm.
  • FunBunny2 - Tuesday, May 7, 2013 - link

    Does "Toshiba" mean toggle-mode NAND, by definition? Or do they sell all types?
  • Kristian Vättö - Wednesday, May 8, 2013 - link

    Yes, Toshiba uses Toggle-Mode interface for their NAND. Here's the breakdown of NAND interfaces and manufacturers:

    Toggle-Mode: Toshiba/SanDisk (joint-venture) & Samsung
    ONFI: Intel/Micron (aka IMFT, also a joint-venture) & Hynix
  • LtGoonRush - Tuesday, May 7, 2013 - link

    HardOCP showed pretty significant performance increases, though that could also be due to the new firmware (which is not being back-ported as I understand).
  • romrunning - Tuesday, May 7, 2013 - link

I really wish we had more tests of SSDs in RAID-5 arrays. This is really useful for SMBs who may not want, or can't afford, a SAN. I'm very curious to see if the 20% spare area affects SSDs just as much when they're RAIDed together as it does standalone. I also don't care if the SSDs are branded as "enterprise" drives. It would be nice to see how a 5x256GB Samsung 840 Pro RAID-5 array would perform, or even a 5x400GB Seagate 600 Pro RAID-5 array.
  • FunBunny2 - Tuesday, May 7, 2013 - link

    No legitimate RDBMS vendor would allow its database on a RAID-5 machine. Never. Never. Never.
  • romrunning - Wednesday, May 8, 2013 - link

    I can't tell if you're just trolling or you're actually serious. Obviously, SMBs use RAID-5 arrays ALL the time, and they use "legitimate" database products like MS-SQL, etc. It doesn't have to be an IBM AIX server running DB2, or anything high-end.
  • daniel_mayes - Wednesday, May 8, 2013 - link

What is FunBunny2 talking about? What RAID level would you want to run them in: 1, 5, 6, 10, or no RAID at all?
    You aren't the only one who wants to see more tests with SSDs in RAID-5. I would also like to see the Destroyer run on SSDs with higher over-provisioning, and please add the Intel DC S3700 to the Destroyer benchmark next.
  • FunBunny2 - Wednesday, May 8, 2013 - link

    "I always have found that based on those requirements RAID 5 requires more spindles to satisfy those requirements than RAID 10 - and this has been found even with a Read/Write of 9:1. "

    here: http://sqlblog.com/blogs/linchi_shea/archive/2007/...
    (no, that's not me)

Fact is, SSDs still write slower than they read, so the kind of RAID one uses matters. Having a 3NF (or higher) schema is a more productive avenue for performance on SSD anyway, regardless. Getting rid of all that bloated [un|de]normalized byte pile will, in most cases, allow you to have a much smaller database, and thus not worry about bunches and bunches of discs.
  • romrunning - Friday, May 10, 2013 - link

That blog is from 2007, and SSDs weren't really in the picture at all. It has been demonstrated that SSDs can trump spinning disks in virtually all I/O-bound operations. The man in the blog even showed a test of RAID-5 beating RAID-10 on the same hardware, directly contradicting the commenter who later brought up spindle counts.

That being said, I think you're trying to say that getting rid of unnecessary data in your database will result in a smaller database and thus lower performance requirements. That might be true at one point, but once you've normalized your data, any additional data will just make the database grow. After all, if you're writing something like electronic orders to your normalized database, it will grow based upon real data addition. That's why you need to make sure your storage array can handle the increased load.

RAID-5 has been the best for SMBs because it provides the fault tolerance and the higher utilization of total storage capacity that they want. That's why I would like to see tests of SSDs in RAID-5 arrays - to get AnandTech to test these great SSD performers in something I could use in a database server. Something like their tests of their own website databases would be nice, or even smaller ones using a 10-20GB database.
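For what it's worth, the capacity trade-off this thread keeps circling can be sketched in a few lines; the drive counts and sizes come from the comments above, and the helper names are my own:

```python
def raid5_usable_gb(drives, size_gb):
    # RAID-5: one drive's worth of capacity goes to distributed parity.
    return (drives - 1) * size_gb

def raid10_usable_gb(drives, size_gb):
    # RAID-10: mirrored pairs halve raw capacity (needs an even count).
    return (drives // 2) * size_gb

# 5x256GB in RAID-5 vs four of the same drives in RAID-10:
assert raid5_usable_gb(5, 256) == 1024
assert raid10_usable_gb(4, 256) == 512
```

The capacity advantage is why RAID-5 appeals to SMBs, though it comes with a read-modify-write penalty on small writes that RAID-10 avoids.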
