Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger, which is why we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo-randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
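As a rough sketch of what this workload looks like, here is a simplified, single-threaded stand-in for the Iometer access specification (this is illustrative only, not the actual test tool; the file path, span and duration below are scaled down so it runs anywhere):

```python
import os
import random
import tempfile
import time

def random_write_test(path, span_bytes, io_size=4096, duration_s=1.0):
    """Issue io_size writes at random, io_size-aligned offsets within
    span_bytes, then report average throughput in MB/s.
    (QD=1 here; the review's Iometer test keeps 3 IOs outstanding.)"""
    buf = os.urandom(io_size)  # incompressible data, worst case for SandForce
    written = 0
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    try:
        os.ftruncate(fd, span_bytes)
        deadline = time.monotonic() + duration_s
        while time.monotonic() < deadline:
            offset = random.randrange(span_bytes // io_size) * io_size
            os.pwrite(fd, buf, offset)
            written += io_size
    finally:
        os.close(fd)
    return written / duration_s / 1e6  # average MB/s

# Demo on an 8 MB span; the review used an 8 GB span for 3 minutes.
with tempfile.NamedTemporaryFile() as f:
    print(round(random_write_test(f.name, 8 * 1024 * 1024, duration_s=0.2), 1))
```

Going through the OS page cache like this measures far more than the drive itself; real tools bypass the cache (direct I/O) and keep multiple IOs in flight.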

Desktop Iometer - 4KB Random Read (4K Aligned)

Random read performance starts out quite nicely. There's a good improvement over the old m4 and the M500 lineup finds itself hot on the heels of the Samsung SSD 840. There's not much variance between the various capacities here.

Desktop Iometer - 4KB Random Write (4K Aligned) - 8GB LBA Space

It's with the random write performance that we get some insight into how write parallelism works on the M500. The 480GB and 960GB drives deliver roughly the same performance, so all you really need to saturate the 9187 is 32 NAND die. The 240GB sees a slight drop in performance, but the 120GB version with only 8 NAND die sees the biggest performance drop. This is exactly why we don't see a 64GB M500 at launch using 128Gbit die.
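The die math behind this is straightforward. A quick sketch (the raw NAND capacities are assumed from the 128Gbit die size, with the user capacity reserving roughly 7% as spare area):

```python
DIE_GBIT = 128          # M500 uses 128Gbit (16GB) MLC NAND die
DIE_GB = DIE_GBIT // 8  # 16 GB per die

# Assumed raw NAND per drive capacity
raw_gb = {120: 128, 240: 256, 480: 512, 960: 1024}

for cap, raw in raw_gb.items():
    dies = raw // DIE_GB
    print(f"{cap:>4} GB drive: {dies:>2} NAND die")
# 120 GB drive:  8 NAND die
# 240 GB drive: 16 NAND die
# 480 GB drive: 32 NAND die
# 960 GB drive: 64 NAND die
```

A hypothetical 64GB model would have only 4 die to spread writes across, which is why it doesn't exist at launch.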

Desktop Iometer - 4KB Random Write (8GB LBA Space QD=32)

Ramping up queue depth causes some extra scaling on the 32/64 die drives, but the 240GB and 120GB parts are already at their limits. There physically aren't enough NAND die on the smaller drives to see any tangible gains in performance between high and low queue depths. This is a problem that everyone will have to deal with eventually; the M500 just encounters it first.

Sequential Read/Write Speed

To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.

Desktop Iometer - 128KB Sequential Read (4K Aligned)

Low queue depth sequential read performance looks OK, but the M500 is definitely not class leading here.

Desktop Iometer - 128KB Sequential Write (4K Aligned)

It's pretty much the same story when we look at sequential writes, although once again the 120GB M500 shows its limits very openly. The 840 and the M500 perform similarly at the same capacity point, but the M500 is significantly behind the higher end offerings, as you'd expect.

AS-SSD Incompressible Sequential Read/Write Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.

Incompressible Sequential Read Performance - AS-SSD

Ramping up queue depth, we see a substantial increase in sequential read performance, but there's still a big delta between the M500 and all of the earlier drives.

Incompressible Sequential Write Performance - AS-SSD

The high-queue depth sequential write story is a bit better for the M500. It's tangibly quicker than the 840 here.

111 Comments

  • mayankleoboy1 - Wednesday, April 10, 2013 - link

    thanks! These look much better, and more real-world + consumer usage.
  • metafor - Wednesday, April 10, 2013 - link

    I'd be very interested to see an endurance test for this drive and how it compares to the TLC Samsung drives. One of the bigger selling points of 2-bit MLC is that it has a much longer lifespan, isn't it?
  • 73mpl4R - Wednesday, April 10, 2013 - link

    Thank you for a great review. If this is a product that paves the way for better drives with 128Gbit dies, then this is most welcome. The encryption is interesting as well, gonna check it out.
  • raclimja - Wednesday, April 10, 2013 - link

    power consumption is through the roof.

    very disappointed with it.
  • toyotabedzrock - Wednesday, April 10, 2013 - link

    If you wrote 1.5 TB of data for this test then you used 2% of the drives write life in 10-11 hours.

    As a heavy multitasker this worries me greatly. Especially if you edit large video files.
  • Solid State Brain - Wednesday, April 10, 2013 - link

    As I wrote in one of the comments above, they probably state 72 TiB of maximum supported writes for liability and commercial reasons. They don't want users to be using these as enterprise/professional drives (and chances are that if you write more than 40 GiB/day continuously for 5 years, you're not a normal consumer). Most people barely write 1.5 TiB in 6 months of use anyway. So even if 72 TiB doesn't seem like much, it's actually quite a lot of writes.

    Taking into account drive and NAND specifications, and an average write amplification of 2.0x (although in case of sequential workloads such as video editing this should be much closer to 1.0x), a realistic estimate as a minimum drive endurance would be:

    120 GB => 187.5 TiB
    240 GB => 375.0 TiB
    480 GB => 750.0 TiB
    960 GB => 1.46 PiB

    Of course, it's not that these drives will stop working after 3000 write cycles. They will keep going as long as uncorrectable errors (which increase as the drive wears) remain within usable margins.
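The arithmetic behind the commenter's figures can be sketched as follows (the 3,000 P/E cycle rating and the raw NAND capacities are assumptions from the thread, not official Micron endurance specs):

```python
CYCLES = 3000        # assumed rated P/E cycles for the 20nm 128Gbit MLC
WRITE_AMP = 2.0      # assumed average write amplification

# Assumed raw NAND capacity in GiB for each user capacity
raw_gib = {120: 128, 240: 256, 480: 512, 960: 1024}

for cap, raw in raw_gib.items():
    tib = raw * CYCLES / WRITE_AMP / 1024   # host TiB before cycles run out
    print(f"{cap:>4} GB => {tib:,.1f} TiB")
# 120 GB =>   187.5 TiB
# 240 GB =>   375.0 TiB
# 480 GB =>   750.0 TiB
# 960 GB => 1,500.0 TiB  (~1.46 PiB)
```

A sequential-heavy workload with write amplification near 1.0x would roughly double these numbers; all of them dwarf the 72 TiB warranty figure.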
  • glugglug - Wednesday, April 10, 2013 - link

    It is very easy to come up with use cases where a "normal" user will end up hitting the 72TB of writes quickly.

    Most obvious example is a user who is using this large SSD to transition from a large HDD without it being "just a boot drive", so they archive a lot of stuff.

    Depending on MSSE settings, it will likely uncompress everything into C:\Windows\Temp when it does its nightly scans.

    You don't want to know how much of my X-25M G1's lifespan I killed in about 6 months time before finding out about that and junctioning my temp directories off of the SSD.
  • Solid State Brain - Wednesday, April 10, 2013 - link

    I am currently using a Samsung 840 250GB with TLC memory, without any hard disk installed in my system. I use it for everything from temp files to virtual machines to torrents. I even reinstalled the entire system a few times because I hopped between Linux and Windows "just because". I haven't performed any "SSD optimization" either. A purely plug&play usage, and it isn't a "boot drive" either. Furthermore, my system is always on. Not quite a normal usage I'd say.

    In 47 days of usage I've written 2.12 TiB and used 10 write cycles out of 1000. This translates into 13 years of drive life at my current usage rate.

    My usage graph + SMART data:
    http://i.imgur.com/IwWZ9Kg.png

    Temp directories alone aren't going to kill your SSD, not directly at least. It was likely something caused by an anomalously write-happy application, not Windows by itself.
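The 13-year figure in the comment above is a linear extrapolation from the rated cycle count; a minimal check of that arithmetic, using the numbers the commenter reports:

```python
days_elapsed = 47    # observation window from the SMART data above
cycles_used = 10     # wear counter consumed so far
cycles_rated = 1000  # rated P/E cycles for the 840's TLC NAND

years = days_elapsed * (cycles_rated / cycles_used) / 365.25
print(round(years, 1))  # → 12.9, i.e. roughly 13 years
```

This assumes the write rate stays constant, which is the best anyone can do from 47 days of SMART data.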
  • juhatus - Wednesday, April 10, 2013 - link

    What would you recommend overprovisioning for 256Gb M4 with bitlocker, 10-15-25% ? Also what was the M4's firmware you used to compare to M500? Also are there any benefits for M500 with bitlocker on windows 7? thanks for review, please add 25% results for M4 too :)
  • Solid State Brain - Wednesday, April 10, 2013 - link

    Increasing overprovisioning is only going to matter when you continuously write to the drive while never (or rarely) executing a TRIM operation, once an amount of data roughly equivalent to the amount of free space (in practice less, depending on workload and drive conditions) has been written.

    This almost never happens in real-life usage by the target userbase of such a drive. It's a matter for servers, for those who for one reason or another (like high-definition video editing) perform many sustained writes, or for those working in an environment without TRIM support (which isn't the case for Windows 7/8, although it can be for MacOS or Linux, where it has to be manually enabled).

    AnandTech's SSD benchmarks aren't very realistic for most users, and the same can be said for their OP recommendations.
