The Test

Note that I've pulled out our older results for the Kingston V+100. There were a couple of tests that had unusually high performance, which I now believe was due to the drive being run with a newer OS/software image than the rest of the older drives. I will be rerunning those benchmarks in the coming week.

I should also note that this is beta hardware running beta firmware. While the beta nature of the drive isn't really visible in any of our tests, I did attempt to use the Vertex 3 Pro as the primary drive in my 15-inch MacBook Pro on my trip to MWC. I did so with hopes of exposing any errors and bugs quicker than normal, and indeed I did. Under OS X on the MBP with a full image of tons of data/apps, the drive is basically unusable; I get super long read and write latency. I've already informed OCZ of the problem and I'd expect a solution before we get to final firmware. Oftentimes, actually using these drives is the only way to unmask issues like this.

CPU: Intel Core i7 965 running at 3.2GHz (Turbo & EIST Disabled)
     Intel Core i7 2600K running at 3.4GHz (Turbo & EIST Disabled) - for AT SB 2011
Motherboard: Intel DX58SO (Intel X58)
             Intel H67 Motherboard
Chipset: Intel X58 + Marvell SATA 6Gbps PCIe
         Intel H67
Chipset Drivers: Intel 9.1.1.1015 + Intel IMSM 8.9
                 Intel 9.1.1.1015 + Intel RST 10.2
Memory: Qimonda DDR3-1333 4 x 1GB (7-7-7-20)
Video Card: eVGA GeForce GTX 285
Video Drivers: NVIDIA ForceWare 190.38 64-bit
Desktop Resolution: 1920 x 1200
OS: Windows 7 x64

Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger, and thus we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo-randomly generated data for each write and fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
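This methodology is straightforward to approximate outside of Iometer. Below is a minimal, hypothetical Python sketch of the same idea, not our actual Iometer configuration: 4KB writes at random offsets within an 8GB span, a few worker threads standing in for the queue depth, a fixed run time, and throughput reported in MB/s. The target file path, the use of O_DIRECT (Linux-specific) and the incompressible data buffer are all assumptions for illustration.

```python
# Hypothetical sketch of a 4KB random write test: queue depth is approximated
# with concurrent threads, not a real asynchronous queue as Iometer uses.
import os, mmap, random, threading, time

BLOCK = 4 * 1024          # 4KB transfer size
SPAN = 8 * 1024**3        # 8GB LBA space
DURATION = 180            # 3 minutes

def worker(path, stop_time, counter, lock):
    # O_DIRECT (Linux) bypasses the page cache so the drive, not RAM, is measured;
    # it requires an aligned buffer, which an anonymous mmap provides.
    fd = os.open(path, os.O_WRONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, BLOCK)
    buf.write(os.urandom(BLOCK))      # incompressible ("fully random") data
    written = 0
    while time.time() < stop_time:
        offset = random.randrange(0, SPAN // BLOCK) * BLOCK
        os.pwrite(fd, buf, offset)    # 4KB write at a random 4KB-aligned offset
        written += BLOCK
    os.close(fd)
    with lock:
        counter[0] += written

def run(path="/mnt/ssd/testfile.bin", qd=3):
    # Preallocate the test file to the full 8GB span (sparse is fine for a sketch).
    with open(path, "ab") as f:
        f.truncate(SPAN)
    counter, lock = [0], threading.Lock()
    stop = time.time() + DURATION
    threads = [threading.Thread(target=worker, args=(path, stop, counter, lock))
               for _ in range(qd)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(f"QD={qd}: {counter[0] / DURATION / 1e6:.1f} MB/s average")

if __name__ == "__main__":
    run(qd=3)   # three concurrent IOs, as in the test described above
```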

Iometer - 4KB Random Write, 8GB LBA Space, QD=3

Random write performance is much better on the SF-2500, not that it was bad to begin with on the SF-1200. In fact, the closest competitor is the SF-1200; the rest don't stand a chance.

Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0 - 5, higher depths are possible in heavy I/O (and multi-user) workloads:

Iometer - 4KB Random Write, 8GB LBA Space, QD=32

Ramp up the queue depth and there's still tons of performance on the table. At 3Gbps the performance of the Vertex 3 Pro is actually no different from the SF-1200 based Corsair Force; the SF-2500 is made for 6Gbps controllers.
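If you were reproducing this run with the hypothetical sketch above, nothing changes except the number of workers standing in for the queue depth:

```python
run(qd=32)   # same hypothetical sketch, 32 concurrent workers instead of 3
```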

Iometer - 4KB Random Read, QD=3

 

Comments

  • FCss - Thursday, February 17, 2011 - link

    "My personal desktop sees about 7GB of writes per day." maybe a stupid question but how do you check the amount of your daily writes?
    And one more question: if you have a 128Gb SSD and you leave let's say 40Gb unformated so the user can't fill up the disk, will the controller use this space the same way as it would belong to the spare area?
  • Quindor - Thursday, February 17, 2011 - link

    I use a program called "HDDLED" for this. It shows some easily accessible LEDs on your screen, and if you hover over them, you can see the current and total disk usage since your PC was booted up.
  • FCss - Thursday, February 17, 2011 - link

    thanks, great software
  • Breit - Thursday, February 17, 2011 - link

    Isn't the total number of bytes written to the drive since manufacturing part of the SMART data you can read from your drive? All you have to do then is note down the value when you boot up your PC in the morning and subtract it from the value you read there the next day.
  • Chloiber - Thursday, February 17, 2011 - link

    Or you can just take the average..
  • marraco - Thursday, February 17, 2011 - link

    The Vertex 2 takes advantage of unformatted space. So OCZ advises leaving 20% of the space unformatted (albeit to improve garbage collection, but it does mean that unformatted space is used).
  • 7Enigma - Thursday, February 17, 2011 - link

    Come on Anand! In your example you have 185GB free on a 256GB drive. I think that is the least likely scenario, and it paints an overly optimistic case in terms of write life. Everyone knows not to completely fill up their drive, but are you telling me that the vast majority of users are going to have 78% of their drive free at all times? I just don't buy it.

    The more common scenario is that a consumer purchases a drive slightly larger than needed (due to how expensive these luxuries still are). So that 256GB drive probably will only have 20-40GB free. Do that and the 36 days for a single use of the NAND becomes ~5-8 days (no way to move static data around at this capacity level). Factor in write amplification (0.6X to 10X) and you lower the time to between 4-25 years for hitting that 3000X cap.

    Still not a HUGE problem, but much more relevant than saying this drive will last for hundreds of years (not counting NAND lifespan itself).
  • 7Enigma - Thursday, February 17, 2011 - link

    Bah, I thought the write amplification was 1.6X. That changes the numbers considerably (enough that the point is moot). I still think the example in the article was not a normal circumstance, but it seems it still isn't an issue.

    <pie to face>
  • mark53916 - Thursday, February 17, 2011 - link

    Encrypted files are not compressible, so you won't get any advantage
    from the hardware write compression.
  • 7Enigma - Thursday, February 17, 2011 - link

    Hi Anand,

    Looks like one of the numbers is incorrect in this chart. Right now it shows LOWER performance after TRIM than when the drive was completely full. The 230MB/sec value seems to be incorrect.
