The Test

Note that I've pulled out our older results for the Kingston V+100. There were a couple of tests that had unusually high performance, which I now believe was due to the drive being run with a newer OS/software image than the rest of the older drives. I will be rerunning those benchmarks in the coming week.

I should also note that this is beta hardware running beta firmware. While the beta nature of the drive isn't really visible in any of our tests, I did attempt to use the Vertex 3 Pro as the primary drive in my 15-inch MacBook Pro on my trip to MWC. I did so with hopes of exposing any errors and bugs quicker than normal, and indeed I did. Under OS X on the MBP with a full image of tons of data/apps, the drive is basically unusable: I get super long read and write latencies. I've already informed OCZ of the problem and I'd expect a solution before we get to final firmware. Oftentimes, actually using these drives is the only way to unmask issues like this.

CPU:

Intel Core i7 965 running at 3.2GHz (Turbo & EIST Disabled)

Intel Core i7 2600K running at 3.4GHz (Turbo & EIST Disabled) - for AT SB 2011

Motherboard:

Intel DX58SO (Intel X58)

Intel H67 Motherboard

Chipset:

Intel X58 + Marvell SATA 6Gbps PCIe

Intel H67

Chipset Drivers:

Intel 9.1.1.1015 + Intel IMSM 8.9

Intel 9.1.1.1015 + Intel RST 10.2

Memory: Qimonda DDR3-1333 4 x 1GB (7-7-7-20)
Video Card: eVGA GeForce GTX 285
Video Drivers: NVIDIA ForceWare 190.38 64-bit
Desktop Resolution: 1920 x 1200
OS: Windows 7 x64

Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger; hence the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
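The workload described above can be approximated outside of Iometer. The sketch below is a simplified stand-in, not the actual Iometer configuration: it issues 4KB writes at random offsets within a fixed span of a file and reports throughput in MB/s. True outstanding I/O (QD=3) isn't possible with synchronous Python writes, so the queue depth is only crudely imitated by batching; the file path and span size are illustrative.

```python
import os
import random
import time

def random_write_mbps(path, span_bytes, block=4096, seconds=1.0, qd=3):
    """Toy version of the 4KB random-write test described above.

    Writes `block`-sized chunks at random aligned offsets within
    `span_bytes` for roughly `seconds`, then reports MB/s. `qd` only
    batches synchronous writes; it is not true outstanding I/O.
    """
    max_block = span_bytes // block
    buf = os.urandom(block)  # fully random data (worst case for SandForce)
    written = 0
    with open(path, "wb") as f:
        f.truncate(span_bytes)  # preallocate the LBA span
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            for _ in range(qd):  # crude stand-in for queue depth
                f.seek(random.randrange(max_block) * block)
                f.write(buf)
                written += block
            os.fsync(f.fileno())  # force writes to the device
    return written / seconds / 1e6
```

On a real run you would point this at a file on the drive under test and use the full 8GB span; numbers from a small file on a cached filesystem mean little.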

Iometer - 4KB Random Write, 8GB LBA Space, QD=3

Random write performance is much better on the SF-2500, not that it was bad to begin with on the SF-1200. In fact, its closest competitor is the SF-1200; the rest don't stand a chance.

Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0 - 5, higher depths are possible in heavy I/O (and multi-user) workloads:

Iometer - 4KB Random Write, 8GB LBA Space, QD=32

Ramp up the queue depth and there's still tons of performance on the table. At 3Gbps the performance of the Vertex 3 Pro is actually no different than the SF-1200 based Corsair Force; the SF-2500 is made for 6Gbps controllers.
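One way to picture what a higher queue depth does is to give the drive several writers at once. The sketch below is an illustrative approximation (not how Iometer issues asynchronous I/O): it uses one thread per outstanding request, each writing 4KB blocks at random offsets through its own file handle, so the drive sees roughly `qd` requests in flight.

```python
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

def qd_random_write_mbps(path, span_bytes, qd, block=4096, total_ios=2048):
    """Approximate queue depth `qd` with worker threads.

    Each worker opens its own handle and issues 4KB writes at random
    aligned offsets; total throughput is reported in MB/s.
    """
    max_block = span_bytes // block
    buf = os.urandom(block)
    with open(path, "wb") as f:
        f.truncate(span_bytes)  # preallocate the span first

    def worker(n_ios):
        with open(path, "r+b") as f:
            for _ in range(n_ios):
                f.seek(random.randrange(max_block) * block)
                f.write(buf)

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=qd) as pool:
        for _ in range(qd):
            pool.submit(worker, total_ios // qd)
    elapsed = time.monotonic() - start
    return total_ios * block / elapsed / 1e6
```

Comparing results at `qd=3` and `qd=32` on a fast 6Gbps drive should show the scaling the graphs above describe; a drive behind a 3Gbps link will plateau early.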

Iometer - 4KB Random Read, QD=3

Comments

  • sheh - Thursday, February 17, 2011 - link

    Why's data retention down from 10 years to 1 year as the rewrite limit is approached?
    Does this mean after half the rewrites the retention is down to 5 years?
    What happens after that year, random errors?
    Is there drive logic (or standard software) to "refresh" a drive?
    Reply
  • AnnihilatorX - Saturday, February 19, 2011 - link

    Think about how a Flash cell works. There is a thick silicon dioxide barrier separating the floating gate from the transistor channel. The reason they have a limited write cycle count is that the silicon dioxide layer is eroded by the high voltages required to pump electrons onto the floating gate.

    As the SiO2 is damaged, it is easier for the electrons in the floating gate to leak out; eventually, when sufficient charge has leaked, the data is lost (flipped from 1 to 0).
    Reply
  • bam-bam - Thursday, February 17, 2011 - link

    Thanks for the great preview! Can’t wait to get a couple of these new SSDs soon.

    I’ll add them to an even more anxiously-awaited high-end SATA-III RAID Controller (Adaptec 6805) which is due out in March 2011. I’ll run them in RAID-0 and then see how they compare to my current set up:

    Two (2) Corsair P256 SSDs attached to an Adaptec 5805 controller in RAID-0 with the most current Windows 7 64-bit drivers. I’m still getting great numbers with these drives, almost a year into heavy, daily use. The proof is in the pudding:

    http://img24.imageshack.us/img24/6361/2172011atto....

    (1500+ MB/s read speeds ain’t too bad for SATA-II based SSDs, right?)

    With my never-ending and completely insatiable need-for-speed, I can’t wait to see what these new SATA-III drives with the new Sand-Force controller and a (good-quality) RAID card will achieve!
    Reply
  • Quindor - Friday, February 18, 2011 - link

    Eeehrmm.....

    Please re-evaluate what you have written above and how to perform benchmarks.

    I too own an Adaptec 5805, and it has 512MB of cache memory. So, if you run ATTO with a size of 256MB, the whole test fits inside the memory cache. You should see performance of around 1600MB/sec from the memory cache; this is in no way related to what your storage subsystem can or cannot do. A single disk connected to it, served purely from cache, will give you exactly the same values.

    Please rerun your tests set to 2GB and you will get real-world results of what the storage behind the card can do.

    Actually, I'm a bit surprised that your writes don't get the same values. Maybe you don't have your write cache set to write-back mode? This will improve performance even more, but consider using a UPS or a battery backup cache module before doing so. The same goes for allowing disk cache or not. Not sure if these settings will affect your SSDs, though.

    Please analyze whether your results are even possible before believing them. Each port can do around 300MB/sec, so 2 x 300MB/sec is nowhere near 1500MB/sec; that should have been your first clue. ;)
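    The sanity check being suggested here is simple arithmetic, sketched below (the ~300MB/sec per-port figure and 512MB cache size are taken from the comment, not from a spec sheet):

    ```python
    def max_array_mbps(drives, per_port_mbps=300):
        """Upper bound for sustained RAID-0 throughput on SATA-II ports:
        the ports, not the controller cache, cap what the disks can do."""
        return drives * per_port_mbps

    def served_from_cache(test_size_mb, cache_mb=512):
        """A benchmark whose working set fits inside the controller cache
        measures the cache, not the drives."""
        return test_size_mb <= cache_mb
    ```

    Two drives on 3Gbps ports top out around 600MB/sec sustained, so a 1500+ MB/sec ATTO result from a 256MB test (which fits entirely in a 512MB cache) is measuring the cache.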
    Reply
  • mscommerce - Thursday, February 17, 2011 - link

    Super comprehensible and easy to digest. I think it's one of your best, Anand. Well done! Reply
  • semo - Friday, February 18, 2011 - link

    "if you don't have a good 6Gbps interface (think Intel 6-series or AMD 8-series) then you probably should wait and upgrade your motherboard first"

    "Whenever you Sandy Bridge owners get replacement motherboards, this may be the SSD you'll want to pair with them"

    So I gather AMD haven't been able to fix their SATA III performance issues. Was it ever discovered what the problem is?
    Reply
  • HangFire - Friday, February 18, 2011 - link

    The wording is confusing, but I took that to mean you're OK with Intel 6 or AMD 8.

    Unfortunately, we may never know, as Anand rarely reads past page 4 or 5 of the comments.

    I am getting expected performance from my C300 + 890GX.
    Reply
  • HangFire - Friday, February 18, 2011 - link

    OK here's the conclusion from 3/25/2010 SSD/Sata III article:

    "We have to give AMD credit here. Its platform group has clearly done the right thing. By switching to PCIe 2.0 completely and enabling 6Gbps SATA today, its platforms won’t be a bottleneck for any early adopters of fast SSDs. For Intel these issues don't go away until 2011 with the 6-series chipsets (Cougar Point) which will at least enable 6Gbps SATA. "

    So, I think he is associating "good 6Gbps interface" with the 6- and 8-series, not "don't have" with the 6- and 8-series.
    Reply
  • semo - Friday, February 18, 2011 - link

    OK, I think I get it, thanks HangFire. I remember that there was an article on AnandTech that tested SSDs on AMD's chipsets and the results weren't as good as Intel's. I've been waiting ever since for a follow-up article, but AMD stuff doesn't get much attention these days. Reply
  • BanditWorks - Friday, February 18, 2011 - link

    So if MLC NAND mortality rate ("endurance") dropped from 10,000 cycles down to 5,000 with the transition to 34nm manufacturing tech., does that mean that the SLC NAND mortality rate of 100,000 cycles went down to ~ 50,000?

    Sorry if this seems like a stupid question. *_*
    Reply
