TRIM Performance

In our Vertex 3 preview I mentioned a bug/performance condition/funny thing that happens with SF-1200 based drives. If you write incompressible data to all LBAs on the drive (e.g. fill the drive with H.264 videos) and then fill the spare area with incompressible data as well (do it again without TRIMing the drive), you'll actually put your SF-1200 based SSD into a performance state that it can't TRIM its way out of. Completely TRIM the drive and you'll notice that while compressible writes are still nice and speedy, incompressible writes top out at 70 - 80MB/s. In our Vertex 3 Pro preview I mentioned that SandForce seemed to have nearly fixed the issue. The worst performance I ever recorded on the 240GB drive after the fill procedure described above was 198MB/s - a pretty healthy level.

The 120GB drive doesn't mask the drop nearly as well. The same process I described above drops performance to the 100 - 130MB/s range. This is better than what we saw with the Vertex 2, but still a valid concern if you plan on storing/manipulating a lot of highly compressed data (e.g. H.264 video) on your SSD.

The other major change since the preview? The 120GB drive can definitely get into a pretty fragmented state (again, only if you pepper it with incompressible data). I filled the drive with incompressible data, ran a 4KB random write test (100% LBA space, QD32) with incompressible data for 20 minutes, and then ran AS-SSD (another incompressible data test) to see how low performance could get:

OCZ Vertex 3 120GB - Resiliency - AS-SSD Sequential Write Speed - 6Gbps

                     Clean        After Torture   After TRIM
OCZ Vertex 3 120GB   162.1 MB/s   38.3 MB/s       101.5 MB/s

Note that the Vertex 3 does recover pretty well after you write to it sequentially. A second AS-SSD pass shot performance up to 132MB/s. As I mentioned above, after TRIMing the whole drive I saw performance in the 100 - 130MB/s range.
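For anyone who wants to reproduce the incompressible-data portion of this torture test, here's a minimal sketch (the target path and sizes are placeholders; pointing it at a raw device node requires administrator rights and destroys everything on the drive):

```python
import os

# Minimal sketch: stream incompressible (random) data so a SandForce
# controller can't compress or dedupe the writes. Doing this across all
# LBAs, then again without TRIMing, also consumes the spare area - the
# worst-case state described above. TARGET is a placeholder, not a real
# device path.
TARGET = "fill.bin"          # stand-in for a raw device node
CHUNK = 4 * 1024 * 1024      # 4MB sequential writes

def fill_with_incompressible(path, total_bytes):
    written = 0
    with open(path, "wb") as f:
        while written < total_bytes:
            n = min(CHUNK, total_bytes - written)
            f.write(os.urandom(n))   # random bytes don't compress
            written += n
        f.flush()
        os.fsync(f.fileno())         # make sure the data reaches the drive

fill_with_incompressible(TARGET, 1024 * 1024 * 1024)  # 1GB demo run
```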

This is truly the worst case scenario for any SF based drive. Unless you deal in a lot of truly random data or plan on storing/manipulating a lot of highly compressed files (e.g. compressed JPEGs, H.264 videos, etc...), I wouldn't be too concerned about this worst-case performance. What does bother me, however, is how much lower the 120GB drive's worst case is than the 240GB's.

Power Consumption

Unusually high idle power consumption was a bug in the early Vertex 3 firmware; that appears to have been fixed in the latest firmware revision. Overall power consumption looks pretty good for the 120GB drive - it's in line with other current-generation SSDs we've seen, although we admittedly haven't tested many similar-capacity drives this year yet.

[Chart: Idle Power - Idle at Desktop]

[Chart: Load Power - 128KB Sequential Write]

[Chart: Load Power - 4KB Random Write, QD=32]

Comments

  • Xcellere - Wednesday, April 6, 2011 - link

    It's too bad the lower capacity drives aren't performing as well as the 240 GB version. I don't have a need for a single high capacity drive so the expenditure in added space is unnecessary for me. Oh well, that's what you get for wanting bleeding-edge tech all the time.
  • Kepe - Wednesday, April 6, 2011 - link

    If I've understood correctly, they're using 1/2 of the NAND devices to cut drive capacity from 240 GB to 120 GB.
    My question is: why don't they use the same amount of NAND devices with 1/2 the capacity instead? Again, if I have understood correctly, that way the performance would be identical compared to the higher capacity model.
    Is NAND produced in packages of only one capacity, or is there some other reason not to use NAND devices of differing capacities?
  • dagamer34 - Wednesday, April 6, 2011 - link

    Because price scaling makes it more cost-effective to use fewer, denser chips than many smaller, less dense ones - the more of a given chip that gets made, the cheaper it eventually becomes.

    Like Anand said, this is why you can't just ask for a 90nm CPU today - it's just too old and not worth making anymore. This is also why older memory gets more expensive once it's no longer mass-produced.
  • Kepe - Wednesday, April 6, 2011 - link

    But couldn't they just make smaller dies? Just like there are different sized CPU/GPU dies for different amounts of performance. Cut the die size in half, fit 2x the dies per wafer, sell for 50% less per die than the large dies (i.e. get the same amount of money per wafer).
  • A5 - Wednesday, April 6, 2011 - link

    No reason for IMFT to make smaller dies - they sell all of the large dies coming out of the fab (whether to themselves or 3rd parties), so why bother making a smaller one?
  • vol7ron - Wednesday, April 6, 2011 - link

    You're missing the point on economies of scale.

    Having one size means you don't have leftover parts or have to pay for a completely different process (which includes quality control).

    These things are already expensive; adding logistical complexity would only drive prices up, especially since there are noticeable differences in the manufacturing process.

    I guess they could take the poorer performing silicon and re-market it, like how Anand mentioned they take poorer performing GPUs and just sell them at a lower clock rate/memory capacity. But it could be that NAND production is more refined and doesn't have that large of a variance.

    Regardless, I think you mentioned the big point: internal RAID-like striping improves performance. Why 8 chips, why not more? Perhaps heat has something to do with it, and (of course) power would be the other reason, but it would be nice to see higher performing, more power-hungry SSDs. There may also be a performance benefit to larger chips, sort of like DRAM where 1x2GB may perform better than 2x1GB (not interleaved).

    I'm still waiting for the manufacturers to get fancy, perhaps with multiple controllers and speedier DRAM. Where's the Vertex 3 Colossus?
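vol7ron's interleaving point can be made concrete with a back-of-the-envelope model (assumed throughput figures, not measurements): sequential write speed scales with the number of dies the controller can program in parallel, but only until the host interface becomes the bottleneck - which is one plausible answer to "why 8 chips, why not more?"

```python
# Toy model of NAND die interleaving. PER_DIE_MBPS and HOST_LIMIT_MBPS
# are illustrative assumptions, not real SF-2281 or ONFI figures.
PER_DIE_MBPS = 35      # assumed program throughput per NAND die
HOST_LIMIT_MBPS = 500  # rough 6Gbps SATA ceiling after protocol overhead

def seq_write_mbps(num_dies):
    # Aggregate bandwidth is capped by the host interface.
    return min(num_dies * PER_DIE_MBPS, HOST_LIMIT_MBPS)

for dies in (8, 16, 32):   # hypothetical low- vs high-capacity configs
    print(f"{dies} dies -> ~{seq_write_mbps(dies)} MB/s sequential write")
```

Under these assumptions, doubling the die count helps right up to the interface limit and does nothing after that.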
  • marraco - Tuesday, April 12, 2011 - link

    Smaller dies would improve yields, and since they would let lower-capacity drives run at full speed, the result would be more competitive.

    A flaw in a bigger chip may invalidate the whole die, but if it were split into two smaller chips, part of it could be salvaged.

    On the other hand, yields are probably not that big a problem, since bad blocks can be replaced with good ones by the controller.
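marraco's yield argument can also be sketched with a toy Poisson defect model (the defect density and die sizes below are assumptions for illustration, not real IMFT process data):

```python
import math

D = 0.5                      # assumed defects per cm^2
WAFER_CM2 = math.pi * 15**2  # area of a 300mm wafer, edge loss ignored

def good_gbit_per_wafer(die_cm2, gbit_per_die):
    candidates = WAFER_CM2 / die_cm2      # dies that fit on the wafer
    yield_frac = math.exp(-D * die_cm2)   # Poisson: fraction defect-free
    return candidates * yield_frac * gbit_per_die

full = good_gbit_per_wafer(1.6, 64)   # hypothetical full-size 64Gbit die
half = good_gbit_per_wafer(0.8, 32)   # same design split into half-size dies
print(f"half-size dies yield {half / full:.2f}x the good Gbit per wafer")
```

In this naive model smaller dies always win on yield, but the gain shrinks as defect density falls, and the model ignores the per-die overhead (pads, scribe lines, test and packaging) that pushes real fabs toward fewer, larger dies.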
  • Kepe - Wednesday, April 6, 2011 - link

    Anand, I'd like to thank you on behalf of pretty much every single person on the planet. You're doing an amazing job with making companies actually care about their customers and do what is right.
    Thank you so much, and keep up the amazing work.

    - Kepe
  • dustofnations - Wednesday, April 6, 2011 - link

    Thank God for a consumer advocate with enough clout for someone important to listen to them.

    All too often, valid and important complaints fall at the first hurdle because of dumb PR/CS people who filter out useful information. Maybe that's because they assume their customers are idiots, or because it's too much hassle, or because they don't have the requisite technical knowledge to act sensibly on complex complaints.
  • Kepe - Wednesday, April 6, 2011 - link

    I'd say the reason is usually that once a company has sold you its product, they suddenly lose all interest in you until they come up with a new product to sell. Apple used to be a very good example with its battery policy: "So, your battery died? We don't sell new batteries or replace dead ones, but you can always buy the new, better iPod."
    It's this kind of disregard for the consumer that is absolutely appalling, and Anand is doing a great job of fighting for consumers' rights. He should get some sort of an award for all he has done.
