Final Words

Without competing cards to compare it to, it's difficult to quantify the Z-Drive R4's performance beyond saying that it is obviously very fast. With SandForce-based SSDs, however, my concern is rarely about performance and more about reliability. I've often heard that in the enterprise world SSDs just aren't used unless the data is also on a live mechanical disk backup somewhere. Players in the enterprise space just don't seem to have confidence in SSDs yet. Given the teething problems we've seen on the desktop, I don't blame these customers at all.

Ultimately that's my biggest concern with the Z-Drive R4: it seems to be a very solid performer, but it has a completely unknown reliability track record. It's possible that by using an on-board SAS controller the Z-Drive R4 will be less prone to random system incompatibilities and thus a more reliable solution, since it is effectively a closed box at that point. That's purely speculation, however.

I am curious how OCZ will approach enterprise customers and attempt to win their trust with the Z-Drive R4. You obviously won't see any Newegg reviews of the product, so OCZ will have to get testimonials from some pretty influential customers to gain traction in this space.

Seriously entering the enterprise market is a huge move for OCZ. Three years ago I couldn't have predicted OCZ would get this far; I wonder what will happen over the next three. One thing is for sure: OCZ will need more than just enterprise products to adequately address this market. Hopefully any investments in testing and validation for enterprise customers will help improve the consumer side of the business as well.

57 Comments

  • caliche - Wednesday, September 28, 2011 - link

    I am sure he is referring to the previous versions of the Z-Drive, which are all you can use as an indicator.

    I am an enterprise customer. Dell R710s and two Z-Drive R2 M84 512GB models, one in each. I have had to RMA one of them once, and the other is on its second RMA. They are super fast when they work, but three failures across two devices in less than a year is not production-ready. We are using them in benchmarking servers running Red Hat Enterprise Linux 5 for database stores, mostly read-only to break other pieces of software talking to it. Very low writes.

    But here is the thing. When they power on, one or more of the four RAID pieces is "gone". This is just the on-board software on the SSD board itself: no OS, no I/O on it at all besides the power-up RAID confidence check. Power on the server and it works one day; the next day the controller on the card says a piece is missing. That's not acceptable when you are trying to get things done.

    In a perfect world, you have redundant and distributed everything with spare capacity and this is not a factor. But then you start looking at dealing with these failures and you have to ask yourself: is your time better spent screwing around with an RMA process and rebuilds, or optimizing your environment?
  • ypsylon - Thursday, September 29, 2011 - link

    Nobody in the right frame of mind is using SSDs in the enterprise segment (I'm not even interested in them as consumer drives, but that is not the issue here). SSDs are just as unreliable as normal HDDs, at a ridiculous price point. You can lose all of your data much more quickly than from a normal HDD. RAID arrays built from standard HDDs are just as fast as 1 or 2 "uber" SSDs and cost a fraction of an SSD setup (often even including the cost of the RAID controller itself). Also, nobody runs large arrays in RAID 0 (except maybe for video processing); RAID 0 is pretty much non-existent in serious storage applications. As a backup I much prefer another HDD array over an unreliable, impossible-to-test, super-duper expensive SSD.

    You can't test NAND reliability. That is the biggest problem with SSDs in a business-class environment. Because of that, SSDs will wither and die in the next 5-10 years. SSDs are not good enough for industry, and if you can't hold on to the big storage market then no matter how good something is, it will die. Huge corporate customers are the key to staying alive in the storage market.
  • Zan Lynx - Thursday, September 29, 2011 - link

    You are so, so wrong.

    Enterprises are loving SSDs and are buying piles of them.

    SSDs are the best thing since sliced bread if you run a database server.

    For one thing, the minimum latency of a PCIe SSD 4K read is almost 1,000 times lower than that of a 4K read off a 15K SAS drive. Drive arrays don't even start to close the performance gap until well over 100 drives, and even then the drive array cannot match the minimum latency. It can only match the performance in parallel operations.

    If you have a lot of operations that work at a queue depth of 1, the SSD will win every time, no matter how large the disk array (see the back-of-the-envelope sketch after these comments).
  • leonzio666 - Wednesday, November 2, 2011 - link

    Bear in mind, though, that enterprises (the real heavyweights) probably prefer something like Fusion-io ioDrives, which BTW are the only SSDs running in IBM-driven blade servers. With speeds up to 3 Gb/s and over 320K IOPS, it's not surprising they cost ca. $20k per unit :D So it's not true that SSDs in general are not good for the enterprise segment. Also, and this is hot - these SSDs use SLC NAND...
  • MCS7 - Thursday, September 29, 2011 - link

    I remember Anand doing a Voodoo2 card review (video) way, way, way back at the turn of the millennium! Oh boy... we are getting OLD... lol. Take care all.
  • Googer - Thursday, September 29, 2011 - link

    Statistics for CPU usage would have been handy, as some storage devices place greater demands on the CPU than others. Even between various HDD makes, CPU use varies.
  • alpha754293 - Thursday, September 29, 2011 - link

    Were you able to reproduce the SF-2xxx BSOD issue with this? Or is it limited to just the SF-2281?
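
The queue-depth-1 point in Zan Lynx's comment above comes down to simple arithmetic: with only one request outstanding, throughput is the reciprocal of per-request latency, and adding spindles to an array does nothing to shorten that wait. Below is a minimal sketch of that math in Python, using assumed latency figures purely for illustration; the exact gap depends on the drives involved.

```python
# A minimal QD1 sketch. The service times below are assumed, illustrative
# figures, not measurements from the drives in this review.

def qd1_iops(service_time_s: float) -> float:
    """At queue depth 1 a new request is only issued after the previous
    one completes, so throughput is simply 1 / per-request latency."""
    return 1.0 / service_time_s

SAS_15K_LATENCY_S = 5e-3    # assumed ~5 ms per 4KB random read (seek + rotation)
PCIE_SSD_LATENCY_S = 50e-6  # assumed ~50 us per 4KB random read

print(f"15K SAS drive @ QD1: {qd1_iops(SAS_15K_LATENCY_S):,.0f} IOPS")
print(f"PCIe SSD      @ QD1: {qd1_iops(PCIE_SSD_LATENCY_S):,.0f} IOPS")

# Striping more mechanical drives together raises parallel throughput,
# but at QD1 there is never more than one request in flight, so the
# per-request latency of a single spindle still sets the pace.
```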
