
57 Comments


  • squashmeister99 - Tuesday, September 27, 2011 - link

    You teased us with your video reviews. Now we can't go back... :-) Reply
  • G-Man - Tuesday, September 27, 2011 - link

    I miss the video reviews already :-) Reply
  • vol7ron - Tuesday, September 27, 2011 - link

    Hahaha, they are a good addition, aren't they? Reply
  • Rasterman - Wednesday, September 28, 2011 - link

    Comparing a single Vertex 3 240GB to a 1.6TB doesn't seem quite valid. Someone considering the R4 would be asking what is the performance difference between the R4 and 4 to 16 Vertex 3s in RAID 0. Especially considering the massive cost savings per GB using the Vertex 3s. Reply
  • MrBungle123 - Wednesday, September 28, 2011 - link

    I'd like to see some RAID 5/6/10 arrays of 15K RPM SCSI drives in the benchmarks too. Reply
  • marraco - Wednesday, September 28, 2011 - link

    No. I don't want video reviews, unless they have English subtitles. Reply
  • SilthDraeth - Wednesday, September 28, 2011 - link

    I have trouble hearing and use subs for all my movies, yet Anand speaks quite clearly, and I have no trouble understanding him.

    Also, he posts the video review in conjunction with the written review, and not as a stand alone feature, so you really don't need it subtitled. So you would still get to enjoy the reviews as you always have.
    Reply
  • Lonbjerg - Thursday, September 29, 2011 - link

    You can write English but not understand it when it's spoken? *shakes head* Reply
  • connor4312 - Thursday, September 29, 2011 - link

    I'm guessing you never learned a second language? Reply
  • arthur449 - Wednesday, September 28, 2011 - link

    Not all the information in a review of a product is relevant to everyone's interests. Similarly, a video review follows a linear, arbitrary organizational structure that cannot possibly align with everyone's preferred method of learning. Also, it's easier to skim / search / quote / share a text-based review. Reply
  • geddarkstorm - Wednesday, September 28, 2011 - link

    ^ This Reply
  • cervantesmx - Wednesday, September 28, 2011 - link

    I agree 100% Reply
  • GTRagnarok - Tuesday, September 27, 2011 - link

    "We have a preproduction board that has a number of stability & compatibility OCZ tells us will be addressed..."

    I think a word is missing here.
    Reply
  • icrf - Tuesday, September 27, 2011 - link

    Also, you missed the protocol on the last link on the first page (the one to SSD Bench), and it 404's now. Reply
  • Anand Lal Shimpi - Tuesday, September 27, 2011 - link

    Fixed both! Thank you!

    Take care,
    Anand
    Reply
  • FATCamaro - Tuesday, September 27, 2011 - link

    OOH OOH Let me guess!!!
    Never??

    As in :
    "We have a preproduction board that has a number of stability & compatibility OCZ tells us will NEVER be addressed..."
    Reply
  • vodkapls - Tuesday, September 27, 2011 - link

    Isn't it the fact that the RevoDrive 3 X2 uses asynchronous memory that makes it so much slower than the R4? Reply
  • Anand Lal Shimpi - Tuesday, September 27, 2011 - link

    Great catch! I hadn't even thought of that but it's definitely a possibility :)

    Take care,
    Anand
    Reply
  • jebo - Tuesday, September 27, 2011 - link

    I just can't take OCZ seriously from a reliability standpoint. I would love to know what the failure rate is like on OCZ's desktop offerings. I personally am in the process of my 3rd RMA of an OCZ SSD during the past 2 years.

    I think Intel, Crucial (or, judging by the last review, Samsung) will make my next SSD. I can only rebuild windows and piece together backups so many times before I say enough is enough.
    Reply
  • dilidolo - Tuesday, September 27, 2011 - link

    What's the point of developing an enterprise product if no enterprise is going to buy it?
    I don't think any enterprise will trust OCZ.
    Reply
  • josephjpeters - Tuesday, September 27, 2011 - link

    And why is that? Because of the supposed high failure rates? Can you supply any real information about this?

    OCZ has less than a 1% failure rate. There may be more than 1% of customers who have "issues" but they aren't related to the drive. User error plays a pretty big role, but of course it MUST be OCZ's fault, right?

    Enterprise customers are professionals who know how to install serious hardware like this. And if they don't? OCZ will help install it for them on site. That's what enterprise companies do!

    Reply
  • Troff - Tuesday, September 27, 2011 - link

    I don't believe that 1% number for a second. First of all, I read some return stats from a store that listed the RETURN rate at just below 3%. Secondly, I know of 5 very different systems with Vertex 3 drives in them. All 5 have recurring lockups/BSODs. The people who built and run these systems write their own filesystems. They are extremely knowledgeable. If they can't make the drives run properly, the drives are not fit to run outside of a lab environment.

    That said, I suspect it's as much Sandforce that's the problem as it is OCZ.
    Reply
  • josephjpeters - Wednesday, September 28, 2011 - link

    I think it's an Intel problem. But NooOoOo... it can't be an Intel problem... Reply
  • geddarkstorm - Wednesday, September 28, 2011 - link

    From all the data I've been seeing, it seems to be a SATA III issue, and an issue with motherboards not being ready for such high volumes of data flow. Mechanical drives can get nowhere near SSD speeds, and I don't think manufacturers were really expecting how fast SSDs would go on SATA III (almost pegging it out at times, and it's brand new!). Reply
  • josephjpeters - Wednesday, September 28, 2011 - link

    Exactly. It's not an OCZ issue, it's the motherboard. When will someone step in and take the blame? Reply
  • Beenthere - Tuesday, September 27, 2011 - link

    SSDs appear to be an on-the-job learning program for SSD manufacturers with all the issues that currently exist.

    I do not however believe they are selling SSDs at low margins.

    Enterprise won't use SSDs yet for the same reason informed consumers won't use them - they have serious reliability and compatibility issues. Unless you can afford lost data and a hosed PC, SSDs are not even an option at this point in time. Maybe in a couple more years they will sort out the problems that should have been resolved long ago?
    Reply
  • dave1231 - Tuesday, September 27, 2011 - link

    I wonder really how much a consumer SSD costs to produce. Saying that slim margins will force companies out of business isn't true if there's a big markup on a 128GB drive. These same drives were hundreds of dollars last year and probably still aren't good value today. Unless you're saying consumers are waiting for the $0.50/GB drive. Reply
  • josephjpeters - Tuesday, September 27, 2011 - link

    It's roughly 20% margins and the price of an SSD is directly related to the cost of Flash. Owning the controller IP is key in maintaining solid margins.

    Enterprise drives will drive Flash demand, which will lead to economies of scale, resulting in cheaper Flash prices and consequently cheaper consumer SSDs.
    Reply
  • ChristophWeber - Tuesday, September 27, 2011 - link

    Anand wrote: "I've often heard that in the enterprise world SSDs just aren't used unless the data is on a live mechanical disk backup somewhere. Players in the enterprise space just don't seem to have the confidence in SSDs yet."

    I use an SSD in an enterprise environment, a first gen Sandforce model from OWC. I do trust it with my main workload - database and web server in this case, but of course it is still backed up to mirrored hard drives nightly, just in case.

    I'd have no qualms deploying a Z-Drive R4 in one of our HPC clusters, but it'd be an RM88 model with capacitors, and I'd still run the nightly rsync to a large RAID unit. Now if someone would finally signal they want to spend another $100k on a cluster, I'll spec a nice SSD solution for primary storage.
    Reply
  • nytopcat98367 - Tuesday, September 27, 2011 - link

    Is it bootable? Can it be used for a desktop too? Reply
  • jdietz - Tuesday, September 27, 2011 - link

    I looked up the prices for these on Google Shopping - $7 / GB.

    These offer extreme performance, but probably only an enterprise server can ever benefit from this much performance. Enthusiast users of single-user machines should probably stick with RevoDrive X2 for around $2 / GB.
    Reply
  • NCM - Tuesday, September 27, 2011 - link

    Anand writes: "During periods of extremely high queuing the Z-Drive R4 is a few orders of magnitude faster than a single drive."

    Umm, a bit hyperbolic! With "a few" meaning three or more, the R4 would need to be at least 1000 times faster. That's nowhere near the case.
    Reply
  • JarredWalton - Tuesday, September 27, 2011 - link

    Correct. I've edited the text slightly, though even a single order of magnitude is huge, and we're looking at over 30x faster with the R4 CM88 (and over two orders of magnitude faster on the service times for the weekly stats update). Reply
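The "orders of magnitude" arithmetic in the two comments above is easy to check: an order of magnitude is a factor of 10, so the count is just the base-10 logarithm of the speedup. A quick sketch, using the 30x figure quoted above:

```python
import math

# An order of magnitude is a factor of 10, so the "orders of magnitude"
# of a speedup is log10 of the ratio.
speedup = 30                  # R4 CM88 vs. a single drive, per the comment above
orders = math.log10(speedup)
print(round(orders, 2))       # roughly 1.5 -- between one and two orders of magnitude

# "A few" (three or more) orders of magnitude would mean at least:
print(10 ** 3)                # a 1000x speedup
```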
  • Casper42 - Tuesday, September 27, 2011 - link

    Where do you plan on testing it? (EU vs US)

    Have you tried asking HP for an "IO Accelerator" ? (Its a Fusion card)

    I worked with a customer a few weeks ago near me and they were testing 10 x 1.28TB Fusion IO cards in 2 different DB Server upgrade projects. 8 in a DL980 for one project and 2 in a DL580g7 for a separate project.
    Reply
  • Movieman420 - Tuesday, September 27, 2011 - link

    I see OCZ taking all kinds of punishment in these posts. Please try and remember that ANY company that uses SandForce has the SAME issues, but since OCZ is the largest they catch all the flak. If anything, SF needs to beef up validation testing first and foremost. Reply
  • josephjpeters - Wednesday, September 28, 2011 - link

    Like I said before, it's really more of a motherboard issue with the SATA ports than it is a SF/OCZ issue. They designed to spec... Reply
  • Yabbadooo - Tuesday, September 27, 2011 - link

    I note that on the Windows Live Team blog they write that they are moving to flash based blob storage for their file systems.

    Maybe they will use a few of these? That would definitely be a big vote of confidence, and the testimonials from that would be influential.
    Reply
  • Guspaz - Tuesday, September 27, 2011 - link

    I have to wonder at the utility of these drives. They're not really PCIe drives, they're four or eight RAID-0 SAS drives and a SAS controller on a single PCB. They're still going to be bound by the limitations of RAID-0 and SAS. There are proper PCIe SSDs on the market (Fusion-io makes some), but considering the price-per-gig, these Z-Drives seem to offer little benefit other than saving space.

    Why should I spend $11,200 on a 1600GB Z-Drive when I can spend about the same on eight OCZ Talos SAS drives and a SAS RAID controller, and get 3840GB of capacity? Or spend half as much on eight OCZ Vertex 3 drives and a SATA RAID controller, and get 1920GB of capacity?

    I'm just trying to see the value proposition here. Even with enterprise-grade SSDs (like the Talos) and RAID controllers, the Z-Drive seems to cost twice as much per-gig as OCZ's own products.
    Reply
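For what it's worth, the per-gigabyte figures in the comment above check out. A quick sketch (prices as quoted in the thread, not current):

```python
# Prices and capacities as quoted in the comment above -- illustrative only.
print(11200 / 1600)            # Z-Drive R4: 7.0 dollars per GB
print(round(11200 / 3840, 2))  # eight Talos drives at about the same price: ~2.92/GB
print(round(5600 / 1920, 2))   # eight Vertex 3 drives at half the price: ~2.92/GB
```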
  • lorribot - Tuesday, September 27, 2011 - link

    I'm with you on this.

    What happens if a controller toasts itself? Where's your data then?
    I would rather have smaller hot-swap units sitting behind a RAID controller.
    It is a shame OCZ couldn't supply such a setup for you to compare performance, or perhaps they know it would be comparable.

    Yes, it is a great bit of kit, but if I can't RAID it then it is of no more use to me than as a cache, and RAM is better at that, and a lot cheaper: $11,000 buys some big quantities of DDR3.

    In the enterprise space, security of data is king; speed is secondary. Losing data means a new job; slow data just gets you moaned at. That is why SANs are so well used. Having all your storage in one basket that could fail easily has been a big no-no for many years.
    Reply
  • Guspaz - Tuesday, September 27, 2011 - link

    To be fair, you can RAID it in software if required. You could RAID a bunch of USB sticks if you really wanted to. There are more than a few enterprise-grade SAN solutions out there that ultimately rely on Linux's software RAID, after all. Reply
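Guspaz's point stands: software RAID over several cards is routine (on Linux this is what `mdadm --level=0` does), and the striping itself is only address arithmetic. A minimal sketch of how RAID-0 maps a logical block to a member device (the device count and chunk size here are hypothetical):

```python
def raid0_map(logical_block: int, num_devices: int, chunk_blocks: int):
    """Map a logical block to (device index, block offset on that device)
    under simple RAID-0 striping."""
    chunk = logical_block // chunk_blocks        # which stripe chunk this block is in
    device = chunk % num_devices                 # chunks go round-robin across members
    stripe = chunk // num_devices                # which stripe row on each member
    offset = stripe * chunk_blocks + logical_block % chunk_blocks
    return device, offset

# Example: 4 member devices, 128-block chunks (hypothetical geometry)
print(raid0_map(0, 4, 128))     # (0, 0)
print(raid0_map(128, 4, 128))   # (1, 0) -- the next chunk lands on the next device
print(raid0_map(512, 4, 128))   # (0, 128) -- wraps back around, second stripe row
```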
  • lorribot - Wednesday, September 28, 2011 - link

    You can't RAID it in software, but you could RAID several of them if you have deep pockets.
    The point is: why buy a 1.6 or 3.2TB SSD when you can buy 10 x 320GB SSDs and (possibly) get better performance for less cost?
    Reply
  • bjacobson - Tuesday, September 27, 2011 - link

    I think I've mentioned this before but can you load up a Windows 7 installation with 30 or so startup programs and compare the startup time difference between this and a harddrive?
    A video of this would be even more impressive.
    Reply
  • ckryan - Tuesday, September 27, 2011 - link

    I've been going through some issues with a 2281 drive with Toggle NAND. I'm basically writing 11TB a day to it, and under these conditions I can't get too many hours in between crashes. I'm of the opinion that the latest FW has helped most out, but clearly my experience shows that the 2281, when perfected, will be unstoppable in certain workloads; for now, though, all SF users are going to have some problems. If the problems are predictable you can compensate, but if they're random, well, SF controllers aren't the only things that have problems with randomness.

    I knew it was a possibility, and normal users won't abuse their drives as much, but I have to wonder: if OCZ can make an enterprise drive problem free, why can't they make consumer SF drives better? The SF problem is the OCZ problem... OWC doesn't have the same perception issues, but is using the same hardware (Mushkin, Patriot, etc.). As much as I like OCZ, they've done some questionable things in the past, and not just swapping cheap flash in SF1200 drives.

    Hopefully they can overcome the problems they're having with SandForce and their Arrowana stuff, release a problem-free next-gen Indilinx controller, and then call it a day. Oh yeah, quit using those stupid plastic chassis. Reply
  • jalexoid - Tuesday, September 27, 2011 - link

    Considering these devices are more likely to find themselves in a machine running something else than a desktop system, why not test them on another OS? Reply
  • sanguy - Wednesday, September 28, 2011 - link

    OCZ's standard line "It's only affecting 0.01% of the deployed units and we're working on a fix....." doesn't work in the enterprise market. Reply
  • josephjpeters - Wednesday, September 28, 2011 - link

    These are PCIe. Most of the "issues" come from SATA drives because mobo makers are having issues with their SATA ports. Reply
  • p05esto - Wednesday, September 28, 2011 - link

    I'll admit, I'm now too lazy to even read....it's getting bad. I just want to push the "play" button while I sit back eating Cheetos and rubbing my tummy. Get into my tummy little Cheeto, get into my brain little ssd review,... same line of thinking really, whatever is easiest.

    Great review though, seriously.
    Reply
  • alpha754293 - Wednesday, September 28, 2011 - link

    If you want to really test it and validate its long-term reliability, you pretty much need to do what enterprise customers do. Run the SSD, but always keep a backup of it somewhere, like you said.

    That being said though, if you've got TWO backup copies of it, you can actually run a parity check on it (pseudo-checksum) and determine its error rate.

    Also, you didn't run HDTach on it. Given that it's tied together with a Marvell SAS controller and so can't receive TRIM, I would presume it will have performance issues in the long run.

    To do the error checking, you'll probably have to put this thing in a Solaris system running ZFS so you can mimic the CERN test. And if you actually read/write continuously to it, at the same level in terms of the sheer volume of data, other SSD/NAND-specific issues might start to pop up like wear levelling, etc. I would probably just run the read/write cycle for an entire month, where it periodically deletes some data, rewrite new data, etc. At the end of the month, make the two mirror backups of it. And then run it again. Hopefully you'd be able to end up at some identical endpoint after PBs of read/write ops that you can run both the block level and volume level checksum on.

    But as a swap drive, this would be BLAZINGLY fast.
    Reply
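The parity-check idea alpha754293 describes above can be sketched as a block-wise checksum comparison across the primary and two backups: when the two backups agree and the primary differs, the primary block is the suspect. A minimal sketch (file paths and block size are hypothetical):

```python
import hashlib

def block_hashes(path, block_size=4096):
    """Checksum a file block by block."""
    hashes = []
    with open(path, "rb") as f:
        while (block := f.read(block_size)):
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def compare(primary, backup_a, backup_b):
    """Count blocks where the primary disagrees with both backups.
    The backups agreeing with each other implicates the primary copy."""
    errors = 0
    for h_p, h_a, h_b in zip(block_hashes(primary),
                             block_hashes(backup_a),
                             block_hashes(backup_b)):
        if h_p != h_a and h_a == h_b:
            errors += 1
    return errors
```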
  • perrydoell - Wednesday, September 28, 2011 - link

    You say "We have a preproduction board that has a number of stability & compatibility issues."

    This is the enterprise space. Things MUST WORK RELIABLY. How can you even review unstable products? I expect better from Anandtech.

    I cannot take OCZ seriously either. An unstable product is NOT for the enterprise. Also, check the negative reviews at NewEgg. Ouch.
    Reply
  • josephjpeters - Wednesday, September 28, 2011 - link

    Where is the R4 listed on Newegg? Reply
  • caliche - Wednesday, September 28, 2011 - link

    I am sure he is referring to the previous versions of the z-drive, which is all you can use as an indicator.

    I am an enterprise customer. Dell R710s and two Z-Drive R2 M84 512GB models, one in each. I have had to RMA one of them once, and the other is on its second RMA. They are super fast when they work, but three failures across two devices in less than a year is not production ready. We are using them in benchmarking servers running Red Hat Enterprise 5 for database stores, mostly read-only, to break other pieces of software talking to it. Very low writes.

    But here is the thing. When they power on, one or more of the four RAID pieces is "gone". This is just the on board software on the SSD board itself, no OS, no I/O on it at all besides the power up RAID confidence check. Power on the server, works one day, next day the controller on the card says a piece is missing. That's not acceptable when you are trying to get things done.

    In a perfect world, you have redundant and distributed everything with spare capacity and this is not a factor. But then you start looking at dealing with these failures and you start to ask yourself is your time better spent on screwing around with an RMA process and rebuilds or optimizing your environment?
    Reply
  • ypsylon - Thursday, September 29, 2011 - link

    Nobody in the right frame of mind is using SSDs in the enterprise segment (I'm not even interested in them as consumer drives, but that is not the issue here). SSDs are just as unreliable as normal HDDs, at a ridiculous price point. You can lose all of your data much quicker than from a normal HDD. RAID arrays built from standard HDDs are just as fast as 1 or 2 "uber" SSDs and cost a fraction of an SSD setup (often even including the cost of the RAID controller itself). Also, nobody runs large arrays in RAID 0 (except maybe for video processing); RAID 0 is pretty much non-existent in serious storage applications. As a backup I much prefer another HDD array to an unreliable, impossible-to-test, super-duper expensive SSD.

    You can't test NAND reliability. That is the biggest problem of SSDs in a business-class environment. Because of that, SSDs will wither and die in the next 5-10 years. SSDs are not good enough for industry, and if you can't hold on to the big storage market then no matter how good something is, it will die. Huge corporate customers are the key to staying alive in the storage market.
    Reply
  • Zan Lynx - Thursday, September 29, 2011 - link

    You are so, so wrong.

    Enterprises are loving SSDs and are buying piles of them.

    SSDs are the best thing since sliced bread if you run a database server.

    For one thing, the minimum latency of a PCIe SSD 4K read is almost 1,000 times less than a 4K read off a 15K SAS drive. The drive arrays don't even start to close the performance gap until well over 100 drives, and even then the drive array cannot match the minimum latency. It can only match the performance in parallel operations.

    If you have a lot of operations that work at queue depth of 1, the SSD will win every time, no matter how large the disk array.
    Reply
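Zan Lynx's queue-depth-1 point follows directly from the latencies: serial operations mean IOPS is just the reciprocal of per-operation latency, and adding drives to an array doesn't change it. A back-of-the-envelope sketch (the latencies are assumed round numbers for illustration, not measurements):

```python
# Assumed round-number latencies, for illustration only.
sas_latency_s = 0.005       # ~5 ms average random 4K read on a 15K SAS drive
ssd_latency_s = 0.00005     # ~50 us random 4K read on a PCIe SSD

# At queue depth 1 each operation must finish before the next starts,
# so IOPS is simply 1/latency -- adding drives to the array doesn't help.
print(round(1 / sas_latency_s))   # about 200 IOPS for the whole array
print(round(1 / ssd_latency_s))   # about 20000 IOPS from a single SSD
```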
  • leonzio666 - Wednesday, November 02, 2011 - link

    Bear in mind, though, that enterprises (real heavyweights) probably prefer something like Fusion-io ioDrives, which btw are the only SSDs running in IBM-driven blade servers. With speeds up to 3 Gb/s and over 320k IOPS, it's not surprising they cost ca. $20k per unit :D So it's not true that SSDs in general are not good for the enterprise segment. Also, and this is hot: these SSDs use SLC NAND... Reply
  • MCS7 - Thursday, September 29, 2011 - link

    I remember Anand doing a Voodoo2 card review (video) way, way, way back at the turn of the millennium! Oh boy... we are getting OLD... lol. Take care all. Reply
  • Googer - Thursday, September 29, 2011 - link

    Statistics for CPU usage would have been handy as some storage devices have greater demands for the CPU than others. Even between various HDD makes, CPU use varies. Reply
  • alpha754293 - Thursday, September 29, 2011 - link

    Were you able to reproduce the SF-2xxxx BSOD issue with this? or is it limited to just the SF-2281? Reply
