
  • radium69 - Thursday, April 12, 2012 - link

Can't seem to find it in the article, but what about the warranty?
    Even Intel consumer SSDs seem to have a good warranty, so I wonder what they can offer for the enterprise market.

    $3500 USD for a nice drive is worth considering, don't know about the buying part though.
  • zdzichu - Thursday, April 12, 2012 - link

90K IOPS doesn't seem to be much. A single SSD can already reach 75-80K. And the million-IOPS F5100 is still untouchable.
  • Sufo - Thursday, April 12, 2012 - link

Why mention the F5100? It's not even vaguely in the same price bracket, at around $100/GB.

    You get 4x the reads and 8x the writes for 23x the price so it's only worth considering if you can afford it and really need it.
  • Jaybus - Thursday, April 12, 2012 - link

With the exception of the outstanding endurance, the Z-Drive is better performing and only about 20% more expensive. Granted, endurance is very much a factor in its intended market and lower cost always helps.
  • Samus - Thursday, April 12, 2012 - link

I can't believe they left out AES-128 support. Fail.
  • pheadland - Thursday, April 12, 2012 - link

Uh, these are drives intended to be installed in rack-mounted servers in secure machine rooms. They don't need to be encrypted.
  • madmilk - Thursday, April 12, 2012 - link

Is there really a compelling reason for AES support on the drive? I mean, with AES-NI extensions present on all the new Xeons and Opterons, it seems rather redundant.
  • jjj - Thursday, April 12, 2012 - link

    Actually it's pretty lame but at least it's cheap for the targeted market.
    The Micron drive you mention and the Marvell/OCZ native PCI controller sound a lot more interesting.
  • vol7ron - Thursday, April 12, 2012 - link

1) How does linear feet per minute differ from cubic feet per minute? Linear implies something one-dimensional, so I'm struggling to understand how it could cool anything.

    2) The non-encryption situation may be the largest factor in limited sales, especially if this is targeting the enterprise market.

    3) It's nice to see ~1TB drives around the same price as the original mainstream MLC SSD offerings
  • lukarak - Thursday, April 12, 2012 - link

Well, as I understand it, you could have two cases, one with 100 cubic feet per minute and the other with 200, but if the second case is three times the size, its cooling would be less efficient. A linear measurement is the same for any case size, so you can't go wrong.
  • vol7ron - Thursday, April 12, 2012 - link

Yes to your points, but no to your last sentence. That's like saying CFM is the same for any case size. Linear feet/minute is more vector-based (directional speed), whereas CFM is volume-based. They're both measurements regardless of case size.

If you have something spinning really fast but really small, it's going to have a fast LFM but a low CFM. A high LFM alone doesn't make it better. Conversely, you can have a really big fan (high CFM) that doesn't spin that fast, much like a ceiling fan. It generally doesn't blow fast enough to move paper on a desk, but it does move the air in the room.

    So, I'm saying both measurements are important.
  • tilandal - Thursday, April 12, 2012 - link

Cubic feet are a measure of volume. Linear feet are just feet, so linear feet per minute is a measure of air speed. Speed and volume are related by the cross-section the air is moving through: a 10 CFM fan blowing through a 1 square foot opening moves air at 10 ft/min. If it's through a 0.1 square foot opening, it's moving at 100 ft/min.
  • vol7ron - Thursday, April 12, 2012 - link

    Right, there's a missing order of magnitude (volume).

    I guess if the fan is blowing the heat away from the device then CFM isn't as important and linear speed makes sense because it's how quickly it can dissipate the surrounding radiated air.
  • Senpuu - Thursday, April 12, 2012 - link

Volume is not an order of magnitude [ex: 10000 (10^4) is one order of magnitude greater than 1000 (10^3)].

As an example, let's say you have a 10 CFM fan mounted over the fitted intake of a closed case, with our Intel SSD positioned in front of a 4"x4" exhaust. That's 10 ft^3 of air being brought into the case that needs to be exhausted through a 4"x4" hole every minute. That hole is a (1/3 ft)^2, or 1/9 ft^2, cross section. 10 ft^3 of air goes through that hole every minute (a measure of volume / time), but it does so at a rate of 90 ft/min (a measure of distance / time). 90 ft/min < the 100 ft/min minimum in the SSD spec, so you'd have to increase the fan CFM (by increasing the RPM or replacing the fan) or decrease the exhaust opening to ~3.79"x3.79".

    Fans are rated in terms of CFM and not LFM because LFM is a function of system geometry and location whereas CFM is only a function of fan geometry and RPMs. As you said though, local heat transport is best expressed in LFM.
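    The arithmetic in the example above is easy to replay as a quick sketch (plain Python; the function name is just illustrative):

    ```python
    import math

    def exhaust_speed_ft_per_min(cfm: float, opening_ft2: float) -> float:
        """Air speed through an opening = volumetric flow / cross-sectional area."""
        return cfm / opening_ft2

    # 4" x 4" exhaust = (1/3 ft)^2 = 1/9 ft^2 cross section
    opening = (4.0 / 12.0) ** 2
    print(round(exhaust_speed_ft_per_min(10.0, opening)))  # 90 ft/min for a 10 CFM fan

    # Opening side length needed to hit 100 ft/min with the same 10 CFM fan
    side_in = math.sqrt(10.0 / 100.0) * 12.0
    print(round(side_in, 2))  # ~3.79 inches per side
    ```

    Same numbers as the worked example: the 10 CFM fan only manages 90 LFM through a 4"x4" hole, and shrinking the hole to ~3.79" per side gets it to 100 LFM.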
  • vol7ron - Thursday, April 12, 2012 - link

    Volume is an order of magnitude. It's adding a third dimension. For instance, when converting a 4"x4" square (4 sq. in -or- 4 in^2) to a cube (4 cu. in -or- 4 in^3 -or- 4"x4"x4"), the volume is exponentially bigger than the surface area.

    In math, magnitude is difference in relative size. A 2D figure is an order of magnitude smaller than a 3D figure.

That said, you've given a good example. CFM = LFM x Area; or, Q = V x A. I'm still unsure why LFM is preferred over CFM. However, your last point is incorrect. CFM isn't only for fans, and the rotational velocity of the fan is only one variable that goes into creating airflow. Fans have fins that are angled to push air; the length and degree of their angle are also very important. So is the material, and its resistance to the force of air against the fan as it spins.

    CFM is the flow rate, which applies to air/water/anything that's 3D. LFM is only the speed (velocity), as you put it, the distance over time. Agh. I give up on understanding the purpose, perhaps because the heat isn't always constant in one part of the board, or to be able to use that figure in other form factors (difference size chips).
  • FunBunny2 - Thursday, April 12, 2012 - link

    -- I'm still unsure why LFM is preferred over CFM.

    Look at the card. With 3 boards, there's no "volume" to speak of. It needs fast "exhaust" to cool it.
  • vol7ron - Thursday, April 12, 2012 - link

    I think that makes the most sense - how quickly you can displace the air that cushions the board.

    I was thinking that's a factor in CFM, which it is, but I wasn't thinking about the significance of that factor. That is somewhat bewildering because I even explained it in my ceiling fan example: low velocity, high area - sure you're moving a lot of air in a minute, but between revolutions there is stagnant air.

    I apologize - I did not mean to take up all the comment space in what should have been addressed in the forums.
  • Senpuu - Monday, April 23, 2012 - link

    Volume is not an order of magnitude and I have never in my time with the sciences and maths heard it used as such. Cite me any respectable reference where it is used as such and I'll gladly cede the point.

    And, since we're nitpicking and being technical, my last point is only slightly incorrect, as I should not have used the word 'only'. There are other factors -- such as the rigidity of the blade, which is a function of the geometry and the material properties, as you pointed out -- but the angle, length, etc are all part of the fan geometry, which I stated. And I never said that CFM was only for fans. I did say that fans were rated in terms of CFM though, which is true.

I thought the purpose was clear for LFM...? It's giving you an airflow requirement in order to transport sufficient heat away from the component. When air cooling something, the idea is to bring in cool air from outside the case at a certain temperature and pass it over a hot component, thereby cooling the component and heating the passing air as it gets sent out the exhaust. The only pertinent information they need to tell you is how fast that air must move in order to accomplish that task (assuming standard room temperature for intake air).
  • double0seven - Thursday, April 12, 2012 - link

    I'm just a 2nd yr college student, and you seem to be an actual engineer of some kind or something, but I notice a couple things wrong with the example you give. First, the SSD is positioned in front of the 4"x4" exhaust hole, and the fan is at the intake. Thus, the 90 ft/min you are calculating only occurs on the outside of the case, not at the heat source in question. However if the fan were simply reversed, your example would be more correct.

    More important is your proposed solution of reducing the exhaust opening in order to meet the spec. Think about this intuitively; how could that possibly result in more heat dissipation? It may make the math work, but the big picture here is that a certain amount of heat is being generated, and a certain amount of air is needed to dissipate the heat. The spec may be given in LFM, but that's ok, cause you're an engineer, and you know how to convert it to CFM!
  • Metaluna - Monday, April 16, 2012 - link

Right. The air is only in contact with the heatsink for a short period of time, so there isn't much heat transfer beyond the small boundary layer between the heatsink and the airstream. The size of that boundary layer is a function of the velocity of the air close to the heatsink (I believe this is called the near field in fluid dynamics). What matters is that the more frequently you can 'replace' this boundary layer with new cold air (LFM), the more heat you can remove.

    So as has already been said, specifying the air velocity, in combination with ambient temperature, is more useful when you are looking at the cooling needs of one specific component in isolation.

    Where CFM becomes important is in the bulk movement of the heated air out of the case. If you don't do this, then the ambient case temperature rises. This means the temperature of the air passing over the heatsink rises, and less heat can be removed. There are so many variables introduced here, though, that it's almost a useless spec.
  • EddyKilowatt - Thursday, April 12, 2012 - link

    Linear feet per minute is the chip-centric view of the cooling situation. The chip (and chip maker) doesn't care how many cubic feet are moving through your box... it cares how fast cool air is moving over its surface and helping it get rid of heat. You can almost think of it as a wind-chill requirement.

    The chip maker's job is to spec how much air speed (wind chill), and at what temperature, is needed to keep the chip cool. Your job as a system builder is to provide the necessary air speed to each chip (or device) in your box. The tools you have to work with are your box volume, fan flow rate, and the design of the airflow paths within the box to get air at a low enough temperature, flowing at a fast enough rate, past each device in your system.
  • SetiroN - Thursday, April 12, 2012 - link

    90/38K IOPS at $2000 sound anything but "awesome". Especially considering that those figures seem to be the sum of two 'physical' drives.
    Anand should be a little more objective when it comes to Intel SSDs.
  • ImSpartacus - Thursday, April 12, 2012 - link

    This isn't a consumer drive. You can't even boot from it.

    The customers interested in this drive aren't as price sensitive as us.
  • vol7ron - Thursday, April 12, 2012 - link

Booting is a big problem, but the fact that you can't encrypt it is an even larger concern.
  • FunBunny2 - Thursday, April 12, 2012 - link

Ummm. Most industrial strength RDBMS encrypt before writing, so not necessarily.
  • Sivar - Thursday, April 12, 2012 - link

    They do, but at a cost of greater CPU overhead or additional hardware requirements.
    I suspect Intel understands the importance of encryption, but opted for higher performance and lower TDP first. Most enterprise hard drives do not encrypt data, so this isn't as big a deal as one might think.
  • vol7ron - Thursday, April 12, 2012 - link

Agreed on all the above. Besides, if someone really wanted to get it, they eventually will.
  • zanon - Thursday, April 12, 2012 - link

    This isn't a consumer drive. You can't even boot from it.

    Honestly, that's really irrelevant. If Intel was completely alone, then sure, they might seem better, but they aren't. There are multiple other competing solutions from other companies in the PCIe SSD space, and 90K IOPS really isn't very impressive at all. The OCZ ones are all above 200K, the ioDrive 2s can push above half a million (and that's for 512b, not 4k), etc. Even with highly incompressible data, Anand's own tests do not indicate that we'd expect to see a factor of 3 drop.

Intel's solution appears to be useful for sure, but application-specific. It's a matter of weighing things like endurance, support, and load type against overall performance. That doesn't mean it can't be competitive, but the unalloyed praise does seem a little strong. It's simply a reasonably strong first card, but nothing in particular about it goes "Wow!"
  • ats - Friday, April 13, 2012 - link

    People need to really stop comparing numbers when they don't understand what the numbers are that they are comparing.

Companies tend to quote a wide variety of I/O numbers. Almost all are limited-span numbers, and many are not steady-state numbers. For its enterprise-class devices, Intel always quotes FULL SPAN steady-state random I/O numbers. These WILL be significantly lower than partial-span peak I/O numbers, but they are much closer to what products will actually deliver in the field over the lifetime of the product.

    These full-span steady-state numbers are typically 10x or more lower than the partial-span peak I/O numbers for a given device.

    The numbers that Intel are quoting are actually very good for FULL SPAN STEADY STATE random I/O.
  • Kjella - Thursday, April 12, 2012 - link

    "as a single drive with a 200GiB (186GB) capacity"

    200 GB (base 10) = 186 GiB (base 2), if you're going to use that notation at least do it correctly....
  • Sabresiberian - Thursday, April 12, 2012 - link

    A gibibyte isn't a base 2 number, but a number derived from powers of 2. Base 2 numbers only contain 1's and 0's.

    If you are going to correct someone and sneer at them for their usage ("at least do it correctly"), I suggest you know what you are talking about and spell correctly.

  • Juddog - Thursday, April 12, 2012 - link

7-14 PB is a sick amount of endurance for an SSD. Everybody here is focusing on the IOPS, but that level of endurance is just crazy. Something like that would make an excellent longer-term investment; a lot of companies are still hesitant to jump into the SSD market for their servers because of the endurance issue.

    The only reason I mention this is that I've personally spoken with infrastructure guys / DBAs who won't invest in SSDs because of the reliability factor - they want something that will take a constant beating in terms of IO at a constant rate.
  • SQLServerIO - Thursday, April 12, 2012 - link

I'm very happy with the endurance. As a DBA who has had solid state in production systems for the last four years: if you REALLY understand your write load, you find that endurance of 1TB to 2TB is good enough for two to three years of heavy use. Even "heavy write" databases see 20% writes vs. reads. Fusion-io rates some of their cards at 32PB of write endurance, but they do cost a bit more :)
  • FunBunny2 - Thursday, April 12, 2012 - link

Folklore says that SSD death is more predictable than HDD, modulo infant mortality. Have you found this to be true?
  • rs2 - Thursday, April 12, 2012 - link

I just love it when a ~$4000 component is considered to have "fairly reasonable" pricing.
  • FunBunny2 - Thursday, April 12, 2012 - link

Compared to STEC or Fusion-io or Texas Memory or Violin, yeah, it is. Whether it's a relevant part to this site's gamers is the other question.
  • Sabresiberian - Thursday, April 12, 2012 - link

Exactly; the readers here aren't all "gamers", Anandtech never pretended to be a "gamer" site, and many enthusiasts build for actual business and scientific work, not just playing games.

    This thing isn't for me; I build as an "enthusiast gamer". The Revodrive is the best solution I know about, for my purposes and in PCIe form. This device is clearly intended for a different use (I was hoping it would be more like the Revodrive, but it's not). Not knowing much at all about building that kind of serious machine, I can hardly expect to comment knowledgeably on the price (except by paying attention to what others, who are knowledgeable, post here).

  • Zstream - Thursday, April 12, 2012 - link

So, the best solution would be to have a raid-1 setup with two regular spindle drives and move the "golden images" to this SSD?
  • quanstro - Thursday, April 12, 2012 - link

if you compare against six 80gb ssd 320s, you would get
    a bigger drive (480gb), more iops (237k), more throughput
    (1440mb/s), and lower cost ($834). and if you need more
    capacity, you can add it in small hot-swappable increments.

    the 320s would also be bootable, offer aes-128, and
    could fit on most any motherboard-based controller.

    i'm not sure i see where a pci-e attached controller
    offers a better solution than regular ssds yet.
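    The aggregate math in the six-drive comparison above is easy to replay; the per-drive figures below are simply the quoted totals divided by six (the commenter's assumptions, not verified against Intel's 320 spec sheet):

    ```python
    # Per-drive figures back-derived from the post above (quoted totals / 6);
    # treat them as the commenter's assumptions, not verified vendor specs.
    drives = 6
    per_drive = {"capacity_gb": 80, "read_iops": 39_500,
                 "throughput_mb_s": 240, "price_usd": 139}

    totals = {key: value * drives for key, value in per_drive.items()}
    print(totals)  # 480 GB, 237,000 IOPS, 1440 MB/s, $834 -- the numbers quoted,
                   # though real-world RAID scaling is rarely perfectly linear
    ```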
  • ShieTar - Thursday, April 12, 2012 - link

You forgot to mention the 3.6K write IOPS of that solution. And the endurance, which is lower by more than a factor of 100.

    Also, I would expect a significant percentage of the potential customers to put more than one of these boards into one server. Server boards come with up to 7 PCIe x16 slots, and you cannot connect 42 SSD 320s to a motherboard controller, so you would need to put SAS controllers into your PCIe slots anyway.

    So, if nothing else you save yourself dozens of cables which can fail, and you save the space of the 42 SSDs, which helps with the cooling.
  • quanstro - Thursday, April 12, 2012 - link

the anandtech review of the 330 gives 66.3mb/s for 4k random writes.
    from that i get 16.9k iops/drive, or 101.8k iops for 6. that's
    better write iops than even the intel spec sheet
    gives (by >25%).

    i would point out that the standard ipass cable gives 4 ports per cable,
    and you will need pci-e slots for nics if you wish to actually serve up
    many tens of ssds worth of data.

    i certainly would go with 4x8 (36) ssds + a few 10gbe nics versus a
    pcie-attached ssd solution.
  • FunBunny2 - Thursday, April 12, 2012 - link

Aren't these numbers retrograde? Aren't R/W speeds supposed to be converging?
  • PyroHoltz - Thursday, April 12, 2012 - link

<i>"The 910 will ship with a software tool that allows you to get even more performance out of the drive (up to 1.5GB/s write speed) by increasing the board's operating power to 28W from 25W."</i>

    I want to overclock my SSD, that's cool...or is that warm?
  • Casper42 - Friday, April 13, 2012 - link

    It has nothing to do with Overclocking.

    PCIe offers 2 different common power levels.
    25W (x4 and x8 slots) and 75W (x16 slots)
    If you put this card into a 25W slot, they want to make sure you are not going to overload the slot.
    However if you put it into a 75W slot, they are offering a way to increase the draw to 28W which is probably the max the card draws based on its design.

    Fusion IO cards do the same thing but also offer an aux power input that can connect to a Molex connector for additional power. I think they may be moving this to PEG power instead as Molex is less prevalent in major OEM Servers (HP/Dell/IBM).
  • Makaveli - Thursday, April 12, 2012 - link

Would you annoying children go back to playing WoW!

    Complaining about the price tag because you cannot afford it... This is for enterprise customers, not for you to run benchmarks to stroke your e-peen; get over it.

    For once I would like to come in here and read comments from professionals in the industry, and not complaints from children who won't buy it because it's not $1/GB.

  • daos - Thursday, April 12, 2012 - link

Thank you Makaveli! 42 SSD's? What a joke...
  • mckirkus - Thursday, April 12, 2012 - link

This device as well as a mid grade HDD should be included on some of your storage benchmarks to give a sense of how much better SSDs are than HDDs and to show how far consumer grade SSDs have to go. At some point IO will no longer be the bottleneck in PCMark type benchmarks.
  • Zstream - Thursday, April 12, 2012 - link

I would like to see this as well, but I would prefer higher-end stuff, like RAID 1/10/5 and virtual desktop performance using such components.
  • Southernsharky - Thursday, April 12, 2012 - link

I don't see why the price is considered good at all. A price should reflect actual costs and I can't imagine that switching to a PCI card really added anything to the price. If anything, I would think this should lower production costs, at least in the future. As for a premium for new tech.... nothing here is really new. We are talking about merging existing technology here. A PCI card is not expensive.
  • rscoot - Thursday, April 12, 2012 - link

    You realize that the physical cost of the components on the motherboard are only one input into the total cost of the product right? Enterprise products have much longer warranties, generally tend to have better support, and go through a much more thorough validation process compared to consumer grade hardware.

Literally, your argument amounts to: why does this 24-port managed Cisco switch cost 10x as much as five 5-port Linksys switches!?!?
  • WeaselITB - Thursday, April 12, 2012 - link

    <protest> But Linksys and Cisco are the same company! </protest>

    I kid, I kid ... just couldn't resist. :-)

  • dlitem - Thursday, April 12, 2012 - link

Just like you'd never pay more than ~2c for a gallon of Coke, since even that is probably more than the manufacturing price of the stuff?

    Value and price are different things. If this thing can replace a full rack of spinning disks powering some high-activity database, it could be considered cheap as dirt. Regardless of the manufacturing cost.
  • neotiger - Thursday, April 12, 2012 - link

    For a PCIe card I'd expect much more than 90K IOPS.

A comparable PCIe card, the Z-Drive R4, is much better at 250K-400K IOPS.

    As it stands right now, 90K IOPS is no better than a Vertex 4.
  • Makaveli - Thursday, April 12, 2012 - link

So would you rather put a couple of Vertex 4s in your server then?
  • euler007 - Thursday, April 12, 2012 - link

Maybe he doesn't like his job. It would be more subtle than spilling your coffee inside the server.
  • neotiger - Saturday, April 14, 2012 - link

    No. But I'd rather put in a Z-Drive R4.

Maybe you should learn to read the entire post.
  • ats - Saturday, April 14, 2012 - link

The Z-Drive has lower performance in the same test conditions.
  • Casper42 - Friday, April 13, 2012 - link

    1.28 TB Fusion IO does 150K IOPS and costs $25K retail.

    So 800GB and 90K at 1/5th the price isn't actually so bad.

    Enterprise drives also usually measure IOPS in a mixed workload environment. Not sure how that affects the numbers but your Vertex 4 number is most likely Read or Write but not mixed.
  • ats - Saturday, April 14, 2012 - link

No, a comparable Z-Drive R4 is at 48K write IOPS. Don't confuse best case write IOPS with FULL SPAN STEADY STATE RANDOM WRITE IOPS.
  • theeldest - Thursday, April 12, 2012 - link

    I'd like to know if those IOPS numbers are sustained over time or peak.

If you look at OCZ's drives (or anything with a SandForce controller), your IOPS pretty quickly fall far below what the drive is rated for, because garbage collection can't keep up.

If the drive doesn't drop as much during sustained operation, then it's a much better deal.

    Anand, when you finally get to test this drive can you do a sustained test? It's unrealistic for consumers but in the enterprise space there are more people that would actually benefit from seeing that comparison.

    (by sustained, I mean high random/sequential read & writes for at least double the time it takes to write to all blocks).

    I remember a benchmark where the Vertex2 EX dropped from 50k IOPS to 15k IOPS after 12 minutes.

    Consumer benchmarks != Enterprise benchmarks
  • papapapapapapapababy - Thursday, April 12, 2012 - link

silly question: my GA-H61M-S2V-B3 has slow SATA 3Gb/s and only 3 x PCI Express 2.0 x1 slots (hey, it was a gift)... can i plug something like this into the single -vga- PCI Express x16 slot? (using the on-board video of dat awesome G530) what about other PCI Express 2.0 x1 SSDs? are there any out there? what about a review of the Z-Drive R5, supertalent corestone? etc. THANKS.
  • ggathagan - Friday, April 13, 2012 - link

Cost of a new motherboard with two 6Gbps ports: $120-$140 (GA-Z68MA-D2H-B3 or GA-Z77MX-D3H)
    Cost of two Samsung 830 Series 256GB SSDs for a RAID0 array: $600

    Cost of a 400GB Intel 910: $1929

    Specs, Intel 910 (400GB) vs. Samsung SSD 830:
    Random 4KB Read (up to): 90K IOPS vs. 80K IOPS
    Random 4KB Write (up to): 38K IOPS vs. 36K IOPS
    Sequential Read (up to): 1000 MB/s vs. 520 MB/s
    Sequential Write (up to): 750 MB/s vs. 400 MB/s

    Figure on a RAID0 setup roughly doubling the 830's performance.
    I just saved you about $1200!
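    For what it's worth, the claimed savings check out (a trivial sketch using the prices quoted above; the motherboard figure assumes the midpoint of the quoted range):

    ```python
    motherboard = 130   # midpoint of the quoted $120-$140 range (assumption)
    two_830s = 600      # two Samsung 830 256GB drives, as quoted
    intel_910 = 1929    # 400GB Intel 910, as quoted

    savings = intel_910 - (motherboard + two_830s)
    print(savings)  # 1199 -- "about $1200"
    ```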
  • papapapapapapapababy - Friday, April 13, 2012 - link

Already have the motherboard, plus they cost twice as much or even more over here... so you didn't answer shit. also i already have the bandwidth, hate the sata BS limitations and legacy hdd form factor. and im not particularly interested in this monstrosity (or any kind of raid silliness), just a good PCIe 2.0 x1 ssd @ 88NV9145 silicon. any suggestions?
  • nexox - Thursday, April 12, 2012 - link

    Looks like people are awfully hung up on the 4KB IOPS specification, which just isn't that important for the enterprise market. I've found that the manufacturer specs on a drive are almost useless in predicting relative performance for a given application. Enterprise customers benchmark, with their application-specific data patterns, for each application. Lower-spec Intel drives frequently come out ahead (in my tests, anyway,) because Intel seems to design drives that perform well in the real world, not just on synthetic benchmarks (4KB IOPS, for instance.)

Obviously I can't tell from the specs here, and I haven't gotten a 910 to benchmark yet, but I would bet that the latency figures (magnitude and consistency), especially after the drive has run out of erased blocks, are far nicer than similarly-priced alternatives. Most consumer drives (and low-end enterprise drives based on related controllers) suffer from (relatively) large pauses and ugly interactions between write activity and read latency, which is quite a turn-off for many enterprise users.

    Compared to the FusionIO MLC devices, these look cheap enough to be almost disposable, and they shouldn't have any irritating user-space processes hogging CPU like the FusionIOs do.

    Use of the standard LSI controller is also quite nice, because when you've got a system that works, adding kernel modules and / or binary drivers can really be a huge pain, and incur lots of extra testing.

    And no, I don't work for Intel, and I'm not an Intel fanboy - off the top of my head I can't think of a single piece of Intel hardware that I own personally, though I'm sure I've got an old Pentium 4 or something lurking in my junk pile.
  • Makaveli - Thursday, April 12, 2012 - link

    Your experience and testing mirrors something a co-worker of mine told me.

In the testing he has done, the Intel drives usually do far better regardless of the benchmark numbers SandForce likes to throw around, which easily sway consumers who look at bar graphs all day long.
  • blanarahul - Saturday, April 14, 2012 - link

Honestly, i absolutely hate the idea of a RAID-on-card solution. I wish Intel would use the P320h controller, make its own firmware, validate it, and create a super duper cool SSD.

    But i also want OCZ to use their Kilimanjaro platform for making consumer ssds like Revodrive or Vertex. I would absolutely love to have a PCIe-to-NAND ssd in my computer.
  • RaptorHunter - Sunday, April 22, 2012 - link

    So the 400GB cost $1929 and only gets 1000MB/s ???

    Couldn't you just buy 4 128GB normal sata SSD drives and raid them together. That would give you almost 2000MB/s for only $600
  • x0rg - Tuesday, May 08, 2012 - link

910 is not bootable?? So sad.. The system hard drive with the OS on it is a bottleneck. OK, I'll wait...
  • Jon Severinsson - Saturday, July 21, 2012 - link

    Note that "not bootable" does not mean you can't use it for your system drive (/ on Linux, C:\ on Windows), only that you need to load the bootloader (and possible OS kernel) from somewhere else.

Configuring this on Windows is tricky (but possible), but on Linux putting the bootloader and OS kernel on a separate disk is a standard install-time option, and installing just the bootloader on a separate disk (or USB stick or whatever) is also trivial (though getting it to read the kernel from a disk not supported by BIOS/EFI is a bit tricky).

    That is either approx. 20 kiB (bootloader) or approx. 15 MiB (bootloader + OS kernel) you need to load from somewhere else *once* on boot. Not exactly a performance bottleneck...
  • tech01 - Thursday, March 19, 2015 - link

How is performance being calculated for the Intel 910, e.g. 2 GB/s on sequential read?
    As per the information above, the 800GB 910 has a PCIe 2.0 x8 interface, which can reach up to 4GB/s.
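    The 4GB/s figure is just the PCIe 2.0 link math: 5 GT/s per lane with 8b/10b encoding leaves 500 MB/s of usable bandwidth per lane per direction. A quick sketch:

    ```python
    # PCIe 2.0: 5 GT/s per lane; 8b/10b encoding carries 8 payload bits per 10 bits sent
    gt_per_s = 5.0
    usable_gbit_s = gt_per_s * 8 / 10        # 4 Gbit/s per lane
    mb_per_lane = usable_gbit_s * 1000 / 8   # 500 MB/s per lane per direction
    lanes = 8
    print(lanes * mb_per_lane)  # 4000.0 MB/s theoretical ceiling; the 910's ~2 GB/s
                                # is likely limited by its controllers/NAND, not the bus
    ```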
