
102 Comments

  • Shark321 - Monday, January 25, 2010 - link

    Kingston has released a new SSD series (V+) with the Samsung controller. I hope Anandtech will review it soon. Other sites are not reliable, as they test only sequential read/writes. Reply
  • Bobchang - Wednesday, January 20, 2010 - link

    Great article!
    It's awesome to have an SSD with new features, and I like the performance.
    But regarding your test, I don't get the same random read performance from IOMeter.

    Can you let me know what version of IOMeter and configuration you used for the result? I never get more than around 6000 IOPS.
    Reply
  • AnnonymousCoward - Wednesday, January 13, 2010 - link

    Anand,

    Your SSD benchmarking strategy has a big problem: there are zero real-world-applicable comparison data. IOPS and PCMark are stupid. For video cards do you look at IOPS or FLOPS, or do you look at what matters in the real world: framerate?

    As I said in my post here (http://tinyurl.com/yljqxjg), you need to simply measure time. I think this list is an excellent starting point for what to measure to compare hard drives:

    1. Boot time
    2. Time to launch applications
    _a) Firefox
    _b) Google Earth
    _c) Photoshop
    3. Time to open huge files
    _a) .doc
    _b) .xls
    _c) .pdf
    _d) .psd
    4. Game framerates
    _a) minimum
    _b) average
    5. Time to copy files to & from the drive
    _a) 3000 200kB files
    _b) 200 4MB files
    _c) 1 2GB file
    6. Other application-specific tasks

    What your current strategy lacks is the element of "significance": is the performance difference between drives significant or insignificant? Does the SandForce cost twice as much as the others and launch applications just 0.2s faster? Let's say I currently don't own an SSD: I would sure like to know that an HDD takes 15s at some task, whereas the Vertex takes 7.1s, the Intel takes 7.0s, and the SF takes 6.9s! Then my purchase decision would be based entirely on price! The current benchmarks leave me in the dark on this.
    Reply
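    The commenter's list is essentially a stopwatch benchmark. As a rough sketch of what such a timing harness could look like (file counts scaled down, filenames hypothetical, not AnandTech's actual methodology):

    ```python
    import shutil
    import tempfile
    import time
    from pathlib import Path

    def time_copy(src_files, dst_dir):
        """Wall-clock seconds to copy a set of files: the kind of
        top-level, real-world number the list above asks for."""
        start = time.perf_counter()
        for f in src_files:
            shutil.copy2(f, dst_dir)
        return time.perf_counter() - start

    # Scaled-down stand-in for item 5a (many small files).
    with tempfile.TemporaryDirectory() as src, tempfile.TemporaryDirectory() as dst:
        files = []
        for i in range(30):                     # 3000 in the real workload
            p = Path(src) / f"file_{i:04d}.bin"
            p.write_bytes(b"\0" * 200 * 1024)   # 200kB each
            files.append(p)
        elapsed = time_copy(files, dst)
        print(f"copied {len(files)} files in {elapsed:.3f} s")
    ```

    Run against two different drives, the difference between the two elapsed times is directly the "significance" figure the commenter is asking for.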
  • rifleman2 - Thursday, January 14, 2010 - link

    I think the point made is a good one: an additional data point for the buying decision. Keep all the great benchmarking data in the article and just add a couple of time measurements, so people can get a feel for how the benchmark numbers translate to time spent waiting in the real world, which is what everyone really wants to know at the end of the day.

    Also, Anand, did you fill the drive to its full capacity with already-compressed data, and if not, what happens to performance and reliability when the drive is filled up with already-compressed data? From your report it doesn't appear to have enough spare flash capacity to handle a worst-case 1:1 ratio and still get decent performance or an acceptable endurance lifetime.
    Reply
  • AnnonymousCoward - Friday, January 15, 2010 - link

    Real world top-level data should be the primary focus and not just "an additional data point".

    This old article could not be a better example:
    http://tinyurl.com/yamfwmg">http://tinyurl.com/yamfwmg

    In IOPS, RAID0 was 20-38% faster! Then the loading *time* comparison had RAID0 giving equal and slightly worse performance! Anand concluded, "Bottom line: RAID-0 arrays will win you just about any benchmark, but they'll deliver virtually nothing more than that for real world desktop performance."
    Reply
  • AnnonymousCoward - Friday, January 15, 2010 - link

    Icing on the cake is this latest Vertex 2 drive, where IOPS don't equal bandwidth.

    It doesn't make sense not to measure time. Otherwise what you get is results that don't reflect real usage, and no grasp of how significant the differences are.
    Reply
  • jabberwolf - Friday, August 27, 2010 - link

    The better way to test, rather than hopping on your Mac and thinking that's the end-all be-all of the world, is to throw this drive into a server (VMware or XenServer) and create multiple VDI sessions.

    1- see how many you can boot up at the same time and run heavy loads.
    The boot ups will take the most IOPS.

    Sorry but IOPS do matter so very much in the business world.

    For stand-alone drives, your read/writes will be what you are looking for.
    Reply
  • Wwhat - Wednesday, January 06, 2010 - link

    This is all great, finally a company that realizes the current SSD's are too cheap and have too much capacity and that people have too much money.

    Oh wait..
    Reply
  • Wwhat - Wednesday, January 06, 2010 - link

    Double post was caused by AnandTech saying something had gone wrong, prompting me to retry. Reply
  • fertilizer - Tuesday, January 05, 2010 - link

    First of all, my compliments on a great article!
    It provided me with great insight!

    It seems to me that SSD manufacturers are spending a lot of time conforming to a world of HDD-based operating systems.
    Wouldn't it be time to get OSes to treat an SSD differently than an HDD?
    Reply
  • j718 - Tuesday, January 05, 2010 - link

    The OCZ Vertex EX is an SLC drive, not MLC as shown in the charts. Reply
  • j718 - Tuesday, January 05, 2010 - link

    Whoops, sorry, it's just the AnandTech Storage Bench charts that have the EX mislabeled.
    Reply
  • Donald99 - Monday, January 04, 2010 - link

    Any thoughts on potential energy use in a mobile environment, compared to Intel MLC? Still better energy efficiency than a traditional drive?
    Performance results seem uber.
    Reply
  • cliffa3 - Monday, January 04, 2010 - link

    Anand,

    Great article, will be an interesting technology to watch and see how mature it really is.

    Question on the timeline for the price drop: When you said 'we'll see 160GB down at $225', were you talking about the mid-year refresh or the end of year next-gen?
    Reply
  • MadMan007 - Monday, January 04, 2010 - link

    Is it just me, or is it inaccurate to mix GB and GiB when calculating overprovisioning at the bottom of page 5? By my reckoning the overprovisioning should be 6.6% (64GB/60GB, 128GB/120GB), not double that from using (64GB/55.9GiB, etc.). Reply
  • vol7ron - Monday, January 04, 2010 - link

    Anand, the right column of the table should be marked as GiB.

    The last paragraph should take that into consideration. Either the second column should first be converted into GiB, or if it already is (and hard to believe it is), then you could do direct division from there.

    The new table:
    Adv.(GB) Tot.(GB) Tot.(GiB) User(GiB)
    50 64 59.6 46.6
    100 128 119.2 93.1
    200 256 238.4 186.3
    400 512 476.8 372.5

    The new percentages should be:
    (59.6-46.6) / 59.6 x 100 = 21.8% decrease
    (119.2-93.1) / 119.2 x 100 = 21.9% decrease
    (238.4-186.3) / 238.4 x 100 = 21.9% decrease
    (476.8-372.5) / 476.8 x 100 = 21.9% decrease


    And the second table:
    Adv.(GB) Tot.(GB) Tot.(GiB) User(GiB)
    60 64 59.6 55.9
    120 128 119.2 111.8
    240 256 238.4 223.5
    480 512 476.8 447

    The new percentages should be:
    (59.6-55.9) / 59.6 x 100 = 6.21% decrease
    (119.2-111.8) / 119.2 x 100 = 6.21% decrease
    (238.4-223.5) / 238.4 x 100 = 6.25% decrease
    (476.8-447) / 476.8 x 100 = 6.25% decrease


    Note, I did not use significant figures, so all numbers are approximated, yet suitable - the theoretical value may be slightly different.


    vol7ron
    Reply
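    vol7ron's arithmetic can be checked mechanically. A quick sketch (the GiB figures come from the tables above; the function name is mine):

    ```python
    def overprovision_pct(raw_gb, user_gib):
        """Spare area as a % of raw capacity, converting raw decimal GB
        to binary GiB before comparing like with like."""
        raw_gib = raw_gb * 10**9 / 2**30   # e.g. 64 GB -> ~59.6 GiB
        return (raw_gib - user_gib) / raw_gib * 100

    # First table (~22% spare) and second table (~6% spare):
    for raw, user in [(64, 46.6), (128, 93.1), (256, 186.3), (512, 372.5)]:
        print(f"{raw} GB raw, {user} GiB usable -> {overprovision_pct(raw, user):.1f}% spare")
    for raw, user in [(64, 55.9), (128, 111.8), (256, 223.5), (512, 447.0)]:
        print(f"{raw} GB raw, {user} GiB usable -> {overprovision_pct(raw, user):.1f}% spare")
    ```

    The computed values agree with the comment's figures (21.8-21.9% and 6.2-6.3%), confirming that the spare-area percentage only comes out right once both columns are in the same unit.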
  • Guspaz - Sunday, January 03, 2010 - link

    Your pricing estimates for Intel's refreshes worry me, and I worry that you're out of touch with SSD pricing.

    Intel's G2 X25-M 160GB drive currently sells for $500-550, so claims that Intel will be selling 600GB drives at the same price point raise some eyebrows.
    Reply
  • kunedog - Monday, January 04, 2010 - link

    I couldn't help but roll my eyes a little when I saw that Anand was again making Intel SSD pricing predictions. Even the G1 X-25Ms skyrocketed above his predictions for the G2s:
    http://www.anandtech.com/storage/showdoc.aspx?i=36...

    And the G1s are still higher at Newegg (the G2s are still a LOT higher). Anand has never acknowledged the stratospheric X-25M G2 pricing and how dead wrong his predictions were. He's kept us updated on negative aspects like the firmware bugs, slow stock/availability of G2s, and lack of TRIM for G1s, but never pricing.
    Reply
  • vol7ron - Monday, January 04, 2010 - link

    I don't think Anand has ever tried to predict market price. He generally lets us in on lot prices, that is, what retailers buy the merchandise for in quantities of 1000. Generally, when he does release that information, he is close to dead on. He typically does not way in on numeric estimates of market prices, other than statements like "they should be cheaper than...[insert product here]... because material/manufacturing costs are lower." The link you gave looks less to be a prediction and more to be what the suggested retail price is; much like buying a car, although the suggested price is printed, it does not mean the actual market price will be equal to it.

    As for the G1/G2, as you recall, the G2 was very low on initial release (at least at Newegg) to the tune of ~$225. There have been several factors that have driven this price up (~$300). This is due to demand, but really it is a step demand. They are on Revision 5 of the G2, but the important thing is the fact that the G2 has been recalled twice. Where demand is generally steady in terms of price, abnormal release dates have pushed demand higher at different points (the graph looks more like a staircase, hence "step"). The price will again fall in the future.

    You should note that whenever things go "out of stock," the prices will go up, supply is low and demand is high, hence bargaining power from retailers, basic economics. Criticizing Anand does not accomplish anything as his facts were correct.

    Reply
  • vol7ron - Monday, January 04, 2010 - link

    Grammar/Syntax edit:
    "...lets us in on lot prices; that is, what retailers..."
    "He typically does not [weigh] in"


    Further note:
    If you look at the Arrandale article, there is a price supply list. Those prices are for lots of 1Ku (1000 units), which reaffirms the point I made earlier, before I even looked at the Arrandale article.

    As for Newegg, it's a unique site whose prices are close to, but generally higher than, the 1,000-unit price. The fact that the G2 price was ~$225 on initial release was probably the kind of promotional price point that often happens with new products.
    Reply
  • viewwin - Monday, January 04, 2010 - link

    Market forces are driving the price higher than MSRP (Manufacturer's Suggested Retail Price). Intel tried to have lower prices, but market demand pushed them higher. Prices were far lower on Newegg.com when the G2 first came out, but shot up to $600 at one point for the 160GB. I recall an article about it, but can't find it. Reply
  • kunedog - Tuesday, January 05, 2010 - link

    OK, so he's "out of touch" with actual market prices, instead of made-up retail prices (MSRP).


    "I recall an article about it, but can't find it."

    That's OK, I saw the whole thing play out firsthand. After Anand posted these articles . . .
    http://www.anandtech.com/storage/showdoc.aspx?i=36...
    http://www.anandtech.com/storage/showdoc.aspx?i=36...
    http://www.anandtech.com/storage/showdoc.aspx?i=36...

    . . . stressing the expected performance and *affordability* of Intel X-25M G2 drives (I quote: "The performance improved, sometimes heartily, but the pricing was the real story."), they quickly disappeared from Newegg at the Anand-predicted price (with Newegg suggesting the G1s as an alternative, for which I call foul because many or most people wouldn't know the difference). They stayed out of stock for weeks. A month later, he posts this on the weekend:
    http://www.anandtech.com/storage/showdoc.aspx?i=36...

    The very next day (a Monday), G2s were suddenly in stock again at a huge markup, and the prices continued to climb for a few days. They've slowly fallen since that week, but never to the Anand-predicted price, and that fact has never been acknowledged in any of the subsequent reviews.

    The pattern repeated with the Kingston 40GB drives:
    http://www.anandtech.com/storage/showdoc.aspx?i=36...

    The pricing prediction ($85 w/ rebate, $115 without) for it was apparently so important that it had to be right there in the summary (so you don't even have to click through to the full article to see it). I checked Newegg every day for a couple of weeks after it was posted (and somewhat less often since) but *never* saw it in stock for less than $130 (which is the current price). Further, that article was repeatedly updated and bumped for minor and predictable updates (like new bugs/firmware), but the pricing of the Kingston was never updated (even though the rebate has expired).

    I would argue that market prices matter *more* than MSRP, and deserve Anand's attention. The high prices themselves aren't a problem; clearly people are willing to pay that much, therefore the drives are "worth it." It's Anand's complete obliviousness to them (after previously stressing their importance and total awesomeness) that comes across as strange.
    Reply
  • chemist1 - Sunday, January 03, 2010 - link

    Hi Anand,

    When you wrote: "Current roadmaps put the next generation of Intel SSDs out in Q4 2010, although Intel tells me it will be a 'mid-year' refresh," didn't you mean "*there* [not 'it'] will be a mid-year refresh?" I.e., that the next generation is still not expected out until Q4, but that there will be a mid-year updating of the current generation? [By writing "it" will be a mid-year refresh, you communicate that Intel told you that the next gen will be released mid-year instead of Q4, which is not what I think you meant to say .... or is it?]
    Reply
  • vol7ron - Sunday, January 03, 2010 - link

    Good question.

    To clarify what he's asking:
    Is it a mid-year refresh and a 2010Q4 release?
    -or-
    Is the mid-year refresh going to take place instead of the Q4 release (i.e., Q4 is pushed back)?
    Reply
  • vol7ron - Saturday, January 02, 2010 - link

    I thought GIGABYTE released a motherboard with SATA 6Gb/s for AMD (GA-790FXTA-UD5). It might be nice to start testing it out and putting these SSDs to the test.

    Also, is it fair to take the enterprise level controller (SF-1500) and compare that to the consumer market product (X25-M)? Granted the SF-1500 has already stood well against the X25-E, but it's going to cost a heck of a lot more than the X25-M and the target market is the enterprise sector, anyhow.

    Regardless of what it compares to, I'm already saying that this controller is overpriced. They can justify it however they would like: better performance, high research and development costs, market barriers to entry, etc. The truth, though, is that they're overcharging. The logic is mostly sound, but the price is not. OCZ should sign a contract to buy the controller for a year, sell what they can, and negotiate a lower price, or else drop the controller. I'd like to see what that does to SF's profits.

    I also would like to say that not using DRAM can have bad effects down the line. Getting rid of it to justify a more expensive controller seems like an ignorant bargaining chip that SF is using to make more money. That's like saying, "I've upgraded your Ferrari with a newer, bigger engine, but it'll only take Regular [gas]." There's a high correlation between horsepower and Premium fuel; suffice it to say the product might be faster, but it could be better.
    Reply
  • vol7ron - Sunday, January 03, 2010 - link

    Given time to think about this:

    Maybe it is fair to compare the SF-1500 to the X25-M, since they're both MLCs. However, if the SF-1500 is still supposed to be the enterprise version, the two products are not price equivalent.

    Regardless, I do like to see the comparison. I just don't like to see the criticism when one is deemed an enterprise version and the other is still targeted for the home consumer/enthusiast.
    Reply
  • Capt - Saturday, January 02, 2010 - link

    ...it would be nice to have a shootout between the test field (Vertex 2, X25-M, ...) and a pair of drives in a RAID0/stripe configuration, especially comparing equal total sizes and different platforms (Intel/AMD chipsets, hardware controllers). With the new about-to-be-released Intel RST drivers, SSD stripe performance got boosted quite a bit, and although I guess there won't be much of an improvement in the 4K area, reading/writing larger blocks and sequential does improve by a massive amount. As a pair of two 80GB X25-Ms costs only 10% more than a single 160GB drive, this scenario is very tempting... Reply
  • vol7ron - Saturday, January 02, 2010 - link

    I also have been trying to get the reviewers to show more SSD RAID configurations. Not just because the price difference is semi-negligible, but because SSDs are supposed to be more error-free, and thus a more suitable technology for RAID. After all, isn't the exponential error potential the reason why RAID-0 was frowned upon?

    On the downside, I think there have been problems recently with the Intel Matrix Storage Manager, which might be one reason why the topic has been delayed. Regardless, it would be nice if this topic was re-addressed, if only to remind us readers that it is still in your thoughts :)
    Reply
  • Howard - Friday, January 01, 2010 - link

    Did you REALLY mean 90 millifarads (huge) or 90 uF, which is much more reasonable? Reply
  • korbendallas - Saturday, January 02, 2010 - link

    Yep, it's a 0.09F 5.5V supercapacitor.

    http://www.cap-xx.com/images/HZ202HiRes.jpg
    Reply
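    For scale, the energy such a part stores at full charge is easy to estimate from the stated ratings; whether ~1.4 J is enough to flush the controller's in-flight data on power loss depends on design details the article doesn't give.

    ```python
    # Back-of-the-envelope energy check for the supercapacitor: E = (1/2) * C * V^2
    C = 0.09   # farads (0.09 F = 90 mF, per the comment above)
    V = 5.5    # volts (rated voltage of the CAP-XX part)
    energy_joules = 0.5 * C * V ** 2
    print(f"{energy_joules:.2f} J stored at full charge")   # ~1.36 J
    ```

    By contrast, a 90 uF capacitor at the same voltage would hold only about 1.4 mJ, which is why the millifarad figure is the plausible one here.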
  • iwodo - Friday, January 01, 2010 - link

    If all things are equal, it just shows that current SSD performance isn't really limited by the flash itself but by the controller.

    So maybe with a die shrink we could get even more random R/W performance?
    And I suspect these SSDs aren't even using ONFI 2.1 chips either, so 600MB/s sequential read is very feasible. Except SATA 3.0 is holding it all up.

    How far are we from using PCI Express-based SSDs? I am sure the booting problem could be easily solved with UEFI.
    Reply
  • ProDigit - Friday, January 01, 2010 - link

    One of the factors would be whether this drive has a processor that does real-time compression of files on the SSD; that would mean it uses more power in notebooks.
    Sure, its performance is top, as is how long it lasts, but how much power does it use?

    If it is still close to an HD it might be an interesting drive. But if it is more, it'd be interesting to see how much more!
    I'm not interested in equipping a netbook or notebook/laptop with an SSD that uses more than 5W TDP.
    Reply
  • chizow - Friday, January 01, 2010 - link

    I've always noticed the many similarities between SSD controller technology and RAID technology with the multiple channel modules determining reads/write speeds along with write differences between MLC and SLC. The differences in SandForce's controller seems to take this analogy a step further with what is essentially RAID 5 compared to previous MLC SSDs.

    It seems like these drives use a lot of controller/processor power for redundancy/error-checking code, which is very similar to a RAID 5 array. This allows them to do away with DRAM and gives them the flexibility to use cheaper NAND flash, but at the expense of additional flash capacity to store the parity/ECC data. I guess that raises the question: is 64MB of DRAM plus the difference in NAND quality more expensive than 30% more NAND flash? Right now I'd say probably not until cheaper NAND becomes available, but if so it may make their technology more viable for widespread desktop adoption when that happens.

    Last thing I'll say is I think it's a bit scary how much impact Anand's SSD articles have on this emerging market. He's like the Paul Muad'Dib of SSDs and is able to kill a controller maker with a single word, lol. Seriously, after he exposed the stuttering and random read/write problems on JMicron controllers back when OCZ first started using them, the mere mention of their name combined with SSDs has been taboo. OCZ has clearly recovered since then, as their Vertex drives have been highly regarded. I expect SandForce-based controllers to be all the buzz going forward, largely because of this article.
    Reply
  • pong - Friday, January 01, 2010 - link

    It seems to me that Anand may be misunderstanding the reason for the impressive write amplification. The example with the Windows Vista install + Office 2007 install states that 25GB is written to the disk, but only 11GB is written to flash. I don't believe this implies compression. It just means that a lot of the data written to disk is short-lived, because it lives in temporary files which are deleted soon after, or because the data is overwritten with more recent information. The 11GB is what ends up being on the disk after installation, whether it is an SSD or a normal hard drive.

    If the controller has significantly more RAM than other SSD controllers, it doesn't have to commit short-lived changes to flash as often. The controller may also have logic that enables it to detect hotspots, i.e. areas of the logical disk that are written to often, to improve the efficiency of its caching scheme.

    This sort of stuff could probably be implemented mostly in an OS, except the OS can't guarantee that the stuff in the cache will make it to the disk if the power is suddenly cut. The SSD controller can make this guarantee if it can store enough energy - say in a large capacitor - to commit everything it has cached to flash when power is removed. Reply
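    The caching theory above can be sketched as a toy model: buffer logical writes in RAM so that short-lived and overwritten data never reaches flash. This is purely illustrative (class and method names are mine), not SandForce's actual algorithm.

    ```python
    class CoalescingCache:
        """Toy write cache: overwrites coalesce in RAM, deletions cancel
        pending writes, and only surviving blocks are committed to flash."""

        def __init__(self):
            self.pending = {}       # logical block address -> latest data
            self.flash_writes = 0   # blocks actually committed to flash

        def write(self, lba, data):
            self.pending[lba] = data        # repeated writes coalesce

        def delete(self, lba):
            self.pending.pop(lba, None)     # temp data deleted before commit

        def flush(self):                    # e.g. cache full, or power loss
            self.flash_writes += len(self.pending)
            self.pending.clear()

    cache = CoalescingCache()
    host_writes = 0
    for lba in range(100):                  # 100 blocks of temporary files
        cache.write(lba, b"temp"); host_writes += 1
    for lba in range(60):                   # 60 of them deleted before a flush
        cache.delete(lba)
    for i in range(20):                     # 20 writes hammering 10 hot blocks
        cache.write(100 + i % 10, b"hot"); host_writes += 1
    cache.flush()

    print(host_writes, cache.flash_writes)  # 120 host writes, 50 flash writes
    # Toy "write amplification": 50/120, i.e. well below 1.0 without compression
    ```

    The point of the sketch is that amplification below 1.0 does not require compression at all; discarding short-lived data before it hits flash is sufficient.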
  • shawkie - Friday, January 01, 2010 - link

    Unless I misread it, the article seems to be claiming that the device actually has no cache at all. Reply
  • bji - Friday, January 01, 2010 - link

    I think the article said that the SSD has no RAM external to the controller chip, but that the controller chip itself likely has some number of megabytes of RAM, much of which is likely used for cache. It's not clear, but it's very, very hard to believe that the device could work without any kind of internal buffering; more likely this device just does it with less DRAM than other SSDs (i.e., the small amount of DRAM built into the controller chip versus a separate, external tens-of-megabytes DRAM chip).
    Reply
  • gixxer - Friday, January 01, 2010 - link

    I thought the vertex supported Trim thru windows 7, yet in the article Anand says this:
    "With the original Vertex all you got was a command line wiper tool to manually TRIM the drive. While Vertex 2 Pro supports Windows 7 TRIM, you also get a nifty little toolbox crafted by SandForce and OCZ:"

    Does the Vertex drive support windows 7 trim or do you still have to use the manual tool?
    Reply
  • MrHorizontal - Friday, January 01, 2010 - link

    Very interesting controller, though they seem to have missed a couple of tricks...

    First, why is an 'enterprise' controller like this not using SAS, which is at 6Gbps right now and would show what effect a faster-than-3Gbps interface has on SSDs? And why, when SATA 6Gbps motherboards are shipping now, will these SandForce drives still be using 3Gbps SATA when they're released in 2010?

    Second, the 'RAID' features of this drive seem to be like RAID5, distributing parity across the spare area, which is itself distributed across the drive. But all controllers have multiple channels, so why not use a RAID4-style layout (the one where a dedicated device holds parity data, unlike RAID5's distributed parity), with 1 or 2 SLC NAND flash chips holding the more important data and really cheap MLC NAND holding the actual data in a redundant manner?
    Reply
  • Holly - Friday, January 01, 2010 - link

    Hmm, I thought MLC/SLC was more a matter of the SSD controller than the memory chip itself? Could anybody throw a bit of light on this, please? Reply
  • bji - Friday, January 01, 2010 - link

    MLC and SLC are two different types of flash chips. You can find out more at:

    http://en.wikipedia.org/wiki/MLC_flash
    http://en.wikipedia.org/wiki/Single-level_cell
    Reply
  • Holly - Friday, January 01, 2010 - link

    well,

    according to
    http://www.anandtech.com/storage/showdoc.aspx?i=34...

    they are not necessarily the same chips... the transistors are very much the same, and it's more or less a matter of how you interpret the voltages

    quote:
    Intel actually uses the same transistors for its SLC and MLC flash, the difference is how you read/write the two.
    Reply
  • shawkie - Friday, January 01, 2010 - link

    Call me cynical, but I'd be very suspicious of benchmark results from this controller. How can you be sure that the write amplification during the benchmark resembles that during real-world use? If you write completely random data to the disk, then surely it's impossible to achieve a write amplification of less than 1.0? I would have thought that home users are mostly storing compressed images, audio and video, which must be pretty close to random. I'd also be interested to know if the deduplication/compression is helping them increase the effective reserved space. That would go a long way toward masking read-modify-write latency issues, but again, what happens if the data on the disk can't be deduplicated/compressed? Reply
  • Swivelguy2 - Friday, January 01, 2010 - link

    On the contrary - if you write random data, some (probably lots) of that data will be duplicated on successive writes simply by random chance.

    When you write already-compressed data, an algorithm has already looked at that data and processed it in a way that makes sure there's very little duplication of data.
    Reply
  • Holly - Friday, January 01, 2010 - link

    It's always a matter of the compression algorithm used. There are algorithms that are able to compress a whole AVI movie (= already compressed) to a few megabytes. The problem with these algorithms is that they are so demanding it takes days, even for a neural network, to compress and decompress. We had one "very simple" compression algorithm in graph theory classes... honestly, I got completely lost after the first paragraph (out of like 30 pages).

    So depending on the algorithms used, you can compress already-compressed data. You can take your bitmap, run it through Run Length Encoding, then run it through Huffman encoding, and finish with some dictionary-based encoding... In most cases you'll compress your data a bit more every time.

    There is no way to tell how this new technology handles its task in the end. Not until it is run with petabytes of data.
    Reply
  • bji - Friday, January 01, 2010 - link

    Please don't use an authoritative tone when you actually don't know much about the subject. You are likely to confuse readers who believe that what you write is factual.

    The compression of movies that you were talking about is a lossy compression and would never, ever be suitable in any way for compressing data internally within an SSD.

    Run Length Encoding requires knowledge of the internal structure of the data being stored, and an SSD is an agnostic device that knows nothing about the data itself, so that's out.

    Huffman encoding (or derivatives thereof) is universally used in pretty much every compression algorithm, so it's pretty much a given that this is a component of whatever compression SandForce is using. Also, dictionary-based encoding is once again only relevant when you are dealing with data of a generally restricted form, not data you know nothing about, so it's out; and even if it were used, it would be applied before Huffman encoding, not after it as you suggested.

    I think your basic point is that many different individual compression techniques can be combined (typically by being applied successively); but that's already very much de rigueur in compression, with every modern compression algorithm I am familiar with combining several techniques to produce whatever balance of speed and effective compression ratio is desired. And every compression algorithm has certain types of data that it works better on, and certain types that it works worse on, than other algorithms.

    I am skeptical about SandForce's technology; if it relies on compression then it is likely to perform quite poorly in certain circumstances (as others have pointed out); it reminds me of "web accelerator" snake oil technology that advertised ridiculous speeds out of 56K modems, and which only worked for uncompressed data, and even then, not very well.

    Furthermore, this tradeoff of on-board DRAM for extra spare flash seems particularly retarded. Why would you design algorithms that do away with cheap DRAM in favor of expensive flash? You want to use as little flash as possible, because that's the expensive part of an SSD; the DRAM cache is a miniscule part of the total cost of the SSD, so who cares about optimizing that away?
    Reply
  • Holly - Friday, January 01, 2010 - link

    Well, I know quite a bit about the subject, but if you feel offended in any way, I am sorry.

    I think we just got into a bit of a misunderstanding.

    What I wrote was more or less a series of examples of where you could go and compress some already-compressed data.

    It's quite common knowledge that you won't be able to losslessly compress a well-made AVI movie with normally used lossless compression software like ZIP or RAR. But that is not even the slightest proof that there isn't some kind of algorithm that can compress this data to a much smaller volume.

    To prove my concept I took the example of a bitmap (uncompressed) and then used various lossless compression algorithms. In most cases, each time I applied another algorithm I would get more compressed data (well, maybe except RLE, which could end up with a longer result than the original file).

    I was not assuming any specific "front end" algorithms on this controller, because honestly all talk about how (or whether) it compresses the data is mere speculation. So I went back to basics to keep the idea as simple as possible.

    The whole point I was trying to make is that there is no way to tell whether it saves data traffic to the NANDs when you save your file XY on this device, simply because there is no knowledge of what kind of algorithm is used. We can just guess by trying to compress the file with common algorithms (be it lossless or not) and then checking whether the controller saves the NANDs some work. Of course, the algorithms used on the controller must be lossless and must be stable. But that's about all we can say at this point.

    Sorry if I caused some kind of confusion.

    What _seems_ to me is that basically there is this difference between X25-M and Vertex 2 Pro logic (taking the 25 vs 11 gigs example used in the article):
    System -> 25GB -> X25-M controller -> writes/overwrites -> 25 GB -> NAND flash (but due to overwrites, deletes etc. there is 11GB of data)

    compared to Vertex 2 Pro:
    System -> 25GB -> SF-1500 controller -> controller logic -> 11 GB -> NAND flash (only 11GB actually written in NAND due to smart controller logic)
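    [Editor's note] The arithmetic of those two flows is easy to sketch. The numbers come from the article's Windows 7 + Office example; this is pure illustration, not SandForce's actual bookkeeping:

```python
# Numbers from the article's 25GB -> 11GB example; purely illustrative.
host_writes_gb = 25.0   # what the OS pushes over SATA in both flows
nand_writes_gb = 11.0   # what SandForce claims actually lands in flash

write_amplification = nand_writes_gb / host_writes_gb
print(f"effective write amplification: {write_amplification:.2f}x")  # 0.44x

# Fewer program/erase cycles consumed per host write means the same flash
# lasts roughly 25/11 ~ 2.3x longer under this workload, all else equal.
print(f"endurance multiplier vs. 1.0x WA: {1 / write_amplification:.1f}x")
```

    This is why the article frames DuraWrite as an endurance play rather than a capacity play: the reported drive size never changes.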

    Reply
  • sebijisi - Wednesday, January 06, 2010 - link

    [quote]
    It's quite common knowledge that you won't be able to losslessly compress a well-made AVI movie with commonly used lossless compression software like ZIP or RAR. But that is not even the slightest proof that there isn't some kind of algorithm that can compress this data to a much smaller volume.
    [/quote]

    Well, actually there is. The entropy of the original file bounds the minimum possible size of the compressed file. It's the same reason you compress first before encrypting something: as the goal of encryption is to generate maximum entropy, encrypted data cannot be compressed further. Not even with some advanced but not-yet-known algorithm.
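    [Editor's note] This point can be made concrete with an order-0 (byte-histogram) entropy estimate. It is only a crude proxy — repetition pushes the true entropy rate well below the byte-histogram figure — but it shows why random or encrypted-looking data is a brick wall for any lossless compressor:

```python
import math
import os
import zlib
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Order-0 Shannon entropy of the byte histogram."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

samples = {
    "repetitive text": b"the quick brown fox jumps over the lazy dog " * 1000,
    "random bytes (stand-in for encrypted data)": os.urandom(44000),
}
for name, data in samples.items():
    h = entropy_bits_per_byte(data)
    packed = len(zlib.compress(data, level=9))
    print(f"{name}: {h:.2f} bits/byte, {len(data)} -> {packed} bytes")
```

    Note that zlib beats the order-0 bound on the text because LZ matching exploits the repetition; for the random sample, no algorithm, known or unknown, can do better than break even on average.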
    Reply
  • shawkie - Friday, January 01, 2010 - link

    As long as the data is truly random (i.e. there is no correlation between different bytes) then it cannot be compressed. If you have N bits of data then you have 2^N different values. It is impossible to map all of these different values to less than N bits. If you generate this value randomly it's possible you might produce something that can be easily compressed (such as all zeros or all ones), but if you do it enough times you will generate every possible value an equal number of times, so on average it will take up at least N bits. See
    http://www.faqs.org/faqs/compression-faq/part1/sec...
    http://en.wikipedia.org/wiki/Lossless_data_compres...

    As you note, it is possible to identify already-compressed data and avoid trying to recompress it but this still means you get a write amplification of slightly more than 1.0 for such data.
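    [Editor's note] The counting argument above can be checked exhaustively for a tiny N — there simply aren't enough shorter bit strings to hold every input (a sketch of the pigeonhole principle, unrelated to any real compressor):

```python
# Pigeonhole count for N-bit inputs: there are 2^N of them, but only
# 2^N - 1 bit strings that are strictly shorter, so no lossless scheme
# can shrink every input.
N = 16
inputs = 2 ** N                                  # all N-bit values
shorter_outputs = sum(2 ** k for k in range(N))  # strings of length 0..N-1
print(f"{inputs} inputs vs {shorter_outputs} strictly shorter outputs")
```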
    Reply
  • Anand Lal Shimpi - Friday, January 01, 2010 - link

    Correct. Highly random and highly compressed data will not work well with SandForce's current algorithm. Less than 25% of the writes you'll see on a typical desktop machine are random writes, and even then they aren't random over 100% of the LBA space. I'm not sure how well the technology works for highly random server workloads (SF claims it's great), but for the desktop user it appears to be perfect.

    Take care,
    Anand
    Reply
  • shawkie - Friday, January 01, 2010 - link

    Thinking about this further, I've come to the conclusion that files must be divided into small blocks that are compressed independently: firstly because the disk doesn't know about files (only sectors), and secondly because it's the only way you could modify a small part of a compressed file quickly. I don't think 512 bytes would be big enough to achieve respectable compression ratios, so I think 4KB is more likely. This might explain why Seagate is pushing to make 4KB the smallest addressable unit for storage devices. So they take each 4KB block, compress it, and write it to the next available space in flash. If they use 64-bit pointers to store the location of each 4KB block, they could easily address the entire space with single-bit granularity. Of course, every overwrite will result in a bit of irregularly sized free space. They could then just wait for a bit of compressed data that happens to fit perfectly, or implement some kind of free-space consolidation, or a combination. I'm starting to come around to the idea. Reply
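    [Editor's note] Here is a toy model of the scheme described above — independent 4KB-block compression with a pointer table. To be clear, this is pure speculation rendered as Python; nobody outside SandForce knows how the controller actually lays data out:

```python
import zlib

BLOCK = 4096  # logical block size, per the guess above

class ToyCompressedStore:
    """Toy model: each 4 KiB logical block is deflated independently and
    appended to a flat byte area, with a pointer table mapping logical
    block number -> (offset, length) at single-byte granularity."""

    def __init__(self):
        self.area = bytearray()   # stands in for the flash
        self.table = {}           # logical block number -> (offset, length)

    def write_block(self, lbn: int, data: bytes):
        assert len(data) == BLOCK
        packed = zlib.compress(data, 1)  # fast level; firmware would be leaner
        self.table[lbn] = (len(self.area), len(packed))
        self.area.extend(packed)  # an overwrite leaves the old copy as dead space

    def read_block(self, lbn: int) -> bytes:
        off, length = self.table[lbn]
        return zlib.decompress(bytes(self.area[off:off + length]))

store = ToyCompressedStore()
store.write_block(0, b"A" * BLOCK)             # highly compressible
store.write_block(1, bytes(range(256)) * 16)   # repetitive, still compressible
print(len(store.area), "bytes of 'flash' hold", 2 * BLOCK, "logical bytes")
```

    Overwrites in this model append a fresh compressed copy and orphan the old bytes — exactly the irregular dead space the comment predicts some garbage collector would have to reclaim.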
  • shawkie - Friday, January 01, 2010 - link

    Apologies to Anand, I completely missed the page titled "SandForce's Achilles' Heel". I do think there are some scenarios that still need testing though. What happens when a small modification has to be made to a large file that the drive has decided to compress? Not an easy thing to benchmark but something I can imagine might apply when editing uncompressed audio files or some video files. The other question is what happens when the disk is made dirty by overwriting several times using a random write pattern and random data. What is the sequential write speed like after that? Reply
  • lesherm - Friday, January 01, 2010 - link

    with a Seinfeld reference. Reply
  • LTG - Friday, January 01, 2010 - link

    Definitely the only one with a Seinfeld and a Metallica and a StarWars reference :).


    Sponge Worthy
    Enter the Sandforce
    Use the Sandforce
    Reply
  • GullLars - Thursday, December 31, 2009 - link

    It seems Anand has a problem with identifying the 4KB random performance of the drives.

    Intel's X25-M has time and time again been shown to deliver 120MB/s or more of 4KB random read bandwidth. The X25-E delivers in the area of 150MB/s random read and 200MB/s random write at 4KB packet sizes for queue depths of 10 and above.

    I do not know if the problem is due to testing not being done in AHCI/RAID mode, or a queue depth lower than the number of internal flash channels, but these numbers are purely WRONG and unrepresentative. I probably shouldn't post while drunk :P but this upsets me enough to disregard that.

    Anandtech is IMO too good a site to post nonsensical data like this; please fix it ASAP. If you choose to censor my post after fixing it, please mail me notifying me of it in case I don't remember posting.
    Reply
  • Anand Lal Shimpi - Friday, January 01, 2010 - link

    My 4KB read/write tests are run with a queue depth of 3 to represent a desktop usage scenario. I can get much higher numbers out of the X25-M at higher queue depths but then these tests stop being useful for desktop/notebook users. I may add server-like iometer workloads in the future though.

    All of our testing is done in non-member RAID mode.

    Take care,
    Anand
    Reply
  • GullLars - Friday, January 01, 2010 - link

    Thank you for the response, but I still feel the need to point out that 4KB random numbers at queue depth 3 should be explicitly labeled as such, as this utilizes less than 1/3 of the flash channels in the X25-M. Here is a graph I made of the 4KB random read IOPS numbers of an X25-M by queue depth: http://www.diskusjon.no/index.php?act=attach&t...
    As shown in this graph, the performance scales well up to a queue depth of about 12, where the 10 internal channels get saturated with requests.

    A queue depth of 3 may be representative of an average light load running Windows, but during operations like launching programs, booting Windows, or certain operations within programs that read database listings, the momentary queue depths often spike to 16-64, and it is in these circumstances you really feel the IOPS performance of a drive. This is one of the reasons why the X25-M beats the competition in the application launch test in PCMark Vantage despite having the same IOPS performance at queue depths 1-4 and about the same sequential performance.

    The SandForce SF-1500 controller is rated for 30,000 4KB random IOPS, 120MB/s. To reach those read numbers with MLC flash you need at least 6 channels, with corresponding outstanding IOs to make use of them, and then you also need to take controller overhead into account. The SF-1500 controller has 16 channels, and the SF-1200 controller has 8 channels.
    To test the IOPS performance of a drive (raw numbers, not interpreted for a particular usage), outstanding IOs should be at least equal to the number of channels.
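    [Editor's note] A back-of-the-envelope model of why IOPS scales with queue depth until the channel count saturates. The latency figure is an illustrative assumption, not a measured X25-M number:

```python
# Assumptions (illustrative only): a fixed per-4KB-read flash latency,
# perfect parallelism across channels, no controller overhead.
CHANNELS = 10           # the X25-M's 10-way internal design
SERVICE_TIME_US = 250   # assumed latency per 4KB read; not a measured figure

def model_iops(queue_depth: int) -> float:
    """Each outstanding IO keeps at most one channel busy."""
    busy_channels = min(queue_depth, CHANNELS)
    return busy_channels * 1_000_000 / SERVICE_TIME_US

for qd in (1, 3, 8, 12, 32):
    print(f"QD {qd:2d}: ~{model_iops(qd):,.0f} IOPS")
```

    With these assumptions the model saturates around 40K IOPS once QD reaches the channel count — at least the right shape for the graph linked above, though real drives also gain from NCQ reordering and per-channel pipelining, which this ignores.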
    Reply
  • Anand Lal Shimpi - Friday, January 01, 2010 - link

    I'm not sure I agree with you here:

    "A queue depth of 3 may be representative of an average light load running Windows, but during operations like launching programs, booting Windows, or certain operations within programs that read database listings, the momentary queue depths often spike to 16-64,"

    I did a lot of tests before arriving at the queue depth of 3 and found that even in the most ridiculous desktop usage scenarios we never saw anything in the double digits. It didn't matter whether you were launching programs in parallel or doing a lot of file copies while you were interacting with apps. Even our heavy storage bench test had an average queue depth below 4.

    Take care,
    Anand
    Reply
  • GullLars - Saturday, January 02, 2010 - link

    I'm not out to be difficult here, so I will let it be after this, but what I and a few others who have been benchmarking SSDs for about a year now have found using the Windows performance monitor indicates queue-depth spikes in the area of 16-64 outstanding IOs when launching apps, and during certain other interactions with apps that cause reading of many database entries.

    Copying files only creates 1 outstanding sequential IO queue, and does not contribute significantly to the momentary queue depth during short high loads.

    Scanning for viruses may contribute more to the queue depth, but I have not tested that so far.

    At a queue depth of 1-4 for pure reads there is little difference between JMicron, Indilinx, Samsung, Mtron, and Intel based SSDs, yet the difference seen in the PCMark Vantage application launch test and real-world tests of "launch scripts" (a script launching all programs installed on the computer simultaneously) indicates there is a notable gap. Some of this may be caused by different random write performance and sequential read, but queue depths above 4 in bursts help explain why the X25-M with its 10-channel design beats the competing 4-channel controllers in this type of workload even when sequential read is about the same.

    I also like to think Intel didn't make a complex 10-channel "M" drive optimized for 4KB random IOPS targeted at consumers only to win in benchmarks. If the queue depth truly never went above 3-5, even counting bursts, a ridiculous amount of effort and resources would have been wasted in making the X25-M, as a 4-channel drive would be a lot cheaper to develop and produce.


    Thanks for taking the time to reply to my posts, and I hope you know I value the SSD articles posted on this site. My only concern has been the queue depths used for performance rating, and a concern for the future is that the current setup does not forward TRIM to drives supporting it.
    Reply
  • semo - Saturday, January 02, 2010 - link

    Anand,

    After reading your very informative SSD articles, I still found something new from GullLars. I think it would be useful to include the queue depth when stating IOPS figures, as it gives more technical insight into the inner workings of the different SSD models and hints at performance for future uses.

    When dial-up was the most common way of connecting to the internet, most sites were small with static content. As connection and CPU speeds grew, so did the websites. Try going to a big, ugly site like CNET with a 7-8 year old PC on even the fastest internet connection. I'm sure all this supposedly untapped performance in SSDs will be quickly utilized in the future (probably because of inefficient software in most cases rather than for legitimate reasons). With virtualization slowly entering the consumer space (XP Mode, VM Unity and so on) as giant sandboxes and legacy platforms, surely disk queue lengths can only grow...
    Reply
  • shawkie - Saturday, January 02, 2010 - link

    Anand,

    I agree that its also helpful to know what the hardware can really do. It seems to me that longer queue depths are becoming important for high performance on all storage devices (even hard disks have NCQ and can be put in RAID arrays). At some point software manufacturers are going to wake up to that fact. This is just like the situation with multi-core CPUs. I'm fortunate because in my work I not only select the hardware platform but also develop the software to run on it.
    Reply
  • DominionSeraph - Monday, January 04, 2010 - link

    A jumble of numbers that don't apply to the scenario at hand is nothing but misleading.

    Savvio 15K.1 SAS: 416 IOPS
    1TB Caviar Black: 181 IOPS.

    Ooooh... the 15k SAS is waaaay faster!! Sure, in a file server access pattern at a queue depth of 64. Try benchmarking desktop use and you'll find the 7200RPM SATA is generally faster.
    Reply
  • BrightCandle - Friday, January 01, 2010 - link

    With which software and parameters did you achieve the results you are talking about? Everything I've thrown at my X25-M has shown results in the same ballpark as Anand's figures, so I'm interested to see how you got those numbers. Reply
  • GullLars - Friday, January 01, 2010 - link

    These numbers have been generated by several testing methods.
    *AS SSD benchmark shows 4KB random read and random write at queue depth (QD) 64, and the X25-M gets in the area of 120-160MB/s on read and 65-85MB/s on write.
    *Crystal Disk Mark 3.0 (beta) tests 4KB random at both QD1 and QD32. At QD32 4KB random read, the Intel X25-M gets 120-160MB/s, and at random write it gets 65-85MB/s here too.
    Here's a screenshot of CDM 2.2 and 3.0 of an X25-M 80GB on a 750SB with AHCI in fresh state: http://www.diskusjon.no/index.php?act=attach&t...
    *Testing with IOmeter, parameters: 2GB length, 30 sec runtime, 1 worker, 32 outstanding IOs (QD), 100% read, 100% random, 4KB blocks, burst length 1. On a forum I frequent, most users with an X25-M get between 30,000 and 40,000 IOPS with these parameters. With the same parameters but 100% writes, the norm is around 15K IOPS on a fresh drive, and a bit closer to 10K in used state with the OS running from the drive. The X25-E has been benched at 43K random write 4KB IOPS.

    Regarding the practical difference 4KB IOPS makes, the biggest difference can be seen in the PCMark Vantage test Application Launching. Such workloads involve reading a massive amount of small files and database listings, plus logging all the file accesses this creates. Prefetch and Superfetch may help storage units with less than a few thousand IOPS, but the X25-M in many cases actually gets worse launch times with these activated. Using a RAM disk for known targets of small random writes makes sense, and I've put my browser cache and temp files on a RAM disk even though I have an SSD.
    With the X25-M's insane IOPS performance, the random part of most workloads is done within a second and what you are left waiting for is the loading of larger files and the CPU. Attempting to lower the load time of small random reads during an application launch from, say, 0.5 sec by running a Superfetch script or read-caching with a RAM disk makes little sense.
    Reply
  • Zool - Friday, January 01, 2010 - link

    For an average user, 4KB random performance is the most useless result out there. If a user encounters that many random 4KB reads/writes, then he needs to change operating systems ASAP.
    And if something really needs to randomly read/write 4KB files, then your best bet is to cache it in RAM or make a RAM disk, I think.
    Reply
  • LTG - Thursday, December 31, 2009 - link

    This statement seems really dubious - Isn't it in fact the opposite?

    The majority of storage space is taken up by things that don't compress well: Music, Videos, Photos, Zip style archives...

    Everything else is smaller.


    Anand Says:
    ==========================
    That means compressed images, videos or file archives will most likely exhibit higher write amplification than SandForce’s claimed 0.5x. Presumably that’s not the majority of writes your SSD will see on a day to day basis, but it’s going to be some portion of it.
    Reply
  • DominionSeraph - Friday, January 01, 2010 - link

    That stuff just gets written once.
    Day-to-day operations sees a whole lot of transient data.
    Reply
  • Shining Arcanine - Thursday, December 31, 2009 - link

    As someone else suggested, I imagine that the SATA driver could take all of the data written/read to the drive and transparently implement the algorithms on the much more powerful CPU.

    Is there anything to stop people from reverse engineering the firmware to figure out exactly what the drive is doing in terms of compression and then externalizing it to the SATA driver, so other SSDs can benefit from it as well? i.e., are there any legal issues with this?
    Reply
  • Anand Lal Shimpi - Friday, January 01, 2010 - link

    Patents :) SandForce holds a few of them with regards to this technology.

    Obviously it's up to the courts to determine whether they are enforceable or not; SandForce believes they are. Other companies could license the technology though...

    Take care,
    Anand
    Reply
  • Holly - Friday, January 01, 2010 - link

    Well, you can patent an implementation and technology, but not the idea itself (at least that's what my boss was trying to explain to me). So, in case this idea proves worthy enough, other manufacturers will come up with their own MySuperStoringTechnology (c).

    Personally I think any improvement (even if it turns out to be a dead end) is worth it on a global scale, and this tech seems very interesting to me.

    I only have some worries about using cheaper NAND chips... Cheap USB flash drives tend to go nuts in about 6-12 months of usage (well, I do stress them quite a bit...). Pairing them with the best controller seems to me a bit like unbalancing things: definitely not for servers/enthusiasts (who want the best quality for good reasons), and still too expensive for ordinary people living on their paychecks.
    Reply
  • Holly - Friday, January 01, 2010 - link

    P.S. Happy New Year Reply
  • yacoub - Thursday, December 31, 2009 - link

    I don't know that I want lower-quality flash memory in my SSDs. I think I'd rather have both a better chip and high-quality memory. But you know corners will be cut somewhere to keep the prices affordable. Reply
  • frontliner - Thursday, December 31, 2009 - link

    Page 10 talks about Random Write in MB/s and you're talking IOPS:

    At 11K IOPS in my desktop 4KB random write test, the Vertex 2 Pro is 20% faster than Intel’s X25-M G2. Looking at it another way, the Vertex 2 Pro has 2.3x the 4KB random write performance of today’s OCZ Vertex Turbo.

    &

    Random read performance is quite good at 13K IOPS, but a little lower than Intel’s X25-M G2.
    Reply
  • Anand Lal Shimpi - Thursday, December 31, 2009 - link

    woops! you're right, I decided to go with the MB/s graphs at the last minute but wrote the text based on the IOPS results. Fixed! :)

    Take care,
    Anand
    Reply
  • Makaveli - Thursday, December 31, 2009 - link

    My guess is Intel will release another firmware update to increase the write speed on the G2 drives, as Q4 2010 is quite a long wait for a refresh. New firmware with increased write speed and a price drop should still keep them in the driving seat.

    Kudos to OCZ for the constant shove in the back of Intel, though.
    Reply
  • mikesown - Thursday, December 31, 2009 - link

    Hi Anand,

    Great article! On the subject of Intel's monopoly bullying, I was curious whether you have any information about Micron manufacturing their own C300 SSDs with (very nice, it seems) Marvell controllers (see http://www.micronblogs.com/category/ssd-concepts/). I know Micron and Intel manufacture NAND jointly through their IM Flash Technologies venture, so it seems a little strange that Micron would manufacture competing SSDs while in a partnership with Intel. Did Intel and Micron part ways for good?

    Thanks,
    Mike
    Reply
  • efficientD - Thursday, December 31, 2009 - link

    As an employee of Micron, I can say that Intel and Micron have not parted ways; rather, the agreement only covers the actual flash memory and not all of the other parts of an SSD (controller, etc.). We are still very much in cooperation on what was agreed upon in the first place. You will notice that the flash in the OCZ drive in this article is Micron's, not IM Flash's (the Intel/Micron joint venture). If you crack open an Intel drive, however, you will nearly exclusively find IM Flash chips along with Micron DRAM; the first generation didn't even have Micron DRAM. Hope this clarifies some things. Reply
  • Doormat - Thursday, December 31, 2009 - link

    I'm disappointed by the lack of SATA 6Gb/s support, but a lot of that is product timing (it's only now showing up in add-on chips, with chipset controllers due in late 2010/early 2011). You really wonder what the speeds would be on an unbridled SF-based drive. Reply
  • Jenoin - Thursday, December 31, 2009 - link

    "SandForce states that a full install of Windows 7 + Office 2007 results in 25GB of writes to the host, yet only 11GB of writes are passed on to the drive. In other words, 25GBs of files are written and available on the SSD, but only 11GB of flash is actually occupied. Clearly it’s not bit-for-bit data storage."
    "What SF appears to be doing is some form of real-time compression on data sent to the drive."
    Based on what they said (and what they didn't say), I have to disagree. It appears to me that they are comparing the requested write with the data already on the SSD and only writing the bits that need to be changed, giving write amplification of ~0.5. This might explain the high number of IOPS during your compressed-file write test: that test would then be a mix of sequential and random writes, giving performance numbers in between the two other tests. Could you verify the actual disk usage with Windows 7 and Office installed? If it indicates 11GB used, then it is using some kind of compression; but if it indicates the full size on disk, then it is doing something similar to what I detailed. I just find it interesting that SandForce never said things would take up less space (which would be a large selling point); they only said it would have to write about half as much, supporting my theory.
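    [Editor's note] A sketch of the arithmetic behind this alternative theory — comparing an incoming sector against what's stored and counting how much actually differs. This is a hypothetical illustration; note that real NAND pages cannot be partially rewritten in place (whole blocks must be erased first), which is one difficulty with this reading:

```python
def changed_bits(old: bytes, new: bytes) -> int:
    """Count the bits that differ between two equal-length buffers."""
    return sum(bin(a ^ b).count("1") for a, b in zip(old, new))

old_sector = bytes(512)                        # stored sector: all zeros
new_sector = bytes([0xFF] * 8) + bytes(504)    # only the first 8 bytes change

total_bits = len(old_sector) * 8
print(f"{changed_bits(old_sector, new_sector)} of {total_bits} bits differ")
# -> 64 of 4096 bits differ
```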
    Reply
  • Wwhat - Wednesday, January 06, 2010 - link

    You make a good point, and Anand seems to deliberately deflect thinking about it; now you must wonder why.
    Anyway, don't be disheartened: your point is good regardless of this support of 'magic' that Anand seems to prefer over an intellectual approach.
    Reply
  • Shining Arcanine - Thursday, December 31, 2009 - link

    As far as I can tell from Anand's description of the technology, this is being done transparently to the operating system, so while the operating system thinks that 25GB have been written, the SSD knows that it only wrote 11GB. Think of it as keeping two balance sheets: one that other people see, with nice figures, and one that you see, with the real figures. Sort of like what Enron did, except instead of showing better figures to everyone else when the actual figures are worse, you show worse figures to everyone else when the actual figures are better. Reply
  • Anand Lal Shimpi - Thursday, December 31, 2009 - link

    Data compression, deduplication, etc... are all apparently picked and used on the fly. SandForce says it's not any one algorithm but a combination of optimizations.

    Take care,
    Anand
    Reply
  • AbRASiON - Friday, January 01, 2010 - link

    What about data reliability? Recovering compressed data can normally be a bit of an issue - any thoughts? Reply
  • Jenoin - Thursday, December 31, 2009 - link

    Could you please post the actual disk capacity used for the Windows 7 and Office install?
    The "size" vs. "size on disk" of all the folders/files on the drive (listed by Windows in the file properties dialog) would be interesting, to see what level of compression there is.

    Thanks
    Reply
  • Anand Lal Shimpi - Thursday, December 31, 2009 - link

    Reported capacity does not change. You don't physically get more space with DuraWrite, you just avoid wasting flash erase cycles.

    The only way to see that 25GB of installs results in 11GB of writes is to query the controller or flash memory directly. To the end user, it looks like you just wrote 25GB of data to the drive.

    Take care,
    Anand
    Reply
  • notty22 - Thursday, December 31, 2009 - link


    It would be nice for the customer if OCZ did not produce multiple models with varying degrees of quality, whether it's the controller or memory, or a combination thereof.
    Go to Newegg, glance at OCZ 60GB SSDs, and you're greeted with this:

    OCZ Agility Series OCZSSD2-1AGT60G

    OCZ Core Series V2 OCZSSD2-2C60G

    OCZ Vertex Series OCZSSD2-1VTX60G

    OCZ Vertex OCZSSD2-1VTXA60G

    OCZ Vertex Turbo OCZSSD2-1VTXT60G

    OCZ Vertex EX OCZSSD2-1VTXEX60G

    OCZ Solid Series OCZSSD2-1SLD60G

    OCZ Summit OCZSSD2-1SUM60G

    OCZ Agility EX Series OCZSSD2-1AGTEX60G

    $219.00-$409.00, low to high, in the order I listed them.
    I can understand when some say they will wait until the manufacturers work out all the various bugs/negatives that must be inherent in all these model/name changes.
    Which model gets future technical upgrades/support?
    Reply
  • jpiszcz - Thursday, December 31, 2009 - link

    I agree with you on that one.

    What we need is an SSD that beats the X25-E, so far, there is none.

    BTW -- is anyone here running the X25-E on enterprise servers with > 100GB/day? If so, what kind of failure rates are you seeing?

    Reply
  • Lonyo - Thursday, December 31, 2009 - link

    I like the idea.
    Given the current state of the market, their product is well suited to typical end-user usage patterns.
    SSDs are just too expensive for mass storage, so traditional large-capacity mechanical drives make more sense for your film or TV or music collection (all of which are likely to be compressed), while all the non-compressed stuff goes on your SSD for fast access.

    It's good, sound thinking for a performance drive, although in the long run I'm not so sure the approach would always be particularly useful in a consumer-oriented drive.
    Reply
  • dagamer34 - Thursday, December 31, 2009 - link

    At least for now, consumer-oriented drives aren't where the money is. Until you get 160GB drives down to $100, most consumers will call SSDs too expensive for laptop use.

    The nice thing about desktops, though, is multiple drive bays. 80GB is all that most people need to install an OS, a few programs, and games. Media should be stored on a separate platter-based drive anyway (or even a centralized server).
    Reply
  • blowfish - Friday, January 01, 2010 - link

    80GB? You really need that much? I'm not sure how much space current games take up, but you'd hope that if they shared the same engine, you could have several games installed in significantly less space than the sum of their separate installs. On my XP machines, my OS plus programs partitions are all less than 10GB, so I reckon 40GB is the sweet spot for me and it would be nice to see fast drives of that capacity at a reasonable price. At least some laptop makers recognise the need for two drive slots. Using a single large SSD for everything, including data, seems like extravagant overkill. Reply
  • Gasaraki88 - Monday, January 04, 2010 - link

    Just as an FYI, Conan takes 30GB. That's one game. Most new games are around 6GB. WoW takes like 13GB. 80GB runs out real fast. Reply
  • DOOMHAMMADOOM - Friday, January 01, 2010 - link

    I wouldn't go below 160GB for an SSD. The games in my Steam folder alone add up to 170GB. Games are big these days. The thought of putting Windows and a few programs and games onto an 80GB drive is not something I would want to do. Reply
  • Swivelguy2 - Thursday, December 31, 2009 - link

    This is very interesting. Putting more processing power closer to the data is what has improved the performance of these SSDs over current offerings. That makes me wonder: what if we used the bigger, faster CPU on the other side of the SATA cable to similarly compress data before storing it on an X25-M? Could that possibly increase the effective capacity of the drive while addressing the X25-M's major shortcoming in sequential write speed? Also, compressing/decompressing on the CPU instead of in the drive sends less through SATA, relieving the effects of the 3Gb/s ceiling.

    Also, could doing processing on the data (on either end of SATA) add more latency to retrieving a single file? From the random r/w performance, apparently not, but would a simple HDTune show an increase in access time, or might it be apparent in the "seat of the pants" experience?
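    [Editor's note] The host-side version of this idea can be prototyped today in a few lines: anything written through gzip (or NTFS compression) is compressed by the host CPU before it ever crosses the SATA cable. A rough sketch; the payload and file path are made up:

```python
import gzip
import os
import tempfile

payload = b"log line: request served in 3 ms\n" * 100_000  # compressible data

fd, path = tempfile.mkstemp(suffix=".gz")
os.close(fd)
with gzip.open(path, "wb", compresslevel=1) as f:
    f.write(payload)  # the drive only ever sees the compressed stream

on_disk = os.path.getsize(path)
print(f"{len(payload):,} logical bytes -> {on_disk:,} bytes sent to the drive")
os.remove(path)
```

    The trade-off the thread is circling: this costs CPU time and adds software latency per access, whereas SandForce buries the work inside the drive, transparently to the OS.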

    Happy new year, everyone!
    Reply
  • jacobdrj - Friday, January 01, 2010 - link

    The race to the true 'Isolinear Chip' from Star Trek is afoot... Reply
  • Fox5 - Thursday, December 31, 2009 - link

    This really looks like something that should have been solved with smarter file systems, not smarter controllers, IMO (though some would disagree).

    Reiser4 does support gzip compression of the file system, though, and it's a big win for performance. I don't know if NTFS's compression is a win too; I know in the past it had a negative impact, but I don't see why it wouldn't perform better now that there is more CPU performance to spare.
    Reply
  • blagishnessosity - Thursday, December 31, 2009 - link

    I've wondered this myself. It would be an interesting experiment. There are http://en.wikipedia.org/wiki/Comparison...systems#... (NTFS, Btrfs, ZFS and Reiser4). In windows, I suppose this could be tested by just right clicking all your files and checking "compress" and then running your benchmarks as usual. In linux, this would be interesting to test with btrfs's SSD mode paired with a low-overhead io scheduler like noop or deadline.

    What interests me the most though is SSD performance on a http://en.wikipedia.org/wiki/Log-structured_file_s... (log-structured filesystem), as they theoretically turn all writes into sequential writes. In the Linux realm, there are several log-based filesystems (JFFS2, UBIFS, LogFS, NILFS2) though none seem to perform ideally in real-world usage. Hopefully that'll change in the future :-)
    Reply
  • blagishnessosity - Thursday, December 31, 2009 - link

    correction:
    There are several filesystems that support transparent compression (http://en.wikipedia.org/wiki/Comparison...systems#...): NTFS, Btrfs, ZFS and Reiser4.

    What interests me the most though is SSD performance on a log-structured filesystem (http://en.wikipedia.org/wiki/Log-structured_file_s...), as they theoretically turn all writes into sequential writes.

    (note to web admin: the comment WYSIWYG does not appear to work for me)
    Reply
  • themelon - Thursday, December 31, 2009 - link

    Note that ZFS now also has native DeDupe support as of build 128

    http://blogs.sun.com/bonwick/en_US/entry/zfs_dedup

    Reply
  • grover3606 - Saturday, November 13, 2010 - link

    Is the "used" performance measured with TRIM enabled? Reply
