
  • jordanclock - Friday, May 13, 2011 - link

    I like the idea of this article and it starts providing some extra data that was asked for in the original review: What about other SSDs? Could we get some more SSDs tested with SRT? I'm not expecting Vertex 3s, but some "older" SSDs like the F40 that would likely be replaced soon.
  • MrSpadge - Friday, May 13, 2011 - link

    Anand has now tested three mid- to high-performance SSDs, and I think this gives us a pretty good picture overall. If another one were tested, though, I'd want the 64GB Agility 3 and/or Solid 3. Well, a proper review of them would be very nice anyway ;)

  • Cow86 - Friday, May 13, 2011 - link

    Yeah, I was thinking the same thing... they supposedly perform a lot better, while still being reasonably affordable for the 60GB drives (~110 euros here). Maybe they'd be the ultimate cache at a reasonable price point? Either way I'd love a review of those as well; the Vertex 3 is staying rather expensive :(
  • jebo - Friday, May 13, 2011 - link

    I agree that the Agility 3 and Solid 3 look very promising.
  • therealnickdanger - Friday, May 13, 2011 - link

    Plus, SRT works with drives up to 64GB, so it seems worth it to test a drive of that size. I imagine there are many users (such as myself) with older Intel 80GB, Indilinx 64GB, or SandForce SF1200/1500 64GB drives that will soon be replaced with faster 6Gbps drives.

    I would imagine that you could use a drive of any size, especially the larger ~120GB SSDs that offer significant speed advantages over the 64GB models. SRT would limit the usable cache to 64GB, but it would still be interesting to see...

  • Boissez - Friday, May 13, 2011 - link

    Yup - here's another vote for that article. I have an 'old' 60GB Agility 2 boot drive and a Z68 board on its way. The question now is whether I keep it as a boot drive or take a performance hit and use it as cache. This article comes close to answering it, yet AnandTech's own SSD workload benchmarks (which are the most interesting IMO) don't include the F40's numbers.
  • GullLars - Tuesday, May 24, 2011 - link

    If you can fit your OS and core apps/games on your Agility 2, it's a no-brainer: keep it as you have it. If you have a lot of apps and games sitting on HDDs that you use often, you could consider trying it as cache.
    I have 3 computers with 32GB boot drives (1 laptop, an HTPC, and a donated SSD in my father's computer), and all of them get by fine with a little bit of management. You could consider using 30GB for caching and 30GB for OS + core apps, getting full SSD speed on the stuff you use the most, and having the cache help out on game loads and more seldom-used apps.
  • Mr Perfect - Friday, May 13, 2011 - link

    That's what I'm thinking too. Get yourself the largest, fastest cache you can and see what happens. :) Currently that's looking like a 64GB Agility 3 or Solid 3.
  • samsp99 - Friday, May 13, 2011 - link

    I was just asking myself this very question...
  • genji256 - Friday, May 13, 2011 - link

    I'm confused. If the 311 performed better on the second run ("Intel 311 - Run 1" in your table), shouldn't that mean that the cache was large enough to store the data for all the applications (since they all ran during the first run)? If so, why would there suddenly be data evicted from the cache between the second and third run ("Intel 311 - Run 1" and "Intel 311 - Run 2" respectively)?
  • chromatix - Friday, May 13, 2011 - link

    The improvement only shows that *some* of the applications remained in cache for the second run (on the smaller 311). There are any number of reasons why some but not all of them would remain resident - the caching algorithm is almost certainly not pure LRU.

    With the larger F40, more stuff remains in cache and so the performance improvement is greater. This is wonderful news for people (like me) who have an enormous Steam installation and are seriously running out of SSD space to put it on.
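    (For anyone curious what "pure LRU" means in practice: under a strict least-recently-used policy, whatever you touched longest ago is always the first thing evicted. Here's a minimal Python sketch - the app names and capacity are made up for illustration, and Intel hasn't published SRT's actual algorithm, so this is just the textbook baseline it's being contrasted with:)

```python
from collections import OrderedDict

class LRUCache:
    """Minimal pure-LRU cache: when capacity is exceeded, the least
    recently used entry is evicted first."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def access(self, key, value=None):
        if key in self.data:
            self.data.move_to_end(key)      # mark as most recently used
            return self.data[key]
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict least recently used
        return value

cache = LRUCache(2)
cache.access("app_a", 1)
cache.access("app_b", 2)
cache.access("app_c", 3)        # under pure LRU this evicts app_a
print("app_a" in cache.data)    # False
```

    The fact that some but not all apps stayed resident on the 311 suggests SRT is weighing something beyond recency (access counts, IO type, etc.).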
  • beginner99 - Friday, May 13, 2011 - link

    ... but not now. There are some gains compared to HDD-only, but once you factor in the cost, it's debatable compared to a standalone SSD. The speed boost doesn't seem that great and, worse, it's unpredictable.

    Can you comment on real-life experience?
    Current HDDs are pretty fast, or fast enough maybe 90% of the time. But what sucks is the stuttering that happens. Is this 100% prevented with caching? Probably only with the maximized option, which is risky. Hence you actually lose the most important benefit of the SSD: no stuttering.
  • DanNeely - Friday, May 13, 2011 - link

    Not sure I agree totally. While the ultimate SSD experience is a 120+GB drive that can store your OS and a large chunk of your apps, that's too expensive for anyone but fairly deep-pocketed enthusiasts. $100 SSDs are easier to justify from a cost perspective, but their limited capacity would traditionally require large amounts of micromanagement to get any effective use out of them. Smart Response gives a decent benefit without the micromanagement. To really take off, though, it needs $50 or $30 SSDs, not $100 ones.
  • Patrick Wolf - Friday, May 13, 2011 - link

    I think that's the main bullet point Anand is missing. Yes, an SSD plus a storage HDD that you manage manually is technically the fastest solution. However, the whole point of an SSD is increased productivity. In computing, seconds matter, and if you have to spend any time managing what's stored on your SSD and HDD, it's time you could be spending doing normal tasks.
  • GullLars - Tuesday, May 24, 2011 - link

    Actually, with Win7 it takes just a few hours after a clean install, the first time you do it, to sort stuff onto the SSD or HDD and tweak it so it's fairly optimal. If you have done it before, or have a good guide, it can easily be done in under 1 hour, and a scheduled run of CCleaner once a day or week keeps the temp crap from building up.
    I had a 64GB SSD RAID (2x32) from 2008 to 2010, and have had a 32GB SSD in my laptop since 2009. Space was never an issue. Last fall (2010) I went for 4x C300 64GB on an SB850 to get well over 1GB/s reads and max out my southbridge on IOPS. I still haven't used more than 80GB including all apps and games.
  • slyck - Saturday, May 14, 2011 - link

    Don't forget the added cost of the Z68 motherboard. Intel will make a killing both ways on this one: a more expensive mobo, and a quite high $/GB SSD.
  • GullLars - Tuesday, May 24, 2011 - link

    The high $/GB of the 311 isn't margins for Intel. SLC is twice as expensive as MLC in die costs alone, and then you have to factor in production scale, making it a bit more expensive. A rule of thumb is SLC costing about 2.5x MLC, and the 311 is not 2.5x the price of the X25-V 40GB. Intel's margins would be higher if you got an X25-V instead, which would make sense if you could use it as a read-only cache, but they didn't even allow that option...
  • cavalier695 - Friday, May 13, 2011 - link

    I wonder how the performance would be in a laptop, maybe using a drive like the 40GB Intel 310 Series SSD. The mSATA form factor would be very useful in laptops, where having a second 2.5" bay for a second drive is rarely possible. Would this work, and if so, is it possible AT might do some benchmarks or a review of it?
  • modnar58 - Friday, May 13, 2011 - link

    With regards to "I believe there's a real future with SSD caching, however the technology needs to go mainstream. It needs to be available on all chipsets"

    What about standalone controller cards that offer SSD caching, like HighPoint's $60 card?
  • evilspoons - Friday, May 13, 2011 - link

    I saw a review of that card on Tom's Hardware (I think)... the results were very mediocre compared to Intel's implementation.
  • Dribble - Friday, May 13, 2011 - link

    The reason being, they are so common they are cheap. In the UK you can't get that 20GB Intel SSD yet, and while you could get the 40GB Force (or Vertex 2), the cost difference is very small - like I did, you might as well get the 60GB Vertex 2. I suspect a few others will do the same thing.

    Hence it would seem a good test.

    The other thing I'd like to know is if you can buy a large-capacity SSD (e.g. a 240GB Vertex 3) and then partition it, giving 180GB to use as a boot drive and the other 60GB to cache your monster games disk.
  • 10Goto10 - Friday, May 13, 2011 - link

    Or a 6Gbps SATA Vertex 3 240GB; it should have enough IO, throughput, and space to handle both OS and cache.
  • Ryan Smith - Friday, May 13, 2011 - link

    Yes, you can partition SSDs. In fact we've thrown around the exact same idea.
  • 10Goto10 - Friday, May 13, 2011 - link

    According to the test hardware spec, the test was run on an H67, not a Z68. Copy-paste ;) ??
  • mschira - Friday, May 13, 2011 - link

    Would it be possible to get SSD caching on a ThinkPad T-series? They not only have a normal hard drive slot, they also have a mini-PCIe slot in which they explicitly support mounting a small SSD...
    Having SSD caching there would be great...
  • DanNeely - Friday, May 13, 2011 - link

    First page: the table lists an H67 mobo for the test, not a Z68.
  • jdavenport608 - Friday, May 13, 2011 - link

    Both your articles about Z68 mention that SSD caching can be used in front of a RAID array. Do you plan on doing an article that describes the benefits or value of using an SSD cache in conjunction with different RAID arrays?
  • weh69 - Friday, May 13, 2011 - link

    I'd love to see results for a 60GB SSD as cache in front of a dual 2TB WD Black RAID-1 array as the system drive.
  • hechacker1 - Friday, May 13, 2011 - link

    If that is possible, I would really like to see that too.

    Especially for a RAID-5 config. Since write performance is limited so much by parity writes, having a cache could really help (in maximized mode).
  • kepstin - Friday, May 13, 2011 - link

    This Intel SSD caching feature is a bit disappointing in one way.
    It's obvious that the caching work is done completely in software - in the Windows chipset driver, the same way as the motherboard RAID features work. There probably is some BIOS support required to boot off it though.
    There's no technical reason why the same technology couldn't be used on other chipsets - it's just that Intel's decided to artificially limit it to the Z68...
    I wouldn't be surprised if third-party tools and a generic Linux driver that can do the same thing on any chipset start showing up later this year.
  • hechacker1 - Friday, May 13, 2011 - link

    Agreed. I wonder if anybody can hack the RST 10.5 driver to make it work on any Intel chipset.

    I actually just modded my X58 BIOS to include the new Intel RAID option ROM. Guess what? The Acceleration Options show up in the RAID BIOS.

    Unfortunately, I don't have an SSD to see if it will work.

    I also tried an even newer option rom, and that Acceleration Option is now grayed out...

    Either way, this is just software it seems.

    AnandTech, any idea if the Z68 chipset has some hardware responsible for enabling caching?
  • mianmian - Friday, May 13, 2011 - link

    Writes to the caching drive are much more frequent than on a normal disk. MLC might not have enough endurance to survive long under this kind of load.
  • Shadowmaster625 - Friday, May 13, 2011 - link

    I would rather just manage two drives, and move whatever I'm not using over to the media drive. It's not that hard; it only takes a couple of clicks to move a folder from drive C to drive E. And people who are scared to death of performing those kinds of operations probably won't see a reason to justify an extra $100. They can just suffer with a single-drive solution, at least until the SSD cache is reduced to no more than a $20-$40 premium. I still think an integrated flash controller and an SSD DIMM is the way to go, and Intel is shooting themselves in the foot with this half*** solution. A 10-channel SSD controller built into the CPU would be so blazingly fast...
  • qwertymac93 - Friday, May 13, 2011 - link

    Hmm, price of the 40GB Corsair Force = $110
    Price of the 60GB Vertex 2 = $105
    ... Am I missing something here?
  • cactusdog - Friday, May 13, 2011 - link

    The features of this platform sounded great on paper, but in reality it's a bit of a letdown.

    For people who already have an SSD, this caching thing is useless.

    I'm guessing AMD/NVIDIA's next-gen cards will have QuickSync-like performance built into their GPUs, making Sandy Bridge/QuickSync not so attractive.

    SATA 6Gbps is nice, but anyone can get that speed on older boards with a PCIe SSD or by just RAIDing 2 SSDs.

    The only really attractive thing is unlocked CPUs for a reasonable price, but I'm gonna skip this platform........ by the end of the year it will look very average.
  • assafb - Friday, May 13, 2011 - link

    1) A more useful configuration would have been the SandForce 40GB on Enhanced mode, but it has only been tested as maximized?

    2) A power user would use an SSD as the boot drive naturally, SSD caching alone would be inferior, of course, but that is not all that there is to it with SSD caching for the power user. This is the question - whether the 160GB mainstream SSD buyer that has a mainstream 2TB HD would be better off with a 160GB boot drive and no caching, or with 40GB partitioned out of his 160GB SSD to cache the HD and the remaining 120GB partition for the boot drive and critical apps. This question is relevant as for many users like myself, OS+apps+games exceed 160GB, and even the unaffordable 256GB+ SSDs, so some apps and games still have to get to the mechanical HD.
  • Anand Lal Shimpi - Friday, May 13, 2011 - link

    1) Enhanced mode should perform similarly to maximized mode in read performance. Write performance should be lower, at least on the Hitachi drive.

    2) If you can fit your OS and apps on the 120GB partition, then the 120/40 setup is your best bet for tackling games in my opinion.

    Take care,
  • assafb - Saturday, May 14, 2011 - link

    Thanks!
  • sparky0002 - Friday, May 13, 2011 - link

    Go for broke. Build the quickest array you can behind a Z68 chipset, then throw a cache in front of it. It would suck if it ended up as the limit rather than as a booster. My guess is enhanced mode will perform better.

    But what I really want to know is... I've lived my life inside a 60GB partition for years; games and the like are on another disk. So how would three SSD 311s in RAID 0 vs. a RevoDrive go? No cache, just a straight test of the SLC drives.
  • don_k - Saturday, May 14, 2011 - link

    'so how would three SSD 311's in raid 0 V's a Revodrive go'

    Not even close. The first-gen RevoDrive does 500MB/s reads/writes, and the second gen doubles those numbers. So three 311s would give you maybe ~300MB/s at a higher price than a first-gen RevoDrive.
  • iwod - Friday, May 13, 2011 - link

    If 20GB is really what most of us use, what happens when you have 8GB x 4 of RAM? With Windows 7 SuperFetch, most of your data will be fetched into memory.

    And for $110, would I be better off with a RAM drive, faking that as a drive for use with Intel SRT?
  • Araemo - Friday, May 13, 2011 - link

    Since this SSD caching is software-driven... it doesn't help any with boot performance (until the service is running, anyway), right?
  • Stahn Aileron - Friday, May 13, 2011 - link

    I'm pretty sure it's built into the chipset's RAID BIOS/firmware, not run at the OS level. The OS-level stuff is probably more for management and monitoring than anything else.

    In fact, I think you have to set it up in the BIOS/firmware, not within the OS. I'd have to look at the Z68 review again to be sure about the wording, though.
  • cbass64 - Friday, May 13, 2011 - link

    In the last SRT review, a 3TB drive took 55+ seconds to boot, and a 3TB drive with SRT booted in 33 seconds...
  • micksh - Friday, May 13, 2011 - link

    Can you do TRIM on the cache SSD?

    I expect that SSD performance would degrade over time regardless of whether it's a cache or a standalone drive.
    It looks like this aspect was missed in the original Z68 review as well.
  • Stahn Aileron - Friday, May 13, 2011 - link

    Out of curiosity, do Intel's SRT and RAID drivers support TRIM-command passthrough? I believe it was mentioned that SRT only works with the chipset in RAID mode. Last I recall, no RAID drivers supported TRIM yet (I might be out of the loop).

    Also, in a somewhat related matter: have you considered looking at SRT's effect on an SSD's long-term read/write capability? I don't think it'll matter TOO much if someone is using a dedicated SSD for the cache. I'm asking about the scenario where a user has a large-capacity SSD (say 160GB) and allocates part of it (say 20GB - 40GB) as a cache while the rest is used for something else. So, for example:

    120GB = OS install and frequently used programs/applications (Office, Adobe, etc.)
    40GB = cache for a larger storage volume meant for infrequently used or very large apps/programs (like a gaming drive/partition/volume)

    I think the above example may be what many power users/enthusiasts will want to do. Would this have an effect on an SSD's long-term read/write performance, especially in relation to TRIM support in Intel's SRT/RAID drivers? Or will the garbage collection algorithm be the only thing available to maintain its performance in the long run? Do users have to start looking at the garbage collection performance of an SSD as well when choosing one to use as a cache? Or has Intel devised a RAID driver for the Z68 that supports TRIM?

    Oh, and would my above usage scenario (splitting and sharing the SSD) affect data eviction/retention for the cache in any noticeable way, if at all?
  • micksh - Friday, May 13, 2011 - link

    RAID drivers do support TRIM, with one condition: the SSD has to be a single drive for TRIM to work. If a RAID array is made of SSDs, TRIM won't work, but you can have an HDD RAID and a single SSD with TRIM on the same controller.

    SRT is different from RAID, so the question is still valid. And your scenario is very interesting too.

    I think it's believed that SLC SSDs don't need TRIM. I don't even know if the 311 supports it. Intel might not even bother adding TRIM to SRT if the 311 is its main usage target, I'm not sure.
    But if you are going to use your own MLC drive, it is important.

    And yes, defragmentation from the next post is also an important question.
  • Stahn Aileron - Friday, May 13, 2011 - link

    All NAND flash can use TRIM. Lacking it just carries a lower overhead penalty on (re)write performance for SLC than it does for MLC. Though you did remind me that Intel's Gen 1 (50nm) SLC SSD, the X25-E, never got TRIM support, as far as I recall.

    I didn't realize RAID drivers allowed TRIM passthrough on single drives. I always thought SSDs had to be on AHCI-mode drivers to support TRIM properly, regardless of configuration. I wonder if/when they will fix it for RAID arrays, then?

    Lastly, yeah, I forgot about the fragmentation problem as well. I wonder if Intel's SRT caching algorithm is smart enough to ignore defrag activity?

    Actually, I have been curious as to what Diskeeper's SSD-related feature actually does for an SSD. I own a copy of DK with HyperFast and I've been wondering what it does for quite some time now. Otherwise, I've been using DK for a long time (since '05, at least).

    Anand, have you ever looked at DK's impact on SSD performance? Or at least found out its role and what it actually DOES on a system?
  • Henk Poley - Monday, May 16, 2011 - link

    HyperFast doesn't do much at all. At best it caches some writes, which can improve SSDs that are lacking in the 4K write department (old disks). I also read it helps with file consolidation, which can help an OS/SSD without TRIM (old disks), since never-used space can be used for wear leveling by the SSD. Real-world benchmarks appear to find that nothing changes, or that performance slightly deteriorates.

    The product was built when 8GB PATA SSDs were pretty nifty.

    Besides all that, if you can write less to an SSD, that is in general better. So don't move files around willy-nilly (don't defrag).
  • Dribble - Friday, May 13, 2011 - link

    So I have my big HDD, which obviously needs regular defragging, attached to my SSD.

    How do I get the defragger to not wear out the SSD?
  • Hrel - Friday, May 13, 2011 - link

    Does anyone else notice how silly measuring a 4KB write in MB/s is? Haha, seriously, anything at even 0.5MB/s is more than fast enough to write 4KB without me even noticing. It's not a fault of the SSD that it doesn't do those quite as well. It's still WAY faster than I need it to be.
  • Hrel - Friday, May 13, 2011 - link

    Perspective, people.
  • cbass64 - Friday, May 13, 2011 - link

    Uhhh... hardly anyone ever writes just a single 4KB file... different programs have different IO specs. If I recall, Vantage mostly uses 4KB reads/writes, at low queue depths too, I think. That doesn't mean it writes one 4KB file and times how long it takes; it writes TONS of 4KB files and measures how long that takes.

    You do make a good point about perspective, though... when you see drives boasting that they can achieve ridiculously huge IOPS numbers, it's almost always achieved by running 512-byte IO for like 5 seconds (useless metrics).
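    (To put rough numbers on why tiny-transfer IOPS figures are a useless metric: throughput is just IO size times IOPS, so a huge IOPS number at 512 bytes can still mean very little bandwidth. The figures below are made up for illustration:)

```python
def iops_to_mbps(iops, io_size_bytes):
    """Convert an IOPS figure at a given transfer size to MB/s."""
    return iops * io_size_bytes / 1_000_000

# A headline-grabbing 50,000 IOPS at 512-byte transfers...
print(iops_to_mbps(50_000, 512))   # 25.6 MB/s
# ...versus a modest 10,000 IOPS at 4KB transfers:
print(iops_to_mbps(10_000, 4096))  # 40.96 MB/s
```

    So the drive with the "smaller" IOPS number can actually be moving more data; the transfer size matters as much as the count.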
  • ZmaxDP - Friday, May 13, 2011 - link


    I'd really like to see you take one of the highest-performing drives (M4, SF2400, Intel 510 series), like the Vertex 3 240GB, and partition 64GB of it as cache. Theoretically, both the size increase and the significant speed bump would make a big difference and make it more like running with a full SSD.

    For me, the real win in terms of configuration is a 2 or 3 TB drive cached by one of these high performance drives, with the leftover space dedicated to "always fast" programs. You get full SSD speed all the time for particular things, and the best cached performance possible from the storage drive...

    Unless of course this tech works with RAM disks, in which case I'd love to see numbers with a RAM disk as the cache and a Vertex 3 as the main drive, just for a sense of total IO overkill...
  • araczynski - Friday, May 13, 2011 - link

    Looks like I'll be getting an OCZ Vertex 3 in my next rig.
  • mervincm - Friday, May 13, 2011 - link

    Might this be a suitable job for an old JMicron SSD? Some of us have these and they are looking for a job. With the last firmware from Super Talent, they were not AS terrible, and they were always reasonable at reading. While their poor write IOPS might be a negative at the start, their size (most are 64GB or larger) and read performance (still way better than an HDD) might give a decent boost! Besides, we paid good money for them and need to find something to do with them!

    C'mon Anand, take a trip back to 2009, dig out an old G1 SSD, and let's see if it helps or hurts :) this 2011 system board.
  • GTVic - Friday, May 13, 2011 - link

    "I have to admit that Intel's Z68 launch was somewhat anti-climactic for me"

    Of course. The theory that enthusiasts want to use the on-chip graphics and overclock at the same time is ridiculous. Anyone who pays big bucks for a higher-end motherboard, cooling apparatus, a high-end video card, etc. is not going to then complain about not being able to use the built-in GPU. Likewise, anyone with such a setup is not going to blink at the cost of an SSD for their system drive once they see the performance improvement in person. No wonder Intel put this on the back burner.
  • fic2 - Monday, May 16, 2011 - link

    I would expect that the use of the built-in GPU would be for QuickSync.
    Although, since I am not a gamer, I would rather just use the built-in graphics - I know, shock, that people other than gamers like fast CPUs!
  • Tinface - Friday, May 13, 2011 - link

    I too would like to see some numbers on using a high-end SSD like a Vertex 3 240GB for caching. I'm intending to get such an SSD for my next gaming rig. Right now I have a massive Steam folder weighing in at 275GB. Since Steam wants all games in a subfolder, I'd have to uninstall a lot to fit it on the SSD. Other games, such as World of Warcraft, benefit massively from an SSD, so that's got 25GB reserved. Am I better off using 64GB for cache, or saving it for a handful more games on the SSD instead of the hard drive? Any other options I missed?
  • JNo - Sunday, May 15, 2011 - link

    Yeah, there is another, much better option that you missed. Google "Steam Mover" - it's a small free app that allows you to move individual Steam games from one drive (e.g. a mechanical one) to another (e.g. an SSD) and have it all still work, because it uses junction points to redirect the computer to look in another place for the folders whilst still thinking they're in the original place.

    Alternatively, GameSave Manager (which I use) has the same feature built in, as well as performing game backups to the cloud. With either, you can just keep your most-played games on the SSD running full pelt and switch them around occasionally.
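    (The junction trick works because the filesystem transparently follows the link: Steam keeps opening the old path while the bits actually live on the other drive. A rough Python sketch of the idea - POSIX symlinks here, whereas Steam Mover uses NTFS junctions (mklink /J) on Windows, and every path below is made up:)

```python
import os
import shutil
import tempfile

root = tempfile.mkdtemp()
hdd = os.path.join(root, "steamapps", "BigGame")   # original location
ssd = os.path.join(root, "ssd", "BigGame")         # fast drive

os.makedirs(hdd)
with open(os.path.join(hdd, "data.pak"), "w") as f:
    f.write("game assets")

# Move the game folder to the SSD, then link the old path to it.
os.makedirs(os.path.dirname(ssd), exist_ok=True)
shutil.move(hdd, ssd)
os.symlink(ssd, hdd, target_is_directory=True)     # mklink /J on Windows

# Steam still finds the files at the original path:
with open(os.path.join(hdd, "data.pak")) as f:
    print(f.read())   # game assets
```

    Moving a game back is just the reverse: delete the link, move the folder home.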
  • c4v3man - Friday, May 13, 2011 - link

    ...because there are still some boards out there that haven't been recalled, and the potential bad press from someone using the cache feature and losing their data due to a failed port would be very damaging to their reputation. Z68 chipsets are unaffected due to their launch date...

    Anyone else think this may be the case?
  • Comdrpopnfresh - Friday, May 13, 2011 - link

    In its current implementation, Intel's Smart Response Technology is self-defeating and a wasted allocation of SSD potential:

    -It doesn't benefit, and can even bottleneck, write performance to the storage system
    -It makes read operations artificially wear the NAND; an SSD used as cache incurs an inflated number of writes that one used for storage does not
    -It squanders the benefits that would be realized if the SSD were used as primary storage
    -I strongly doubt that data integrity is maintained under Maximized mode in the event of a power loss
    -A lot of the processes that would benefit most from SSD speeds don't see it applied under SRT (ex: antivirus)

    To mitigate the downsides of SRT you can:
    -use a larger SSD
    -use an SLC, rather than MLC, SSD

    But those two solutions are mutually constraining, and the resultant configuration is at odds with the entire exercise of SSD caching and the intended purpose of Intel's SRT.

    The reason these downsides put a damper on SRT but aren't expressed in SSD caching in the enterprise space is that the product constraints are paradoxical and divergent from the intentions and goals of the technology:
    1- If you use an SSD sized well enough to serve as primary storage as an SRT cache, all results conclude it's better off as storage rather than cache. No benefits. The maximum size usable for SSD caching is 64GB; the rest is a user-accessible partition that is just along for the ride (dragged through the mud?)
    2- SRT is intended for small SSDs, but the minimum size is 18.6GB. Even with Intel's own specially purposed SSD, all signs point to a larger SSD being necessary to get the most from SRT. But the point of SRT is to spend less and get away with a smaller SSD (if you consider underutilizing SSD benefits, all the while coughing and hacking along the way, "getting away with it", that's your prerogative)
    3- The chosen SSD needs sequential writes higher than the HDD being cached, or it will bottleneck writes to storage. You can use an SLC model, but doing so means a smaller drive and higher costs - both in contention with SRT's purpose
    4- For the speeds necessary to make SRT worthwhile, MLC is an option on SSD models of higher capacity because they have more channels to move data along. But then you're not using a small drive, and you're castrating an otherwise delightful primary-storage SSD
    5- Higher write-cycle counts are mitigated by using SLC, or MLC of large enough size for wear leveling to retard degradation. This presents the same problems as in 3 and 4.

    In the enterprise space, the scales are much larger, the budgets higher, and hardware is deployed to attain the proper ends or because it is the best option for the usage parameters. Most likely, large, expensive SLC drives are used as cache for arrays whose performance requirements necessitate their use: it all fits together like a hand in a glove.

    For SRT to be a real alternative to primary SSD storage, the hardware allocated is either wasted or doesn't yield a comparably worthwhile solution. It's like taking that cozy enterprise glove and trying to make a mitten out of it.
  • JNo - Sunday, May 15, 2011 - link


    SSDs are much better used as a main OS drive. Unless money is no object, any money for a cache SSD is better spent on a bigger main SSD. A small cache SSD couldn't speed up my 2TB media drive anyway, as most films are 9GB+ and would be read sequentially, so the cache ignores them. Game loads are the other area, but again, you're better off with a larger drive and using Steam Mover to move the games you want onto the SSD as and when you need them, rather than accepting the lower speed, integrity losses, evictions, and higher wear of a cache-SSD setup. For an OS drive you're much safer using enhanced mode (less performance), as maximized sounds too risky, and virus scans are still slow. Overall, I can barely think of a situation where SRT benefits a lot of people much in a remotely economical fashion.
  • adamantinepiggy - Monday, May 16, 2011 - link

    I'm curious how badly this caching beats up the SSD. Like Comdrpopnfresh above, I assume there is a reason Intel chose SLC NAND - presumably its much greater write endurance over MLC and faster write speeds when completely full. Figure a normal consumer SSD does not have the majority of its cells rewritten constantly, nor is it generally completely full, while an SSD used to cache a hard drive *is* going to be constantly full and have every cell rewritten.

    For example, one installs the OS, MS Office, and any other standard apps to a normal boot SSD. In regular usage, other than pagefiles and temp files, the majority of the cells retain the same data pretty much forever over the life of the SSD. With an SSD used in an HD-caching capacity, I assume it's going to cache until completely full very quickly, and then overwrite all the data continuously as it constantly caches different/new HD data. That's a lot of write/erase cycles if the SSD, acting as a cache for the HD, gets flushed often. How is a typical MLC SSD gonna handle this wear pattern?

    Now take this with a grain of salt, as I am just conjecturing with my limited understanding of how Intel is actually caching the HD. But coupled with the question of why Intel would press a relatively "expensive" SLC SSD into this particular usage, it leads me to believe that this type of HD-caching duty is gonna beat up normal MLC SSDs, as the wear pattern is not the same as the expected use patterns designed into consumer SSD firmware.
  • gfody - Friday, May 13, 2011 - link

    Wouldn't an M4 or C300 perform better as cache, since they have much lower latency than other SSDs?
  • Action - Saturday, May 14, 2011 - link

    I would second this comment. On the surface it would seem to me that SandForce drives would be handicapped in this particular application, as the overhead and latency of compressing the data would have a negative impact. A non-SandForce drive would be the desired one to use, and the M4 or C300 would appear to be the ones to try first over the Vertex or Agility drives in this particular application.
  • sparky0002 - Friday, May 13, 2011 - link

    Give us a 312 is sort of the message here.

    Double up on the NAND chips, use all 10 channels, and as a side effect it would be 40GB.

    The current model 311 is too limited in write speeds for most enthusiasts. The only option would be to run it in Enhanced mode so that writes go direct to the platters.

    If that is the case, then it would be nice to run a system off a good fast SSD, with a massive traditional disk as storage and an SSD 311 to cache it.

    Now the question becomes: if the platter has all your games and music and movies on it, just how good is Intel's cache eviction policy? Load up a few games so they are in cache, then go listen to 25GB of music. lol. And see if the games are still in cache.
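    Whether the games survive depends entirely on the eviction policy, which Intel hasn't documented in detail. As a purely hypothetical illustration, here is the failure mode with a naive LRU (least-recently-used) cache, modeling the SSD as 20 one-"GB" block slots:

    ```python
    from collections import OrderedDict

    class LRUCache:
        """Toy LRU block cache. Intel's actual SRT policy is undocumented;
        this only illustrates what a naive policy would do."""
        def __init__(self, capacity_blocks):
            self.capacity = capacity_blocks
            self.blocks = OrderedDict()

        def access(self, block):
            if block in self.blocks:
                self.blocks.move_to_end(block)       # refresh recency
            else:
                self.blocks[block] = True
                if len(self.blocks) > self.capacity:
                    self.blocks.popitem(last=False)  # evict least recent

    cache = LRUCache(capacity_blocks=20)             # pretend 20 "GB" of cache
    for i in range(5):                               # 5 "GB" of game data cached
        cache.access(f"game_{i}")
    for i in range(25):                              # stream 25 "GB" of music
        cache.access(f"music_{i}")
    print(any(b.startswith("game") for b in cache.blocks))  # → False
    ```

    With plain LRU the music stream flushes the games out completely, which is exactly why a good caching policy reportedly tries to detect and skip large sequential transfers.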
  • Casper42 - Friday, May 13, 2011 - link

    So I cracked open my wife's new SB-based HP laptop, and while there isn't a ton of room, it really makes me wonder if laptop vendors shouldn't be including the MBA-style SSD socket inside their laptops.

    1) You could do a traditional dual-drive design and have a 128GB-256GB boot SSD in the sleeker MBA form factor, with a traditional HDD in the normal 2.5" slot for storing data.
    2) With this new feature being retroactive on a lot of existing laptops using the right chipset, and on future laptop models as well, why not offer a combo with a 64GB SSD stick pre-configured to be used by SRT alongside the same traditional 2.5" HDD? This could be a $100-150 upgrade, and I would assume it would produce even better results when boosting the traditionally slower laptop drives.

    Especially on 15.6" models, I just can't see that they couldn't squeeze this in there somewhere.

    So perhaps as a follow-up, you could grab an average 5400 and 7200 RPM laptop drive and run through the tests again with either of the 2 SSDs you tested so far, or, if there is an adapter out there, the actual MBA 64GB SSD stick drive.

  • IlllI - Friday, May 13, 2011 - link

    How is this different from ReadyBoost?

    How is this different from the cache that typical hard drives have had for years now?

    Other than performance... isn't it basically the same idea?

    And if so, I wonder how it manages to be so much better/faster than those other two concepts.
  • cbass64 - Friday, May 13, 2011 - link

    As far as I know, ReadyBoost only caches small, random reads. Any large reads are sent directly to the HDD. Writes aren't cached at all.

    Caches on HDDs are tiny... 32, maybe 64MB. Just not big enough to make a real difference. If you made the cache any larger, the price of the drives would go way up. Plus ReadyBoost uses cheap flash.
  • Andreos - Saturday, May 14, 2011 - link

    I don't do transcoding, and I think the Z68 Virtu hoopla is one gigantic snooze. None of that is going to save me from having to buy a decent discrete video card.

    SSD caching is the one thing that sparked my interest in Z68. But the Z68 boards that are becoming available online carry a price premium of about $50 over the equivalent P67 board. An SSD 311 runs $110. That puts you at $160 just to get into the SSD caching game. I'd rather take that $160, kick in another $50, and get a P67 board and a real 120GB SSD. In fact, that's just what I'm going to do!

    BTW, the Z68 boards have fewer USB ports than the equivalent P67 boards (all those video ports for the integrated video take up real estate).

    I think Z68, rather than being frosting on the Sandy Bridge cake, is more like melted ice cream.
  • Addikt - Saturday, May 14, 2011 - link

    After reading these two articles, I don't fully understand what the point of this technology is. I understand that it fills a current void, but how useful is it going to be down the road?

    With the costs of SSDs continually dropping, and mainstream drive performance increasing, it looks like it will only be 6 months to a year before this technology is obsolete.

    It was already highlighted in the previous article that if you have a Vertex 3, you really have no need for this feature. So, in the interim it's bringing SSD-ish performance to mechanical drives, but to what end? Why not just save up your pennies and buy that Vertex 3 that you've always wanted? I mean, this Intel 311 20 GB is going to cost you $100+ anyway.

    I just don't see the end-game with respect to this tech. That said, upon reading the comments section, it seems that everyone is so delighted by its arrival, which causes me to question my own understanding of it.

    Am I missing something here? Anyone care to enlighten me? I just don't see this as being a really big deal.
  • slyck - Saturday, May 14, 2011 - link

    I feel the same. From the first announcements this looked pretty weak to me. You do get some performance gains, but at what cost? Not interested. At all. I'm waiting till the end of the year to finally make a first SSD purchase (if the prices FINALLY become reasonable; I planned to make a first purchase at the same time last year, and the year before).

    Pay less for a non-Z68 chipset, save the cost of this overpriced 20GB SSD, and just get a decent-sized SSD instead.
  • xineis - Sunday, May 15, 2011 - link

    Intel SSD caching seems to be a very nice idea! Would SSDs like the Vertex 2 or lower-end SSDs perform better? Or would they do just the same?

    Also, could someone explain what that "Random Data" is that always appears on the graphs?
  • biofishfreak - Tuesday, May 17, 2011 - link

    I've been loving all the numbers about SSDs, but I'm still apprehensive about the whole failure thing: not that I'm worried about the drive itself, since I already have a solid backup plan in place, but about the RMA times, shipping costs, and how many times I can get the drive replaced on a single purchase warranty (as in, the warranty states 3 years, but does that go out the window after the first replacement?). I'm OK spending ~$200 on a low-to-mid SSD every two years, but I can't afford to do it every 6 months.

    A review and comparison chart of all the manufacturers' RMA times, warranty specifics, etc. would be great!! SSDs fail; it's a part of life. But do the support centers fail too?
  • jcollett - Thursday, May 19, 2011 - link

    Maybe I missed it, but did this article address that the Intel 311 uses SLC chips while all the other consumer SSDs use MLC? This will be VERY IMPORTANT when using the drive for caching. I would be surprised if the MLC drives last a year at this task. Intel engineers were no fools; they put SLC chips into this drive for a good reason. Reply
  • feathers632 - Friday, May 20, 2011 - link

    Based on the video I watched on YouTube, which as I recall showed a Gigabyte Z68 with SSD caching booting Windows next to a regular non-SSD system: the Z68 system was maybe 10 seconds faster. A total waste of time. Reply
  • JimmiG - Saturday, May 21, 2011 - link

    It's a shame hybrid drives haven't taken off. That's sort of the consumer version of this. Consumers don't want to buy multiple devices or configure anything; it should just work. Compare it to how 3D graphics really took off when they began integrating the 2D and 3D cores on the same chip.

    The problem with just adding a small SSD (64-128GB) to your existing setup, is that you have to manually move the files you *think* you will be accessing most frequently to the SSD. When your usage pattern changes, you have to manually move the application/game you're no longer using as much to your mechanical drive, then move the one you've started using more frequently to the SSD. This usually involves a complete reinstall of the program or game. The same will happen when you run out of space on the SSD.

    If the OS or storage controller took care of automatically caching the most frequently used data, it would be much more efficient. It might not even need to cache all application or game data, maybe just some portions of it. Windows already does this with Superfetch and ReadyBoost for smaller amounts of data (a couple of gigabytes). It shouldn't be that hard to extend it to 64+GB.
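    The idea above can be sketched in a few lines. This is a hypothetical illustration, not how Superfetch or SRT actually works: track access frequency and greedily promote the hottest files to the SSD tier until it's full (the file names and sizes are made up):

    ```python
    from collections import Counter

    def pick_ssd_residents(access_log, file_sizes_gb, ssd_capacity_gb):
        """Greedy promotion: most frequently accessed files first,
        skipping anything that no longer fits. Purely illustrative."""
        counts = Counter(access_log)
        resident, used = [], 0.0
        for name, _ in counts.most_common():
            size = file_sizes_gb[name]
            if used + size <= ssd_capacity_gb:
                resident.append(name)
                used += size
        return resident

    # Hypothetical usage pattern: a game and a browser are hot,
    # a large movie is touched only twice.
    log = ["game.pak"] * 40 + ["browser.exe"] * 25 + ["movie.mkv"] * 2
    sizes = {"game.pak": 30.0, "browser.exe": 0.5, "movie.mkv": 40.0}
    print(pick_ssd_residents(log, sizes, ssd_capacity_gb=64))
    # → ['game.pak', 'browser.exe']  (the rarely-touched movie stays on the HDD)
    ```

    The hard parts in a real implementation are doing this at block rather than file granularity and deciding when churn (moving data back and forth) costs more than it saves, which is presumably what Intel's driver spends its effort on.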
  • Stahn Aileron - Tuesday, May 24, 2011 - link

    You do realize that Intel's SRT on Z68 is the next step to making Hybrid drives a viable market, right?

    The scenario you describe is precisely the one SRT is trying to avoid/mitigate/solve: manual data allocation by the user. It's at the chipset and driver level. The difference here is the need for two drives and initial set-up vs. a single drive and (hopefully) simple plug 'n play.

    While it'd be nice to have a (good) hybrid drive, I don't foresee 20GB of SLC NAND, one (for laptops) or two (for desktops) HDD platters, and the various controllers for each in a single package any time soon. (If we take Intel's SRT system as an example, you'd need the equivalent of a controller for each of the SSD and HDD components, plus a chipset-like controller to tie the two parts together. We'll assume the external SATA connection to the MB is supplied through the chipset-equivalent controller. Not to say this can't all be consolidated into a single-chip solution later on, though.)

    I don't see this happening unless a company like Intel, Corsair, Kingston, or OCZ partners up with a HDD manufacturer like WD or Seagate.

    I'd love to see something like a co-branded Intel/WD hybrid drive or something equivalent. (Would "hybrid drive" be abbreviated as HYD?) I think it'd do wonders for the single-drive ultraportable laptop market.
  • Hrel - Tuesday, May 31, 2011 - link

    If Intel got a 40GB one of these out for sale at 79 bucks I'd buy it.

    I'd really like to see you add some striped RAID results, with and without a cache, to these charts though. Otherwise, very good article. I really like seeing the results for application/game launch and level-load times, as those are "real world," whereas 4KB random writes aren't as much, and 4KB is pretty irrelevant anyway. As long as the drive can handle writing 4KB/second or more, I think I'm good, haha.
  • ibex333 - Friday, June 10, 2011 - link

    Can you use one SSD as BOTH cache AND storage, by dividing it into two partitions?

    Just how fast does an SSD need to be to serve as a cache? I'd like to see some OCZ drives here, along with other manufacturers' drives.
