SSD Caching

We finally have a Sandy Bridge chipset that can overclock and use integrated graphics, but that's not what's most interesting about Intel's Z68 launch. This next feature is.

Alongside Z68, Intel is introducing a feature originally called SSD Caching and now known as Smart Response Technology (SRT). Make no mistake: this isn't a hardware feature, just something Intel chooses to enable only on Z68. All of the work is done in Intel's RST 10.5 software, which will be made available for all 6-series chipsets, yet Smart Response Technology is artificially bound to Z68 alone (and a couple of mobile chipsets, HM67 and QM67).

It's Intel's way of giving Z68 owners some extra value for their money, but it's also a poor way to treat your most loyal customers: the earliest adopters of Sandy Bridge, who bought motherboards, CPUs and systems before Z68 was available.

What does Smart Response Technology do? It takes a page from enterprise storage architecture and lets you use a small SSD as a full read/write cache for a hard drive or RAID array.

With the Z68 SATA controllers set to RAID (SRT won't work in AHCI or IDE modes), just install Windows 7 on your hard drive as you normally would. With Intel's RST 10.5 drivers and a spare SSD (from any manufacturer) installed, you can dedicate up to 64GB of the SSD as a cache for all accesses to the hard drive. Any space beyond 64GB is left untouched for you to use as a separate drive letter.

Intel capped the cache at 64GB because its internal testing showed little benefit to going any larger. Admittedly, past a certain size you're better off just keeping your frequently used applications on the SSD itself and manually storing everything else on a hard drive.

Unlike Seagate's Momentus XT, which only caches reads, SRT caches both reads and writes. Intel offers two write-caching modes: enhanced and maximized. In enhanced mode the SSD behaves as a write-through cache, where every write must hit both the SSD cache and the hard drive before it's considered complete. In maximized mode the SSD behaves more like a write-back cache, where writes hit the SSD first and are written back to the hard drive later rather than immediately.

Enhanced mode is the most secure, but it limits the overall performance improvement you'll see as write performance will still be bound by the performance of your hard drive (or array). In enhanced mode, if you disconnect your SSD cache or the SSD dies, your system will continue to function normally. Note that you may still see an improvement in write performance vs. a non-cached hard drive because the SSD offloading read requests can free up your hard drive to better fulfill write requests.

Maximized mode offers the greatest performance benefit, but it also comes with the greatest risk. There's obviously a chance you lose power before the SSD cache is able to commit writes to your hard drive. The bigger issue is that if something happens to your SSD cache, you could lose data. To make matters worse, if your SSD cache dies while caching a bootable volume, your system will no longer boot. I suspect this behavior is overly cautious on Intel's part, but that's how the current version of Intel's 10.5 drivers works.

Moving a drive with a maximized SSD cache enabled requires that you either move the SSD cache with it, or disable the SSD cache first. Again, Intel seems to be more cautious than necessary here.

The upside, as I mentioned before, is of course performance: cacheable writes only have to hit the SSD before they're considered serviced, and Intel conservatively writes that data back to the hard drive later on.
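
For readers who think in code, here is a minimal sketch of the difference between the two write policies. It's a conceptual model only: the class names, the block-level interface and the explicit flush step are assumptions made for illustration, not Intel's implementation, which operates on LBAs inside the RST driver below the filesystem.

    # Illustrative sketch of the two SRT write policies. "ssd" and "hdd" stand in
    # for the cache device and the backing drive; real SRT works on LBAs inside
    # the RST driver, so treat this purely as a conceptual model.

    class WriteThroughCache:
        """'Enhanced' mode: every write hits both the SSD and the HDD before
        it is acknowledged, so losing the SSD never loses data."""
        def __init__(self, ssd, hdd):
            self.ssd, self.hdd = ssd, hdd

        def write(self, lba, data):
            self.ssd.write(lba, data)   # keep the cache up to date
            self.hdd.write(lba, data)   # write is only "done" once the HDD has it


    class WriteBackCache:
        """'Maximized' mode: writes are acknowledged as soon as the SSD has them
        and are flushed to the HDD later, so dirty blocks exist on the SSD only."""
        def __init__(self, ssd, hdd):
            self.ssd, self.hdd = ssd, hdd
            self.dirty = set()          # LBAs newer on the SSD than on the HDD

        def write(self, lba, data):
            self.ssd.write(lba, data)   # fast path: SSD-speed write latency
            self.dirty.add(lba)         # remember to write it back later

        def flush(self):
            for lba in sorted(self.dirty):
                self.hdd.write(lba, self.ssd.read(lba))
            self.dirty.clear()

The risk Intel warns about falls straight out of the sketch: in maximized (write-back) mode the dirty blocks exist only on the SSD until the flush happens, which is why losing the cache device, or moving the hard drive without it, can cost you data.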

An Intelligent, Persistent Cache

Intel's SRT functions like an actual cache. Rather than caching individual files, Intel focuses on frequently accessed LBAs (logical block addresses). Read a block enough times or write to it enough times and those accesses will get pulled into the SSD cache until it's full. When full, the least recently used data gets evicted making room for new data.
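
That description maps onto a classic LRU structure keyed by block address. The sketch below is a plausible model that assumes a simple access-count threshold for promotion; Intel hasn't published SRT's actual heuristics or parameters, so treat the numbers as placeholders.

    from collections import OrderedDict

    # Conceptual model of an LBA-keyed cache with frequency-based promotion and
    # least-recently-used eviction. The promotion threshold and capacity are
    # illustrative; Intel hasn't documented SRT's real parameters.

    class LbaCache:
        def __init__(self, capacity_blocks, promote_after=3):
            self.capacity = capacity_blocks
            self.promote_after = promote_after
            self.hits = {}                  # LBA -> access count (not yet cached)
            self.cache = OrderedDict()      # LBA -> cached data, in LRU order

        def access(self, lba, data):
            if lba in self.cache:
                self.cache.move_to_end(lba)     # refresh recency on a cache hit
                return True
            self.hits[lba] = self.hits.get(lba, 0) + 1
            if self.hits[lba] >= self.promote_after:
                if len(self.cache) >= self.capacity:
                    self.cache.popitem(last=False)  # evict least recently used
                self.cache[lba] = data
                del self.hits[lba]
            return False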

Since SSDs use NAND flash, cache data is kept persistent between reboots and power cycles. Data won't leave the cache unless it gets forced out due to lack of space/use or you disable the cache altogether. A persistent cache is very important because it means that the performance of your system will hopefully match how you use it. If you run a handful of applications very frequently, the most frequently used areas of those applications should always be present in your SSD cache.

Intel claims it's very careful not to dirty the SSD cache. If it detects sequential accesses beyond a few MB in length, that data isn't cached. The same goes for virus scan accesses, however it's less clear what Intel uses to determine that a virus scan is running. In theory this should mean that simply copying files or scanning for viruses shouldn't kick frequently used applications and data out of cache, however that doesn't mean other things won't.
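
Intel doesn't say how it identifies sequential streams, but a common heuristic is to track contiguous runs of LBAs and stop caching once a run grows past a size threshold. The sketch below is one plausible version of such a filter; the 4MB cutoff is an assumption based on the "few MB" figure above, not a documented value.

    # Plausible sequential-stream filter, not Intel's actual heuristic.
    # Requests that extend a contiguous run past a size threshold are treated
    # as streaming I/O (file copies, scans) and bypass the cache.

    SEQUENTIAL_CUTOFF_BYTES = 4 * 1024 * 1024   # assumed "few MB" threshold
    BLOCK_SIZE = 512                             # bytes per LBA

    class SequentialFilter:
        def __init__(self):
            self.next_lba = None      # LBA that would continue the current run
            self.run_bytes = 0        # size of the current contiguous run

        def should_cache(self, lba, length_blocks):
            if lba == self.next_lba:
                self.run_bytes += length_blocks * BLOCK_SIZE
            else:
                self.run_bytes = length_blocks * BLOCK_SIZE   # a new run starts
            self.next_lba = lba + length_blocks
            return self.run_bytes < SEQUENTIAL_CUTOFF_BYTES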

 

Comments

  • cbass64 - Wednesday, May 11, 2011 - link

    RAID0 can't even compare. In PCMark Vantage, a RAID0 of mechanical drives gives you roughly a 10% increase in the HDD suite, while a high end SSD is 300-400% faster in the Vantage HDD suite scores. Even if SRT only achieved 50% of the SSD's gain you'd still be seeing a 150-200% increase, and this article seems to claim that SRT gets much closer to a pure SSD than 50%.

    Obviously benchmarks like Vantage HDD suite don't always reflect real world performance but I think there's still an obvious difference between 10% and a couple hundred %...
  • Hrel - Thursday, May 12, 2011 - link

    All I know is that since I switched to RAID 0 my games load in 2/3 the time they used to. 10% is crazy. RAID 0 should get you a 50% performance improvement across the board; you did something wrong.
  • DanNeely - Thursday, May 12, 2011 - link

    Raid only helps with sequential transfers. If Vantage has a lot of random IO with small files it won't do any good.
  • don_k - Wednesday, May 11, 2011 - link

    Or the fact that it is an entirely software based solution. Intel's software does not, as far as I and Google know, run on Linux, nor would I be inclined to install such software on Linux even if it did. So this is a non-starter for me. For Steam and games I say get a 60-120GB consumer level SSD and call it a day. No software glitches, nothing like that.

    This kind of caching needs to be implemented at the filesystem level if you ask me, which is what I hope some linux filesystems will bring 'soon'. On windows the outlook is bleak.
  • jzodda - Wednesday, May 11, 2011 - link

    Are there any plans in the future of this technology being made available to P67 boards?

    Before I read this I thought it was a chipset feature. I had no idea this was being implemented in software at a driver level.

    I am hoping that after a reasonable amount of time passes they make this available for P67 users. I understand that for now they want to add some value to this new launch but after some time passes why not?
  • michael2k - Wednesday, May 11, 2011 - link

    Given that the drive has 4GB of flash built in, it would be very interesting to compare this to the aforementioned SRT. Architecturally similar, though SRT requires two drives instead of one. Heck, what would happen if you used SRT with a Seagate Momentus?
  • kenthaman - Wednesday, May 11, 2011 - link

    1. You mention that:

    "Even gamers may find use in SSD caching as they could dedicate a portion of their SSD to acting as a cache for a dedicated games HDD, thereby speeding up launch and level load times for the games that reside on that drive."

    Does Intel make any mention of possible future software versions allowing users to specifically select applications that take precedence over others in the cache? For example, say you regularly run 10 - 12 applications (assuming that workload is enough to trigger eviction): rather than having the algorithm just evict the least-used data, you could point to an exe and have it track the associated files and keep them in cache at a higher priority than the standard eviction algorithm would.

    2. Would it even make sense to use this in a system that has a 40/64/80 gig OS SSD and then link this to a HDD/array, or would the system SSD already be handling the caching? Just trying to see if this would help offload some of the work/storage to the larger HDDs, since space is already limited on these smaller drives.
  • Midwayman - Wednesday, May 11, 2011 - link

    What is the degradation like with long term use? I know that without TRIM, SSDs tend to lose performance over time. Is there something like TRIM happening here, since this all seems to be below the OS level?
  • jiffylube1024 - Wednesday, May 11, 2011 - link

    Great review, as always on Anandtech!

    This technology looks to be a boon for so many users. Whereas technophiles who live on the bleeding edge (like me) probably won't settle for anything less than an SSD for their main boot drive, this SSD cache + HDD combo looks to be an amazing alternative for the vast majority of users out there.

    There's several reasons why I really like this technology:

    1. Many users are not smart and savvy at organizing their files, so a 500GB+ C drive is necessary. That is not feasible with today's SSD prices.

    2. This allows gamers to have a large HDD as their boot drive and an SSD to speed up game loads. A 64GB SSD would be fantastic for this as the cache!

    3. This makes the ultimate drop-in upgrade. You can build a PC now with an HDD and pop in an SSD later for a wicked speed bump!

    I'm strongly considering swapping my P67 for a Z68 at some point, moving my 160GB SSD to my laptop (where I don't need tons of space but the boot speed is appreciated), and using a 30-60GB SSD as a cache on my desktop for a Seagate 7200.12 500GB, my favourite cheap boot HDD.
  • samsp99 - Wednesday, May 11, 2011 - link

    Is the Intel 311 the best choice for the $$, or would other SSDs of a similar cost perform better? For example, the egg has the OCZ Vertex 2 and other SandForce based drives in the 60GB range for approx $130. That's a better cache size than the 20GB of the Intel drive.

    Sandforce relies on compression to get some of its high data rates, would that still work well in this kind of a cache scenario?
