Intel's Caching History

Intel's first attempt at using solid-state memory for caching in consumer systems was Intel Turbo Memory, a mini-PCIe card with 1GB of flash to be used by the then-new Windows Vista features ReadyDrive and ReadyBoost. Promoted as part of the Intel Centrino platform, Turbo Memory was more or less a complete failure. The cache it provided was far too small and too slow; sequential writes in particular were much slower than a hard drive. Applications were seldom significantly faster, though in systems short on RAM, Turbo Memory made swapping less painfully slow. Battery life could sometimes be extended by allowing the hard drive to spend more time spun down at idle. Overall, most OEMs were not interested in adding more than $100 to a system's cost for Turbo Memory.

Intel's next attempt at caching came as SSDs were moving into the mainstream consumer market. The Z68 chipset for Sandy Bridge processors added Smart Response Technology (SRT), an SSD caching mode in Intel's Rapid Storage Technology (RST) drivers. SRT could be used with any SATA SSD, but cache sizes were limited to 64GB. Intel produced the SSD 311 and later the SSD 313, caching-optimized SSDs with low capacity but relatively high-performance SLC NAND flash. These SSDs started at $100 and had to compete against MLC SSDs that offered several times the capacity for the same price, enough that the MLC SSDs were becoming reasonable options as a system's only general-purpose storage, with no hard drive at all.

Smart Response Technology worked as advertised but was very unpopular with OEMs, and it never really caught on as an aftermarket upgrade among enthusiasts. The rapidly dropping prices and increasing capacities of SSDs made all-flash configurations more and more affordable, while SSD caching still required extra setup work, and the small cache sizes meant heavy users would still frequently experience uncached application launches and file loads.

Intel's caching solution for Optane Memory is not simply a re-use of the existing Smart Response Technology caching feature of its Rapid Storage Technology drivers. It relies on the same NVMe remapping feature added to Skylake chipsets to support NVMe RAID, but the caching algorithms are tuned for Optane. The Optane Memory software can be downloaded and installed separately, without the rest of the RST features.

Optane Memory caching comes with quite a few restrictions: it is supported only with Kaby Lake processors, and it requires a 200-series desktop chipset or an HM175, QM175 or CM238 mobile chipset. Only Core i3, i5 and i7 processors are supported; Celeron and Pentium parts are excluded. Windows 10 64-bit is the only supported operating system. The Optane Memory module must be installed in an M.2 slot that connects to PCIe lanes provided by the chipset, and some motherboards also have M.2 slots that do not support Optane caching or RST RAID. The drive being cached must be SATA, not NVMe, and only the boot volume can be cached. Lastly, the motherboard firmware must have Optane Memory support in order to boot from the cached volume. Motherboards with the necessary firmware offer a UEFI tool to unpair the Optane Memory cache device from the backing drive being cached, but this can also be done from the Windows software.
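For illustration, the compatibility rules above can be collected into a single check. This is a hypothetical sketch: the function and parameter names are invented for clarity and do not correspond to Intel's actual software.

```python
# Hypothetical sketch of the Optane Memory platform restrictions described
# above. Names are illustrative only, not part of Intel's drivers.

SUPPORTED_CHIPSETS = {
    "Z270", "H270", "B250", "Q270", "Q250",  # 200-series desktop chipsets
    "HM175", "QM175", "CM238",               # supported mobile chipsets
}

def optane_caching_supported(cpu_family, cpu_brand, chipset, os_name,
                             m2_slot_uses_chipset_lanes,
                             cached_drive_interface, is_boot_volume,
                             firmware_has_optane_support):
    """Return True only if every documented restriction is met."""
    return (cpu_family == "Kaby Lake"
            and cpu_brand in {"Core i3", "Core i5", "Core i7"}
            and chipset in SUPPORTED_CHIPSETS
            and os_name == "Windows 10 64-bit"
            and m2_slot_uses_chipset_lanes        # M.2 slot on chipset PCIe lanes
            and cached_drive_interface == "SATA"  # NVMe drives cannot be cached
            and is_boot_volume                    # only the boot volume
            and firmware_has_optane_support)

# Example: a Pentium on a 200-series board fails despite the right chipset.
print(optane_caching_supported("Kaby Lake", "Pentium", "Z270",
                               "Windows 10 64-bit", True, "SATA", True, True))
# → False
```

Note how many of these conditions are independent of one another; as the next paragraph argues, most of them are enforced in software rather than by the hardware itself.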

Many of these restrictions are arbitrary and enforced in software. The only genuine hardware requirement seems to be a Skylake 100-series or later chipset. The release notes for the final production release of the Optane Memory and RST drivers even list among the fixed issues the removal of the ability to enable Optane caching with a non-Optane NVMe cache device, and of the ability to turn on Optane caching with a Skylake processor on a 200-series motherboard. Don't be surprised if these drivers get hacked to provide Optane caching on any Skylake system that can do NVMe RAID with Intel RST.

Intel's latest caching solution is not being pitched as a way of increasing performance in high-end systems; for that, Intel will have full-size Optane SSDs for the prosumer market later this year. Instead, Optane Memory is intended to provide a boost for systems that still rely on a mechanical hard drive. It can be used to cache access to a SATA SSD or hybrid drive, but don't expect any OEMs to ship such a configuration; it won't be cost-effective. The goal of Optane Memory is to bring hard drive systems up to SSD levels of performance for a modest extra cost and without sacrificing total capacity.


110 Comments


  • Billy Tallis - Wednesday, April 26, 2017 - link

    As long as you have Intel RST RAID disabled for NVMe drives, it'll be accessible as a standard NVMe device and available for use with non-Intel caching software. Reply
  • fanofanand - Tuesday, April 25, 2017 - link

    I came here to read ddriver's "hypetane" rants, and I was not disappointed! Reply
  • TallestJon96 - Tuesday, April 25, 2017 - link

    Too bad about the drive breaking.

    As an enthusiast who is gaming 90% of the time on my PC, I don't think this is for me right now. I actually just bought a 960 EVO 500GB to complement my 1TB 840 EVO. Overkill for sure, but I'm happy with it, even if the difference is sometimes subtle.

    This technology really excites me. If they can get a system running with no DRAM or NAND, and just use a large block of XPoint, that could make for a really interesting system. Put 128GB of this stuff paired with a 2c/4t mobile chip in a laptop, and you could get a really lean system that is fast for everyday use cases (web browsing, video watching, etc).

    For my use case, I'd love to have a reason to buy it (no more loading times ever would be very futuristic) but it'll take time to really take off.
    Reply
  • MrSpadge - Tuesday, April 25, 2017 - link

    > no more loading times

    Not going to happen, because there's quite some CPU work involved with loading things.
    Reply
  • SanX - Tuesday, April 25, 2017 - link

    Blahblahblah endurance, price, consumption, superspeed. Where are they? ROTFLOL. At least don't show these shameful speeds if you opened your mouth this loud, Intel. No one will ever look at anything less than the 3.5GB/s set by the Samsung 960 Pro if you trolled about superspeeds. Reply
  • cheshirster - Wednesday, April 26, 2017 - link

    Is there any technical reasoning why this won't work with older CPUs?
    I don't see this being any different than Intel RST.
    Reply
  • KAlmquist - Thursday, April 27, 2017 - link

    I think that Intel SRT caches reads, whereas the Optane Memory caches both reads and writes. My guess is that when Intel SRT places data in the cache, it doesn't immediately update the non-volatile lookup tables indicating where that data is stored. Instead, it probably waits until a bunch of data has been added, and then records the locations of all of the cached data. The reason for this would be that NAND can only be written in page units. If Intel were to update the non-volatile mapping table every time it added a page of data to the cache, that would double the amount of data written to the caching SSD.

    If I'm correct, then with Intel SRT, a power loss can cause some of the data in the SSD cache to be lost. The data itself would still be there, but it won't appear in the lookup table, making it inaccessible. That doesn't matter because SRT only caches reads, so the data lost from the cache will still be on the hard drive.

    In contrast, Optane Memory presumably updates the mapping table for cached data immediately, taking advantage of the fact that it uses a memory technology that allows small writes. So if you perform a bunch of 4K random writes, the data is written to the Optane storage only, resulting in much higher write performance than you would get with Intel SRT.

    In short, I would guess that Optane Memory uses a different caching algorithm than Intel SRT; an algorithm that is only implemented in Intel's latest chipsets.

    That's unfortunate, because if Optane Memory were supported using software drivers only (without any chipset support), it would be a very attractive upgrade to older computer systems. At $44 or $77, an Optane Memory device is a lot less expensive than upgrading to an SSD. Instead, Optane Memory is targeted at new systems, where the economics are less compelling.
    Reply
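The two metadata-update strategies speculated about in the comment above can be sketched in code. This is purely an illustration of the commenter's guess, not Intel's actual implementation; all class and method names are invented.

```python
# Speculative sketch of two cache mapping-table strategies: deferred
# (batched) flushes suited to page-writable NAND, versus immediate
# updates suited to byte-addressable media like 3D XPoint.
# Illustrative only; not how Intel's drivers actually work.

class BatchedMetadataCache:
    """NAND-style read cache: defer mapping-table flushes to amortize
    page-sized metadata writes. Entries cached since the last flush are
    lost from the cache on power failure (the data is still on the HDD,
    so a read-only cache tolerates this)."""
    def __init__(self, flush_every=64):
        self.mapping = {}            # volatile in-RAM lookup table
        self.durable_mapping = {}    # what would survive power loss
        self.pending = 0
        self.flush_every = flush_every
        self.metadata_writes = 0     # count of flushes to the SSD

    def cache_read(self, lba, location):
        self.mapping[lba] = location
        self.pending += 1
        if self.pending >= self.flush_every:
            self.durable_mapping = dict(self.mapping)  # one batched flush
            self.metadata_writes += 1
            self.pending = 0

class ImmediateMetadataCache:
    """XPoint-style cache: small in-place writes are cheap, so every
    mapping update is made durable at once, which is what a write-back
    cache needs to avoid losing writes on power failure."""
    def __init__(self):
        self.durable_mapping = {}
        self.metadata_writes = 0

    def cache_write(self, lba, location):
        self.durable_mapping[lba] = location  # small, immediate update
        self.metadata_writes += 1

nand = BatchedMetadataCache(flush_every=64)
for lba in range(128):
    nand.cache_read(lba, lba)
print(nand.metadata_writes)       # → 2 (128 cached blocks, 2 batched flushes)
print(len(nand.durable_mapping))  # → 128
```

The batched variant trades durability of the newest cache entries for far fewer metadata writes; the immediate variant pays one small write per update, which is only economical on media that supports small writes.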
  • mkozakewich - Thursday, April 27, 2017 - link

    I would really like to see the 16GB Optane filled with system paging file (on a device with 2 or 4 GB of RAM) and then do some general system experience tests. This seems like the perfect solution: The system is pretty good about offloading stuff that's not needed, and pulling needed files into working memory for full speed; and the memory can be offloaded to or loaded from the Optane cache quickly enough that it shouldn't cause many slowdowns when switching between tasks. This seems like the best strategy, in a world where we're still seeing 'pro' devices with 4 GB of RAM. Reply
  • Ugur - Monday, May 1, 2017 - link

    I wish Intel would release Optane sticks/drives of 1-4TB sizes asap and sell them for 100-300 more than SSDS of same size immediately.
    I'm kinda disappointed they do this type of tiered rollout where it looks like it'll take ages until i can get an Optane drive at larger sizes for halfway reasonable prices.
    Please Intel, make it available asap, i want to buy it.
    Thanks =)
    Reply
  • abufrejoval - Monday, May 8, 2017 - link

    Well, the most important thing is that Optane is now a real product on the market, for consumers and enterprise customers. So some Intel senior managers don't need to get fired and can cross items off their bonus scorecards.

    Marketing will convince the world that Optane is better, and most importantly that only Intel can have it inside: no ARM, no Power, no Zen based server shall ever have it.

    For the DRAM-replacement variant, that exclusivity has a reason: without proper firmware support it won't work, and without special cache-flushing instructions it would be too slow or still volatile.
    Of course, all of that could be shared with the competition, but who wants to give up a practical monopoly, one that no competitor can contest in court before their money runs out?

    For the PCIe variant, the Intel CPU, chipset and OS dependencies are all artificial, but doesn't that make things better for everyone? Now people can give up ECC support in cheap Pentiums and instead gain Optane support for a premium on CPUs and chipsets which use the very same hardware underneath for production cost efficiency. Whoever can sell that truly deserves their bonus!

    Actually, I’d propose they be paid in snake oil.

    For the consumer, with a linear link between Optane and its downstream storage tier, it means the storage path has twice as many opportunities to fail. For the service technician, it means four times as many test scenarios to perform. Just think how that will double again once Optane does in fact also come to the DIMM socket! Moore's law is not finished after all! Yeah!

    Perhaps Microsoft could be talked into creating a special Optane Edition which offers much better granularity for forensic data storage, and surely there would be plenty of work for security researchers, who just love to find bugs really, really deep down in critical Intel Firmware, which is designed for the lowest Total Cost of TakeOwnership in the industry!

    Where others see crisis, Intel creates opportunities!
    Reply
