Intel's Caching History

Intel's first attempt at using solid-state memory for caching in consumer systems was Intel Turbo Memory, a mini-PCIe card with 1GB of flash to be used by the then-new Windows Vista features ReadyDrive and ReadyBoost. Promoted as part of the Intel Centrino platform, Turbo Memory was more or less a complete failure. The cache it provided was far too small and too slow—sequential writes in particular were much slower than a hard drive. Applications were seldom significantly faster, though in systems short on RAM, Turbo Memory made swapping less painfully slow. Battery life could sometimes be extended by allowing the hard drive to spend more time spun down at idle. Overall, most OEMs were not interested in adding more than $100 to a system's price for Turbo Memory.

Intel's next attempt at caching came as SSDs were moving into the mainstream consumer market. The Z68 chipset for Sandy Bridge processors added Smart Response Technology (SRT), an SSD caching mode for Intel's Rapid Storage Technology (RST) drivers. SRT could be used with any SATA SSD, but cache sizes were limited to 64GB. Intel produced the SSD 311 and later the SSD 313 as caching-optimized SSDs, pairing low capacity with relatively high-performance SLC NAND flash. These drives started at $100 and had to compete against MLC SSDs that offered several times the capacity for the same price—enough that the MLC SSDs were starting to become reasonable options for use as a system's sole general-purpose storage, with no hard drive at all.

Smart Response Technology worked as advertised but was very unpopular with OEMs, and it never really caught on as an aftermarket upgrade among enthusiasts. The rapidly dropping prices and increasing capacities of SSDs made all-flash configurations more and more affordable, while SSD caching still required extra work to set up, and the small cache sizes meant heavy users would still frequently experience uncached application launches and file loads.

Intel's caching solution for Optane Memory is not simply a re-use of the existing Smart Response Technology caching feature of its Rapid Storage Technology drivers. It relies on the same NVMe remapping feature that was added to Skylake chipsets to support NVMe RAID, but the caching algorithms are tuned for Optane. The Optane Memory software can be downloaded and installed separately, without the rest of the RST feature set.
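
Conceptually, what the driver is doing is block-level caching: hot data blocks from the slow hard drive are remapped onto the fast Optane module, and reads that hit the cache skip the HDD entirely. The sketch below illustrates the general idea with a simple LRU read cache in Python. To be clear, this is not Intel's implementation; the actual logic is proprietary, lives in the RST driver, and is tuned to Optane's latency characteristics. The device classes, 4KB block granularity, and slot-mapping scheme are all assumptions made for illustration.

```python
# Minimal sketch of block-level read caching, the general technique behind
# SRT and Optane Memory caching. NOT Intel's actual algorithm: the device
# classes, block size, and LRU policy here are illustrative assumptions.
from collections import OrderedDict

BLOCK_SIZE = 4096  # assumed cache granularity


class RamDevice:
    """Trivial in-memory stand-in for a block device (demonstration only)."""

    def __init__(self):
        self.blocks = {}

    def read(self, addr):
        return self.blocks.get(addr, b"\x00" * BLOCK_SIZE)

    def write(self, addr, data):
        self.blocks[addr] = data


class BlockCache:
    """LRU read cache: serve hot blocks from the fast device, fall back to the slow one."""

    def __init__(self, fast_dev, slow_dev, capacity_blocks):
        self.fast = fast_dev              # stand-in for the Optane module
        self.slow = slow_dev              # stand-in for the SATA hard drive being cached
        self.capacity = capacity_blocks   # how many blocks fit on the cache device
        self.map = OrderedDict()          # logical block address -> slot on fast device

    def read(self, lba):
        if lba in self.map:
            self.map.move_to_end(lba)             # mark as most recently used
            return self.fast.read(self.map[lba])  # cache hit: fast path
        data = self.slow.read(lba)                # cache miss: pay the HDD penalty
        self._insert(lba, data)                   # promote the block for next time
        return data

    def _insert(self, lba, data):
        if len(self.map) >= self.capacity:
            _, slot = self.map.popitem(last=False)  # evict least recently used, reuse its slot
        else:
            slot = len(self.map)                    # cache not full: claim a fresh slot
        self.fast.write(slot, data)
        self.map[lba] = slot


cache = BlockCache(RamDevice(), RamDevice(), capacity_blocks=4)
cache.read(123)  # miss: read from the slow device, then cached
cache.read(123)  # hit: served from the fast device
```

A production cache also has to handle write policies, metadata persistence across reboots, and prioritization of boot-critical data, which is presumably where most of the real engineering effort in Optane Memory goes.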

Optane Memory caching comes with quite a few restrictions: it is only supported with Kaby Lake processors, and it requires a 200-series chipset or an HM175, QM175 or CM238 mobile chipset. Only Core i3, i5 and i7 processors are supported; Celeron and Pentium parts are excluded. Windows 10 64-bit is the only supported operating system. The Optane Memory module must be installed in an M.2 slot that connects to PCIe lanes provided by the chipset, and some motherboards also have M.2 slots that do not support Optane caching or RST RAID. The drive being cached must be SATA, not NVMe, and only the boot volume can be cached. Lastly, the motherboard firmware must have Optane Memory support in order to boot from the cached volume. Motherboards with the necessary firmware features will include a UEFI tool to unpair the Optane Memory cache device from the backing drive, but this can also be done from the Windows software.

Many of these restrictions are arbitrary and enforced in software. The only genuine hardware requirement seems to be a Skylake 100-series or later chipset. The release notes for the final production release of the Optane Memory and RST drivers even include, in the list of fixed issues, the removal of the ability to enable Optane caching with a non-Optane NVMe device as the cache, and of the ability to turn on Optane caching with a Skylake processor in a 200-series motherboard. Don't be surprised if these drivers get hacked to provide Optane caching on any Skylake system that can do NVMe RAID with Intel RST.

Intel's latest caching solution is not being pitched as a way of increasing performance in high-end systems; for that, they'll have full-size Optane SSDs for the prosumer market later this year. Instead, Optane Memory is intended to provide a boost for systems that still rely on a mechanical hard drive. It can be used to cache access to a SATA SSD or hybrid drive, but don't expect any OEMs to ship such a configuration—it won't be cost-effective. The goal of Optane Memory is to bring hard drive systems up to SSD levels of performance for a modest extra cost and without sacrificing total capacity.

Comments

  • ddriver - Tuesday, April 25, 2017

    Yeah, daring intel, the pioneer, taking mankind to better places.

    Oh wait, that's right, it is actually a greedy monopoly that has mercilessly milked people while making nothing aside from barely incremental stuff for years and through its anti-competitive practices has actually held progress back tremendously.

    As I already mentioned above, the last time "intel dared to innovate" that resulted in netburst. Which was so bad that in order to save the day intel had to... do what? Innovate once again? Nope, god forbid, what they did was go back and improve on the good design they had and scrapped in their futile attempts to innovate.

    And as I already mentioned above, all the secrecy behind xpoint might be exactly because it is NOTHING innovative, but something old and forgotten, just slightly improved.
  • Reflex - Tuesday, April 25, 2017

    Axe is looking pretty worn down from all that grinding....
  • ddriver - Wednesday, April 26, 2017

    Also, unlike you, I don't let personal preferences cloud my objectivity. If a product is good, even if made by the most wretched corporation out there, it is not a bad product just because of who makes it, it is still a good product, still made by a wretched corporation.

    Even if intel wasn't a lousy bloated lazy greedy monopolist, hypetane would still suck, because it isn't anywhere near the "1000x" improvements they promised. It would suck even if intel was a charity that fed the starving in the 3rd world.

    I would have had ZERO objections to hypetane, and also wouldn't call it hypetane to begin with, if intel, the spoiled greedy monopolist, was still decent enough to not SHAMELESSLY LIE ABOUT IT.

    Had they just said "10x better latency, 4x better low queue depth performance" and stuff like that, I'd be like "well, it's ok, it is faster than nand, you delivered what you promised."

    But they didn't. They lied, and lied, and now that it is clear that they lied, they keep on lying and smearing with biased reviews in unrealistic workloads.

    What kind of an idiot would ever approve of that?
  • fallaha56 - Tuesday, April 25, 2017

    OMG when our product wasn't as good as we said it was we didn't own-up about it

    and maybe you test against HDD (like Intel) but the rest of us are already packing SSDs
  • philehidiot - Saturday, April 29, 2017

    This is what companies do. Your technology is useless unless you can market it. And you don't market anything by saying it's mediocre. Look at BP's high-octane fuel, which supposedly cleans your engine and gets better fuel efficiency. The ONLY thing that higher octane fuel does is resist auto-ignition under compression better, and thus certain high-performance engines require it. As for cleaning your engine - you're telling me you've got a solvent which is better at cutting through crap than petrol AND can survive the massive temperatures and pressures inside the combustion chamber? It's the petrol which scrubs off the crap, so yes, it's technically true. They might throw an additive or two in there, but that will only help before the combustion chamber and only if you actually have a problem. And yes, in certain newer cars with certain sensors you will get SLIGHTLY higher MPG, and therefore they advertise the maximum you'll get under ideal conditions, because no one will buy into it if you're realistic about the gains. The gains will never offset the extra cost of the fuel, however.

    PC marketing is exactly the same, and it's why the JMicron controller was such a disaster so many years ago. They pushed the advertised sequential throughput numbers as high as possible and destroyed random performance; Anand spotted it and OCZ threw a wobbler. But that experience led to drives being advertised on random performance as well as sequential.

    So what's the lesson here? We should always take manufacturers' claims with a mouthful of salt and buy based on objective criteria and independent measurements. Manufacturers will always state what is achievable in basically a lab setup with conditions controlled to perfection. Why? Because for one, you can't quote numbers based on real-life performance, because everyone's experience will differ and you can't account for the different variables they'll encounter. And for two, if everyone else is quoting the maximum theoretical potential, you're immediately putting yourself at a disadvantage by not doing so yourself. It's not about your product, it's about how well you can sell it to a customer - see: stupidly expensive Dyson hairdryer. Provides no real performance benefit over a cheap hairdryer but cost a lot in R&D and is mostly advertising wank for rich people with small brains.

    As for Intel being a greedy monopoly... welcome to capitalism. If you don't want that side effect of the system then bugger off to Cuba. Capitalism has brought society to the highest standard of living ever seen on this planet. No other form of economic operation has allowed so many to have so much. But the result is big companies like Intel, Google, Apple, etc, etc.

    Advertising wank is just that. Figures to masturbate over. If they didn't do it then sites like Anandtech wouldn't need to exist as products would always be accurately described by the manufacturer and placed honestly within the market and so reviews wouldn't be required.

    I doubt they lied completely - they will be going on the theoretical limits of their technology when all engineering limitations are removed. This will never happen in practice and will certainly never happen in a gen 1 product. Also, whilst I see this product as being pointless, it's obviously just a toe-dipping exercise like the enterprise model. Small scale, very controlled use cases, and therefore good real-world usage data to be returned for gen 2/3.

    Personally, whilst I'm wowed by the figures, I don't see how they're going to improve things for me. So what's the point in a different technology when SLC can probably perform just as well? It's a different development path which will encounter different limitations and as a result will provide different advantages further down the road. Why do they continue to build coal-fired power stations when we have CCGTs, wind, solar, nukes, etc? Because each technology has its strengths and weaknesses and encounters different engineering limitations in development. Plus, a plurality of different, competing technologies is always better, as it creates progress. You can't whinge about monopolies and then, when someone starts doing something different and competing with the established norm, start whinging about that.
  • fallaha56 - Tuesday, April 25, 2017

    hi @sarah i find that a dead hard drive also plays into responsiveness and boot times(!)

    this technology is clearly not anywhere near as good as Intel implied it was
  • CaedenV - Monday, April 24, 2017

    I have never once had an SSD fail because it has over-used its flash memory... but controllers die all the time. It seems that this will remain true for this as well.
  • Ryan Smith - Tuesday, April 25, 2017

    And that's exactly what we suspect here. We've likely managed to hit a bug in the controller's firmware. Which, to be sure, isn't fantastic, but it can be fixed.

    Prior to the P3700's launch, Intel sent us 4 samples specifically for stress testing. We managed to disable every last one of them. However, Intel learned from our abuse, and now those same P3700s are rock-solid thanks to better firmware and drivers.
  • jimjamjamie - Tuesday, April 25, 2017

    Interesting that an ad-supported website can stress-test better than a multi-billion-dollar company...
  • testbug00 - Tuesday, April 25, 2017

    based on what? Have they sent you another model?

    A sample dying on day one, and only allowing testing via a remote server, doesn't build confidence.
