Intel's Caching History

Intel's first attempt at using solid-state memory for caching in consumer systems was Intel Turbo Memory, a mini-PCIe card with 1GB of flash to be used by the then-new Windows Vista features ReadyDrive and ReadyBoost. Promoted as part of the Intel Centrino platform, Turbo Memory was more or less a complete failure. The cache it provided was far too small and too slow—sequential writes in particular were much slower than a hard drive. Applications were seldom significantly faster, though in systems short on RAM, Turbo Memory made swapping less painfully slow. Battery life could sometimes be extended by allowing the hard drive to spend more time spun down at idle. Overall, most OEMs were not interested in adding more than $100 to a system for Turbo Memory.

Intel's next attempt at caching came as SSDs were moving into the mainstream consumer market. The Z68 chipset for Sandy Bridge processors added Smart Response Technology (SRT), an SSD caching mode for Intel's Rapid Storage Technology (RST) drivers. SRT could be used with any SATA SSD, but cache sizes were limited to 64GB. Intel produced the SSD 311 and later SSD 313 as caching-optimized SSDs, pairing low capacity with relatively high-performance SLC NAND flash. These SSDs started at $100 and had to compete against MLC SSDs that offered several times the capacity for the same price—enough that the MLC SSDs were becoming reasonable options for general-purpose storage without any hard drive at all.

Smart Response Technology worked as advertised but was very unpopular with OEMs, and it didn't really catch on as an aftermarket upgrade among enthusiasts. The rapidly dropping prices and increasing capacities of SSDs made all-flash configurations more and more affordable, while SSD caching still required extra work to set up and small cache sizes meant heavy users would still frequently experience uncached application launches and file loads.

Intel's caching solution for Optane Memory is not simply a re-use of the existing Smart Response Technology caching feature of their Rapid Storage Technology drivers. It relies on the same NVMe remapping feature added to Skylake chipsets to support NVMe RAID, but the caching algorithms are tuned for Optane. The Optane Memory software can be downloaded and installed separately without including the rest of the RST features.

Optane Memory caching has quite a few restrictions: it is only supported with Kaby Lake processors, and it requires a 200-series chipset or an HM175, QM175 or CM238 mobile chipset. Only Core i3, i5 and i7 processors are supported; Celeron and Pentium parts are excluded. Windows 10 64-bit is the only supported operating system. The Optane Memory module must be installed in an M.2 slot that connects to PCIe lanes provided by the chipset, and some motherboards have M.2 slots that do not support Optane Memory caching or RST RAID. The drive being cached must be SATA, not NVMe, and only the boot volume can be cached. Lastly, the motherboard firmware must have Optane Memory support in order to boot from the cached volume. Motherboards with the necessary firmware will include a UEFI tool to unpair the Optane Memory cache device from the backing drive being cached, but this can also be done from the Windows software.
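The restrictions above amount to a short compatibility checklist, which can be sketched as a simple function. This is only an illustration of the published requirements; the function name, parameter names, and chipset strings are our own labels, not any real Intel API:

```python
# Sketch of Intel's stated Optane Memory compatibility rules (illustrative only).
# 200-series desktop chipsets plus the three supported mobile chipsets:
SUPPORTED_CHIPSETS = {"Z270", "H270", "B250", "Q270", "Q250",
                      "HM175", "QM175", "CM238"}

def optane_memory_supported(cpu_family, cpu_brand, chipset, os_name,
                            m2_slot_uses_chipset_lanes,
                            cached_drive_interface, is_boot_volume):
    """Return True if a configuration meets Intel's stated requirements."""
    if cpu_family != "Kaby Lake":
        return False
    if cpu_brand not in {"Core i3", "Core i5", "Core i7"}:
        return False  # Celeron and Pentium parts are excluded
    if chipset not in SUPPORTED_CHIPSETS:
        return False
    if os_name != "Windows 10 64-bit":
        return False
    if not m2_slot_uses_chipset_lanes:
        return False  # M.2 slot must hang off the chipset's PCIe lanes
    if cached_drive_interface != "SATA":
        return False  # NVMe drives cannot be cached
    return is_boot_volume  # only the boot volume can be cached
```

For example, a Kaby Lake Core i5 on Z270 with a SATA boot drive passes, while the same system with a Skylake processor or an NVMe drive to be cached does not.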

Many of these restrictions are arbitrary and enforced in software. The only genuine hardware requirement seems to be a Skylake 100-series or later chipset. The release notes for the final production release of the Optane Memory and RST drivers even list among the fixed issues the removal of the ability to enable Optane caching with a non-Optane NVMe cache device, and of the ability to enable Optane caching with a Skylake processor on a 200-series motherboard. Don't be surprised if these drivers get hacked to provide Optane caching on any Skylake system that can do NVMe RAID with Intel RST.

Intel's latest caching solution is not being pitched as a way of increasing performance in high-end systems; for that, they'll have full-size Optane SSDs for the prosumer market later this year. Instead, Optane Memory is intended to provide a boost for systems that still rely on a mechanical hard drive. It can be used to cache access to a SATA SSD or hybrid drive, but don't expect any OEMs to ship such a configuration—it won't be cost-effective. The goal of Optane Memory is to bring hard drive systems up to SSD levels of performance for a modest extra cost and without sacrificing total capacity.

Comments

  • YazX_ - Monday, April 24, 2017 - link

    "Since our Optane Memory sample died after only about a day of testing"

    LOL
  • Chaitanya - Monday, April 24, 2017 - link

    And it is supposed to have an endurance rating 21x higher than a conventional NAND SSD.
  • Sarah Terra - Monday, April 24, 2017 - link

    Funny yes, but teething issues aside the random write performance is several orders of magnitude faster than all existing storage mediums. This is the number one metric I find that plays into system responsiveness, boot times, and overall performance, and the most ignored metric by all OEMs to date. They all go for sequential numbers, which don't mean jack except when doing large file copies.
  • ddriver - Monday, April 24, 2017 - link

    So let's summarize:

    1000 times faster than NAND - in reality only about 10x faster in hypetane's few strongest points, 2-6x better in most others, maximum throughput lower than consumer NVMe SSDs; intel lied about speed by about 200 times LOL. Also from Tom's review, it became apparent that until the cache of comparable enterprise SSDs fills up, they are just as fast as hypetane, which only further solidifies my claim that xpoint is NO BETTER THAN SLC, because that's what those drives use for cache.

    1000 times the endurance of flash - in reality like 2-3x better than MLC. Probably on par with SLC at the same production node. Intel lied by about 300-500 times.

    10 times denser than flash - in reality it looks like density is actually way lower than flash's. 400 gigs in what... like 14 chips, was it? Samsung has planar flash (no 3D) that has more capacity in a single chip.

    So now they step forward to offer this "flash killer" as a puny 32 gb "accelerator" which makes barely any improvement whatsoever and cannot even make it through one day of testing.

    That's quite exciting. I am actually surprised they brought the lowest capacity 960 evo rather than the p600.

    Consumer grade software already sees no improvement whatsoever from going SATA to NVMe. It won't be any different for hypetane. Latency at low queue depth access is good, but that's mostly the controller here; in this aspect NAND SSDs have tremendous headroom for improvement. Which is what we are most likely going to see in the next generation of enterprise products; obviously it makes zero sense for consumers, regardless of how "excited" them fanboys are to load their gaming machines with terabytes of hypetane.

    Last but not least - being exclusive to intel's latest chips is another huge MEH. Hypetane's value is already low enough at the current price and limited capacity, the last thing that will help adoption is having to buy a low value intel platform for it, when ryzen is available and offers double the value of intel offerings.
  • Drumsticks - Monday, April 24, 2017 - link

    Your bias is showing.

    1000x -> Harp on it all you want, but that number was for the architecture, not the first generation end product. It represents where we can go, not where we are. I'll also note that Tom's gave it their editor approved award - "As tested today with mainstream settings, Optane Memory performed as advertised. We observed increased performance with both a hard disk drive and an entry-level NVMe SSD. The value proposition for a hard drive paired with Optane Memory is undeniable. The combination is very powerful, and for many users, a better solution than a larger SSD."

    "1000 times the endurance of flash -> You can concede that 3D XPoint density isn't as good as they originally envisioned, but it's still impressive, gen1, and has nowhere to go but up. It's not really worse than other competing drives per drive capacity - this cache supports like 3 DWPD basically. The MX300 750GB only supports like .3 DWPD. 10x better is still good.

    10 times denser than flash -> DRAM, not Flash. And it's going to be much denser than DRAM.

    Barely any to no improvement -> LOL, did you look at the graphs? Those lines at the bottom and on the left were 500GB and 250GB Sata and NVMe drives getting killed by Optane in a 32GB configuration. 3D XPoint was designed for low queue depth and random performance - i.e. things that actually matter, where it kills its competition. Even sequential throughput, which is far from its design intention, generally outperforms consumer drives.

    So, Optane costs, in an enterprise SSD, 2-3x more than other enterprise drives, for record breaking low queue depth throughput that far surpasses its extra cost, while providing 10-80x less latency. In a consumer drive, Optane regularly approaches an order of magnitude faster than consumer drives in only a 32GB configuration.

    If Optane is only as fast as SLC, I'd love to understand why the P4800X broke records as pretty much the fastest drive in the world, barring unrealistically high queue depths.

    This 32GB cache might be a stopgap, and less compelling of a product in general because of its capacity, but that you could deny the potential that 3D XPoint holds is absolutely laughable. The random performance and low queue depth performance is undeniably better than NAND, and that's where consumer performance matters.
  • ddriver - Monday, April 24, 2017 - link

    "I'd love to understand why the P4800X broke records"

    Because nobody bothered to make a SLC drive for many many years. The last time there were purely SLC drives on the market it was years ago, with controllers completely outdated compared to contemporary standards.

    SLC is so good that today they only use it for cache in MLC and TLC drives. Kinda like what intel is trying to push hypetane as. Which is why you can see SSDs hitting hypetane IOPs with inferior controllers, until they run out of SLC cache space and performance plummets due to direct MLC/TLC access.

    I bet my right testicle that with a comparable controller, SLC can do as well and even better than hypetane. SLC PE latencies are in the low hundreds of NANOseconds, which is substantially lower than what we see from hypetane. Endurance at 40 nm is rated at 100k PE cycles, which is 3 times more than what hypetane has to offer. It will probably drop as process node shrinks but still.

    "10x better is still good"

    Yet the difference between 10x and 1000x is 100x. Like imagine your employer tells you he's gonna pay you 100k a year, and ends up paying you a 1000 bucks instead. Surely not something anyone would object to LOL.

    I am not having problems with "10x better". I am having problems with the fact it is 100x less than what they claimed. Did they fail to meet their expectations, or did they simply lie?

    I am not denying hypetane's "potential". I merely note that it is nothing radically better than NAND flash that has not been compromised for the sake of profit. xpoint is no better than SLC nand. With the right controller, good old, even ancient and almost forgotten SLC is just as good as intel and micron's overhyped love child. Which is kinda like reinventing the wheel a few thousand years later, just to sell it at a few times what it's actually worth.

    My bias is showing? Nope, your "intel inside" underpants are ;)
  • Reflex - Monday, April 24, 2017 - link

    SLC has severe limits on density and cost. It's not used because of that. Even at the same capacity as these initial Optane drives it would likely cost considerably more, and as Optane's density increases there is no ability to mitigate that cost with SLC; it would grow linearly with the amount of flash. The primary mitigations already exist: MLC and TLC. Of course those reduce the performance profile far below Optane and decrease its ability to handle wear. Technically SLC could go with a stacked die approach, as MLC/TLC are doing, but nothing really stops Optane from doing the same, making that at best a neutral comparison.
  • ddriver - Monday, April 24, 2017 - link

    SLC is half the density of MLC. Samsung has 2 TB of MLC worth in 4 flash chips. Gotta love 3D stacking. Now employ epic math skills and multiply 4 by 0.5, and you get a full TB of SLC goodness, perfectly doable via 3D stacked nand.

    And even if you put 3D stacking aside, which if I am not mistaken the sm961 uses planar MLC, 2 chips on each side for a full 1 TB. Cut that in half, you'd get 512 GB of planar SLC in 4 modules.

    Now, I don't claim to be that good in math, but if you can have 512 GB of SLC nand in 4 chips, and it takes 14 for a 400 GB of xpoint, that would make planar SLC OVER 4 times denser than xpoint.

    Thus if at planar dies SLC is over 4 times denser, stacked xpoint could not possibly be better than stacked SLC.

    Severe limits my ass. The only factor at play here is that SSDs are already faster than needed in 99% of the applications. Thus the industry would rather churn MLC and TLC to maximize the profit per grain of sand being used. The moment hypetane begins to take market share, which is not likely, they can immediately launch SLC enterprise products.

    Also, it should be noted that there is still ZERO information about what the xpoint medium actually is. For all we know, it may well be SLC, now wouldn't that be a blast. Intel has made a bunch of claims about it, none of which seemed plausible, and most of which have already turned out to be a lie.
  • ddriver - Monday, April 24, 2017 - link

    *multiply 2 by 0.5
  • Reflex - Monday, April 24, 2017 - link

    You can 3D stack Optane as well. That's a wash. You seem very obsessed with being right, and not with understanding the technology.
