At the Flash Memory Summit this week, Western Digital announced that it intends to use 3D Resistive RAM (ReRAM) as storage class memory (SCM) for its future special-purpose ultra-fast SSDs. The company did not reveal timelines or specifications for the products in question. What matters, however, is that Western Digital has decided to pair SanDisk's long-discussed ReRAM with its 3D manufacturing technology to build these special-purpose SSDs.

The amount of data that the world produces totals several zettabytes per year, which creates two challenges for the high-tech industry: storing those vast amounts of data cost-efficiently, and processing them efficiently in terms of power consumption. Modern SSDs and HDDs can store plenty of information (10 to 15 TB for top-of-the-range models) and modern CPUs can process a lot of data thanks to their increasing core counts. However, delivering the right data to those cores poses further challenges: if the necessary data resides on an HDD or SSD, fetching it takes a long time by computing standards (e.g., 100,000 – 10,000,000 ns) and consumes a lot of energy. Meanwhile, increasing the amount of DRAM per server is not always economically feasible.

To address the challenge, the industry came up with the idea of non-volatile SCM, which would sit between DRAM and storage devices and deliver much greater performance and endurance, as well as lower latency (e.g., 250 – 5,000 ns), than NAND, while costing considerably less than DRAM on a per-GB basis. Over the years, different companies have demonstrated various types of memory that could serve as SCM (originally, this class of devices was positioned as a replacement for NAND flash), including conductive-bridging RAM (CBRAM), phase-change memory (PCM), magnetoresistive RAM (MRAM), resistive RAM (ReRAM) and others. Each of these technologies has its own trade-offs in performance and cost (and none of them has beaten NAND on per-GB cost), but SanDisk has been working for years on bringing ReRAM to the market.
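
To see why such an intermediate tier is attractive, consider a back-of-the-envelope average-access-latency calculation. The following is a minimal Python sketch; the hit rates are assumptions invented for illustration, and the latencies are rough representative points from the ranges quoted above, not measurements of any particular product.

```python
# Effective access latency of a memory hierarchy: each tier serves a
# fraction of the requests that miss in the tiers above it.
# All hit rates and latencies below are illustrative assumptions.

DRAM_NS = 100          # rough order of magnitude for a DRAM access
SCM_NS = 2_500         # a point inside the 250 - 5,000 ns SCM range
NAND_NS = 1_000_000    # a point inside the 100,000 - 10,000,000 ns range

def effective_latency_ns(tiers):
    """tiers: list of (hit_rate, latency_ns); give the last tier hit_rate 1.0."""
    total, remaining = 0.0, 1.0
    for hit_rate, latency_ns in tiers:
        total += remaining * hit_rate * latency_ns
        remaining *= 1.0 - hit_rate
    return total

# Without SCM: 90% of requests hit DRAM, the rest fall through to NAND.
two_tier = effective_latency_ns([(0.90, DRAM_NS), (1.0, NAND_NS)])

# With SCM: the same DRAM hit rate, but SCM absorbs 90% of DRAM misses.
three_tier = effective_latency_ns([(0.90, DRAM_NS), (0.90, SCM_NS), (1.0, NAND_NS)])

print(f"DRAM + NAND:       {two_tier:>10,.0f} ns average")
print(f"DRAM + SCM + NAND: {three_tier:>10,.0f} ns average")
```

Under these assumed numbers, the SCM tier cuts the average access latency by roughly an order of magnitude without adding a single gigabyte of DRAM, which is the whole pitch for storage class memory.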

Fundamentally, ReRAM (also sometimes called RRAM) works by changing the resistance across a dielectric material by applying an electrical current (which is why 3D XPoint is considered by some to be a proprietary implementation of ReRAM). The resulting resistance state can then be measured and read back as a “0” or a “1”. On paper, the technology enables higher performance and endurance than NAND flash, but finding the right materials and architecture for ReRAM has taken engineers many years.
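
As a loose illustration of the read side of that description, here is a toy Python sketch; the resistance values and the threshold are invented for the example and bear no relation to the parameters of any real ReRAM cell.

```python
# Toy model of reading a resistive memory cell: the stored bit is
# inferred by comparing the measured resistance against a reference
# threshold. All values are made up for illustration.

LOW_RES_OHMS = 10_000       # low-resistance state, read as "1"
HIGH_RES_OHMS = 1_000_000   # high-resistance state, read as "0"
THRESHOLD_OHMS = 100_000    # reference point separating the two states

def read_cell(measured_ohms: float) -> int:
    """Map a measured resistance to the stored bit."""
    return 1 if measured_ohms < THRESHOLD_OHMS else 0

print(read_cell(LOW_RES_OHMS))   # -> 1
print(read_cell(HIGH_RES_OHMS))  # -> 0
```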

Without making any significant announcements this week, Western Digital indicated that it would apply some of the lessons learned while developing its BiCS 3D NAND to the production of its ReRAM chips. The company claims that its ReRAM will feature a multi-layer cross-point implementation, an approach it first disclosed some time ago.

Perhaps the most important claim Western Digital made about its 3D ReRAM concerns the scale and capital efficiency of the new memory. Essentially, this could mean that the company plans to use its manufacturing capacity as well as its infrastructure (testing, packaging, etc.) in Yokkaichi, Japan, to make 3D ReRAM. Remember that SCM is at this point more expensive than NAND; hence, it makes sense to continue using current fabs and equipment to build both types of non-volatile memory to ensure that the SCM part of the business remains profitable. IMFT does the same thing with its SCM: it uses its fab in Lehi, Utah, to produce 3D XPoint memory, but does not reveal specifics about the process technology (just like Western Digital). Of course, Western Digital could re-use some of the fundamental technologies, materials and process architecture for both ReRAM and NAND, but the company has not shared any particular details on the matter just yet.

Quite naturally, WD’s 3D ReRAM will scale in per-IC density as the number of layers increases, though we do not know how many layers initial 3D ReRAM ICs from Western Digital will incorporate. However, the company seems very optimistic about the scaling of its SCM: it believes that over time the technology will close the per-GB cost gap with BiCS NAND and pull further below DRAM, making it increasingly economically attractive.
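
The optimism is easier to follow with a rough cost sketch. The Python snippet below uses placeholder numbers only (Western Digital has disclosed no actual figures): if each added layer costs less to process than a whole single-layer wafer, cost per GB falls toward the incremental per-layer cost as the stack grows, which is the same logic that has driven down 3D NAND's per-GB cost.

```python
# Placeholder model of cost-per-GB scaling with layer count: capacity
# grows linearly with layers, while wafer cost grows more slowly, so
# cost per GB declines toward the incremental per-layer cost.

BASE_WAFER_COST = 1.0       # normalized cost of a one-layer wafer
COST_PER_EXTRA_LAYER = 0.3  # assumed incremental processing cost per layer
GB_PER_LAYER = 1.0          # normalized capacity contributed by each layer

def relative_cost_per_gb(layers: int) -> float:
    wafer_cost = BASE_WAFER_COST + COST_PER_EXTRA_LAYER * (layers - 1)
    return wafer_cost / (GB_PER_LAYER * layers)

for layers in (1, 2, 4, 8, 16):
    print(f"{layers:>2} layers: {relative_cost_per_gb(layers):.2f}x relative cost/GB")
```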

Finally, the manufacturer said that its 3D ReRAM is already supported by the ecosystem, which means that the first SSDs based on the technology will probably use industry-standard interfaces (e.g., NVDIMM), which is not surprising. Perhaps it also means that Western Digital is already working with software developers to ensure that applications can take advantage of SCM in general, but we cannot confirm this at this time.

To sum up, Western Digital has finished development of the ReRAM that SanDisk has been discussing for several years. The company plans to release actual products based on the technology in the foreseeable future (12 – 24 months from now, call it a guess) and to use the same fabs and equipment to build ReRAM and NAND ICs. Western Digital’s ReRAM also has a scaling roadmap going forward. What remains to be seen is what will happen to the joint development of SCM announced by SanDisk and HP in October 2015.

Source: Western Digital

Comments

  • BrokenCrayons - Friday, August 12, 2016

    It almost feels like we're taking a step back in order to move forward. The thought of adding yet another layer of memory/storage between the CPU and the data it needs to process to gain performance sort of runs counter to the idea of just making storage itself faster and cheaper. I realize SCM is thought of as needed to address the disparity between CPU speed and storage speed, but it's a cumbersome and inelegant approach to mitigating the problem. In fact, it highlights the number of band-aid solutions the computer industry has had to put into place over the years to deal with cost-sensitive buyers and aggressive competitors.
  • DanNeely - Friday, August 12, 2016

    The history of computer evolution can be seen as the continual adding of more cache layers, because from the very beginning it has been easier to scale compute performance than storage latency. Occasionally we shed the slowest storage layer off the bottom of the stack, as happened to tape in the consumer market during the 80s. (Some 70s/early 80s computers used audio cassettes as storage. High-density tape lasted a lot longer as an archival storage/backup system for very large enterprises.) The same thing is happening with hard drives today, with SSD-only solutions taking ever larger portions of the mainstream storage market, pushing HDDs to backups, bulk data storage, and the most cost-sensitive segments of the market. Give it a few more years and even boxmart special laptops will probably be SSD-only, with consumer HDD use limited to NASes and large-capacity USB backup devices.
  • FunBunny2 - Friday, August 12, 2016

    -- The thought of adding yet another layer of memory/storage between the CPU and the data it needs to process to gain performance sort of runs counter to the idea of just making storage itself faster and cheaper.

    I forget? what was the consensus when the cpu makers added L1 then L2 then L3 caches? to the extent that a cpu is doing real multi-user or multi-tasking or multi-programming (each having a balance of compute and I/O), then adding caching along the way is win-win. we will only know, of course, when such cpus become compute bound (hard) or stalled.
  • BrokenCrayons - Friday, August 12, 2016

    Don't misunderstand my comment. I'm very much in favor of more performance even though the solution appears to be another tier of memory. It doesn't excuse the fact that doing so will add complexity and cost when, really, a better answer would be improving storage performance so a few of these additional layers are unnecessary. boeush's comment below suggesting some form of SCM such as 3D ReRAM eventually replacing current SSD technologies is just the sort of thing that'd be more sensible in the long run.
  • djayjp - Friday, August 12, 2016

    They'll release it this year. It says right there in the slide above: "fast storage: 2016"
  • boeush - Friday, August 12, 2016

    First step toward mass adoption: hybrid SSDs with SCM buffer -- akin to the hybrid HDDs of yore.

    Next step, wholesale replacement of NAND with SCM -- initially at lower capacities and higher prices, but then asymptotically approaching parity - just as is now happening with SSDs vs HDDs.

    All this talk of an extra and additional layer between DRAM and NAND is, in my opinion, naive. Simplicity and convenience always win in the end.
  • FunBunny2 - Friday, August 12, 2016

    -- Simplicity and convenience always win in the end.

    only if the bean counters permit. they've stopped 450mm wafers for the better part of a decade, for instance. faster only matters if the user notices. in the past, we had the Wintel monopoly (really, the Wintel symbiosis), under which M$ built ever more bloated code, demanding more cycles from the cpu. Intel was happy to oblige with ever faster cpus on ever smaller nodes.

    now, for 99.44% of users (even, so called, Enterprise), a Pentium and a small SSD really is fast enough. in those old days, running the Office programs made the symbiosis viable. that whole field has been plowed to exhaustion, which is important, since the current source of demand for faster is at least an order of magnitude smaller, i.e. gamers, video editing, and ??

    what matters these days is interTubes bandwidth. not Intel's or the memory makers' bailiwick.
  • boeush - Friday, August 12, 2016

    Your thinking reminds me somewhat of those people who used to think nobody would ever need more than 1 MB of RAM...

    Part of the reason PC performance stagnated was indeed a lack of a performance-hungry killer app. I believe that the advent of VR is about to fix that. There is never such a thing as too much performance headroom where high-quality VR is concerned. Bigger and more complex AIs will push memory/CPU performance as well, and when it comes to neural networks, for instance, they'll be performance-starved essentially forever. And they'll be everywhere before long: in self-driving cars, in home robots, in games, etc.
  • Murloc - Saturday, August 13, 2016

    your examples kinda prove the point that PCs have peaked in performance, since they could all do without a traditional x86 PC, especially the car and IoT, but also the VR stuff (if it could be untethered, so much the better!).
  • jjj - Friday, August 12, 2016

    The PC is dead anyway; there is no point in even considering the PC in their vision.
    In servers, perf matters; in glasses, latency is crucial and so is power. In robots, including cars, power and latency also matter a lot. In IoT there is no good solution yet and there is an acute need for something better, as both DRAM and NAND don't fit the purpose.
