Products

During the event, Intel and Micron made it clear that this week's announcement is solely about the underlying 3D XPoint technology. Products based on the new technology will follow sometime next year, and while the companies were quite tight-lipped when it came to details, they did give away a few hints. First of all, the cooperation between Intel and Micron exists only at the memory technology level: both companies are developing their own 3D XPoint based products, similar to how the two have operated in the SSD/NAND business. Technically this means the two will be competing against each other, although it's possible that each company will take a unique approach to utilizing 3D XPoint in an end product.

One takeaway from the presentation and Q&A was Intel's emphasis on NVMe. Intel has been a strong advocate of the technology ever since its inception; as a matter of fact, Intel was the first SSD vendor to ship NVMe SSDs in high volume with the introduction of the DC P3700 and its derivatives last year. While NVMe has mostly been associated with NAND so far, since NAND is the mainstream non-volatile memory, the core architecture was built to scale with future memory technologies with even lower latencies (after all, NVMe stands for Non-Volatile Memory Express). Given that software interfaces tend to stick around for at least a decade, it's obvious that NVMe had to be designed with more than just NAND in mind.

With NVMe, it's certain that we will see 3D XPoint based PCIe SSDs. Whether these will be add-in cards or 2.5" drives remains to be seen, but I'm more inclined to say add-in cards (at least initially) because of connector limitations. U.2 (formerly SFF-8639) supports only four PCIe 3.0 lanes, resulting in effective real-world bandwidth of about 3.2GB/s. NAND is already capable of saturating that for read operations, so even though 3D XPoint would improve write and random IO performance, its full potential would ultimately go unused without a higher bandwidth interface. An add-in card doesn't share the limitations of U.2 and could support up to 16 lanes with over 10GB/s of bandwidth available, but the downside would be more limited serviceability, since add-in cards can't be front-loaded the way 2.5" drives can. As enterprises have used add-in cards in the past (Fusion-io never made anything but add-in cards), I don't see serviceability being a major hurdle for the companies that really need 3D XPoint for their workloads. On the other hand, I wouldn't be surprised to see Intel pushing for an 8-lane U.2-like standard, though it would need industry-wide support to get off the ground.
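To put rough numbers on that, PCIe 3.0 signals at 8GT/s per lane with 128b/130b encoding, so the link limits can be estimated with a few lines of arithmetic. The sketch below is an illustrative back-of-the-envelope calculation; the 80% protocol efficiency figure is my assumption for overhead, not a spec value:

```python
# Back-of-the-envelope PCIe 3.0 bandwidth estimate.
def pcie3_bandwidth_gbs(lanes, protocol_efficiency=1.0):
    """Approximate PCIe 3.0 link bandwidth in GB/s for a given lane count."""
    transfers_per_s = 8e9          # 8 GT/s per lane
    encoding = 128 / 130           # 128b/130b line encoding
    bits_per_s = lanes * transfers_per_s * encoding
    return bits_per_s / 8 / 1e9 * protocol_efficiency

print(pcie3_bandwidth_gbs(4))                           # ~3.94 GB/s raw (U.2, x4)
print(pcie3_bandwidth_gbs(4, protocol_efficiency=0.8))  # ~3.15 GB/s, near the ~3.2GB/s real-world figure
print(pcie3_bandwidth_gbs(16))                          # ~15.75 GB/s raw (x16 add-in card)
```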

With Intel being the other party in the joint venture, it's guaranteed that 3D XPoint will get all the support it needs on the platform side. Intel can integrate more PCIe lanes and/or accelerate the development of PCIe 4.0 for its upcoming platforms to create the necessary bandwidth for 3D XPoint if needed, which is something no other memory vendor could do.


AgigA's DDR4 NVDIMM: A Future 3D XPoint Form Factor?

While Intel will clearly be pursuing the storage aspect of 3D XPoint through NVMe, I suspect Micron might take a more memory-like approach, since it's a memory company as much as it is a storage company. It was made clear that 3D XPoint can be used in both memory and storage applications because the technology is bit-addressable and can work in a similar fashion to DRAM. Bringing 3D XPoint closer to the CPU and connecting it through a DDR4 interface would obviously yield the best performance and eliminate any bottlenecks that PCIe has. There are already NAND-based products that do this, such as SanDisk's ULLtraDIMM, and a couple of months ago JEDEC paved the way by releasing a standard for DDR4 NVDIMMs, set to fill the gap between DRAM and SSDs. While NVDIMMs will require driver work due to the lack of a standardized software interface like NVMe, I do believe 3D XPoint is the right technology for bringing NVDIMMs to the market, and it would make sense for Micron to do so.
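To illustrate what memory-like access would mean for software, here is a minimal sketch of how an application might use a byte-addressable NVDIMM on Linux: map the device into the address space and issue ordinary loads and stores rather than block I/O. The device path is hypothetical, and the actual driver model for 3D XPoint NVDIMMs is exactly the part that remains to be defined:

```python
import mmap
import os

# Hypothetical persistent-memory device exposed by the OS; the real
# path and driver interface for 3D XPoint NVDIMMs are not yet known.
PMEM_PATH = "/dev/pmem0"

fd = os.open(PMEM_PATH, os.O_RDWR)
pmem = mmap.mmap(fd, 4096)  # map one page of persistent memory

# Plain in-memory stores -- no read/write syscalls, no block layer.
pmem[0:16] = b"persistent state"

# Flush buffers so the data is durable across power loss.
pmem.flush()
```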

Applications

Section by Ryan Smith

The use cases for 3D XPoint are potentially significant in number, and Intel/Micron believe that it will open the doors for all sorts of new applications. The computing industry has had access to high-speed non-volatile memory technologies before – magnetic core memory is the traditional poster child here – so there is some precedent, along with some fundamental research into the field from the early days of computing. However, with magnetic core memory having become outmoded before the majority of our readers were even born, the modern computing industry has developed around the current paradigm of fast DRAM and slow permanent storage. As a result, while the potential applications are numerous, this is still in many ways an uncharted area in computer science.

The most immediate application of 3D XPoint based products will be as a layer of storage between DRAM and SSDs. Over the history of computing, the number of layers between storage and processors has continued to grow – multiple layers of on-die caches, off-die caches, caching SSDs, etc. – and 3D XPoint memory would fit into that hierarchy as a storage medium that bridges the gap between DRAM and the current fastest non-volatile storage. By treating 3D XPoint memory as another layer of cache, 3D XPoint can be used to further speed up applications that are currently bound by either memory capacity or storage latency.

The Traditional Memory Hierarchy (Image Source: Tommy MacWilliam, Harvard)

Given the costs of 3D XPoint, the first such applications are expected to be on the enterprise side. Enterprise users make heavy use of storage at all layers in order to balance performance needs against the relatively small capacity of DRAM. Database servers in particular adapt well to caching, and it’s easy enough to imagine a next-generation database system using 3D XPoint to backstop DRAM. Since 3D XPoint is non-volatile, it can even be an exclusive cache – that is, its contents don’t need to be in lower layers as well – which eliminates a good deal of overhead. A database system in this context would only need to write contents to SSDs and other, lower layers of storage when data gets expelled from the 3D XPoint cache, an occurrence that may be particularly rare with a properly tuned database.
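As a minimal sketch of that exclusive caching pattern (illustrative Python, not any actual database engine's design), note how the lower layer is written only on eviction, and how a promoted entry is removed from the lower layer so that exactly one copy exists at a time:

```python
from collections import OrderedDict

class ExclusiveCache:
    """LRU cache where each entry lives in exactly one tier at a time."""

    def __init__(self, capacity, backing):
        self.capacity = capacity    # entries the fast tier can hold
        self.tier = OrderedDict()   # stands in for the 3D XPoint layer
        self.backing = backing      # stands in for the SSD layer (a dict here)

    def put(self, key, value):
        self.tier[key] = value
        self.tier.move_to_end(key)
        if len(self.tier) > self.capacity:
            # Exclusive cache: the SSD layer is written only on eviction.
            old_key, old_value = self.tier.popitem(last=False)
            self.backing[old_key] = old_value

    def get(self, key):
        if key in self.tier:
            self.tier.move_to_end(key)
            return self.tier[key]
        # Miss: promote from the SSD layer and remove it there,
        # preserving the single-copy (exclusive) invariant.
        value = self.backing.pop(key)
        self.put(key, value)
        return value
```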

Many of these benefits of a cache layer are applicable to other types of storage-heavy servers as well, though I expect databases will be king. Perhaps the more interesting aspect – and certainly more relatable to the public at large – will be what 3D XPoint-backed servers are used for. Intel and Micron are eager to point out the “big science” uses for the technology; projects and systems such as the Large Hadron Collider and Oak Ridge’s Titan supercomputer generate massive amounts of data, and while processing all of that data is first and foremost a processor issue, feeding the data in for processing is a big problem as well. Any kind of analysis that would benefit from individual processors having RAM-like access to an SSD-sized pool of data stands to gain.

The catch is that there’s still a lot of research needed to figure out what the best uses may be. This kind of shift in access times and capacity doesn’t just make computers faster – it can change which algorithms are best. Just as GPUs required scientists to figure out how to spread their work out in a massively parallel (and high latency) manner, putting 3D XPoint to its full use will require new algorithms that can effectively exploit direct access to so much data at once.

Meanwhile I would be surprised if the financial industry didn’t jump on this early, as it is prone to adopting major technologies quickly in order to gain an edge in a highly competitive and lucrative field. In this aspect it’s not so much that 3D XPoint would improve processing speed – such work is already offloaded to large RAM pools when possible – but rather that it would enable traders and analysts to run simulations against much larger datasets far more effectively.

As for the consumer space, the same principles about an additional cache layer would apply, but I’m not so sure consumers would pick it up in the same manner. Much of this has to do with the eventual costs and capacities of 3D XPoint products, as consumers are much more price sensitive than professional users. In the consumer space we’ve seen sporadic use of NAND-backed hard drives, for example, but by and large consumers have stuck with discrete SSDs and HDDs. Consumers either don’t want to pay the premium for SSDs, or have enough money to just buy large SSDs outright, leaving little middle ground.

That said, I’ve seen some interesting pitches for 3D XPoint in the gaming space that have some merit, as games are something of a special case among consumer workloads. By and large we want fast access to game resources, since those resources are accessed on demand and are needed for a game’s execution to progress, yet most of that data isn’t volatile. Only a small part of a game’s working set is volatile – player positions, AI decision trees, game state, etc. – while the rest is static data such as models, world geometry, and textures. 3D XPoint in turn would be fast enough to stand in for RAM in holding these assets, and since the data is largely static it wouldn’t burn through 3D XPoint P/E cycles very quickly, while any write speed disadvantage compared to DRAM would be immaterial.
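A rough sketch of that split, with hypothetical paths and names: the small volatile working set stays in DRAM as ordinary program state, while the large static asset pool is mapped read-only out of a 3D XPoint-backed store and sliced out on demand:

```python
import mmap
import os

# Volatile working set: small, frequently rewritten, stays in DRAM.
game_state = {"player_pos": (12.5, 3.0), "ai_targets": [], "score": 0}

# Static assets: large and effectively read-only, so serving them from
# a 3D XPoint-backed store (hypothetical mount point and file) would
# barely consume P/E cycles, and write speed wouldn't matter.
fd = os.open("/mnt/xpoint/assets.pak", os.O_RDONLY)
size = os.fstat(fd).st_size
assets = mmap.mmap(fd, size, prot=mmap.PROT_READ)

# Textures, models, and geometry are sliced out on demand rather than
# preloaded into DRAM.
texture = assets[0:65536]
```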

But again, this is going to depend on the cost of the technology; if it were to become cheap enough that 50-100GB could be thrown into a game console or gaming PC, then the entirety of most games could be stored in 3D XPoint memory, reducing load times to little more than the time required to process the data and set up the game state. This would matter most on consoles, which currently store their games on mechanical drives: they could recall data far more quickly on first boot, or lean on 3D XPoint for the heavy memory swapping of more detailed titles. High-end PCs with large amounts of DRAM can already use RAM disks, which somewhat nullifies the point there.

Last but not least of course are the implications for 3D XPoint as a wholesale replacement for DRAM. The more limited lifetime of 3D XPoint relative to DRAM certainly poses some challenges in this respect, but I suspect the bigger issue will be overall bandwidth. By the time 3D XPoint becomes available in bulk, DRAM technology should be to the point where faster generations of DDR4 are available and HBM is widely deployed. Given that future generations of HBM are targeting 1TB/sec or more of memory bandwidth, it’s unlikely that 3D XPoint will be able to match the bandwidth of contemporary high-bandwidth DRAM solutions. So any rumors of the impending death of DRAM are likely premature.


IoT & Embedded, A Good Fit For 3D XPoint?

But with that said, while 3D XPoint isn’t likely to replace DRAM wholesale for all applications, there is clearly room for it to replace DRAM in situations where DRAM is used primarily for its bandwidth and latency advantage over solid state storage. Replacing DRAM with 3D XPoint in embedded applications, for example, would be very practical – many embedded uses don’t need high bandwidth or low latency so much as they just need something better than traditional NAND – and I wouldn’t rule out smartphones here either, at least to an extent. If individual 3D XPoint chips can be produced small and cheap enough, then the most lucrative use for the tech as a DRAM replacement may be in the vast legions of low-performance devices, rather than in high-performance hardware that actually needs the full speed and latency of DRAM.

Comments

  • Ian Cutress - Saturday, August 1, 2015

    Most likely they're wanting to protect their investment and not let the cat out of the bag for others to copy. Keeping IP and industry secrets close to the chest is part of the game, especially if there's 10 years of funding behind it. That's why we don't get any insights at all into things like Qualcomm's Adreno graphics - they want us to consider it a black box, and that's all they're willing to say on the issue.

    There may be something legal too. Can't discount that for sure.
  • jjj - Saturday, August 1, 2015

    Don't confuse the public with the competition. Why they hide from the public, ask their IR and marketing. Their competitors know a lot more, and a lot sooner, than you imagine. When corporations claim "competitive reasons" it's a flat out lie 99.99% of the time. Here, once they start sampling there is nothing to hide anymore, and they'll do that soon enough. There have already been rumors about the tech, and some might be working on controllers for the thing already, so the relevant competitors might have all the info they need. Samsung has been involved in plenty of scandals over the years, and Toshiba is in the middle of one right now, so don't imagine for a minute that big corporations have any kind of ethics or that they won't do what they need to do to obtain info.
    Micron has its summer analyst day on August 14 and will disclose more then; it remains to be seen how much.
  • Tunnah - Saturday, August 1, 2015

    This post literally gave me a headache.

    Damn I wish I was smarter. Although from what I could...grasp (and I use that term so incredibly loosely) it looks awesome.

    One question I had though: if it's faster by a large margin than NAND, and more reliable, does that mean the introductory pricing will push enterprise SSD costs down, or simply be artificially inflated so as not to damage the profit margins from that sector?
  • Ian Cutress - Saturday, August 1, 2015

    It's difficult to say at this point, as it depends on which product segment will exploit XPoint the most. If we're looking at an intermediary for database applications, it might need a change at the hardware level and certainly at the software level, and be sold differently from storage. If it's acting as an SSD replacement, you'll most likely see it sold at a premium over 3D NAND technologies, and the market will adjust accordingly. There's also the aspect of competition, and whether anyone else will have something in this space soon.
  • Kristian Vättö - Monday, August 3, 2015

    Just to add to Ian's comment, there was also a private "Meet the Architect" Q&A after the webinar with Micron's VP of R&D and one of Intel's Senior Fellows, and the two went into great detail about how PCM never ended up being viable as a DRAM replacement due to scaling issues.
  • witeken - Friday, July 31, 2015

    How about PCMS? A very well-informed article from a number of weeks ago predicted it would be PCMS. The author makes a very strong case, and that was before the announcement.

    http://seekingalpha.com/article/3253655-intel-and-...
  • name99 - Friday, July 31, 2015

    Obsessing about this is idiotic.
    Intel/Micron is avoiding certain language because that language has an unfortunate past (cf Windows Vista becomes Windows 7 --- "is it Windows Vista? No no no, Completely new OS"...)

    Whether it's phase change or not (or whether changing the material from one state to another counts as a phase change) is utterly irrelevant to anyone except the manufacturer. It's like if Intel announced 3D-NAND and the question everyone felt worth asking was what color the masks are.

    The questions that DO matter are the user-facing questions --- performance (read and write), power, cost, reliability, form factor.
  • Ian Cutress - Saturday, August 1, 2015

    As an end-user, yes it doesn't ultimately matter what the underlying technology is.
    As an analyst interested in the science behind the industry, or if you were a financial investment agent looking into the market to see which technologies are keeping which companies in growth figures with potential market share adjustments, it's an absolute must-know.

    User facing questions are about how the product is used. Business facing questions are about how the product fits in, and the technology behind it. Research related questions are about exploiting fundamental laws of physics in different ways, regardless of the name. All of these questions matter, even if you're not involved in the latter two segments.
  • Refuge - Monday, August 3, 2015

    This is Anandtech right? I didn't click on the wrong link?

    I thought this site existed solely because we all obsess over the latest tech, and appreciate knowing how the nitty gritty's all work together. ;)
  • KateH - Friday, July 31, 2015

    This seems like it would complement HBM well. If an APU were made with "only" 4-8GB of on-package memory, but could use swap space on an XPoint partition, the performance hit from paging could be pretty minor.
