Who is the Optane SSD 900P for?

With a price per GB a little over twice that of the fastest flash-based consumer SSDs, the Optane SSD 900P is an exclusive high-end product. For most desktop usage, drives like the 960 PRO are already fast enough that storage is no longer a severe bottleneck. The most noticeable delays due to storage performance on a 960 PRO come when moving around large files, and the Optane SSD doesn't offer any significant improvement to sequential transfer speeds. Random writes can be a challenge for flash-based SSDs, but volatile write caches and SLC caches allow them to handle short bursts with very high performance.

The unprecedented random read performance of the Optane SSD 900P is its biggest strength on paper, but not one that will often lead to a proportional speedup in overall application performance. Too many programs and filesystems are still designed with mechanical hard drive performance in mind as the baseline, and further increases to SSD performance serve mainly to shift the bottlenecks further onto the CPU, RAM, network, and even the user's own reaction time.

The scenarios where a drive like the Optane SSD 900P can offer meaningful and worthwhile performance improvements can be broadly categorized as situations where the Optane SSD can help with one of two problems:

1. Storage is too slow

About the only time a desktop workload could challenge the sequential access performance of a high-end PCIe SSD (based on flash or 3D XPoint) is when dealing with high-resolution uncompressed video. The Optane SSD doesn't help much here because of its limited capacity, and the PCIe 3.0 x4 link itself is a bottleneck at the highest frame rates and bit depths. For video work, flash-based SSDs are definitely a better choice, and RAID arrays of cheaper SATA SSDs may be a better option than PCIe SSDs. Desktop workloads that require extremely high sustained random write performance are very rare, and SLC caching on a flash-based SSD nicely takes care of most realistic quantities of random writes.

That said, there are some situations where higher random read performance can be quite noticeable. Searching through a large volume of data, such as scanning through a video, is a common case, but it usually presents enough opportunities for parallelization that the drive's queue depth will climb up to the range where flash-based SSDs come close to the Optane SSD. Game level load times can in theory benefit greatly from faster read speeds, but in practice decompressing the assets after loading them into RAM quickly becomes the bottleneck. Most of the other situations where the performance advantage of the Optane SSD will really help are better described as a different kind of problem:

2. RAM is too small

In the workstation market, there are abundant examples of compute tasks with a memory working set that doesn't fit in RAM. Almost any simulation or rendering task will have a parameter for mesh density or particle count that can very quickly scale the memory requirements from a few GB to tens or hundreds of GB. An Optane SSD is far slower than four to eight channels of DDR4, but 16GB DIMMs are at least 6-7 times more expensive per GB than the Optane SSD 900P, and putting more than 128GB of DRAM in an ATX motherboard is even more expensive.
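That 6-7x figure is easy to sanity-check with a back-of-envelope calculation. The numbers below are illustrative street prices from around the time of launch (the $599 MSRP for the 480GB 900P is from Intel's announcement; the $130 figure for a 16GB DDR4 DIMM is an assumption, not a quote):

```python
# Back-of-envelope cost comparison: DRAM vs. the Optane SSD 900P as
# overflow capacity for oversized working sets.

optane_price, optane_gb = 599.0, 480   # 480GB Optane SSD 900P, launch MSRP
dimm_price, dimm_gb = 130.0, 16        # assumed price for a 16GB DDR4 DIMM

optane_per_gb = optane_price / optane_gb
dram_per_gb = dimm_price / dimm_gb

print(f"Optane: ${optane_per_gb:.2f}/GB, DRAM: ${dram_per_gb:.2f}/GB, "
      f"ratio: {dram_per_gb / optane_per_gb:.1f}x")
```

At these assumed prices the ratio comes out around 6.5x, and it only gets worse for DRAM once you need the high-density DIMMs required to exceed 128GB on a desktop board.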

Intel PR provided an example of using SideFX Houdini to render a high-resolution animation that included a 1.1 billion particle water simulation. Their test used a machine with a 10-core CPU and 64GB of RAM, and compared the 512GB Samsung 960 PRO against the 480GB Optane SSD 900P. The total memory requirements (DRAM+swap) of the rendering job were not disclosed, but the resulting 2.7x speedup is very plausible for a task that absolutely hammers the swap device. With a sufficiently high thread count to keep the queue depth high, that margin could be narrower (especially with the fastest 2TB 960 PRO), but then context switch overhead would become problematic. With the Optane SSD 900P, the random read latency is low enough that it would be hard to host more than two swap-limited threads per core without context switch overhead wasting more time than waiting on the SSD.
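The threads-per-core claim above can be sketched with a crude model. The latencies here are assumptions for illustration (roughly 10µs for an Optane random read, 100µs for a flash NVMe read, 5µs per context switch), not measurements from this review:

```python
# Rough model (assumed latencies, not measured): how many other
# swap-blocked threads a core can switch to while one thread waits on a
# random read, before context-switch time exceeds the wait being hidden.

switch_us = 5.0  # assumed CPU cost of one context switch, in microseconds

for name, read_us in [("Optane 900P", 10.0), ("flash NVMe SSD", 100.0)]:
    # Each switch burns switch_us of CPU time; once switches cost more
    # than the read they hide, adding threads stops helping.
    threads = read_us / switch_us
    print(f"{name}: ~{threads:.0f} useful switches per {read_us:.0f}us read")
```

With a ~100µs flash read there is room to hide many threads behind each outstanding I/O, but with Optane's ~10µs reads the break-even point arrives at roughly two threads per core, which is why piling on threads to raise queue depth stops paying off so quickly.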

Star Citizen Bundle

Even though gaming isn't the ideal workload for the Optane SSD 900P to show off its performance, Intel is marketing the 900P to gaming enthusiasts. They're bundling a code for the game Star Citizen with the 900P, and including a new in-game spaceship variant as an exclusive item for Optane SSD customers. Intel has partnered with Star Citizen developer Roberts Space Industries (RSI) to hold a launch event for the 900P at CitizenCon 2017 today, which they are streaming live on Twitch and YouTube. Attendees will have the chance to playtest the Intel-exclusive Sabre Raven ship, but it is still undergoing final QA and will not be immediately available to Optane SSD 900P customers. The web page for redeeming the Star Citizen game code had not gone live as of the time of writing, so I was unable to attempt any testing with the game. (ed: I remember when AMD was offering a Star Citizen bundle in 2014 as well. The game still hasn't shipped.)

At the media briefing for the 900P, an RSI representative said they are exploring ways to optimize the Star Citizen experience on Optane SSDs, but not many specifics were provided. One approach under consideration is using less compression for some game assets, freeing up CPU time but relying on high storage performance. It didn't sound like this work was close to release. In the game's current state, RSI claims they've seen load times improve by 20-25%, but they didn't specify what other storage device they were comparing against.


  • Kevin G - Monday, October 30, 2017 - link

    The software side does lag behind hardware by a substantial amount of time. However, the ground work is already being done. The initial wave of support will be mundane as a 'RAM disk' but with firmware/hypervisor/OS support so that only Optane DIMMs are utilized for this functionality. Software overhead would still exist for the file system but legacy support would be maintained. I think patches already exist for this level of functionality in Linux, though I'm unsure if they've been rolled into the mainline kernel.
  • Kevin G - Monday, October 30, 2017 - link

    Intel still has this potential on their road maps for 2018 with the Cascade Lake Xeons. Supporting Optane as memory requires some changes in the memory controllers and Intel is only targeting their Xeon lineup with such support. This was initially to arrive with Sky Lake-EP but was cut at the last minute, apparently due to some bugs found in testing. This is why there are a few Sky Lake-EP motherboards out there with an extra memory slot that can't be used: it was only for the unreleased Optane DIMMs.

    The other thing is that Optane DIMMs were NEVER hyped to be faster than commodity DRAM. Intel never set that expectation and from all accounts, Optane is genuinely slower. However, byte addressability is there, as is a strong increase in endurance, for it to function in such a role, if slower. Any sort of performance gains will stem from various ideas that you mention, like the removal of a traditional storage stack etc.

    The other side of storage is capacity, which Intel has really yet to demonstrate. Their talk was of Optane DIMMs eventually hitting 1 TB per DIMM, but the sizes here point toward capacities in DIMM format roughly the same as traditional DIMMs (128 GB right now in servers with 256 GB on the horizon). I know of a few big data guys that dream of a system that could easily support 96 TB (1 TB DIMM per slot, 12 DIMM slots per socket, 8 sockets total) that would permit their entire cluster to be run on a single node and in-memory. At this point having the Optane DIMMs be slower than DRAM wouldn't matter, as it would eliminate traditional bulk storage and networking overhead, which are slower still. The potential is huge at the high end if Intel can get the technology out in the right form factor and at the capacities they need.

    The only reason Intel is launching like this now is that they need to get the technology out there and ramp up production. If it weren't for the Sky Lake bug, they would have launched the DIMM format by now.
  • 4shrovetide - Sunday, October 29, 2017 - link

    If someone picks up one of these and doesn't play games or just doesn't want the Star Citizen code, would you mind sending it to me? 4shrovetide@gmail.com Thank you in advance to anyone who helps out!
  • AnnonymousCoward - Sunday, October 29, 2017 - link

    What a copycat name, 900P! Like 950 Pro.
  • Kevin G - Sunday, October 29, 2017 - link

    I'm disappointed overall.

    The latency advantage is genuinely there, as is random performance (of which latency is itself a factor), but sequential performance falls short of the hype. What is disappointing as well is that only the 280 GB drive is going to be offered in U.2 format, and capacities top out at 480 GB even on the add-in card model. The real ugly factor is power consumption, which, to Intel's credit, they weren't hyping up prior to launch, but it is high relative to other SSD solutions.

    The biggest promise of 3D XPoint/Optane is in DIMM form factor with byte addressability. Intel delayed that last minute with Sky Lake-EP and told people to expect it with Cannon Lake-EP. It looks like Cannon Lake-EP is being delayed into 2019 due to 10 nm issues, so we're getting a 2018 refresh of Sky Lake-EP called Cascade Lake with the missing Optane DIMM support added back in. The hype of Optane was that while radically slower than DRAM, you do get nonvolatile support and a massive capacity increase, everything else being equal. The performance equation does change as operating systems and applications are adapted to an all in-memory centric view (i.e. the concept of long term disk storage is removed, everything is seen as 'in-memory'). It isn't that Optane magically becomes faster but simply that a chunk of software necessary to work in today's view of fast volatile memory and slow persistent memory is no longer needed. It is simply an opportunity to gain in algorithmic efficiency by not having a traditional storage stack. This effect can be seen again if Optane DIMM sizes are well beyond those of DDR4 DIMMs and used in conjunction with large socket counts (think 8 or more) that could replace some clustered workloads, removing the networking stack from the performance equation.

    The really big disappointment is that this launch doesn't point toward living up to the remaining hype at all. The lackluster capacity today certainly implies that the DIMM sizes necessary to threaten DRAM may not happen. 128 GB registered ECC DDR4 LR-DIMMs are out there today and 256 GB models are on the horizon. From the looks of it, Optane could match the 256 GB capacities in DIMM form and certainly come in cheaper, but that wouldn't be as large of a game changer. Sure, the software changes for a pure in-memory system could still happen, but it wouldn't enable any new workload that couldn't be done via current software and memory capacities. Tried and proven will win out even if it is more expensive, because it is a known quantity that works.
  • "Bullwinkle J Moose" - Monday, October 30, 2017 - link

    Any thoughts on Advanced Persistent Threats that will be lingering around when 3D Xpoint/Optane is in DIMM form factor ?

    Seems no-one has yet been willing to address the issue

    I would never consider persistent memory for just that reason alone

    And, swapping boot drives or restoring backups won't help it seems

    Any comments on the issue?
  • Kevin G - Monday, October 30, 2017 - link

    While a threat may persist in non-volatile memory, it still needs to be executed, which is invoked from the host system. Cleansing an Optane DIMM may be as simple as putting it into a system that is programmed to immediately wipe said Optane DIMM. There will always need to be a means to do some initial configuration/initialization, which would be embedded at the firmware level. In other words, the DIMMs don't have to be running an OS for them to be securely erased.

    Similarly, moving a DIMM from one system to another system is also possible, though the default should be to do nothing. As weird as it is, there exists the possibility of moving a running application from system to system by this method. This goes to your point about security. Thus any system capable of hot swap, or of detecting a newly installed DIMM after a power cycle, should not actually access the contents of that DIMM until given instructions to access it.
  • "Bullwinkle J Moose" - Monday, October 30, 2017 - link

    But wiping the DIMM defeats its very purpose for existing..............PERSISTENCE!

    Kaspersky found out how bad malware can be when it only runs in system memory and never touches a disk; networking the infected systems added persistence to the threat

    If you need to wipe the DIMM or disconnect from any networked machines, you eliminate any tiny perceived benefit this technology "could" give you over the tech we already have

    I say "Tiny" benefit only as it relates to the "massive" threat it can create from being persistent
  • regis440 - Monday, October 30, 2017 - link

    Faster than PC133 SDRAM. Sign of the times :)
  • jrs77 - Monday, October 30, 2017 - link

    General purpose storage starts at a very minimum of 1TB these days. 2TB would be more appropriate with the ever growing file sizes of high resolution video and image files.

    480GB is filled up with a handful of game installations already these days. So these SSDs are only usable as OS/software disks, and for that the price is way too high.

    Call me again when SSD-prices drop to ~ $100 / TB. Then we're talking usability as general storage drives.
