77 Comments


  • Eden-K121D - Sunday, March 19, 2017 - link

    Meh. A huge downgrade from their initial presentation. Reply
  • Billy Tallis - Sunday, March 19, 2017 - link

    I think the problem is mainly that Intel didn't do enough to emphasize the difference between the performance of the memory cell itself versus the performance achievable from an entire drive with a controller and with protocol overhead. The PCIe bus doesn't really allow for a 1000x improvement in latency over existing NVMe SSDs. We'll have to wait for the 3D XPoint DIMMs before we can conclusively declare that they've missed that performance goal.

    It does look like the endurance is falling well short of their projections, but the density has pretty much hit the target.
    Reply
  • eSyr - Sunday, March 19, 2017 - link

    > PCIe bus doesn't really allow for a 1000x improvement in latency over existing NVMe SSDs

    Bullshit. PCIe 3.0 has a round-trip latency of less than 300 ns and sustained bandwidth of more than 11 GB/s for 64-bit reads and more than 15 GB/s for 4K reads; that's more than the sustained bandwidth of a single DDR4 channel.
    Reply
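    As a rough sanity check on those figures, the raw PCIe 3.0 x16 link ceiling can be computed (a sketch with assumed link parameters; TLP headers and flow control lower the achievable number in practice):

```python
# Rough ceiling for PCIe 3.0 x16 throughput (assumed figures: 16 lanes at
# 8 GT/s with 128b/130b encoding; protocol overhead lowers this further)
lanes = 16
gigatransfers = 8e9           # 8 GT/s per lane, one bit per transfer
encoding = 128 / 130          # 128b/130b line encoding
gbytes_per_s = lanes * gigatransfers * encoding / 8 / 1e9
print(f"{gbytes_per_s:.2f} GB/s")  # ~15.75 GB/s raw link ceiling
```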
  • IntelUser2000 - Sunday, March 19, 2017 - link

    Yes, the hardware latency might be low, but it's the software part that slows it down. Even Intel's own presentations show this.

    https://www.google.ca/search?q=3d+xpoint&clien...

    Besides, if there truly was no difference between having it on a DIMM format and PCI Express, why not simplify things and put DDR4 on PCI Express as well?
    Reply
  • ddriver - Sunday, March 19, 2017 - link

    PCIe is really serial bulk transfer, whereas RAM is... well, random access. Plus there is the additional interface overhead: rather than having the MC directly connected to the CPU, it will have to go through the PCIe protocol too. Plus it eats into the CPU's IO bandwidth, to the point of starving the rest of the system.

    It will likely be faster in DIMM mode, not so much because it avoids the bottleneck of PCIe per se, but because of the increased parallelism and the direct connection to the CPU via the MC.
    Reply
  • Bullwinkle J Moose - Sunday, March 19, 2017 - link

    All I want is long term storage

    Give me a 20TB external X-Point backup drive with a write protect switch and enough onboard RAM to handle temporary writes while in READ ONLY mode

    ....and I want it for less than $200

    Otherwise, just give me a PCIe 4.0 X 16 X-Point Drive for gaming with 30GB/sec load speeds for the same price

    Using this for persistent RAM makes no sense to me
    persistent malware will be there forever

    Latencies should improve "somewhat" over Flash based SSD's for gaming (regardless of interface) but I'm looking for better load times and LONG TERM storage!

    AT A MUCHHHHH LOWER PRICE!
    Reply
  • kawmic - Monday, March 20, 2017 - link

    I'm with you man! Way too expensive. Reply
  • clamor - Tuesday, March 28, 2017 - link

    If X-Point is so close in speed to RAM, why not eliminate RAM altogether? No loading needed, just access it directly off the storage device. Reply
  • Samus - Sunday, March 19, 2017 - link

    Serial memory interfaces have come and gone (Rambus), but HBM is serial. The bus is just ultra wide to make up for it. However, latency is definitely slower, and that needs to be compensated for by the memory controller (currently, GPUs).

    I think Intel's approach here won't be limited by PCIe. Not in the least. Current XPoint is turning out to be pretty low performance from what we were all (I think) speculating. It turns out its benefits are durability (30 DWPD!) and low latency, due to a simplified indirection table and a read/write architecture that writes to cells, not pages. This is very cool, but is simply out of the realm of most platforms. Even PCIe SSDs are overkill for most applications, because unless you are hosting a database or manipulating large files (large random IO), a SATA3 SSD already performs well enough to host an OS and applications. After all, the entire Windows 10 OS can be transferred over SATA3 in 6 seconds, and you only need to read about 1GB into memory to complete a boot to login.

    Most large game maps are 2-3GB total (textures to load into VRAM) and they can only be done 3-5 seconds faster by the fastest mainstream NVMe drive over the fastest mainstream SATA3 drive (1800MB/sec vs 560MB/sec)

    That's the most reasonable scenario a desktop user is going to notice a large enough difference to justify the cost of NVMe.

    This will change in the future as programs get larger, but the cost of NVMe still isn't justifiable for the majority of applications.
    Reply
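    A quick back-of-the-envelope check on the load-time delta claimed above (sizes and speeds are the illustrative figures from the comment, assuming a purely sequential, storage-bound load):

```python
# Time to stream a game map from disk at the quoted sequential speeds
map_mb = 3000            # ~3 GB of textures, per the comment
sata_mb_s = 560          # fast mainstream SATA3 SSD
nvme_mb_s = 1800         # fast mainstream NVMe SSD

sata_s = map_mb / sata_mb_s
nvme_s = map_mb / nvme_mb_s
print(f"SATA {sata_s:.1f} s, NVMe {nvme_s:.1f} s, saved {sata_s - nvme_s:.1f} s")
```

    That lands at roughly 3.7 seconds saved, consistent with the "3-5 seconds" range quoted.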
  • TelstarTOS - Sunday, March 19, 2017 - link

    "pretty low performance" has to be seen yet. If the numbers at QD1 and QD2 are about half those announced at QD16 it will be awesome in a real world scenario. Reply
  • beginner99 - Monday, March 20, 2017 - link

    Exactly. Especially for a client PC. For the first time in a long time, one will actually be able to see a difference between SSDs. But yeah, obviously not worth the price yet.

    The real deal for consumers, as far as we know, will be 32-64 GB cache drives. But it looks like the software needs some serious work for this to happen, as UEFI support for sure is a must on consumer devices.
    Reply
  • beginner99 - Monday, March 20, 2017 - link

    The advantage of 3D XPoint over NAND and a traditional SATA drive for the consumer would be the much better latency at low QD. SATA drives usually reach their max throughput at QD32, which never ever happens on a client PC, where QD1 and maybe QD2 performance matters, and here 3D XPoint supposedly should shine, just like DRAM. Albeit not having numbers yet is a bit suspicious. Reply
  • Bullwinkle J Moose - Monday, March 20, 2017 - link

    "Most large game maps are 2-3GB total (textures to load into VRAM) and they can only be done 3-5 seconds faster by the fastest mainstream NVMe drive over the fastest mainstream SATA3 drive (1800MB/sec vs 560MB/sec)"
    -------------------------------------------------------------
    You mean most large "current" maps
    This tech should be fast enough by the time 200 Gigabyte Photo Realistic game maps for 4-8K VR headsets arrive

    It may not be PCIe when that happens but it should be MUCH faster than current tech

    It needs to be built for the future, not the present
    Reply
  • Bullwinkle J Moose - Monday, March 20, 2017 - link

    Last reply was to Samus, but NoScript is causing replies to end up in the wrong spot

    An Edit or delete post function would be pretty sweet
    Reply
  • prisonerX - Sunday, March 19, 2017 - link

    It's already there, it's called DMA. Reply
  • ddriver - Sunday, March 19, 2017 - link

    DMA stands for direct memory access, but it only means that the device has the capability to directly access memory; it doesn't mean it is as fast as working with memory.

    The point of DMA is to improve performance by avoiding unnecessary copies. That's all it really saves.

    USB 3.1 supports DMA, but that doesn't magically make it as fast as memory; it is still limited by its physical interface and its protocol overhead. Thunderbolt is still much faster, both in terms of bandwidth and latency, because it is de facto PCIe, and as such is not bottlenecked by a narrow PHY and protocol overhead.
    Reply
  • prisonerX - Tuesday, March 21, 2017 - link

    Yes, but I wasn't agreeing with him, just noting that memory is on the PCIe bus like everything else.

    PCIe is the future universal bus.
    Reply
  • fangdahai - Sunday, March 19, 2017 - link

    Power consumption? A DDR4 channel cannot provide enough power for XPoint. Reply
  • alysdexia - Sunday, March 19, 2017 - link

    was when? Reply
  • Krysto - Sunday, March 19, 2017 - link

    > The PCIe bus doesn't really allow for a 1000x improvement in latency over existing NVMe SSDs.

    No, the issue from the beginning was that Intel compared 3D XPoint with the slowest possible "NAND chip" (think microSD, rather than SSD) on the market. And from that point of view, it was actually kind of correct. But Intel knew very well it was misleading everyone this way, because they knew nobody would actually think Intel was referring to the slowest possible microSD chip on the market that existed in 2015, or whenever Intel announced this.

    All Intel does these days is mislead as much as possible to make themselves look innovative.
    Reply
  • woggs - Monday, March 20, 2017 - link

    Right. Billions in investment and many years of R&D is just misleading. Reply
  • witeken - Sunday, March 19, 2017 - link

    What? If this was a regular SSD, it would get praised into heaven. Go look at Ars Technica, they have some more info and slides. Reply
  • close - Sunday, March 19, 2017 - link

    Well everything is compared to the expectations. And they were really built up by Intel. So it looks like an awesome product that just falls short of what Intel hyped. Reply
  • ddriver - Sunday, March 19, 2017 - link

    Oh wow, I was 100% right about my skepticism while folk like you were going into an admiration frenzy. Who would have thought :)

    As time passes and Intel reveals more technological details, I expect my technical predictions about the source of the improvements will turn out to be true as well, namely that the actual storage process is nothing too fancy, and the superior throughput, endurance and latency come from more of the good old parallelism, over-provisioning and caching.

    But I guess when you are right, you are right. About it being an awesome product, even if barely incremental, mediocre and not worth the money, because it is an Intel product, and as such it is intrinsically awesome in the eyes of folk like you ;)
    Reply
  • prisonerX - Sunday, March 19, 2017 - link

    I bet their codename for this drive was Yawnsville. Reply
  • investlite - Sunday, March 19, 2017 - link

    Lol, not really right at all. This is gen 1 of this technology. What were the benchmarks of gen 1 NAND? As they clean up the manufacturing processes and refine the product I expect it will blow NAND out of the water. Don't celebrate prematurely, you'll end up looking like the fool you think everyone else is. Reply
  • ddriver - Sunday, March 19, 2017 - link

    The two are not proportional. You can't expect to see the same kind of improvement over time. 1st-gen Optane already incorporates a lot of already developed and available technology. 2nd gen will certainly be only a minor increment, as gen 1 is already based on mature technology with only a tiny subset of it that is really "new".

    It is people like you who celebrate prematurely, and keep hoping even in the face of the hype crashing down hard.

    I am not saying Optane could not be made much better. On the contrary, it easily could, but so could flash. Yet in both cases, it will not be a product of technology maturing, but of increasing complexity and capability. Technology is never really as good as it can be, it is only as good as the industry needs it to be to make the most money on it. There is no point for the industry to get ahead of itself, as neither it, much less consumers, needs that; they are much better off making barely incremental upgrades, milking every step as much as possible before moving on to the next.

    Enough time has passed since the initial silicon for Intel to make several iterations of the process, so I doubt we will see huge improvements there. Besides, what makes it fast is in all likelihood not the medium but the controller. And I guess the reason they are so secretive about the medium is not because it is something exceptional, but exactly because it is not. Revealing that it is nothing much more special than an improved controller would kill the magic and destroy the hype.
    Reply
  • alysdexia - Sunday, March 19, 2017 - link

    will -> shall
    fast -> swift
    Reply
  • ddriver - Sunday, March 19, 2017 - link

    It seems that the storage medium is somewhere between SLC and MLC.

    30 DWPD over 3 years is about 30k P/E cycles, 50k for 5 years. SLC is 100k P/E cycles.

    Latency is in the range of 50 to 200 microseconds for the controller + medium. In comparison, SLC for the medium alone is like 100-200 nanoseconds. Granted, that's just the media, but that number is also like 1000 times better, so even if we factor in the delay of the controller, a sufficiently advanced controller + SLC could go much lower than Optane.

    I personally would love to see the industry churn out stacked SLC modules at a larger process node for optimal endurance. SLC is good enough to annihilate XPoint and is tried and true technology, whose only disadvantage is low density, which can be overcome by vertical stacking. Besides, judging by what Intel has for XPoint at 20 nm, their density isn't anywhere near their claims of superiority, and it wouldn't be even if they scaled it down to 10 nm.
    Reply
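    The DWPD arithmetic above checks out (full-drive writes per day times days in the rating period; this simple sketch ignores write amplification and over-provisioning):

```python
# Rated full-drive writes over the warranty period ~= implied P/E cycles per cell
def total_drive_writes(dwpd, years):
    return dwpd * 365 * years

print(total_drive_writes(30, 3))  # 32850 -> "about 30k" P/E cycles
print(total_drive_writes(30, 5))  # 54750 -> "about 50k" over 5 years
```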
  • melgross - Monday, March 20, 2017 - link

    Some of you guys are seriously shortsighted. If you really believe the nonsense you're spouting, that's really surprising.

    This is different enough from NAND to enable far better performance over time. NAND is nearing its performance limit, as industry experts keep stating. It's just a stop on the road.
    Reply
  • ddriver - Monday, March 20, 2017 - link

    Nah, we are just competent, whereas clueless chumps like you buy the 1000x better hype. The 1000x better turned out to be about 2x better than pre-existing tech at 1/5 the price.

    2x better at 5x the price. Groundbreaking. Upcoming-gen SSDs will easily match that at much better prices, therefore offering far superior value.
    Reply
  • drajitshnew - Monday, March 20, 2017 - link

    "Because it is Intel". They should be in India. Go to amazon.in, you will find that intel 600p costs MORE than Samsung 960 evo and twice as much as MydidigitalSsd BPXe. All for the same 512 GB Reply
  • wyewye - Monday, March 20, 2017 - link

    Indeed, what a disappointment. They promised a revolution; instead we got a minor increment. At a cheap price. For a revolutionary product I would have expected way higher prices than $1500.

    It looks like Intel is learning from AMD's strategy. It's sad.
    Reply
  • Chaitanya - Sunday, March 19, 2017 - link

    Looks like the hype train has seriously crashed on the whole Optane technology. I thought it was going to end up as vaporware. Reply
  • Drumsticks - Sunday, March 19, 2017 - link

    Has it really though? Sure, the performance of an actual drive is well short of the actual memory, but for 3-4x more than an SSD, you're getting better endurance, a DRAMish mode, and SIGNIFICANTLY better consistency and latency.

    It may not be the showstopper they wanted, but it certainly has a place, and 3D Xpoint v2 will likely be even more impressive.
    Reply
  • willis936 - Sunday, March 19, 2017 - link

    It's nothing without intel dedicating pins to a new bus without the PCIe limitations. Everything else is a gimped cheap imitation of their initial promise. Reply
  • ddriver - Sunday, March 19, 2017 - link

    "you're getting better endurance, a DRAMish mode, and SIGNIFICANTLY better consistency and latency"

    But do you really?

    It is quite easy to create benchmarks that focus on showcasing particular uarch improvements, but that rarely translates into real-world improvements, if ever.

    It would be foolish to expect that Intel's marketing benches will reflect real-world scenarios, even more so after they got caught over-hyping and over-pimping it.

    Or maybe when Intel said "better than SSD" they meant "better than our pathetic 600p". As I have mentioned repeatedly, NAND-based SSDs built on existing process, technology and implementation have tremendous untapped potential for performance increases, simply because there is no need for it. If the industry chose to, it could easily make an SSD 10 times faster than the best we have today, and they could make it tomorrow, and it wouldn't cost as much as Intel is charging for Hypetane.
    Reply
  • Drumsticks - Sunday, March 19, 2017 - link

    I like to think that if an SSD 10 times faster than what we have today could be made for less than Optane costs (3-4x pricier than other enterprise drives), one of the many competitors in the world probably would have done it.

    Internet comments are often worth more than the billions in R&D spent by Intel, Micron, Samsung, and countless others. You should let them know they're leaving 10x more performance on the table and that they can have it tomorrow for <4x more price. I bet they'd give you royalties, too.
    Reply
  • ddriver - Sunday, March 19, 2017 - link

    They are not leaving it on the table. They are holding it back so they can offer it drop by drop over the course of many years. 10 times faster would be easy, and would have a negligibly higher cost per gigabyte. You could do 100x just as well. But why bother? Nobody can utilize that, there is no hardware to support it, and the enterprise is still capable of getting that and more by means of parallelism and increased availability.

    It is quite foolish to believe the industry puts the best it has, or could do, on the table. Barely incremental is their usual game, and they rarely take a break from it.
    Reply
  • woggs - Monday, March 20, 2017 - link

    Ugh. Companies deliver their best and then charge for it, unless there is NO competition. Nobody holds back. Reply
  • ddriver - Monday, March 20, 2017 - link

    What a gullible fella you are :) Reply
  • eddman - Monday, March 20, 2017 - link

    What, you're now a NAND and Optane expert/engineer too?

    What do you base your "10 times faster" claim on and why do you claim at the same time that optane cannot improve just as much? How are you so 100% sure?
    Reply
  • ddriver - Monday, March 20, 2017 - link

    Technology is technology you fool. Brands and monikers are irrelevant. If you know how tech works, you know how tech works, regardless of who makes it and what silly label they slap on it.

    "why do you claim at the same time that optane cannot improve just as much" - evidently you need to work on skills as basic as reading and paying attention. I've said at least several times both technologies can easily be improved significantly. The point was not that optane could not be improved, the point was the kind of improvements that give it its puny edge can easily be applied to nand SSDs at a better cost. And that optane has no intrinsic technological advantage because of xpoint, as SLC nand is already far superior in terms of latency and endurance. All we need is stacked SLC modules and improved controllers, and optane is obsolete.
    Reply
  • eddman - Tuesday, March 21, 2017 - link

    "You fool"; classic. Keep it up.

    So you know EXACTLY how xpoint works and what its capabilities are. I suppose the entire industry is stupid for thinking that NAND is near its limits and they keep working on new storage tech.

    You know nothing.
    Reply
  • ewitte - Tuesday, March 28, 2017 - link

    Both SATA and NVMe NAND max out at about 10-12,000 IOPS at QD1, which is where this shines; I believe Optane can reach 90,000. This will make it FEEL like a bigger difference than bandwidth alone would suggest. Reply
  • drajitshnew - Monday, March 20, 2017 - link

    There is a HELL of a difference between 3x and 1000x. At least that's the maths taught to me. Reply
  • bug77 - Sunday, March 19, 2017 - link

    Promising, but everybody keep in mind this is a first-generation product. So yes, there are caveats. At the same time, if Intel can lower the latency to DRAM levels, this will be such a game changer that the industry will need the better part of a decade to adapt. A lot of XPoint's future depends on execution, though. So the waiting is still not over. Reply
  • haukionkannel - Sunday, March 19, 2017 - link

    Good! First products are coming to the market. Could be really useful in data centers and other high write intensive operations! Reply
  • YazX_ - Sunday, March 19, 2017 - link

    So as usual they lied; it does have a write limit, similar to SSDs. But the most important question is the $1520 for 375 GB. Not sure what the hell they are smoking, but at that price you can buy 2 TB SSDs and let them burn out after 10 years. Reply
  • A5 - Sunday, March 19, 2017 - link

    The companies this is targeted at aren't getting 10 years out of an SSD though. And they're willing to pay for the extra performance. Reply
  • KAlmquist - Sunday, March 19, 2017 - link

    Agreed. In addition, the fact that they are only releasing the smallest capacity drive now suggests that the manufacturing capacity for XPoint memory is very limited right now. I expect that once they are able to ramp up manufacturing, they will drop the price in order to generate more sales, but for now a high price makes sense because if they had a lower price they'd probably get more orders than they could fill. Reply
  • drajitshnew - Monday, March 20, 2017 - link

    Now, that's insightful. Reply
  • zodiacfml - Sunday, March 19, 2017 - link

    Based on the price, it looks to me that performance is right where they want it to be. Reply
  • Krysto - Monday, March 20, 2017 - link

    25x slower than RAM? Reply
  • lilmoe - Sunday, March 19, 2017 - link

    I'm sure lots will do a good job bashing the hyperbole surrounding Intel's claims for the product, so I won't be going there. That said, the memory drive part was rather interesting. I'm wondering...

    - How many CPU cycles will the whole operation eat up?
    - If it's faster than conventional paging, will it be possible to dedicate a portion of the drive for Memory Drive operation, while using the rest for normal storage (assuming there will be future consumer m.2 form factors)?
    - Why the heck is it a paid add-on? Intel should supply the software as a package with the drive.
    - Would it allow for better future hybernation? I'm assuming since all system memory is stored on the drive, and the active portion is being cached to actual DRAM, wouldn't that mean that memory state is constant regardless of power failure?

    Guess we'll have to wait for quite a while before everything is clear.
    Reply
  • alysdexia - Sunday, March 19, 2017 - link

    hibernation Reply
  • lilmoe - Sunday, March 19, 2017 - link

    The no edit button thing should be known here, so I couldn't be bothered to correct.... Reply
  • alysdexia - Sunday, March 19, 2017 - link

    their !-> is; has !-> they; 1 != 2, dolt.
    fastest -> swiftest
    fast:free::swift:slow::quick:qualm::hasty:laggy::speedy:idle::fleet:laden
    purely -> sheerly
    larger -> greater
    large:rare::great:small:big:lite::mickel:littel
    Reply
  • fanofanand - Monday, March 20, 2017 - link

    Congratulations on proofreading the article AND all of the comments! Now have a cookie and go quietly to the corner. Reply
  • sor - Sunday, March 19, 2017 - link

    Wow, based on the comments it seems Intel has an uphill battle on their hands in getting consumers to understand this product. This is a huge leap in performance. I'd expect AnandTech readers to realize this, based on AnandTech generally doing a good job of showing that SSDs have a hard time hitting their advertised IOPS without throwing tons of workload at them, and can barely keep 10k IOPS when they're busy. Being able to do an order of magnitude more IOPS with light loads, and keep that performance when run for more than a few minutes, is quite valuable.

    Although, I guess the target market is not really consumers at this point, so it kicks the problem down the road a bit. Still, I'd think that high IOPS at low queue depths would get desktop users excited; SSDs have basically hit a wall for consumer workloads.
    Reply
  • jabber - Sunday, March 19, 2017 - link

    Love how so many here think Intel just spent tens of millions of dollars to make stuff for guys living in their parents' basements to do benchmarks on all day. Reply
  • cocochanel - Wednesday, March 22, 2017 - link

    +1! Reply
  • LordanSS - Sunday, March 19, 2017 - link

    Sorry to disappoint you, but for gaming this won't bring you any advantage whatsoever.

    I've posted this in the past, but I've already tested playing games from a RAM drive compared to my 850 EVO. I didn't notice any difference at all in loading times, although I didn't have a stopwatch with me to see if tenths of a second would be shaved off, heh.

    In the end, games are not optimized for this sort of storage. SSDs are a decent boost over mechanical HDDs, but beyond that we don't really see anything (yet).
    Reply
  • dealcorn - Sunday, March 19, 2017 - link

    This drive is not targeted at consumers and their reaction is not very relevant. I am curious about the level of market demand from the data center. Reply
  • Meteor2 - Monday, March 20, 2017 - link

    I imagine it will be very popular for big databases like Oracle and SAP. Presumably it'll mainly end up in those big multi-socket servers which use Xeon E7s. Reply
  • Anonymous Blowhard - Monday, March 20, 2017 - link

    > I am curious about the level of market demand from the data center.

    Allow me to summarize:

    HOLY CRAP WE NEED THIS WHERE CAN WE BUY IT WHAT THE HELL DO YOU MEAN IT ONLY COMES IN 375GB BIGGER FASTER MORE MORE MORE

    That's about how we're reacting to it.
    Reply
  • vladx - Monday, March 20, 2017 - link

    And you shouldn't have expected improved load times; the bottleneck is usually the CPU, which quickly falls behind the rest. Reply
  • kawmic - Monday, March 20, 2017 - link

    Hmmmmm. What lower price?? Reply
  • BrokenCrayons - Monday, March 20, 2017 - link

    Well, the endurance estimates are nice, but still a fair bit lower than I was expecting. It's obviously priced too high for me to bother with for a client system (and in a form factor I can't really use) so at this point, the endurance doesn't really matter. Reply
  • Guspaz - Monday, March 20, 2017 - link

    So the endurance is worse than enterprise flash SSDs (only 33,000 writes based on the rated endurance/warranty), the throughput is worse than flash SSDs, the price is dramatically higher than flash SSDs, and the IOPS are on par with enterprise NVMe flash SSDs. What's the use case for 3D XPoint? Unless you really need byte-level addressability, it seems to generally be on par with or worse than flash in every respect. Reply
  • vladx - Monday, March 20, 2017 - link

    There aren't even many enterprise SSDs that are guaranteed for 30 DWPD. Reply
  • Guspaz - Monday, March 20, 2017 - link

    But not far off. Intel's P3700 is rated for 17 DWPD, and when you normalize the warranty length to match the Optane drive, that becomes 28.33 DWPD, which is pretty damned close to 30. Reply
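    The normalization works out as stated (a sketch using the DWPD and warranty figures quoted in the thread: total rated writes stay fixed, so DWPD scales with the ratio of warranty lengths):

```python
# Convert the P3700's 17 DWPD over a 5-year warranty into an equivalent
# DWPD rating over the Optane drive's shorter 3-year warranty window
p3700_dwpd, p3700_years = 17, 5
optane_years = 3

equivalent_dwpd = p3700_dwpd * p3700_years / optane_years
print(f"{equivalent_dwpd:.2f} DWPD over {optane_years} years")  # 28.33 DWPD
```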
  • vladx - Tuesday, March 21, 2017 - link

    You did read that the warranty will extend to 5 years when the entire lineup is released, right? Reply
  • Guspaz - Tuesday, March 21, 2017 - link

    Not on the products sold before that point, and considering the price premium, you could still get comparable endurance and performance for less than the price of the Optane SSD. About the only advantage the Optane drive has is absolute latency, and I really doubt that matters for regular storage uses when we're talking about such small latencies to begin with.

    My point is that we were promised DRAM-like write endurances, and yet they're rating these things at endurances that are worse than SLC, and are barely beating out MLC.
    Reply
  • Shadowmaster625 - Monday, March 20, 2017 - link

    4 µs? Ugh. Why aren't these on a DIMM?? Reply
  • timbotim - Monday, March 20, 2017 - link

    Maybe the enterprise chaps are happy with this, I wouldn't know. But it does seem to have been over-hyped. 2x the performance of the SSD 750 where it counts (QD1 read and sequential; shame about the QD1 write) for most of us (desktoppers), at what I would guess is 3x the price. I'll wait a generation or 2... Reply
  • jabber - Tuesday, March 21, 2017 - link

    Thing is Intel didn't design this for you to play Battlefield or CoD on. Reply
