The Samsung 983 ZET and related Z-NAND drives are meant to deliver higher performance than any other flash-based SSD currently available. Thanks to the innate benefits of SLC NAND and Samsung's further efforts to optimize the resulting Z-NAND for reads and writes, the company has put together what is undoubtedly some of the best-performing NAND we've ever seen. But is this enough to give the company and its Z-NAND-based drives an edge over the competition, both flash and otherwise?

Compared to other flash-based enterprise SSDs, the 983 ZET certainly provides better performance than is otherwise possible for drives of such low capacity. Its random read performance is unmatched by even the largest and most powerful TLC-based drives we've tested so far. But Z-NAND offers little advantage for sustained write performance, so the small capacity and low overprovisioning ratio of the 983 ZET leave it at a disadvantage compared to similarly priced TLC drives. However, even when its throughput is unimpressive, the 983 ZET never fails to provide very low latency and excellent QoS that no other current flash-based SSD can beat.

While the 983 ZET is an excellent performer by the standards of flash-based SSDs, those aren't its primary competition. Rather, Intel's Optane SSDs are, and in almost every way the 983 ZET falls short of the Optane drives that motivated Samsung to develop Z-NAND. Samsung wasn't really aiming quite that high with its Z-SSDs, so the more important question is whether the 983 ZET comes close enough, given that it is about 35% cheaper per GB based on current online pricing. (Volume pricing may differ significantly, but is not generally public information.)

Whether the 983 ZET is worthwhile or preferable to the Optane SSD DC P4800X depends heavily on the workload. The Optane SSD provides great performance on almost any workload regardless of the mix of reads and writes, and its latency is low and consistent. By comparison, the Samsung 983 ZET's strengths are very narrowly concentrated: it is basically all about random read performance, where its maximum throughput is significantly higher than the Optane SSD's while still being attainable at reasonably low latency and queue depths. Some massive TLC-based enterprise SSDs also get close to 1M random read IOPS, but only at extremely high queue depths. The 983 ZET also offers better sequential read throughput than the Optane SSD, but there are far cheaper drives that can do the same.
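The relationship between queue depth, latency, and throughput here can be sketched with Little's law (IOPS = outstanding requests / mean latency). The latency figures below are illustrative assumptions, not measurements from this review:

```python
def iops(queue_depth, mean_latency_s):
    """Little's law: sustained IOPS achievable at a given concurrency,
    assuming each request completes in mean_latency_s on average."""
    return queue_depth / mean_latency_s

def qd_needed(target_iops, mean_latency_s):
    """Queue depth required to sustain target_iops at a given mean latency."""
    return target_iops * mean_latency_s

# Hypothetical low-latency SLC-class drive: 16 us mean read latency
# reaches 750k IOPS at only QD 12.
print(iops(12, 16e-6))

# Hypothetical TLC drive at 100 us mean latency needs QD 100
# of outstanding I/O to sustain 1M IOPS.
print(qd_needed(1_000_000, 100e-6))
```

This is why "1M IOPS, but only at extreme queue depths" is a much weaker claim than the same throughput at modest queue depths: real applications rarely keep hundreds of requests in flight per drive.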

The biggest problem for the 983 ZET is that its excellent performance only holds up for extremely read-intensive workloads; it doesn't take many writes to drag performance down. This is because Z-NAND is still afflicted by the need for wear leveling and complicated flash management with very slow block erase operations. On sustained write workloads, those background processes become the bottleneck. Intel's 3D XPoint memory allows in-place modification of data in fine-grained chunks, which is why its write performance doesn't fall off a cliff when the drive fills up. It would be interesting to see how much this performance gap between Z-NAND and 3D XPoint can be alleviated by overprovisioning, but there's not a lot of room to add to the BOM of the 983 ZET before it ends up matching the price of the Optane SSD DC P4800X.
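As a rough sketch of the overprovisioning math at play (the capacities below are placeholder figures, not the 983 ZET's actual raw NAND allocation):

```python
def op_ratio(raw_gb, usable_gb):
    """Spare-area ratio: fraction of raw NAND beyond the user-visible capacity,
    available to the controller for garbage collection and wear leveling."""
    return (raw_gb - usable_gb) / usable_gb

# Hypothetical drive: 1024 GB of raw NAND exposed as 960 GB usable,
# giving roughly 7% overprovisioning.
print(round(op_ratio(1024, 960), 3))

# Reaching 28% OP (a common heavy-overprovisioning configuration) without
# adding NAND to the BOM means shrinking usable capacity instead:
usable_at_28_pct = 1024 / 1.28
print(round(usable_at_28_pct))  # usable GB from the same raw NAND
```

Either path is costly: adding spare NAND raises the bill of materials toward Optane pricing, while cutting usable capacity raises the effective price per GB.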

Power efficiency is usually not a big concern for use cases that call for a premium SSD like the 983 ZET or an Optane SSD, but the Samsung 983 ZET does well here, thanks in part to the Samsung Phoenix controller it shares with Samsung's consumer product line. The Phoenix controller is designed to work within the constraints of an M.2 SSD in a battery-powered system, so it uses far less power than most high-end enterprise-only SSD controllers. The 983 ZET does consistently draw a bit more power than the TLC-based 983 DCT, but both still have competitive power efficiency in general. On the random read workloads where the 983 ZET offers unsurpassed performance, it also has a big power efficiency advantage over everything else, including the Intel Optane SSDs.

In the long run, Samsung is still working to develop their own alternative memory technologies; they've publicly disclosed that they are researching Spin-Torque Magnetoresistive RAM (ST-MRAM) and phase change memories, so Z-NAND may end up being more of an interim technology to fill a gap that will hopefully be better served by a new memory in a few years. But in the meantime, Z-NAND does have a niche to compete in, even if it's a bit narrower than the range of use cases that Intel's Optane SSDs are suitable for.


  • jabber - Tuesday, February 19, 2019 - link

    I just looked at the price in the specs and stopped reading right there.
  • Dragonstongue - Tuesday, February 19, 2019 - link

    Amen to that LOL
  • FunBunny2 - Tuesday, February 19, 2019 - link

    well... if one were to run a truly normalized RDBMS, i.e. 5NF and thus substantially smaller footprint compared to the common NoSQL flatfile alternative, this could be quite competitive. but that would require today's developers/coders to stop making apps just like their COBOL granddaddies did.
  • FreckledTrout - Tuesday, February 19, 2019 - link

    I have no idea why you are talking about coding and database design principles, as they do not apply here at all. To carry your tangent along, if you want to make max use of an SSD, you denormalize the hell out of the database and spread the load over a ton of servers, i.e. NoSQL.
  • FunBunny2 - Tuesday, February 19, 2019 - link

    well... that does keep coders employed forever. writing linguine code all day long.
  • FreckledTrout - Tuesday, February 19, 2019 - link

    Well it still is pointless in this conversation about fast SSDs. What spaghetti code has to do with that I have no idea. Sure, they can move to a cloud-native way of designing applications using microservices et al., but what the hell that has to do with fast SSDs baffles me.
  • FunBunny2 - Tuesday, February 19, 2019 - link

    " What spaghetti code has to do with that I have no idea. "

    well... you can write neat code against a 5NF datastore, or mountains of linguine to keep all that mound of redundant bytes from biting you. again, let's get smarter than our granddaddies. or not.
  • GreenReaper - Wednesday, February 20, 2019 - link

    They have at least encouraged old-school databases to up their game. With parallel queries on the back-end, PostgreSQL can fly now, as long as you give it the right indexes to play with. Like any complex tool, you still have to get familiar with it to use it properly, but it's worth the investment.
  • FunBunny2 - Wednesday, February 20, 2019 - link

    "They have at least encouraged old-school databases to up their game. "

    well... if you actually look at how these 'alternatives' (NoSql and such) to RDBMS work, you'll see that they're just re-hashes (he he) of simple flat files and IMS. anything xml-ish is just another hierarchical datastore, i.e. IMS. which predates RDBMS (Oracle was the first commercial implementation) by more than a decade. hierarchy and flatfile are the very, very old-school datastores.

    PG, while loved because it's Open Source, is buggy as hell. been there, endured that.

    anyway. the point of my comments was simply aimed at naming a use-case for these sorts of devices, nothing more, since so many comments questioned why it should exist. which is not to say it's the 'best' implementation for the use-case. but the use-case exists, whether most coders prefer to do transactions in the client, or not. back in your granddaddies' day, terminals (3270 and VT-100) were mostly dumb, and all code existed on the server/mainframe. 'client code' existed in close proximity to the 'database' (VSAM files, mostly), sometimes in the same address space, sometimes just on the same machine, and sometimes on a tethered machine. the point being: with today's innterTubes speed, there's really no advantage to 'doing transactions in the client' other than allowing client-centric coders to avoid learning how to support data integrity declaratively in the datastore. the result, of course, is that data integrity is duplicated both places (client and server) by different folks. there's no way the database folks, DBA and designer, are going to assume that all data coming in over the wire from the client really, really is clean. because it almost never is.
  • GruffaloOnVacation - Thursday, March 18, 2021 - link

    FunBunny2 you sound bitter, and this is the sentiment I see among the old school "database people". May I suggest, with the best of intentions for us all, that instead of sneering at the situation, you attempt to educate those who are willing to learn? I've been working on some SQL in my project at work recently, and so have read a number of articles, and parts of some database books. There was a lot of resentment and sneering at the stoopid programmers there, but no positive programme of action proposed. I'm interested in this subject. Where should I go for resources? Which talks should I watch? What books to read? Let's build something that is as cool as something you described. If it really is THAT good, once it is built - they will come, and they will change!
