Mixed Random Performance

Our test of mixed random reads and writes covers mixes varying from pure reads to pure writes at 10% increments. Each mix is tested for up to 1 minute or 32GB of data transferred. The test is conducted with a queue depth of 4, and is limited to a 64GB span of the drive. In between each mix, the drive is given idle time of up to one minute so that the overall duty cycle is 50%.
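For readers who want to approximate this workload themselves, the sweep is straightforward to script. Below is a minimal sketch driving fio; the device path, job names, and the choice of fio/libaio are assumptions for illustration, not a description of our exact test harness.

```python
import subprocess
import time

DEVICE = "/dev/nvme0n1"  # hypothetical drive under test

# Sweep from pure reads (100% reads) to pure writes (0% reads)
# in 10% increments.
for read_pct in range(100, -1, -10):
    start = time.monotonic()
    subprocess.run([
        "fio",
        f"--name=mix{read_pct}",
        f"--filename={DEVICE}",
        "--direct=1",               # bypass the page cache
        "--ioengine=libaio",
        "--rw=randrw",              # mixed random reads and writes
        f"--rwmixread={read_pct}",  # percentage of reads in the mix
        "--bs=4k",                  # 4kB accesses
        "--iodepth=4",              # queue depth 4
        "--size=64g",               # confine the test to a 64GB span
        "--io_size=32g",            # stop after 32GB transferred...
        "--runtime=60",             # ...or one minute, whichever is first
    ], check=True)
    # Idle for as long as the mix ran (capped at one minute) so the
    # overall duty cycle works out to roughly 50%.
    busy = time.monotonic() - start
    time.sleep(min(busy, 60.0))
```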

Mixed 4kB Random Read/Write

The Optane SSDs put even Samsung's best NVMe SSDs to shame on the mixed random I/O test. The 800p is a little more than half as fast as the 900p, which is plenty to put it far out of reach of the flash-based SSDs.

Mixed 4kB Random Read/Write (Power Efficiency)

The Optane SSD 800p takes first place for power efficiency on the mixed random I/O test, with the 58GB model holding a slight advantage over the 118GB model thanks to the lower power consumption of operating half as many 3D XPoint dies. The flash-based SSDs come close to matching the efficiency of the Optane SSD 900p, but are far behind the 800p.

With no write buffering, the Optane SSDs show a steady decline in performance as the proportion of writes increases, with no spike in performance at the end as is typical of flash-based SSDs with aggressive write combining. The 800p shows a more pronounced reduction in performance than the 900p, while the 900p's power consumption climbs more.

Mixed Sequential Performance

Our test of mixed sequential reads and writes differs from the mixed random I/O test by performing 128kB sequential accesses rather than 4kB accesses at random locations, and the sequential test is conducted at queue depth 1. The range of mixes tested is the same, and the timing and limits on data transfers are also the same as above.
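In terms of the sketch shown for the random test, only three fio parameters change; again an illustration rather than the exact harness.

```python
# Differences from the random-mix sketch above; the span, data cap,
# runtime limit, and idle periods all stay the same.
sequential_overrides = [
    "--rw=rw",      # sequential mixed read/write instead of randrw
    "--bs=128k",    # 128kB transfers instead of 4kB
    "--iodepth=1",  # queue depth 1 instead of 4
]
```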

Mixed 128kB Sequential Read/Write

With only a PCIe x2 interface and sequential write speeds that use just a fraction of that bandwidth, the Optane SSD 800p actually performs quite well on the mixed sequential I/O test. The 800p matches the average performance of Samsung's fastest TLC SSD and offers more than two-thirds the performance of the Samsung 960 PRO.

Mixed 128kB Sequential Read/Write (Power Efficiency)

The power efficiency of the Intel Optane SSD 800p is second only to that of the Samsung 960 PRO. The 800p's efficiency score is far above the 900p and the low-end NVMe SSDs.

As with the mixed random I/O test, the Intel Optane SSD 800p shows a steady decline in performance as more writes are added to the mix, and the decline is steeper than the 900p's. Power consumption increases very slightly over the course of the test but stays within the rated maximum.

Comments

  • Reflex - Saturday, March 10, 2018

    I also think people forget how crappy & expensive gen1 and 2 SSDs were.
  • Drazick - Friday, March 9, 2018

    We really need those in U.2 / SATA Express form.
    Desktop users shouldn't have to use M.2 with all its thermal limitations.
  • jabber - Friday, March 9, 2018

    Whichever connector you use or whatever the thermals, once you go above 600MBps the real world performance difference is very hard to tell in most cases. We just need SATA4 and we can dump all these U2/SATA Express sockets. M.2 for compactness and SATA4 for everything else non Enterprise. Done.
  • Reflex - Friday, March 9, 2018

    U2 essentially is next gen SATA. There is no SATA4 on the way. SATA is at this point an 18 year old specification ripe for retirement. There is also nothing wrong with M.2 even in desktops. Heat spreaders aren't a big deal in that scenario. All that's inside a SATA drive is the same board you'd see in M.2 form factor more or less.
  • leexgx - Saturday, March 10, 2018

    Apart from that, you're limited to 0-2 M.2 slots per board (most come with 6 SATA ports).

    I agree that a newer SATA revision supporting NVMe would be nice, but U.2 would be nice too if anyone would adopt it, make the ports standard, and ship U.2 SSDs.
  • jabber - Friday, March 9, 2018

    I am amazed that no one has decided to just do the logical thing and slap a 64GB Flash cache in a 4TB+ HDD and be done with it. One unit and done.
  • iter - Friday, March 9, 2018

    They have: Seagate has a hybrid drive, and it's not all that great really.

    The reason is that caching algorithms suck. They are usually FIFO - first in, first out - and don't take into account actual usage patterns. That means you get good performance only if your working set doesn't exceed the cache. If you exceed it, the drive starts bringing in garbage, wearing down the flash for nothing. Go watch a movie that you're only gonna watch once - it will cache it, because you accessed it. And now you have gigabytes of pointless writes to the cache, displacing data that actually made sense to cache.

    Which is why I personally prefer to have separate drives rather than cache. Because I know what can benefit from flash and what makes no sense there. Automatic tiering is pathetic, even in crazy expensive enterprise software.
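The admission-policy difference described in the comment above is easy to sketch. This is a toy model of a block-level read cache, not any vendor's actual firmware:

```python
from collections import OrderedDict

class FIFOCache:
    """Admits every block on first access and evicts the oldest
    admission. A single-pass read (e.g. watching a movie once)
    floods the cache with blocks that will never be read again."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def access(self, block):
        if block not in self.store:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)  # evict oldest admission
            self.store[block] = True            # admit unconditionally

class ThresholdCache(FIFOCache):
    """Admits a block only after it has been read several times, so
    one-time streaming data never displaces genuinely hot blocks."""
    def __init__(self, capacity, threshold=3):
        super().__init__(capacity)
        self.threshold = threshold
        self.reads = {}

    def access(self, block):
        if block in self.store:
            return
        self.reads[block] = self.reads.get(block, 0) + 1
        if self.reads[block] >= self.threshold:
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)
            self.store[block] = True
```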
  • jabber - Friday, March 9, 2018

    Yeah I was using SSHD drives when they first came out but 8GB of flash doesn't really cut it. I'm sure after all this time 64GB costs the same as 8GB did back then (plus it would be space enough for several apps and data sets to be retained) and the algorithms will have improved. If Intel thinks caches for HDDs have legs then why not just combine them in one simple package?
  • wumpus - Friday, March 9, 2018

    Presumably, there's no market. People who buy spinning rust are either buying capacity (for media, and using SSD for the rest) or cheaping out and not buying SSDs.

    What surprises me is that drives still include 64MB of DRAM; you would think that companies who bothered to make these drives would have switched to TLC (and pseudo-SLC) for their buffers/caches (writing on power-off must be a pain). Good luck finding someone who would pay for the difference.

    Intel managed to shove this tech into the chipsets (presumably a software driver that looked for the hardware flag, similar to RAID) in 2011-2012, but apparently dropped that soon afterward. Too bad; reserving 64GB of flash to cache a hard drive (no idea if you could do this with a RAID array) sounds like something that is still useful (not that you need the performance, just that the flash is so cheap). Just make sure the cache is set to "write through" [if this kills performance it shouldn't be on rust] to avoid doubling your chances of drive loss. Apparently the support costs weren't worth the bother.
  • leexgx - Saturday, March 10, 2018

    8GB should be plenty for an SSHD, and the current generation has cache eviction protection (which I think is 3rd gen). Say an LBA block is read 10 times: the drive assumes it's something you open often, or a system file, or a startup item, so 2-3GB of data will not get evicted easily (meaning Windows, Office, browsers and other startup items will always be in the NAND cache). The rest of the caching is dynamic: if a block has had more than 2-4 reads, it gets cached to the NAND.

    In the current-generation SSHDs from Seagate (don't know how others do it), the read cache is split into three sections: easy, a bit harder, and very hard to evict. On the first-gen SSHDs from Seagate, just defragmenting the drive would end up evicting your normally used stuff, since any two reads would be cached right away; that doesn't happen any more.

    If you expect it to make your games load faster, you need to look elsewhere; they are meant to boost commonly used applications, the OS and startup programs, while still having the space for bulk storage.

    That said, I really dislike HDDs as boot drives; if 250GB SSDs didn't cost £55, I'd put them in for free.
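A toy model of the tiered eviction protection described in the comment above; the tier names and read-count thresholds are illustrative guesses, not Seagate's documented firmware behavior.

```python
# (tier name, reads needed to enter); higher tiers are harder to evict.
TIERS = [
    ("easy",    2),  # cached after a couple of reads, evicted first
    ("harder",  5),
    ("pinned", 10),  # OS/startup blocks, very hard to evict
]

def tier_for(read_count):
    """Return the most protected tier this block qualifies for."""
    placed = None
    for name, threshold in TIERS:
        if read_count >= threshold:
            placed = name
    return placed  # None -> not cached, stays on the platters

# tier_for(1) -> None, tier_for(3) -> "easy", tier_for(10) -> "pinned"
```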
