Application Benchmarks

With a complex multi-layer storage system like the Intel Optane Memory H10, the most accurate benchmarks are tests that use real-world applications. BAPCo's SYSmark 2018 and UL's PCMark 10 are two competing suites of automated application benchmarks. Both share the general goal of assigning a score to represent total system performance, plus several subscores covering different common use cases. PCMark 10 is the shorter test to run, and it provides a more detailed breakdown of subscores. It is also much more GPU-heavy, with 3D rendering included in the standard test suite and some 3DMark tests included in the Extended run. SYSmark 2018 has the advantage of using the full commercial versions of popular applications, including Microsoft Office and Adobe Creative Suite, and it integrates with a power meter to record total system energy usage over the course of the test.

The downside of these tests is that they cover only the most common everyday use cases, and do not simulate any heavy multitasking. None of their subtests are particularly storage-intensive, so most scores only vary slightly when changing between fast and slow SSDs.

BAPCo SYSmark 2018

BAPCo's SYSmark 2018 is an application-based benchmark that uses real-world applications to replay the usage patterns of business users, with subscores for productivity, creativity, and responsiveness. Scores represent overall system performance and are calibrated against a reference system that is defined to score 1000 in each of the scenarios. A score of, say, 2000 would imply that the system under test is twice as fast as the reference system.
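
To make the calibration concrete, here is a minimal sketch of the arithmetic (our illustration, not BAPCo's actual scoring code), assuming, as the description above implies, that scores scale linearly with performance relative to the reference system:

```python
# Minimal sketch of interpreting a reference-calibrated benchmark score.
# This is our illustration, not BAPCo's scoring code; it assumes scores
# scale linearly with performance relative to a reference system that is
# defined to score 1000.

REFERENCE_SCORE = 1000

def relative_speed(score: float) -> float:
    """How many times faster than the reference system a given score implies."""
    return score / REFERENCE_SCORE

print(relative_speed(2000))  # 2.0x the reference system, as in the example above
print(relative_speed(1250))  # 1.25x
```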

[Chart: BAPCo SYSmark 2018 Scores, with Creativity, Productivity, Responsiveness, and Overall results]

The Kaby Lake desktop and Whiskey Lake notebook trade places depending on the subtest; sometimes the notebook is ahead thanks to its extra RAM, and sometimes the desktop is ahead thanks to its higher TDP. These differences usually have a bigger impact than the choice of storage, though the Responsiveness test does show that a hard drive alone is inadequate. The Optane Memory H10's score with caching on is not noticeably better than when using the QLC portion alone, and even the hard drive with an Optane cache is fairly competitive with the all-solid-state storage configurations.

Energy Usage

The SYSmark energy usage scores measure total system power consumption, excluding the display. Our Kaby Lake test system idles at around 26 W and peaks at over 60 W measured at the wall during the benchmark run. SATA SSDs seldom exceed 5 W and idle at a fraction of a watt, and the SSDs spend most of the test idle, so the energy usage scores will inevitably be very close. The notebook uses substantially less power even though its measurement includes the built-in display. None of the really power-hungry storage options (hard drives, Optane 900P) can fit in that system, so its energy usage scores are also fairly close together.
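
The chart below reports energy of this form: wall power integrated over the run. As a rough sketch of why the storage contribution is so small, the following illustration (hypothetical sample values, not BAPCo's actual instrumentation) integrates power samples into watt-hours:

```python
# Rough sketch of wall-power energy integration. The sample values are
# hypothetical, not BAPCo's instrumentation. Energy in watt-hours is
# power integrated over time: Wh = sum(P_i) * dt / 3600.

def energy_wh(samples_w, interval_s):
    """Integrate power samples (watts) taken at a fixed interval (seconds)."""
    return sum(samples_w) * interval_s / 3600.0

# Hypothetical one-hour run: whole system averaging 40 W at the wall,
# sampled once per second.
system = [40.0] * 3600
print(energy_wh(system, 1.0))   # 40.0 Wh for the full system

# An SSD drawing 5 W for ~5% of the run and ~0.05 W idle the rest
# contributes only a small slice of that total:
ssd = [5.0] * 180 + [0.05] * 3420
print(energy_wh(ssd, 1.0))      # ~0.3 Wh, under 1% of the system total
```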

[Chart: BAPCo SYSmark 2018 - Energy Consumption]

The Optane Memory H10 was the most power-hungry M.2 option, and leaving the Optane cache off saves a tiny bit of power but not enough to catch up with the good TLC-based drives. The Optane SSD 800P has better power efficiency than most of the flash-based drives, but its low capacity is a hindrance for real-world use.


UL PCMark 10

[Chart: PCMark 10 scores, broken down by subscore]

The Optane cache provides enough of a boost to PCMark 10 Extended scores to bring the H10 into the lead among the M.2 SSDs tested on the Whiskey Lake notebook. The Essentials subtests show the most impact from the Optane storage, while the more compute-heavy tasks are relatively unaffected, with the H10 performing about the same with or without caching enabled.

Comments

  • SaberKOG91 - Monday, April 22, 2019

    Nothing special about my usage on my laptop. I'm running Linux, so I'm sure journals and other logs are a decent portion of the background activity. I also consume a fair bit of streaming media, so caching to disk is also very likely. This machine gets actively used an average of 10-12 hours a day and is usually only completely off for about 8-10 hours. I also install about 150MB of software updates a week, which is pretty much on par with, say, Windows Update. I also use Spotify, which definitely racks up some writes.

    I can't speak to the endurance of that drive, but it is also MLC instead of TLC.

    I would argue that it means that the cost per GB of QLC is now low enough that the manufacturing benefit of smaller dies for the same capacity is worth it. Most consumer SSDs are 250-500GB regardless of technology.

    I'm not referring to a few faulty units or infant mortality. I can't remember the exact news piece, but there were reports of unusually high failure rates in the first generation of Optane cache modules. I also wasn't amused when Anandtech's review sample of the first consumer cache drive died before they finished testing it. You're also assuming that the only factor in the failure of a drive is write endurance. It could very well be that overheating, leakage buildup, or some other electrical factor led to premature failure, regardless of TBW. It's also worth noting that you may accelerate drive death if you exceed the rated DWPD.
  • RSAUser - Tuesday, April 23, 2019

    I'm at about 3TB after nearly 2 years, and that's with adding new software like Android etc., swapping between technologies constantly, and wiping my drive once every year.
    I also have Spotify, game on it, etc.

    There is something wrong with your usage if you have that many writes. I have 32GB of RAM, so there's very little caching to disk; that could be the difference.
  • IntelUser2000 - Tuesday, April 23, 2019

    "You're also assuming that they only factor in the failure of a drive is write endurance. It could very well be that overheating, leakage buildup, or some other electrical factor lead to premature failure, regardless of TBW."

    I certainly did not. It was in reply to your original post.

    Yes, write endurance is a small part of a drive failing. If it's failing due to other reasons well before the warranty runs out, then they should move to remedy this.
  • Irata - Tuesday, April 23, 2019

    You are forgetting the sleep state on laptops. That alone will result in a lot of data being written to the SSD.
  • jeremyshaw - Sunday, July 14, 2019

    Or they have a laptop with "Modern Standby," which is code for:

    A subpar idle state that falls back to hibernation (flushing RAM to the SSD - I have 32GB of RAM) whenever the system drains too much power in this "Standby S3 replacement."
  • voicequal - Monday, April 22, 2019

    "Optane has such horrible lifespan at these densities that reviewers destroyed the drives just benchmarking them."

    What is your source for this comment?
  • SaberKOG91 - Monday, April 22, 2019

    Anandtech killed their review sample when Optane first came out. Happened other places too.
  • voicequal - Tuesday, April 23, 2019

    Link? Anandtech doesn't do endurance testing, so I don't think it's possible to conclude that failures were the result of worn out media.
  • FunBunny2 - Wednesday, April 24, 2019

    "Since our Optane Memory sample died after only about a day of testing, we cannot conduct a complete analysis of the product or make any final recommendations. "

    here: https://www.anandtech.com/show/11210/the-intel-opt...
  • Mikewind Dale - Monday, April 22, 2019

    I don't understand the purpose of this product. For light duties, the Optane will be barely faster than the SLC cache, and the limitation to PCIe x2 might make the Optane slower than an x4 SLC cache. And for heavy duties, the PCIe x2 link is definitely a bottleneck.

    So for light duties, a 660p is just as good, and for heavy duties, you need a Samsung 970 or something similar.

    Add in the fact that this combo Optane+QLC has serious hardware compatibility problems, and I just don't see the purpose. Even in the few systems where the Optane+QLC worked, it would still be much easier to just install a 660p and be done with it. Adding an extra software layer is just one more potential point of failure, and there's barely any offsetting benefit.
