BAPCo SYSmark 2014 SE

BAPCo's SYSmark 2014 SE is an application-based benchmark that replays the usage patterns of business users in real-world applications, covering office productivity, media creation, and data/financial analysis. It also addresses responsiveness, the aspect of user experience related to application and file launches, multitasking, and so on. Scores are meant to be compared against a reference desktop (the SYSmark 2014 SE calibration system in the graphs below). While SYSmark 2014 used a Haswell-based desktop configuration, SYSmark 2014 SE moves to a Lenovo ThinkCentre M800 (Intel Core i3-6100, 4GB RAM and a 256GB SATA SSD). The calibration system scores 1000 in each of the scenarios; a score of, say, 2000 would imply that the system under test is twice as fast as the reference system.
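BAPCo does not spell out the scoring math here, but the description above implies a simple linear model. The Python sketch below (with hypothetical runtimes; `sysmark_style_score` is our own illustrative helper, not a BAPCo API) shows how a score of 2000 maps to finishing a workload in half the calibration system's time.

```python
# Minimal sketch of SYSmark-style relative scoring (assumed model, not
# BAPCo's published formula): the calibration system defines 1000 points,
# and a system completing the same workload in half the time scores 2000.

CALIBRATION_SCORE = 1000

def sysmark_style_score(calibration_seconds: float, measured_seconds: float) -> float:
    """Score a system relative to the calibration machine.

    Scores scale inversely with workload completion time, so half the
    runtime yields twice the score.
    """
    return CALIBRATION_SCORE * calibration_seconds / measured_seconds

# A system finishing in 600 s a scenario that takes the calibration
# machine 1200 s would score 2000, i.e. "twice as fast".
print(sysmark_style_score(1200, 600))  # 2000.0
```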

SYSmark scores are based on total application response time as seen by the user, including not only storage latency but also time spent by the processor. This puts a hard limit on how much a storage improvement can possibly increase scores. It also means our Optane review system starts out with an advantage over the SYSmark calibration system due to its faster processor and extra RAM.
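To see why the storage gains are bounded, apply Amdahl's law to total response time. A minimal sketch, assuming a hypothetical workload where 20% of response time is storage-bound (an illustrative figure, not one measured in this review):

```python
# Illustrative sketch of why storage can only lift SYSmark scores so far:
# if only part of the measured response time is storage-bound, even an
# infinitely fast drive leaves the CPU-bound portion untouched.

def overall_speedup(storage_fraction: float, storage_speedup: float) -> float:
    """Amdahl's law: speedup of total response time when only the
    storage-bound fraction of it is accelerated."""
    return 1.0 / ((1.0 - storage_fraction) + storage_fraction / storage_speedup)

# If 20% of application response time waits on storage, a 10x faster
# drive speeds the whole workload up by only ~1.22x...
print(overall_speedup(0.20, 10))    # ~1.22
# ...and even an effectively infinite storage speedup caps out at 1.25x.
print(overall_speedup(0.20, 1e9))   # ~1.25
```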

SYSmark 2014 SE - Office Productivity

SYSmark 2014 SE - Media Creation

SYSmark 2014 SE - Data / Financial Analysis

SYSmark 2014 SE - Responsiveness

SYSmark 2014 SE - Overall Score

In every performance category the Optane caching setup is either in first place or a close tie for first. The Crucial MX300 ties the Optane configuration in every sub-test except responsiveness, where it falls slightly behind. The Samsung 960 EVO 250GB struggles, in part because its low capacity implies a low degree of parallelism, so it often cannot take advantage of the performance offered by its PCIe 3.0 x4 interface. The use of Microsoft's built-in NVMe driver instead of Samsung's may also be holding it back. As expected, the WD Black hard drive scores substantially worse than our solid-state configurations on every test, with the biggest disparity in the responsiveness test: the WD Black will force users to spend more than twice as much time waiting on their computer as they would with an SSD.

Energy Usage

SYSmark 2014 SE also adds energy measurement to the mix. A high SYSmark score is nice to have, but potential customers also need to weigh performance against power consumption. In the average office scenario, for example, it might not be worth purchasing a noisy and power-hungry PC just because it posts a 2000 in the SYSmark 2014 SE benchmarks. To provide a balanced perspective, SYSmark 2014 SE also allows vendors and decision makers to track energy consumption during each workload. The graphs below show the total energy consumed by the PC under test for a single iteration of each SYSmark 2014 SE workload and how it compares against the calibration system.
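For context, per-workload energy figures of this kind come down to integrating measured wall power over one run. The sketch below assumes a simple sample-and-sum measurement with made-up numbers; it is not BAPCo's actual instrumentation:

```python
# Sketch of how a per-workload energy figure can be derived (assumed
# method): sample wall power at a fixed interval during one workload
# iteration and integrate over time.

def energy_watt_hours(power_samples_w: list[float], interval_s: float) -> float:
    """Total energy in Wh from evenly spaced power readings
    (simple rectangle-rule integration)."""
    joules = sum(power_samples_w) * interval_s
    return joules / 3600.0

# e.g. a machine averaging 45 W over a 40-minute scenario:
samples = [45.0] * (40 * 60)             # one reading per second
print(energy_watt_hours(samples, 1.0))   # 30.0 Wh
```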

SYSmark 2014 SE - Energy Consumption - Office Productivity

SYSmark 2014 SE - Energy Consumption - Media Creation

SYSmark 2014 SE - Energy Consumption - Data / Financial Analysis

SYSmark 2014 SE - Energy Consumption - Responsiveness

SYSmark 2014 SE - Energy Consumption - Overall Score

The peak power consumption of a PCIe SSD under load can exceed the power draw of a hard drive, but over the course of a fixed workload, hard drives will always be less power efficient: SSDs almost always complete the data transfers sooner, and they can enter and leave their low-power idle states far more quickly. On a benchmark like SYSmark, there are no idle periods long enough for a hard drive to spin down and save power.
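A toy calculation with hypothetical power and time figures makes the point concrete:

```python
# Toy comparison (hypothetical numbers): a PCIe SSD can draw more power
# at peak than a hard drive, yet use less energy over a fixed workload
# because it finishes the active phase sooner and idles lower.

def workload_energy_j(active_w: float, active_s: float,
                      idle_w: float, idle_s: float) -> float:
    """Energy in joules for an active burst followed by idle time."""
    return active_w * active_s + idle_w * idle_s

total_s = 600  # same fixed 10-minute workload for both drives

hdd = workload_energy_j(active_w=6.0, active_s=500, idle_w=4.0, idle_s=total_s - 500)
ssd = workload_energy_j(active_w=8.0, active_s=100, idle_w=0.05, idle_s=total_s - 100)
print(hdd, ssd)  # 3400.0 J vs 825.0 J: the SSD wins despite the higher peak
```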

With an idle power of 1W, the Optane cache module substantially increases the already high power consumption of the hard drive-based configurations. It does allow the tests to complete sooner, but since the Optane module does nothing to accelerate the compute-bound portions of SYSmark, the time saved is not enough to make up the difference. Optane caching also does not appear to enable more aggressive power saving on the hard drive; Intel is probably flushing writes from the cache often enough to keep the drive spinning the whole time. The net result is a difference that is quite clear but not large enough for desktop users to worry about unless their electricity prices are high. The Optane Memory caching configuration is the most power-hungry option we tested, while the Crucial MX300 configuration, the second-fastest, was the most efficient, using about 16% less energy overall.
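A back-of-the-envelope sketch of that trade-off (the ~1W idle figure comes from this review; the runtimes and the 40W system average are hypothetical):

```python
# Back-of-the-envelope sketch: an extra ~1 W of module draw accumulates
# over the whole run, while the module only shortens the storage-bound
# portion of the workload (all durations below are hypothetical).

MODULE_IDLE_W = 1.0

def added_energy_j(runtime_s: float, runtime_saved_s: float,
                   system_avg_w: float) -> float:
    """Net energy change from adding the module: its own draw over the
    (shortened) run, minus the system energy saved by finishing earlier."""
    shortened = runtime_s - runtime_saved_s
    return MODULE_IDLE_W * shortened - system_avg_w * runtime_saved_s

# If the module trims 60 s off a 3600 s run on a 40 W system, it saves
# 2400 J at the wall but adds 3540 J of its own: a net increase.
print(added_energy_j(3600, 60, 40.0))  # 1140.0 J
```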

For mobile users, the power consumption of the Optane plus hard drive configuration is pretty much a deal-breaker. Our Optane review system is not optimized for power consumption the way a notebook would be, so in a mobile system the Optane module would account for an even larger share of total power draw, and battery life would take a serious hit.

Comments

  • ddriver - Tuesday, April 25, 2017

    Yeah, daring intel, the pioneer, taking mankind to better places.

    Oh wait, that's right, it is actually a greedy monopoly that has mercilessly milked people while making nothing aside from barely incremental stuff for years and through its anti-competitive practices has actually held progress back tremendously.

    As I already mentioned above, the last time "intel dared to innovate", the result was netburst. Which was so bad that in order to save the day intel had to... do what? Innovate once again? Nope, god forbid, what they did was go back and improve on the good design they had scrapped in their futile attempts to innovate.

    And as I already mentioned above, all the secrecy behind xpoint might be exactly because it is NOTHING innovative, but something old and forgotten, just slightly improved.
  • Reflex - Tuesday, April 25, 2017

    Axe is looking pretty worn down from all that grinding....
  • ddriver - Wednesday, April 26, 2017

    Also, unlike you, I don't let personal preferences cloud my objectivity. If a product is good, even if made by the most wretched corporation out there, it is not a bad product just because of who makes it, it is still a good product, still made by a wretched corporation.

    Even if intel wasn't a lousy bloated lazy greedy monopolist, hypetane would still suck, because it isn't anywhere near the "1000x" improvements they promised. It would suck even if intel was a charity that fed the starving in the 3rd world.

    I would have had ZERO objections to hypetane, and also wouldn't call it hypetane to begin with, if intel, the spoiled greedy monopolist was still decent enough to not SHAMELESSLY LIE ABOUT IT.

    Had they just said "10x better latency, 4x better low queue depth performance" and stuff like that, I'd be like "well, it's ok, it is faster than nand, you delivered what you promised."

    But they didn't. They lied, and lied, and now that it is clear that they lied, they keep on lying and smearing with biased reviews in unrealistic workloads.

    What kind of an idiot would ever approve of that?
  • fallaha56 - Tuesday, April 25, 2017

    OMG, when our product wasn't as good as we said it was, we didn't own up about it

    and maybe you test against HDD (like Intel) but the rest of us are already packing SSDs
  • philehidiot - Saturday, April 29, 2017

    This is what companies do. Your technology is useless unless you can market it, and you don't market anything by saying it's mediocre. Look at BP's high-octane fuel, which supposedly cleans your engine and gets better fuel efficiency. The ONLY thing that higher-octane fuel does is resist auto-ignition under compression better, which is why certain high-performance engines require it. As for cleaning your engine - you're telling me you've got a solvent which is better at cutting through crap than petrol AND can survive the massive temperatures and pressures inside the combustion chamber? It's the petrol which scrubs off the crap, so yes, it's technically true. They might throw an additive or two in there, but that will only help upstream of the combustion chamber, and only if you actually have a problem. And yes, in certain newer cars with certain sensors you will get SLIGHTLY higher MPG, so they advertise the maximum you'll get under ideal conditions, because no one will buy into it if you're realistic about the gains. The gains will never offset the extra cost of the fuel, however.

    PC marketing is exactly the same, and it's why the JMicron controller was such a disaster so many years ago. They pushed the advertised sequential throughput numbers as high as possible and destroyed random performance; Anand spotted it and OCZ threw a wobbler. But that experience led to drives being advertised on random performance as well as sequential.

    So what's the lesson here? We should always take manufacturers' claims with a mouthful of salt and buy based on objective criteria and independent measurements. Manufacturers will always state what is achievable in what is basically a lab setup, with conditions controlled to perfection. Why? For one, you can't quote numbers based on real-life performance, because everyone's experience will differ and you can't account for the different variables they'll encounter. And for two, if everyone else is quoting the maximum theoretical potential, you're immediately putting yourself at a disadvantage by not doing so yourself. It's not about your product, it's about how well you can sell it to a customer - see: the stupidly expensive Dyson hairdryer. It provides no real performance benefit over a cheap hairdryer but cost a lot in R&D and is mostly advertising wank for rich people with small brains.

    As for Intel being a greedy monopoly... welcome to capitalism. If you don't want that side effect of the system then bugger off to Cuba. Capitalism has brought society to the highest standard of living ever seen on this planet. No other form of economic operation has allowed so many to have so much. But the result is big companies like Intel, Google, Apple, etc, etc.

    Advertising wank is just that. Figures to masturbate over. If they didn't do it then sites like Anandtech wouldn't need to exist as products would always be accurately described by the manufacturer and placed honestly within the market and so reviews wouldn't be required.

    I doubt they lied completely - they will be going on the theoretical limits of their technology when all engineering limitations are removed. This will never happen in practice and will certainly never happen in a gen 1 product. Also, whilst I see this product as being pointless, it's obviously just a toe dipping exercise like the enterprise model. Small scale, very controlled use cases and therefore good real world use data to be returned for gen 2/3.

    Personally, whilst I'm wowed by the figures, I don't see how they're going to improve things for me. So what's the point in a different technology when SLC can probably perform just as well? It's a different development path which will encounter different limitations and as a result will provide different advantages further down the road. Why do they continue to build coal fired power stations when we have CCGTs, wind, solar, nukes, etc? Because each technology has its strengths and weaknesses and encounters different engineering limitations in development. Plus a plurality of different, competing technologies is always better as it creates progress. You can't whinge about monopolies and then when someone starts doing something different and competing with the established norm start whinging about that.
  • fallaha56 - Tuesday, April 25, 2017

    hi @sarah, I find that a dead hard drive also plays into responsiveness and boot times(!)

    this technology is clearly not anywhere near as good as Intel implied it was
  • CaedenV - Monday, April 24, 2017

    I have never once had an SSD fail because it over-used its flash memory... but controllers die all the time. It seems that will remain true here as well.
  • Ryan Smith - Tuesday, April 25, 2017

    And that's exactly what we're suspecting here. We've likely managed to hit a bug in the controller's firmware. Which, to be sure, isn't fantastic, but it can be fixed.

    Prior to the P3700's launch, Intel sent us 4 samples specifically for stress testing. We managed to disable every last one of them. However, Intel learned from our abuse, and now those same P3700s are rock-solid thanks to better firmware and drivers.
  • jimjamjamie - Tuesday, April 25, 2017

    Interesting that an ad-supported website can stress-test better than a multi-billion-dollar company...
  • testbug00 - Tuesday, April 25, 2017

    based on what? Have they sent you another model?

    A sample dying on day one, and only allowing testing via a remote server, doesn't build confidence.
