Memory Subsystem: Latency Measurements

There is no doubt about it: the performance of modern CPUs depends heavily on the cache subsystem, and some applications depend heavily on the DRAM subsystem too. We used LMBench to measure latency. Our favorite tool for this, Tinymembench, does not support the POWER architecture yet. That is a pity, because it is more accurate and more modern (it can test with two outstanding requests).

The numbers we looked at were LMBench's "Random load latency stride=16 Bytes" results.
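Since Tinymembench was not an option, it is worth sketching what a test like this actually does: it chases a chain of pointers laid out in random order, so every load depends on the previous one and the prefetchers cannot hide the latency. Below is a minimal sketch of that technique; this is our illustration rather than LMBench's actual code, and the buffer size, iteration count, and timer are arbitrary assumptions.

    /* Minimal pointer-chase latency sketch (build: gcc -O2 chase.c).
       Illustrative only; not LMBench's implementation. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* 16-byte elements mimic the stride=16 test: an 8-byte pointer
       plus 8 bytes of padding on a typical 64-bit system. */
    typedef struct node { struct node *next; char pad[8]; } node;

    static double chase_ns(size_t bytes, size_t iters)
    {
        size_t n = bytes / sizeof(node);
        node *arr = malloc(n * sizeof(node));
        size_t *idx = malloc(n * sizeof(size_t));

        /* Fisher-Yates shuffle: one random cycle through the buffer,
           so the hardware prefetchers cannot guess the next address. */
        for (size_t i = 0; i < n; i++) idx[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % (i + 1);
            size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
        }
        for (size_t i = 0; i < n; i++)
            arr[idx[i]].next = &arr[idx[(i + 1) % n]];

        node *p = &arr[idx[0]];
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < iters; i++)
            p = p->next;            /* each load depends on the last */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        volatile node *sink = p;    /* keep the chain live under -O2 */
        (void)sink;
        free(idx);
        free(arr);
        return ((t1.tv_sec - t0.tv_sec) * 1e9 +
                (t1.tv_nsec - t0.tv_nsec)) / (double)iters;
    }

    int main(void)
    {
        /* 128 MB: far past every cache level in the table below. */
        printf("%.1f ns per load\n", chase_ns(128u << 20, 20u * 1000 * 1000));
        return 0;
    }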

Mem Hierarchy               IBM POWER8     Intel Broadwell    Intel Broadwell
                                           Xeon E5-2640 v4    Xeon E5-2699 v4
                                           DDR4-2133          DDR4-2400
L1 Cache (cycles)           3              4                  4
L2 Cache (cycles)           13             12-15              12-15
L3 Cache 4-8 MB (cycles)    27-28 (8 ns)   49-50              50
16 MB (ns)                  55             26                 21
32-64 MB (ns)               55-57          75-92              80-96
Memory 96-128 MB (ns)       67-74          90-91              96
Memory 384-512 MB (ns)      89-91          91-93              95

(Note that the numbers for Intel are higher than what we reported in our Cavium ThunderX review. The reason is that we are now using LMBench's numbers rather than Tinymembench's.)

A 64 KB L1 cache with 4 read ports that can run at 4+ GHz and still maintain a 3 cycle load latency is nothing less than the pinnacle of engineering. The L2 cache excels too: it is twice as large as Intel's (512 KB) and still offers comparable latency (13 cycles versus 12-15).
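To put those cycle counts on a common nanosecond axis, a quick back-of-the-envelope conversion helps. The ~3.5 GHz POWER8 clock in the sketch below is inferred from the table above (27-28 cycles listed as roughly 8 ns); it is an assumption for illustration, not a spec-sheet figure.

    #include <stdio.h>

    /* ns = cycles / (cycles per ns); 3.5 GHz = 3.5 cycles per ns. */
    static double cycles_to_ns(double cycles, double ghz) { return cycles / ghz; }

    int main(void)
    {
        const double ghz = 3.5;  /* assumed POWER8 clock, see note above */
        printf("L1:  3 cycles = %.2f ns\n", cycles_to_ns(3.0, ghz));  /* ~0.86 */
        printf("L2: 13 cycles = %.2f ns\n", cycles_to_ns(13.0, ghz)); /* ~3.71 */
        printf("L3: 27 cycles = %.2f ns\n", cycles_to_ns(27.0, ghz)); /* ~7.71 */
        return 0;
    }

Even at sub-nanosecond L1 speed, the gap to the ~55 ns off-chip region approaches two orders of magnitude, which is why the shape of the latency curve matters so much.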

Once we get to the eDRAM L3 cache, our readings get a lot more confusing. The L3 cache is blistering fast as long as you only access the part that is closest to the core (8 MB). Go beyond that limit (16 MB), and latency is no less than 7 times worse (55 ns versus 8 ns). It looks like we are actually hitting the Centaur chips at that point, because the latency stays the same at 32 and 64 MB.

Intel has a much more predictable latency chart. The Xeon's L3 cache needs about 50 cycles, and once you get into DRAM, latency is 90-96 ns. The "transition phase" from the 26 ns L3 to 90 ns DRAM is much smaller.

Comparatively, that "transition phase" is rather large on the IBM POWER8: we have to go beyond 128 MB before we see the full DRAM latency. And even then the Centaur chips seem to handle things well: the octal DDR3-1333 DRAM system delivers the same or even slightly better latency than the DDR4-2400 memory on the Xeon.
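To make those plateaus visible, one can sweep the chain size across the cache boundaries. The sketch below reuses the illustrative chase_ns() from the earlier snippet (replacing its single-measurement main()); the size range and iteration count are again arbitrary.

    /* Sweep the working set from 32 KB to 512 MB; ns per load should
       step up at each cache/DRAM boundary, plateauing early on the
       POWER8 (~55 ns past 8 MB) and later on Broadwell (~90 ns). */
    int main(void)
    {
        for (size_t kb = 32; kb <= 512u * 1024; kb *= 2)
            printf("%9zu KB: %6.1f ns per load\n",
                   kb, chase_ns(kb * 1024, 10u * 1000 * 1000));
        return 0;
    }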

In summary, IBM's POWER8 has a twice-as-fast 8 MB L3, while Intel's L3 is vastly better in the 9-32 MB zone. But once you go beyond 32 MB, the IBM memory subsystem delivers better latency. At a significant power cost, we must add: those four memory buffers need about 64 W.

Comments

  • abufrejoval - Thursday, August 4, 2016

    I believe "heavily threaded" is somewhat imprecise here: Knights Landing (KNL) is really more about vectorized workloads, or one very loopy and computationally expensive problem, which has been partitioned into lots of chunks, but has high locality. Same code, related data, far more computational throughput than data flowthrough.

    Power8 will perhaps do better on such workloads than Intel, but never as well as a GPU or KNL.

    However, it evidently does better per core on highly threaded workloads, where lots of execution threads share the same code but work on distinct or less related datasets: less scientific and more commercial workloads, with more data flowing through.

    Funnily enough, KNL might even do well there, beating its Xeon D sibling in every benchmark, even in terms of energy efficiency.

    But I'm afraid that's because most of the KNL surface area would remain dark on such workloads, while the investment would burn through any budget.

    KNL is an odd beast, designed for a rather specific job; it only earns its money there, even if you can run Minecraft or Office on it.
  • Kevin G - Friday, July 22, 2016

    I do think the comparison with Xeon Phi is fair, since it can boot and run on its own now with Knights Landing. Software parity with the normal x86 ecosystem is there, so it can run off-the-shelf binaries.

    I am very curious how well such a dense collection of cores performs for workloads that don't need high single-threaded performance.

    Another interesting factor would be memory bandwidth performance, as Xeon Phi has plenty. The HMC only further enhances that metric, and it is worth exploring as both a cache and a main memory region in benchmarks.
  • Ratman6161 - Thursday, July 21, 2016

    Will you be addressing virtualization in a future article? I ask because you are saying the lower-cost Power8 systems are intended to compete with the Dell, HP, Lenovo, etc. x86 servers. But these days, a very high percentage of x86 workloads are virtualized, either on VMware or competing products. In 2009 Gartner had it at about 50%, and by 2014 it was at 70%. I didn't find a number for '15 or '16, but I expect the percentage has continued to rise. So if they want to take the place of x86 boxes, they have to be able to do the tasks those boxes do... which largely means running the virtual machines that do the actual workloads.

    And what about all the x86 boxes running Windows Server, or more commonly Windows Server virtual machines? Windows Server shops aren't likely to ditch Windows in favor of Linux solely for the privilege of running on Power8.

    One last thing to consider regarding price: these days we can buy a quite robust Intel-based server for around $10K. So suppose I can buy a Power8 system for about the same price? Essentially, the hardware has gotten so cheap compared to the licensing and support costs of the software we are running that it's a drop in the bucket. If we needed 10 Intel servers or 6 Power8s to do the same job (assuming the Power8s could run all our VMs), the Power8s could come out lower priced hardware-wise, but the difference is, as I said, a drop in the bucket in the overall scheme of things. Performance-wise, with the x86 boxes, you just throw more cores at it.
  • aryonoco - Friday, July 22, 2016

    KVM works well on POWER.

    No idea about proprietary things like VMware, but that would be up to them to port.
  • Ratman6161 - Friday, July 22, 2016

    As near as I can tell, there is a PowerKVM that runs on Power8, but it doesn't allow you to run Windows Server VMs - it seems to support only Linux guests.
  • Zetbo - Saturday, July 23, 2016

    Windows does not support POWER, so there is no point in using POWER if you need Windows!
  • utroz - Thursday, July 21, 2016

    AMD should have used IBM's 22nm SOI to make CPUs so that they would not have been totally dead in the performance and server CPU markets for years. GF now owns this process, as they "bought" IBM's fabs and tech. I think 22nm SOI might be better for high-speed CPUs than the 14nm LPP FinFET that AMD is using for Zen, at the cost of die size.
  • amagriva - Thursday, July 21, 2016

    How much did you pay for your crystal ball?
  • spikebike - Thursday, July 21, 2016

    So a single-socket Power8 is somewhat faster than the Intel chip. But it is being compared in a single-socket configuration, while the Intel chip is designed for two sockets. Unless the Power8 is cheaper than an Intel dual-socket setup, it seems most fair to compare both CPUs as they are designed to be used.
  • SarahKerrigan - Friday, July 22, 2016

    POWER is designed for systems of up to 16 sockets (IBM E880). One socket is just the entry point.
