Intel's Benchmarks

Since time constraints meant we were not able to run as many benchmarks ourselves as we would have liked, it's useful to look at Intel's own benchmarks as well. In our experience, Intel's benchmarking has a good track record of producing accurate numbers and documenting configuration details. Of course, you still have to read all the benchmarking information carefully to make sure you understand exactly what is being tested.

The OLTP and virtualization benchmarks show that the new Xeon E7 v3 is about 25 to 39% faster than the previous Xeon E7 (v2). In some of those benchmarks the new Xeon had twice as much memory, but it is safe to say that this makes only a small difference. We think it's reasonable to conclude that the Xeon E7 v3 is 25 to 30% faster, which is also what we found in our integer benchmarks.

The increase in legacy FP applications is much lower: Cinebench was 14% faster, SPECfp 9%, and our own OpenFOAM test about 4%. Meanwhile, Linpack benchmarks are pretty useless to most of the HPC world, so we have more faith in our own benchmarking. Intel's own more realistic HPC benchmarking showed at best a 19% increase, which is nothing to write home about.

The exciting part about the new Xeon E7 v3 is data analytics/mining: it runs a lot faster on the new chip. The 72% faster SAS analytics number is not entirely accurate, as part of the speedup was due to using P3700 SSDs instead of S3700 SSDs. Still, Intel claims that replacing the E7 v2 with the v3 alone is good for a 55-58% speedup.

The most spectacular benchmark is of course SAP HANA. It is not 6x faster as Intel claims, but rather 3.3x (see our comments about TSX). That is still spectacular and the result of excellent software and hardware engineering.
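For those wondering why TSX makes such a difference in this class of software: TSX lets many threads execute the same lock-protected critical section speculatively in parallel, falling back to the real lock only when two threads genuinely touch the same data. As a rough illustration (a minimal lock-elision sketch built on Intel's RTM intrinsics, not HANA's actual code), consider:

    #include <immintrin.h>  /* RTM intrinsics: _xbegin, _xend, _xabort */
    #include <pthread.h>

    static pthread_mutex_t fallback_lock = PTHREAD_MUTEX_INITIALIZER;
    static volatile int lock_taken = 0;  /* mirrors the fallback lock state */

    /* Update a shared counter, eliding the lock in the common case. */
    void elided_increment(long *counter)
    {
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            /* We are inside a hardware transaction. If another thread
             * holds the real lock, abort and take the slow path. */
            if (lock_taken)
                _xabort(0xff);
            (*counter)++;   /* no lock actually taken here */
            _xend();
            return;
        }
        /* Transaction aborted (conflict, capacity, ...): take the real lock. */
        pthread_mutex_lock(&fallback_lock);
        lock_taken = 1;
        (*counter)++;
        lock_taken = 0;
        pthread_mutex_unlock(&fallback_lock);
    }

Compiled with gcc's -mrtm flag, the payoff in a database engine comes from the common case: many transactions touching different rows no longer serialize on the same lock.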

Final Words: Comparing Xeon E7 v3 vs v2

For those of us running scale-up, reasonably priced HPC or database applications, it is hard to get excited about the Xeon E7 v3. The performance increases are small but tangible, yet at the same time the new Xeon E7 costs a bit more. And as far as our (HPC) energy measurements go, there is no tangible increase in performance per watt.

The Xeon E7 in its natural habitat: heavy heatsinks and hot-pluggable memory

However, organizations running SAP HANA will welcome the new Xeon E7 with open arms: they get massive speedups for a budget increase of 0.1% or less. The rest of the data mining community running expensive software will benefit too, as the new Xeon E7 is at least 50% faster in those applications thanks to TSX.

Ultimately we wonder how the rest of us will fare. Will the SAP/SAS speedups also be visible in open source big data software such as Hadoop and Elasticsearch? Currently we are still struggling to get the full potential out of all 144 threads. Some of these tests run for a few days only to end with a very vague error message: big data benchmarking is hard.

Comparing Xeon E7 v3 and POWER8

Although the POWER8 is still a power-gobbling monster, just like its older brother the POWER7, there is no denying that IBM has made enormous progress. Few people will be surprised that IBM's much more expensive enterprise systems beat Intel-based offerings in some high-end benchmarks like SAP's. But the fact that 24 POWER8 cores in a relatively reasonably priced IBM POWER8 server can beat 36 Intel Haswell cores by a considerable margin is new.

It is also interesting that our own integer benchmarking shows the POWER8 core is capable of keeping up with Intel's best core at the same clockspeed (3.3-3.4 GHz), at least as long as you feed it enough threads of IPC-unfriendly code. And that is an exact description of many server workloads. It also means the SAP benchmark is not an exception: the IBM POWER8 is definitely not the best CPU to run Crysis (not enough threads), but it is without a doubt a dangerous competitor for the Xeon E7 when given enough threads to fill up the CPU.

Right now the threat to Intel is not dire: IBM still asks way too much for its best POWER8 systems, and the Xeons have a much better performance-per-watt ratio. But once the OpenPOWER Foundation partners start offering server solutions, there is a good chance that Intel will face some very significant performance-per-dollar competition in the server market.

Comments

  • kgardas - Thursday, May 21, 2015 - link

    @Kevin G: thanks for the correction about the load/store units doing simple integer operations. I agree with your testing that single-threaded POWER8 is not up to the speed of Haswell; in fact my testing shows it's at the same level as POWER7.
    So with POWER8 doing 6 integer ops per cycle, it's more powerful than the SPARC64 X, which does 4, or the SPARC S3 core, which does just 2. It also explains the SPEC rate difference between the M10-4 and the POWER8 machine well. Good! Things start to become clearer now...
  • patrickjp93 - Saturday, May 16, 2015 - link

    No, just no. Intel solved the cluster latency problem long ago with Infiniband revisions: 4 nanoseconds for a 10-removed node to tell the head node something or vice versa, and no one builds a hypercube or star topology that's any worse than 10-removed.
  • Brutalizer - Sunday, May 17, 2015 - link

    @patrickjp93,
    If Intel solved the cluster latency problem, then why are SGI UV2000 and ScaleMP clusters not used to run monolithic business software? Question: why are there no SGI UV2000 top records in SAP?
    Answer: because they cannot run monolithic software that branches too much. That is why there are no good x86 benchmarks.
  • misiu_mp - Tuesday, June 2, 2015 - link

    Just to point out, 10ns at the speed of light in vacuum is 3m, and signalling is slower than that because the fibre medium (glass) limits the speed of light to about 60% of c, and on top of that come electronic latencies. So maybe you can get 10ns latency over 1-1.5m maximum; the quick calculation below shows why. That's not a large cluster.
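    A back-of-the-envelope version of that calculation (a sketch in C, assuming ~60% of c in glass and ignoring the electronic latencies entirely):

        #include <stdio.h>

        int main(void)
        {
            const double c = 3.0e8;          /* speed of light in vacuum, m/s */
            const double fibre_factor = 0.6; /* light in glass: ~60% of c     */
            const double latency = 10e-9;    /* the quoted 10 ns              */

            /* One-way distance a signal can cover in the fibre in that time. */
            double reach_m = c * fibre_factor * latency;
            printf("max one-way reach at 10 ns: %.1f m\n", reach_m); /* 1.8 m */
            return 0;
        }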
  • Kevin G - Monday, May 11, 2015 - link

    I am not uninformed. I would say that you're being willfully ignorant. In fact, you ignored my previous links about this very topic when I could actually find examples for you. ( For the curious outsider: http://www.anandtech.com/comments/7757/quad-ivy-br... )

    So again I will cite the US Post Office using SGI machines to run Oracle TimesTen databases:
    http://www.intelfreepress.com/news/usps-supercompu...

    As for the UV 2000 not being a scale-up server, did you not watch the videos I posted? You can clearly see Linux tools in the videos indicating that the output was produced on a 64 socket system. If that is not a scale-up server, why are the Linux tools reporting it as such? And if the UV 2000 series isn't good for SAP, then why is HANA being tuned to run on it by both SGI and SAP?

    HP's Superdome X shares a strong relationship with the Itanium-based Superdome 2 machine: they use the same chipset to scale past 8 sockets. This is possible because recent Itaniums and Xeons both use QPI as their interconnect bus. So if the Superdome X is a cluster, then so are HP's Itanium-based offerings using that same chipset. Speaking of which, that chipset goes up to 64 sockets, so there is the potential to scale that far (source: http://www.enterprisetech.com/2014/12/02/hps-itani... ). It won't take a decade if HP already has a working chipset that it has shipped in other machines. :)

    Speaking of the Superdome X, it is fast and can outrun the SPARC M10-4S at the 16 socket level by a factor of 2.38. Even with perfect scaling, the SPARC system would need more than 32 sockets to compete. Oh wait, if we go by your claim above that "you double the number of sockets, you will likely gain 20% or so", then the SPARC system would need to scale to 512 sockets to be competitive with the Superdome X; the arithmetic is spelled out at the end of this comment. (Source: http://h20195.www2.hp.com/V2/getpdf.aspx/4AA5-6149... )

    And if you're dead set on an >8 socket SAP benchmark using x86 processors, here is one, though a bit dated:
    https://www.vmware.com/files/pdf/partners/ibm/ibm-...
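    For the curious, the 512-socket arithmetic works out like this (a sketch of the reasoning in that claim, not a real scaling model): closing a 2.38x gap at 20% per socket doubling takes five doublings, i.e. 16 -> 512 sockets.

        #include <stdio.h>

        int main(void)
        {
            double gap     = 2.38; /* Superdome X lead at 16 sockets      */
            double gain    = 1.20; /* claimed speedup per socket doubling */
            double perf    = 1.0;
            int    sockets = 16;

            /* Keep doubling sockets until the 20% gains close the gap. */
            while (perf < gap) {
                sockets *= 2;
                perf    *= gain;
            }
            printf("sockets needed: %d\n", sockets); /* prints 512 */
            return 0;
        }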
  • 68k - Tuesday, May 12, 2015 - link

    You know, those >8 socket systems are flying off the shelf faster than anyone can produce them. That is why Intel only got, according to the article, 92-94% of the >4-socket market... It seems pretty safe to state that >8 sockets is an extreme niche market, which is probably why it is hard to find any benchmarks on such systems.

    The price point of really big scale-up servers is also an extreme incentive to think very hard about how one can design software to not require a single system. Some problems absolutely need scale-up, and as you pointed out, there are such x86 systems available and have been for quite some time.

    Anyone know what the ratio between 2-socket and >4-socket servers looks like in terms of market share (number of deployed systems)?
  • 68k - Tuesday, May 12, 2015 - link

    No edit... Pretend that '>' means "greater or equal" in the post above.
  • Arkive - Tuesday, May 12, 2015 - link

    You guys are obviously not idiots, just enormously stubborn. Why don't you take the wealth of time you spend fighting on the internet and do something productive instead?
  • kgardas - Wednesday, May 13, 2015 - link

    Kevin, I'll not argue with you about the SGI UV. It's in fact a very nice machine, and it looks like it is on par, latency-wise, with Sun/Oracle's Bixby interconnect. Anyway, what I would like to note concerns your Superdome X comparison with the SPARC M10-4S. Unfortunately, IMHO this is purely a CPU benchmark. It's multi-JVM, so if you use one JVM per processor, pin that JVM to the processor, and limit its memory to (at most) the memory local to that processor, then you basically have a kind of scale-out cluster inside one machine; see the pinning sketch below. This is IMHO what they are benchmarking. It just shows that the current SPARC64 is really not up to the performance level of the latest Xeon. A pity, but a fact. Anyway, my point is that for a memory scalability benchmark you should use something different from a multi-JVM bench. I would vote for STREAM here; although it's bandwidth oriented, it provides at least some picture: https://www.cs.virginia.edu/stream/top20/Bandwidth... -- no Superdome there, and HP did submit some Superdome results in the past. Perhaps it's not a memory scalability hero these days?
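    A rough sketch of that per-socket pinning in C using libnuma (the real benchmark harness would more likely use numactl or JVM flags; the node number and function name here are illustrative):

        #include <numa.h>   /* libnuma; link with -lnuma */
        #include <stdio.h>
        #include <stdlib.h>

        /* Confine the calling process to one NUMA node, CPUs and memory
         * alike, so it behaves like one node of a scale-out cluster. */
        static void pin_to_node(int node)
        {
            if (numa_available() < 0) {
                fprintf(stderr, "no NUMA support on this system\n");
                exit(1);
            }
            numa_run_on_node(node);   /* run only on this node's CPUs */
            numa_set_preferred(node); /* prefer this node's memory    */
        }

        int main(void)
        {
            pin_to_node(0); /* e.g. one JVM-sized worker per socket */
            /* ... start the actual workload here ... */
            return 0;
        }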
  • ats - Tuesday, May 12, 2015 - link

    So that must be why SAP did this for SGI: https://www.sgi.com/company_info/newsroom/awards/s...

    You don't normally give partners an innovation award for your products unless they are filling a need, i.e. SGI actually sells its UV 300H appliance.

    And SAP HANA, like ALL DBs, can be used either as an SSI or clustered.

    And the SAP SD 2-Tier benchmark is not at all monolithic. The whole 2-tier thing should have been a hint. And SAP SD 3-Tier is also not monolithic.

    Scaling for x86 is no easier nor harder than for any other architecture when it comes to large scale coherent systems. If you think it is, it's because you don't know jack. I've designed CPUs/systems that can scale up to 256 CPUs in a coherent image, FYI. Also, it should probably be noted that the largest scale systems are pretty much never used as monolithic systems, and are almost always used via partitioning or VMs.
